You’ve probably heard the narrative before. At some point, we will invent an artificial intelligence that is more intelligent than we are. That superhuman intelligence will then have the capability either to build an improved version of itself or to engineer upgrades that improve its own intelligence. This will set off a process in which the system upgrades itself, uses its greater intelligence to come up with new ways to enhance itself, and then upgrades itself again, looping in a rapid runaway cycle that produces an intelligence explosion.
Given that we only have human level intelligence, we have no ability to predict what happens next, which is why Vernor Vinge coined the phrase “the technological singularity” in 1993. The “singularity” part of the label refers to singularities in math and science, points at which existing theories or frameworks break down. Vinge predicted that this would happen “within 30 years” and would mark the “end of the human era.”
Despite our purported inability to make predictions, some people nevertheless make predictions about what happens next. Where they go with it depends on whether they’re a pessimist or an optimist. The pessimist doesn’t imagine things turning out very well for humanity. At best, we might hope to hang around as pets. At worst, the machines might either accidentally or intentionally wipe us out.
Most people who get excited about the singularity fall into the optimist camp. They see it as a major boon for humanity. The superhuman intelligences will provide the technology to upload ourselves into virtual environments, providing immortality and heaven on Earth. We will be taken along on the intelligence explosion ride, ultimately resulting, according to Ray Kurzweil, in the universe “waking up.” This quasi-religious vision has been called “the rapture of the nerds.”
The modern singularity sentiment is that it will happen sometime in the 2040s, in other words, in about 20-30 years. Note, however, that Vinge’s original essay was written in 1993, when he said it would happen within about 30 years, a deadline we’re rapidly approaching.
(Before going any further, I can’t resist pointing out that it’s 2019, the year when the original Blade Runner happens! Where is my flying car? My off-world colonies? My sexy replicant administrative assistant?)
Human level artificial intelligence is almost always promised to be 20 years in the future. It’s been 20 years in the future since the 1950s. (In this way, it’s similar to fusion power and human exploration of Mars, both of which have also been 20 years in the future for the last several decades.) Obviously all the optimistic predictions in previous decades were wrong. Is there any reason to think that today’s predictions are any more accurate?
One reason frequently cited for the predictions is the ever increasing power of computer processing chips. Known as Moore’s Law, the trend of increasing computational power was first noted by Gordon Moore in the 1960s. What Moore actually noticed was the doubling of the number of transistors on an integrated circuit chip over a period of time (originally one year, but later revised to every two years).
It’s important to understand that Moore never saw this as an open-ended proposition. From the beginning, it was understood that eventually fundamental barriers would get in the way and the “law” would end. In fact, in recent years Moore’s Law has started sputtering. Progress has slowed and may halt completely between 2020 and 2025, after transistor features have been scaled down to 7 nanometers, below which quantum tunneling and other issues are expected to make further miniaturization infeasible, at least with doped silicon.
Undeterred, Kurzweil and other singularity predictors express faith that some new technology will step in to keep things moving, whether it be new materials (such as graphene) or new paradigms (neuromorphic computing, quantum computing, etc.). But any prediction about the rate of progress after Moore’s Law peters out is based more on faith than on science or engineering.
It’s worth noting that achieving human level intelligence in a system is more than just a capacity and performance issue. We won’t keep adding performance and have the machine “wake up.” Every advance in AI so far has required meticulous and extensive work by designers. There’s not currently any reason to suppose that will change.
AI research got out of its “winter” period in the 90s when it started focusing on narrow, relatively practical solutions rather than the quest to build a mind. The achievements we see in the press continue to be along those lines. The reason is that engineers understand these problems and have some idea how to tackle them. They aren’t easy by any stretch, but they are achievable.
But building a mind is unlikely to happen until we understand how the natural versions work. I often write about neuroscience and our growing understanding of the brain. We have a broad but very blurry idea of how it works, with detailed knowledge of only a few regions. But that knowledge is nowhere near the point where someone could use it to construct a technological version. If you talk to a typical neuroscientist, they will tell you that level of understanding is probably at least a century away.
To be clear, all the evidence is that the mind is a physical system that operates according to the laws of physics. I see no good reason to suppose that a technological version of it can’t be built…eventually. But predictions that it will happen in 20-30 years seem like overly optimistic speculation, very similar to the predictions people have been making for 70 years. It could happen, but confident assertions that it will do so strike me as snake oil.
What about superhuman intelligence? Again, there’s no reason to suppose that human brains are the pinnacle of possible intelligence. On the other hand, there’s nothing in nature demonstrating intelligence orders of magnitude greater than ours. We don’t have an extant example to prove it can happen.
It might be that achieving the computational complexity and capacity of a human brain requires inevitable trade-offs that put limits on just how intelligent such a system can be. Maybe, due to the laws of physics, squeezing hundreds of terabytes of information into a compact, massively parallel processing framework that runs on 20 watts of power and produces a flexible intelligence requires slower performance and water-cooled operation (a.k.a. wetware). Or there may be alternate ways to achieve the same functionality, but they come with their own trade-offs.
In many ways, the belief in god-like superhuman AIs is an updated version of a notion humanity has entertained for tens of thousands of years, likely since our beginnings: that there are powerful conscious forces running the world. This new version has us actually creating the gods, but the resulting relationship is the same, particularly the part where they come in and solve all our problems.
My own view is that we will eventually have AGI (artificial general intelligence), and that it may very well exceed us in intelligence, but the runaway process envisioned by singularity enthusiasts will probably be limited by logistical realities and by design constraints and trade-offs we can’t currently see. While AGI is progressing, we will also be enhancing our own performance and integrating with the technology. Eventually biological engineering and artificial intelligence will converge, blurring the lines between engineered and evolved intelligence.
But it’s unlikely to come in some hard-takeoff singularity, and it’s unlikely to happen in the next few decades. AGI and mind uploading are technologies that likely won’t come to fruition until several decades down the road, possibly not for centuries.
I totally understand why people want it to happen in a near time frame. No one wants to be in one of the last mortal generations. But I fear the best we can hope for in our lifetime is that someone figures out a way to save our current brain state. The “rapture of the nerds” is probably wishful thinking.
Unless of course I’m missing something. Are there reasons for optimism that I’ve overlooked?