Are we headed for a Singularity? Is it imminent?
I write relatively near-future science fiction that features neural implants, brain-to-brain communication, and uploaded brains. I also teach at a place called Singularity University. So people naturally assume that I believe in the notion of a Singularity and that one is on the horizon, perhaps in my lifetime.
I think it’s more complex than that, however, and depends in part on one’s definition of the word. The word Singularity has gone through something of a shift in definition over the last few years, weakening its meaning. But regardless of which definition you use, there are good reasons to think that it’s not on the immediate horizon.
via The Singularity Is Further Than It Appears – Charlie’s Diary.
Ramez Naam is guest blogging on Charlie Stross’s site. The main point of this article is that the singularity isn’t twenty years away, or likely to be as much of a nerd rapture as many people assume.
Naam did a follow-up article on the timing and rate of a singularity takeoff, which is also very much worth checking out.
I’ve made similar arguments myself, so these articles resonated with me. The singularity is unlikely to be a hard takeoff in our lifetimes, and AIs are unlikely to be as god-like as many singularity enthusiasts (or alarmists) assume.
5 thoughts on “The Singularity Is Further Than It Appears – Charlie’s Diary”
Possibly, but Naam’s own example of Intel as a greater-than-human intelligence seems to contradict his argument – Intel’s collective intelligence (human plus CPUs) is growing rapidly as the Intel corporation develops more powerful CPUs and improved methods of working. So the collective intelligence of Intel (to use his metaphor/analogy) *is* growing exponentially.
In any case, I think the idea of the Singularity as an event is as misguided as believing that the Big Bang was an event that happened 14 billion years ago. The Big Bang model describes how we see the universe today, and accelerating progress is what we have been experiencing throughout human history.
Nobody will ever be able to look back and say that the Singularity happened on “that” date. They might say that the first human-equivalent AI was created on such and such a date, but I’m far from sure that a human-equivalent AI will ever be created. What would be the point? We already have human-equivalent intelligences – they are called humans. Better to create an AI that can complement and amplify our abilities, just like computers, smartphones and calculators are already doing.
I agree. I do suspect someone will create a human-like AI, although it’s unlikely to see mass production for the reasons you discuss.
Machines lack one critical property needed to drive progress: emotion. What would motivate a machine to create a better machine? What would motivate a machine to do anything it is not programmed to do by humans? And when a machine does something it’s not supposed to do, we don’t call it “intelligent” — we call it “broken.” Intelligence is using skills gained from riding a bicycle to play tennis (after inventing the game of tennis).
Machines don’t feel pain when the plug is pulled. When scientists create a machine that fights to prevent a human from pulling the plug, or actively seeks out a source of energy when its battery is low, or avoids acid etching its circuits even though it has had no previous experience with acids, then I may take AI seriously. Performing billions of logic operations per second isn’t intelligence. A lawn mower can spin its blades a few thousand times faster than a human can. That doesn’t make a lawn mower superior to humans in any area except mowing grass.
“a machine that fights to prevent a human from pulling the plug” … Skynet 🙂
Exactly. It’s why I’ve argued in the past that an AI revolt is unlikely. AIs won’t care about their own survival or well-being until we program them to, and outside of the occasional research project, it’s hard to imagine anyone finding that useful.