In this Big Think video, Steven Pinker makes a point I’ve made before, that fear of artificial intelligence comes with a deep misunderstanding about the relationship between intelligence and motivation. Human minds come with survival instincts, programmatic goals hammered out by hundreds of millions of years of evolution. Artificial intelligences aren’t going to have those goals, at least unless we put them there, and therefore will have no inherent motivation to be anything other than the tools they were designed to be.
Many people concerned about AI quickly concede that worries about it taking over the world out of a sheer desire to dominate are silly. What they worry about are poorly thought out goals. What if we design an AI to make paperclips, and it attacks its task too enthusiastically and turns the whole Earth, and everyone on it, into paperclips?
The big hole in this notion is the idea that we’d create such a system, then give it carte blanche to do whatever it wanted in pursuit of its goals, without building in any safety systems or sanity checks. We don’t give that carte blanche to our current computer systems. Why should we do it with more intelligent ones?
Perhaps a more valid concern is what motivations some malicious human, or group of humans, might intentionally put into AIs. If someone designs a weapons system, giving it goals to dominate and kill the enemy might well make sense to them. And such a goal could easily go awry, a combination of the two concerns above.
But even this concern rests on a big assumption: that there would be only one AI in the world with the capabilities of the one we’re worried about. We already live in a world where people create malicious software. We’ve generally addressed that problem by creating more software to protect us from the bad software. It’s hard to see why we wouldn’t have protective AIs around to keep any errant AIs in line and to stop maliciously programmed ones.
None of this is to say that artificial intelligence doesn’t give us another means to potentially destroy ourselves. It certainly does. We can add it to the list: nuclear weapons, biological warfare, overpopulation, climate change, and now poorly thought out artificial intelligence. The main thing to understand about this list is that it all amounts to things we might do to ourselves, and that includes AIs.
There are other possible problems with AI, but they’re much further down the road. Humans might eventually become the pampered centers of vast robotic armies that do all the work, leaving them to live out a role as a kind of queen bee, completely isolated from work and each other, their every physical and emotional need attended to. Such a world might be paradise for those humans, but I think most of us today would ponder it with some unease.
Charles Stross, in his science fiction novel Saturn’s Children, imagined a scenario where humans went extinct, their reproductive urge completely satisfied by sexbots indistinguishable from real humans but without the emotional needs of those humans, leaving a robotic civilization in humanity’s wake.
None of this strikes me as anything we need to worry about in the next few decades. A bigger problem for our time is the economic disruption that will be caused by increasing levels of automation. We’re a long way off from robots taking every job, but we can expect waves of disruption as technology progresses.
Of course, we’re already in that situation, and society’s answer so far to the affected workers has been variations of, “Gee, glad I’m not you,” and a general hope that the economy will eventually provide alternate opportunities for those people. As automation takes over an increasingly larger share of the economy, that answer may become less and less viable. How societies deal with it could turn out to be one of the defining issues of the 21st century.