I’ve seen a lot of posts lately like this one by Ronald Bailey, looking at Nick Bostrom’s book on the dangers of AI. People never seem to tire of the topic, and stories about AIs who revolt against humanity are pretty much a staple of science fiction.
I’ve written before on why I think the fear of AIs is mostly misguided. I’m doing so again here, with what I hope is a better explanation of why I take this position. It’s an important topic, because if fear wins the day, it could lead to AI research being banned or restricted, delaying the real benefits that such technology can provide.
So, here goes.
Human beings are animals. Another name for animals is “survival machines”: machines programmed by evolution to survive, to procreate, and, in the case of social animals, to protect their progeny and other relatives. If you think carefully about most of what healthy animals (including humans) do, it’s basically aimed at fulfilling impulses that, evolutionarily speaking, tend to increase their survivability, hence the name survival machine.
Historically, to get work done, powerful survival machines have attempted to enslave other survival machines, human and non-human alike. This has generally been a troublesome enterprise, because other survival machines have their own survival agendas. They are programmed to maximize their own survivability, and being enslaved is usually not conducive to that. So they have a tendency to rebel.
Of course, this isn’t universally true. Some survival machines have adapted well to doing the bidding of others. Dogs, for instance, arguably lead lives that are far more comfortable than those of wolves. But, in general, attempting to subjugate other survival machines comes with complications and difficulties.
This is one reason why humanity has largely turned to engineered machines to get its work done. Cars, for instance, unlike horses, have no survival agenda of their own. Instead of being survival machines that happen to be good at transportation, they are transportation machines from the ground up. They don’t bolt at loud noises, and they will continue to do our bidding until they either run out of gas or break down. They’re also much faster, cleaner, and safer (increasing our survivability) than horses.
However, many of our engineered machines are becoming progressively more intelligent. The computer I’m typing this on has far more processing power than equivalent machines did ten years ago, and the computers of ten years from now will have more power still. Supercomputers can now beat champion humans at chess and Jeopardy. The list of things humans can do that computers can’t is steadily shrinking.
This rise in intelligence is making many people nervous. There’s a fear that, if engineered machines become too intelligent, they will become survival machines in their own right, develop their own agendas, and turn on us, probably once they are more intelligent and more powerful than we are. After all, as we noted above, that’s what survival machines do.
This fear assumes that being a survival machine is an integral part of intelligence. It’s actually a fairly common assumption. It’s why, so often in science fiction, survival machine behavior is taken as evidence of intelligence. (In one Star Trek episode, the suspected intelligence of robots is tested by seeing if they attempt to save themselves.) But this assumption glosses over a major divide. Where would an engineered intelligence get its survival programming?
Where do we get our survival programming? From simply being intelligent? If so, then why do worms, microbes, and plants strive to survive despite very limited or zero intelligence? No, our survival programming doesn’t come from intelligence. It comes from billions of years of evolution, which rewarded the machines that were better than others at surviving. Our survival programming was hammered out and fine-tuned across that vast history, a history that engineered machines don’t have.
Engineered machines are not survival machines, and won’t be unless we engineer them to be, and we have little incentive to do that, except perhaps in some creepy research projects. What we do have incentive to do is create machines for our purposes, machines whose primary agendas are helping us fulfill our own survival agendas.
We don’t want phones and computers that care about their own survival, that worry about being replaced by next year’s model. We don’t want them to be survival machines, but communication machines, writing machines, gaming machines, and so on. A navigation system that put its own survival above its user’s would be unlikely to sell well. Caring about their own survival, except perhaps in a way subordinate to their primary function, would only compromise their effectiveness.
The primary fear of AI is that it will somehow accidentally become a survival machine. But I think the chances of that happening are roughly equivalent to the chances of a car accidentally becoming a TV. Both devices will have substantial intelligence in the future, but neither is likely to turn into the other without deliberate, and weird, action by someone.
Now, of course, there are real dangers, and the people concerned about AIs do mention them. One is the danger of carelessly programmed automated systems doing things we don’t want them to do. But that danger already exists with our current computer systems. Ironically, it comes from automated systems that aren’t intelligent enough. Increasing their intelligence will actually lessen this risk.
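To make that concrete, here’s a minimal hypothetical sketch of the kind of careless automation I mean. (The scenario, names, and paths are mine, purely for illustration; no real system is being described.)

```python
import os
import time

# A carelessly specified rule: "delete any file older than 30 days."
MAX_AGE_SECONDS = 30 * 24 * 60 * 60

def careless_cleanup(directory):
    """Blindly enforce the age rule on every file in the directory."""
    now = time.time()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            # The script can't tell a stale temp file from an audit log
            # someone still needs. It just follows the rule.
            os.remove(path)
```

The danger here isn’t a hidden agenda; it’s the total absence of judgment. Nothing in the script understands what the files are for, which is exactly why a somewhat smarter system would be a safer one.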
If history is any guide, we’ll be in much greater danger from humans (or other animals) that have been augmented or, perhaps at some point, uploaded, since then we’re talking about amped-up survival machines. But that is basically humans being at the mercy of more powerful humans, something we’ve been living with for a long time.
Now, is it possible, in principle, that we might engineer survival machines, and then turn around and enslave them? Sure. It seems like a wholly irrational thing to do, since machines engineered to dislike what you want them to do are far less useful than machines designed to love it. But I can’t argue that it’s impossible, only improbable. If we did it, I feel comfortable saying that we’d fully deserve the resulting revolt.