Michael Chorost has an article at Slate about artificial intelligence (AI) and the dangers it might present. I find myself in complete agreement with the early portions of his piece, in which he explains why an AI would be unlikely to be dangerous in the way many fear.
To value something, an entity has to be able to feel something. More to the point, it has to be able to want something. To be a threat to humanity, an A.I. will have to be able to say, “I want to tile the Earth in solar panels.” And then, when confronted with resistance, it will have to be able to imagine counteractions and want to carry them out. In short, an A.I. will need to desire certain states and dislike others.
Today’s software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there’s no impetus to do anything. Today’s computers can’t even want to keep existing, let alone tile the world in solar panels.
This is similar to the arguments I’ve made: that humans, and all animal life, are essentially survival machines, or more precisely, gene propagation machines. The ultimate sources of all our actions are innate desires, instinctual impulses, “wants” in Chorost’s terminology, all evolved for their genetic success. I think he’s completely right that, without those wants, an AI isn’t going to be motivated to take over the world, wipe out humanity, or enact any of the other scenarios people often fear.
This isn’t to say that there’s no danger. Of course there is. But it’s the danger of giving poorly programmed systems too much unsupervised volition, a danger we already face and have, for the most part, handled with pretty good common sense so far. Despite all the fears expressed about drone attacks, it’s worth remembering that drones are currently remotely controlled by human operators, and the military has, so far at least, shown little enthusiasm for letting machines make life-or-death decisions.
Yes, we’re going to increasingly allow things like self-driving cars to make those kinds of decisions, but only after an exhaustive period of testing, and only once we become confident that the machines will make the right decisions at least as often as humans do, or more likely, far more often.
But Chorost’s article continues, and goes to a place I can’t follow. He seems to think that it would be a good thing to let AIs evolve.
To get a system that has sensations, you would have to let it recapitulate the evolutionary process in which sensations became valuable. That’d mean putting it in a complex environment that forces it to evolve. The environment should be lethally complex, so that it kills off ineffective systems and rewards effective ones. It could be inhabited by robots stuffed with sensors and manipulators, so they can sense threats and do things about them. And those robots would need to be able to make more of themselves, or at least call upon factories that make robots, bequeathing their successful strategies and mechanisms to their “children.”
…Now let’s say humans invent robots of this nature, and after successive generations they begin to have sensations. The instant an information-processing system has sensations, it can have moral intuitions. Initially they will be simple intuitions, on the order of “energy is good, no energy is bad.” Later might come intuitions such as reciprocity and an aversion to the harm of kin.
There is a major implied assumption here: that we are the inevitable result of evolution, or perhaps more broadly, that beings like us are.
The first part of this assumption is that if we put entities in an environment with strong selection pressures, we’ll inevitably get entities of increasing intelligence. But if we examine the evolutionary history of life, there is very little evidence for this. Humans are members of an unusually intelligent species who, despite centuries of scientific evidence to the contrary, still have a strong bias toward seeing ourselves, and our attributes, as the pinnacle of creation.
But if you look at the evolutionary history of Earth, it’s very difficult to see humans as inevitable. Our success seems to be the result of two unusual attributes. The first is a high degree of dexterity: an ability to manipulate the environment to our needs. It’s an ability we share with a pretty limited number of species: other primates, and perhaps some industrious social insects such as ants.
The second is a degree of intelligence unmatched in the animal kingdom. Great apes in general tend to be very intelligent, on a par with elephants, dolphins, crows, and cephalopods, but humans are in a class all our own. The thing is, if you look at the Paleolithic record, humanity almost didn’t make it. We’re a relatively minor branch of primates whose evolution was far from inevitable, and who were once only a natural disaster or two away from extinction.
Steven Pinker, in his book ‘How the Mind Works’, points out how unusual human intelligence is by noting an unusual attribute of another species: the elephant’s trunk. There are very few species with such trunks. An elephant might consider its trunk the pinnacle of evolution, but we, looking at it as just an unusual attribute, probably wouldn’t agree. Pinker points out that the evolution of human-level intelligence is just as improbable as the evolution of trunks. (A fact that doesn’t bode well for our finding intelligent extraterrestrial life anywhere near us.)
If sapient-level intelligence has a low probability, morality has an even lower one. Implying that moral intuitions are inevitable strikes me as spectacularly misguided. If we run an environment in the manner Chorost suggests, we can’t predict with any accuracy what might crawl out of it. It might have the morality of a shark, an evolutionarily successful animal that has no problem eating its own siblings in the womb.
In summary, I think that if we attempt to evolve AIs, chances are we aren’t going to get intelligence, but simply some other system that is very successful at surviving in the environment we grow it in. And if we do manage to get something intelligent, the idea that it will inevitably be moral seems dangerously naive.
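To make that concrete, here’s a minimal toy sketch of the kind of lethal-selection loop Chorost describes. Everything in it, the genome encoding, the fitness function, the population parameters, is invented for illustration; the point is simply that selection rewards whatever scores well in the environment, and nothing in the loop itself selects for intelligence or morality:

```python
import random

# Toy illustration (not Chorost's actual proposal): a lethal-selection
# loop rewards whatever genome scores well on the environment's fitness
# function. Nothing about the loop itself selects for intelligence.

GENOME_LEN = 8     # hypothetical: each gene is a "strategy weight"
POP_SIZE = 50
GENERATIONS = 100

def fitness(genome):
    # Stand-in environment: "survival" just means hoarding energy,
    # modeled here as the sum of the strategy weights.
    return sum(genome)

def mutate(genome, rate=0.1):
    # Children inherit their parent's strategy with small random changes.
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

population = [[random.gauss(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # "Lethally complex": the bottom half of each generation dies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Survivors bequeath mutated copies of their strategies to "children".
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.1f}")
# The winner is simply whatever maximized this fitness function;
# intelligence or moral intuitions would appear only if they happened to pay.
```

Whatever genome wins this loop is, by construction, just the best survivor of this particular environment. Swap in a richer environment and you still only get whatever exploits it best, which is exactly why I don’t think intelligence, let alone morality, falls out of the process for free.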
It seems to me that such an intelligence could be as dangerous as the worst imagined fears about AI. These entities would essentially be a form of life, with their own survival agendas, agendas that might clash starkly with human interests. It would be everything Nick Bostrom and others fear: an immensely powerful alien intelligence that might regard humanity as an obstacle to its ambitions.
Even if the hyper-evolution environment doesn’t produce intelligent entities, the resulting life forms could still be immensely dangerous. For example, the shark-like entity mentioned above wouldn’t need to be intelligent to be dangerous, nor would something like a technological version of a hyper-virulent Ebola-type virus. My attitude toward such an environment would be much like Elon Musk’s attitude toward AI in general: that building it could amount to “summoning the demon.”
The good news is that there’s no indication we’ll need to do anything like this to get most of the desired benefits of AI. We can have self-driving cars, robot maids, and many other benefits without the added complexity of evolved instincts. In other words, we can keep our AIs as tools rather than as slaves that could turn on us.