Why an AI revolt is unlikely

From time to time, articles or blog posts appear expressing anxiety about what will happen when we finally achieve artificial intelligence.  The thinking goes that such a mind would quickly be able to design and build a better version of itself and in no time we’d be facing an overwhelmingly superior intelligence which may or may not have our best interests in mind.  The extinction of the human race might not be far behind, or so the reasoning goes.

Books written about the singularity talk about the dangers humanity could face.  Our AI servants may become our overlords.  This anxiety has a long history in science fiction, going back at least as far as Frankenstein, and probably much further.  HAL from 2001: A Space Odyssey, Skynet from the Terminator series, and Agent Smith from The Matrix are relatively recent examples.

This has become such a staple of modern thought that it is rarely examined closely.  However, unpacking the concern shows much of it to be questionable.  First, let’s define clearly what that anxiety is.  An artificial intelligence, with superior processing and memory capabilities, would come to value its own existence over ours, turn on us, and even enslave or eradicate us for not fitting within its plans.  The assumption is that when an intelligence reaches a certain level of sophistication, it will be not only self-aware, but self-preserving.  It will have a survival instinct, perhaps even an instinct to propagate.  And it might see us as competitors.

Let’s stop for a minute and consider what a computer does.  Without software, without programming, it does nothing.  It has no motivations of its own.  Unless programmed to have a goal, it has none.  Would that change as its power and sophistication increase?  Consider that the laptop I’m typing this post on has more processing power than the brain of a worm, or an ant, or perhaps even a bee.  But my laptop has never shown any inclination to act like any of these animals.  Why?  Because it’s not programmed to.

But wait.  Animals aren’t programmed, so how do they have motivations and impulses to do the things they do?  Well, actually, they are programmed.  That programming, which we usually refer to as instinct, is the result of hundreds of millions of years of evolution.  Natural selection ensured that only animals with a survival instinct passed on their genes.  Evolution has programmed animals to seek food, to procreate, and to survive as long as possible; it has programmed mammals to protect their young, and social animals to function within their social order.

Humans, of course, are animals and have this same base programming.  It is so central to our being that we tend to equate it with intelligence.  But isn’t it true that we are more flexible with it than many other animals?  Don’t humans also have reason, the ability to think analytically?  We do, but as David Hume pointed out:

Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

Without passion, without emotion, without intuition, without instinct, reason is impotent.  Reason is the analytical engine without any motivation of its own.  It’s the computer without programming.  Your programming comes from your instincts.  Without those instincts, you would have no motivation to get out of bed in the morning, to eat, to mate, to seek to better yourself, or to do anything at all.  You may use reason to determine intermediate goals, but those goals will ultimately be in service of some instinct, or combination of instincts.  Even when you override an instinct, such as when you go on a diet, you do so for a longer-term goal that satisfies some other instinct, such as finding a mate or improving your health (survivability).
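To make the Hume analogy concrete in software terms, here is a toy sketch in Python (with entirely hypothetical names, not any real planning library): a general-purpose planner that embodies “reason” as a goal-neutral search over actions.  It is perfectly competent, but it is inert until an objective, the stand-in for “passion,” is handed to it from outside.

```python
# A toy illustration: "reason" as goal-neutral search.  The planner
# can evaluate action sequences, but it has no objective of its own.

def plan(state, actions, step, objective, depth=3):
    """Return (action_sequence, final_state) that best satisfies
    `objective`.  The search machinery is the "reason"; `objective`
    is the "passion" -- remove it and there is nothing to rank."""
    if objective is None or depth == 0:
        return [], state                  # no motivation, no behavior
    best_seq, best_end = [], state
    best_score = objective(state)
    for action in actions(state):
        seq, end = plan(step(state, action), actions, step,
                        objective, depth - 1)
        if objective(end) > best_score:
            best_score = objective(end)
            best_seq, best_end = [action] + seq, end
    return best_seq, best_end

# The same planner pursues whatever goal it is handed -- and pursues
# nothing when it is handed none.
actions = lambda s: [+1, -1]              # possible moves on a number line
step = lambda s, a: s + a
near_ten = lambda s: -abs(s - 10)         # the externally supplied "instinct"

print(plan(0, actions, step, near_ten))   # ([1, 1, 1], 3): inches toward 10
print(plan(0, actions, step, None))       # ([], 0): no goal, no behavior
```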

The probability that a machine, without the benefit of millions of years of evolution, will accidentally acquire these instincts is about as remote as the proverbial room of monkeys producing Shakespeare.  Someone would have to program the machine to have those instincts.  And if we can program it to have the instincts we fear, we can also program it to have the ones that would keep us safe.

Of course, the most likely scenario is that we will program it to accomplish the task we created it for.  It’s hard to imagine a market demand for a navigation AI that is concerned about its own wellbeing.  Indeed, a navigation AI’s impulses would all involve navigation, and success at navigating might give it as much gratification as we get from eating or sex.  A navigation AI would regard the idea of its own disassembly with none of the dread that those of us with survival-instinct programming feel.
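As a sketch of that point (again hypothetical, in the same toy Python style as above): think of the navigation AI’s “gratification” as a reward function.  Its entire motivational life is whatever terms a designer writes into that function, and nothing else; and, per the earlier suggestion, one of those terms can weight human safety far above everything else.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    reached_destination: bool
    minutes_late: float
    fuel_used: float
    humans_endangered: int

def navigation_reward(trip: Trip) -> float:
    """A hypothetical reward function for a navigation AI."""
    reward = 100.0 if trip.reached_destination else 0.0
    reward -= 0.1 * trip.minutes_late           # "dislikes" tardiness
    reward -= 0.05 * trip.fuel_used             # "dislikes" waste
    reward -= 1000.0 * trip.humans_endangered   # the instinct we chose to add
    # Note what is absent: there is no term for the system's own continued
    # existence.  Disassembly never appears in the quantity it is built to
    # maximize, so it has nothing to "dread" about it.
    return reward

print(navigation_reward(Trip(True, 5.0, 12.0, 0)))   # 98.9
```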

Of course, there is a danger that the programming we give AIs could have unintended consequences, but that is a danger we already face with modern automation.  The most common result of unintended consequences in these systems is a system that simply malfunctions or stops working.  There is also the danger that someone could purposely build a Skynet, but again, destroying ourselves is a danger we already face.

8 thoughts on “Why an AI revolt is unlikely”

  1. Thanks for commenting on my poem about CAPTCHA.  I too was once an IT manager, although my wife now claims that I have become an incorrigible Luddite.  I follow developments in AI with interest.

  2. The problem is that we don’t want an AI that can change a light bulb or predict the weather – we want an AI that can accomplish a wide variety of unstructured tasks. That means that it wouldn’t be programmed in the way that we currently program our laptops. It would need a capacity for independent thought and, yes, motivation. That would make it like us, and make it unpredictable.

    However, as I see it, we would not create machines in exactly that way.  If we could create a super AI, I wouldn’t want it to do jobs around my house; I would want to use it to augment my own intelligence.  I would want to become an artificially enhanced intelligence, or a transhuman.

    Then the problem goes away, or at least is diminished.  We may create completely autonomous intelligences as well, but they would live alongside us.

    1. I agree that we’ll want to give them a lot of leeway.  However, and maybe I’m just missing something, self-preservation and valuing oneself above others, traits we have from evolution, aren’t something I can see us needing to add to an AI whose mission in life is to drive a delivery truck.  Actually, I would think we’d program it with very strong instincts to preserve human life above its own existence or anything else.

      I definitely agree that we will seek to augment ourselves. This gets back to our discussion the other day on mind uploading and all the rest. I suspect we’d be in more danger from an uploaded or augmented human psychopath than from an AI.

      I’d also be worried if someone uploaded or uplifted a non-human predator of a non-social species. Think what a shark, suddenly given intelligence and resources, might do.
