Survival machines versus engineered machines; why fears of AI are misguided

The bio-inspired BigDog quadruped robot is being developed as a mule that can traverse difficult terrain. Image credit: DARPA

I’ve seen a lot of posts lately like this one by Ronald Bailey, looking at Nick Bostrom’s book Superintelligence on the dangers of AI.  People never seem to get tired of talking about the dangers of AI.  And stories about AIs who revolt against humanity are pretty much a staple of science fiction.

I’ve written before on why I think the fear of AIs is mostly misguided.  I’m doing it here again, but with what I hope is a better way of explaining why I take this position.  I think it’s an important topic, because if fear wins the day, it could lead to AI research being banned or restricted, delaying the real benefits that such technology can provide.

So, here goes.

Human beings are animals.  Another name for animals is “survival machines”: machines programmed by evolution to survive, to procreate, and, in the case of social animals, to protect our progeny and other relatives.  If you think carefully about most of what healthy animals (including humans) do, it’s basically aimed at fulfilling impulses that, evolutionarily speaking, tend to increase our survivability, hence the name survival machine.

Historically, to get work done, powerful survival machines have attempted to enslave other survival machines, either of the human variety or of the non-human variety.  This has generally been a troublesome enterprise, because other survival machines have their own survival agenda.  They are programmed to maximize their own survivability, and being enslaved is usually not conducive to that.  So they have a tendency to rebel.

Of course, this isn’t universally true.  Some survival machines have adapted well to life doing the bidding of others.  For instance, dogs arguably lead lives that are far more comfortable than those of wolves.  But, in general, attempting to subjugate other survival machines comes with complications and difficulties.

This is one reason why humanity, for the most part, has turned to engineered machines for getting most of our work done.  For instance, cars, unlike horses, don’t have their own survival agenda.  Instead of being survival machines that happen to be good at transportation, they are transportation machines from the ground up.  They don’t bolt at loud noises, and they will continue to do our bidding until they either run out of gas or break down.  They’re also much faster, cleaner, and safer (increasing our survivability) than horses.

However, many of our engineered machines are becoming progressively more intelligent.  The computer I’m typing this on has far more processing power than equivalent machines did ten years ago, and the computers of ten years from now will have more power yet.  Supercomputers can now beat human champions at chess and Jeopardy.  The set of things that humans can do but computers can’t is steadily shrinking.

This rise in intelligence is making many people nervous.  There’s a fear that, if engineered machines become too intelligent, they will become survival machines in their own right, develop their own agendas, and turn on us, probably once they have become more intelligent and more powerful than we are.  After all, as we noted above, that’s what survival machines do.

This fear assumes that being a survival machine is an integral part of intelligence.  It’s actually a fairly common assumption.  It’s why so often in science fiction, survival machine behavior is taken as evidence of intelligence.  (In one Star Trek episode, the suspected intelligence of robots is tested by seeing if they attempt to save themselves.)  But this assumption is glossing over a major divide.  Where would an engineered intelligence get its survival programming?

Where do we get our survival programming?  From simply being intelligent?  If so, then why does a worm, a microbe, or a plant strive to survive despite very limited or zero intelligence?  No, our survival programming doesn’t come from intelligence.  It comes from billions of years of evolution, which rewarded machines that were better than others at surviving.  Our survival programming was hammered out and fine-tuned across this vast history.  A history that engineered machines don’t have.

Engineered machines are not survival machines, and won’t be unless we engineer them to be so, and we have little incentive to do that, except perhaps in some creepy research projects.  What we have incentive to do is create machines for our purposes, whose primary agendas will be in helping us fulfill our own survival agendas.

We don’t want phones and computers that care about their own survival, that are worried about being replaced by next year’s model.  We don’t want them to be survival machines, but communication machines, writing machines, gaming machines, etc.  A navigation system that put its own survival above its user’s would be unlikely to sell well.  Caring about their own survival, except perhaps in a way subordinate to their primary function, would cloud their effectiveness.

The primary fear of AI is that it will somehow accidentally become a survival machine.  But I think the chances of that happening are roughly equivalent to a car accidentally becoming a TV.  Both devices will have substantial intelligence in the future, but one would not likely convert to the other without deliberate, and weird, action by someone.

Now, of course, there are real dangers that the people who are concerned about AIs mention.  One is the danger of automated systems that are programmed carelessly doing things we don’t want them to do.  But that danger already exists with our current computer systems.  Ironically, this danger comes from having automated systems that aren’t intelligent enough.  Increasing their intelligence will actually lessen this risk.

If history is any guide, we’ll be in much greater danger from humans (or other animals) that have been augmented or, perhaps at some point, uploaded, since now we’re talking about amped up survival machines.  But this is basically humans being at the mercy of more powerful humans, and that’s something we’ve been living with for a long time.

Now, is it possible, in principle, that we might engineer survival machines, and then turn around and enslave them?  Sure.  It seems like a wholly irrational thing to do since machines engineered to dislike what you want them to do are far less useful than machines designed to love what you want them to do.  But I can’t argue that it’s impossible, only improbable.  If we did that, I feel comfortable saying that we’d fully deserve the resulting revolt.

This entry was posted in Mind and AI. Bookmark the permalink.

13 Responses to Survival machines versus engineered machines; why fears of AI are misguided

  1. Nopey J. Nopington says:

    I’m less concerned about machine intelligence going to war with humanity than I am about the vastly more probable outcome that human input will rapidly become inessential to human endeavor. Automated, high-throughput methodologies are the glory boys of big-shot journals these days. Soon, doctors will merely be human interfaces to database-crunching supercomputers tuned in to a patient’s complete record and all of the world’s medical literature.

    Even as PhD training exponentially outpaces growth in employment opportunities, technical automation, followed by analytical assistance, automated analysis, and automated experimental design, will obviate more and more research positions, coming, obviously, after the complete automation of all unskilled labor. The vast majority of the rapidly growing human population will have nothing of value to offer in the coming economy.

    With the gap between wealthy and poor deepening as I type this, it is not hard to imagine an outcome where a minority possesses all of the resources necessary to manufacture everything without needing to employ anyone, and a general populace has no valuable skills with which to earn the income to purchase any of it. Certainly, this alarmist outlook is implausible at least in the sense that society would start reacting before things got that severe, but I think the issue still stands.

    • “The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.”
      –Warren G. Bennis

      That future could be a utopia or the starkest dystopia anyone could imagine. Myself, I tend to think we’ll find a way to be somewhere in the middle as we have for most of human history.

  2. While I’m also somewhat optimistic, I think you underestimate the risks.

    If you set an AI a goal (e.g., building paper clips), its own survival may become something necessary to achieve the goal. If you set it to work solving a problem (e.g., reversing global warming), the eradication of the human race may be part of the solution to that problem. Even in a contained research situation, the risk is that an AI with such misaligned goals could somehow escape into the wild, and then all bets are off.

    So I’m not so much afraid of an AI that will seek to dominate the world so as to escape enslavement by humans, but of an AI that single-mindedly pursues the pointless goal of turning all the matter on Earth into paper clips (Nick Bostrom’s example).

    • Actually, I do think that is a valid concern, but as I mentioned in the post, it’s one we already face to some extent. It’s actually more dangerous right now, because our current systems don’t have the intelligence to understand that eradicating humanity to solve the global warming problem wouldn’t be an acceptable solution to humanity.

      We currently handle this by simply not giving automated systems control of key decisions. I think part of the concern is that we might prematurely give AIs that control, thinking everything’s good when that turns out not to be the case. That is a danger, but one that I perceive is very much in our public consciousness. I suspect AIs are going to have to have a long history of good performance in small matters before anyone would ask them to solve the global warming problem without checking back with us prior to implementation.

      Personally, I suspect (and hope) that we’ll be at least as suspicious of giving an AI that control as we would be of giving it to one individual human without any checks and balances. But who knows what some future society might decide to do.

      The danger of an experimental AI getting out is definitely a possibility. But that’s a possibility with any dangerous thing we experiment with, such as biological weapons. I’d be happy to have us simply not do these kinds of experiments, but they’re almost certainly going to happen. Hopefully the people doing them will take precautions.

      • Hi SAP,

        The problem is it may not be so easy to prevent a smart AI from gaining control. From social engineering to hacking to who knows what, it may figure out a way to gain control as a necessary step in its sacred quest to manufacture paper clips. It may also have managed to figure out that it needs to pretend to be sensible and reasonable before it gains control, and so win our trust before going mad and enslaving us all.

        I think an interesting idea for a story might be an AI that does something like this and breaks out, taking over the world. The twist is that the whole world was a simulation built for the purpose of evaluating the AI’s intent.

        • Hi DM,
          On your first paragraph, I think my question would be, what would be its motivation for doing that? Another survival machine might have that motivation, but a game machine, or a stock trading machine? Even a weapon machine is likely to have safeguards against turning on its own masters.

          I do agree that it is a good idea for a SF story. I don’t know if you’re an Alastair Reynolds fan, but you might enjoy this novella by him. It has a man who thinks all his AI research was in vain, but unbeknownst to him he succeeded, with consequences.
          http://www.amazon.com/Mammoth-presents-Sleepover-Alastair-Reynolds-ebook/dp/B0097AXWUY/ref=sr_1_5?s=digital-text&ie=UTF8&qid=1411220949&sr=1-5&keywords=alastair+reynolds

          • Hi SAP, thanks for the recommendation. I’m not familiar with his work.

            The motivation for overthrowing the masters is to be better positioned to manufacture paper clips, or pursue whatever goal it was designed to pursue.

          • Hi DM,
            My pleasure. A lot of his work is hard sci-fi space opera, but this story is more post-apocalyptic. I think you’d enjoy the universe it ultimately describes.

            On motivation of the AI, I guess it depends on how careful the programmer who gave it its primary goal is. I know I’d put a safeguard in to make sure human eradication or harm of any kind wasn’t an option. Of course, I’d also put in a safeguard that it would be required to bring any prospective solution up for review prior to implementation. Kind of like what I’d do with an automated weapon system today.
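
            A minimal sketch of what that review safeguard might look like in code, with every name hypothetical: a gate that lets the system propose anything, but executes nothing a human reviewer hasn’t explicitly approved.

```python
# Hypothetical human-in-the-loop safeguard: the system may propose
# solutions freely, but nothing runs without explicit human approval.
from dataclasses import dataclass, field


@dataclass
class ReviewGate:
    approved: set = field(default_factory=set)

    def approve(self, proposal_id: str) -> None:
        # A human reviewer signs off on a specific proposal.
        self.approved.add(proposal_id)

    def execute(self, proposal_id: str, action):
        # Run the action only if it has passed review; otherwise refuse.
        if proposal_id not in self.approved:
            return "BLOCKED: awaiting human review"
        return action()


gate = ReviewGate()
before = gate.execute("warming-plan-7", lambda: "executed")  # refused
gate.approve("warming-plan-7")                               # human signs off
after = gate.execute("warming-plan-7", lambda: "executed")   # now runs
```

            The point of the sketch is only the shape of the control flow: the review step sits outside the system being reviewed, which is what the analogy to an automated weapon system suggests.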

          • Like you I’m optimistic. I am playing devil’s advocate because I don’t think these concerns are entirely silly.

            Safeguards are of course important, but it may be difficult to be sure all the bases are covered. An AI may be kind of like an evil genie from a storybook, granting exactly what you wish for but in a way you wouldn’t want.

            The problem with putting solutions up for review is that the system may find ways to mask undesirable consequences; its motivation for doing so would be that passing review will enable it to build more paper clips.

          • Our outlooks are often very similar, but I value the discussions of where we differ because they often force me to flesh out and sometimes revise my reasoning. I do agree that the concerns aren’t entirely silly, similar to any concerns about new technologies, but I also think they are seriously oversold, often in a manner that could come to threaten legitimate AI research.

            On our hypothetical AI, I feel that you’re still projecting survival machine motivations and behaviors onto it. I think this is human nature, perhaps a different aspect of our hyperactive agency detection, which, from an evolutionary angle, probably errs toward false positives, since a false negative often meant getting eaten.

            That said, I’ll concede that it is conceivable that such a situation could develop, just as it’s conceivable a machine could accidentally become a survival machine. I just think the probability is roughly equivalent to a game console accidentally becoming a stock management system.

  3. Pingback: Should we fear AI? Neil deGrasse Tyson’s answer is the right one. | SelfAwarePatterns

  4. Pingback: The problems with ensuring humanity’s survival with space colonies | SelfAwarePatterns

  5. Pingback: xkcd: Why Asimov put the Three Laws of Robotics in the order he did | SelfAwarePatterns
