I don’t share Stephen Hawking’s worry about AIs

This essay by three physicists, Stephen Hawking, Max Tegmark, and Frank Wilczek, along with computer scientist Stuart Russell, seems to be getting a lot of attention.  It keeps popping up in my feeds and showing up in various venues.

With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it’s tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

via Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’ – Science – News – The Independent.

As I indicated in one of the comment threads a few weeks ago, when physicists start talking about consciousness or artificial intelligence, I often find it cringeworthy: a good example of the fact that when brilliant scientists speak about matters outside their specialty, they often have little more insight than the rest of us.  I’d feel a little better about this essay if I got the impression that these guys had talked with a lot of working AI researchers, and perhaps some neuroscientists.

I’ve already written my own essay about why I’m not particularly worried about an AI revolt.  We tend to equate intelligence with a self-valuing agenda, a survival instinct.  But that doesn’t come automatically.  It only happened for us through billions of years of evolutionary programming.  We’re as unlikely to accidentally develop malevolent AIs as your local garbage dump is to spontaneously evolve into Godzilla.

No, an AI’s instincts will have to be painstakingly programmed by us.  Any accidents will be more likely to make it nonfunctional than malevolent.  I do think there is a real danger of unintended consequences from machines ardently trying to follow their programming, but that danger already exists with today’s computers and machines, and we haven’t destroyed ourselves yet.  Actually, AIs could arguably lessen that risk, since they would give machines better judgment in implementing their directives.

My concern about essays like this, aside from the anxiety they cause, is that they might lead politicians to decide they need to legislate AI research, to put limits and restrictions on it.  All that would do is cause the US and Europe to cede such research to other countries without those restrictions.  Legislation might eventually be needed to protect artificial life forms, but we’re still a ways off from that right now.

Am I completely sanguine on the dangers of AI?  No, but I’m not completely sanguine on the dangers of any new technology.  I’m personally a lot more worried about what we’re doing to the environment and our runaway population growth than I am about AIs turning on us.

14 thoughts on “I don’t share Stephen Hawking’s worry about AIs”

  1. Nice essay. There’s a categorical difference between being intelligent and being self-aware. Humans have a bit of both, so we think one leads to the other, but that’s not necessarily so. Self-awareness comes from having intentions, desires, fears, etc., and then recognizing that others have similar states. We do that by developing a theory of mind after living (and evolving) in groups. Computers haven’t really done anything comparable.


    1. I think it’s putting it too strongly to say computers haven’t done anything comparable. Computers competing in the stock market have outcomes they seek and outcomes they avoid, and may also model the knowledge and goals of their peers.

      OK, so maybe these are not “true” desires or goals, but it’s stretching it to say that they’re not comparable, and I would say that it’s possible that they’re not actually that different after all.


    2. Thanks Blake, and welcome!

      I think I agree with the main sentiment of your comment, but I would use different language. Self-awareness doesn’t necessarily imply self-concern. And without self-concern, why would an AI revolt or have its own agenda? The idea that a machine could be aware of itself but not care whether it is replaced by next year’s model can seem incomprehensible, but there’s no reason to think it would care unless we programmed it to.


      1. That’s a good point. It’s bizarre to think of a being that has no real sense of self-preservation or desire to live past achieving some objective before shutting down, but AIs might be like that.


  2. I’m generally with you, SAP, but as always I’ll see if I can articulate the opposing case.

    I agree that AIs are not automatically going to have any selfish instincts. However, I think a truly intelligent machine may be more inventively unpredictable than we anticipate, especially compared to today’s software. Whatever goals we give it, it may seek them in ways we are completely unprepared for, and it may even suffer spectacular hubristic failures. Just because it’s super-intelligent doesn’t mean it’s infallible, and intelligence sometimes brings with it the risk of being too smart for one’s own good. The biggest existential threats humans face today are the products of human intelligence, after all.

    One of the jobs we would probably want an AI to work on is the creation of further AIs. If there are many generations, it is possible that there may be some kind of selection effect or positive feedback loop that runs away with itself and we end up with something nasty we did not plan for. I guess this is the kind of spectacular hubristic failure I’m talking about, only compounded and amplified.

    That said, while I think such scenarios should be taken seriously and every precaution taken to safeguard against them, I am not so concerned that I am anything less than fully on board with AI research.


    1. DM, it sounds like we’re on the same page. I’m not sure if AIs designing AIs would be enough for selection pressures to kick in; it seems like you need mutations for that. But I can see the argument that they’d be complex systems whose results could be unpredictable.

      I can definitely foresee dangerous scenarios. One would be if someone created an artificial virtual environment designed to rapidly evolve artificial life, intelligent or otherwise. Such a rapid evolution environment would have to be unremittingly brutal, and anything that clawed its way out of it should probably be considered extremely dangerous. Another scenario might be if someone uploaded a shark’s mind and gave it the resources to uplift itself.

      That said, these aren’t that different from many of the biological weapon experiments that go on now, where a failure of containment could be catastrophic. I personally don’t see those current experiments as wise, and the scenarios I described above seem like they would be begging for disaster. Hopefully anyone who attempts them will build in ironclad safeguards.


      1. I don’t think you need random mutations for selection pressures to apply. AIs designing AIs is not that different from humans designing algorithms, and when humans design algorithms selection pressures dictate that the most useful algorithms are spread and built upon. As a result, you don’t find many trivial or uninteresting algorithms in computer science papers.

        I don’t know precisely what selection pressures I’m talking about, but it might be something like selecting for algorithms that appear to be able to write good algorithm-writing algorithms. Algorithms will be competing for limited hardware and energy resources after all, so we will only allow the most promising algorithms the time to work. What if there’s something nasty correlated with the ability of an algorithm to impress us?

        A rapid evolution environment would only produce something nasty if that environment selects for nastiness. It is equally probable that it could select for altruism.


        1. I guess it depends on how you define “random”, “mutation”, or “selection” 🙂

          Not quite sure how you could select for altruism, particularly if you were starting from scratch. Although I suppose you could vary the conditions at various points to get the results you wanted. But that almost seems like a roundabout form of programming.

          If the environment only aggressively selected for survival from beginning to end, it’s possible what came out might have developed some form of social cooperation, since that’s a survival advantage, at least in the right environments. But it’s not guaranteed. And any resulting entities would definitely have their own agenda.


          1. Selecting for survival is meaningless or tautological. Some environments and niches make altruism the best survival strategy.

            You could even, for instance, have an algorithm playing the role of God, rewarding kindness and punishing cruelty so that kind individuals are more likely to reproduce.

            Even in our own world, there are plenty of non-aggressive creatures which have managed to evolve. ‘Nature red in tooth and claw’ is a stereotype which doesn’t convey the rich tapestry of possible solutions to the problem of how to survive and reproduce.
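
            To make that concrete, here’s a toy sketch in Python (purely illustrative: every number and name in it is made up, and it isn’t meant to model real AI research) showing that the very same selection loop ends up with an aggressive or a cooperative population depending only on what the fitness function rewards:

            import random

            POP_SIZE = 100       # arbitrary toy parameters
            GENERATIONS = 200
            MUTATION_SD = 0.05

            def evolve(fitness):
                """Bare-bones selection loop over a single 'aggression' trait in [0, 1]."""
                population = [random.random() for _ in range(POP_SIZE)]
                for _ in range(GENERATIONS):
                    # Score everyone, keep the fitter half, and refill the population
                    # with slightly mutated copies of the survivors.
                    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
                    population = [
                        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, MUTATION_SD)))
                        for _ in range(POP_SIZE)
                    ]
                return sum(population) / POP_SIZE   # average aggression after selection

            # Two different "environments" run through the identical loop:
            red_in_tooth_and_claw = lambda aggression: aggression        # rewards aggression
            god_rewards_kindness = lambda aggression: 1.0 - aggression   # rewards cooperation

            print(evolve(red_in_tooth_and_claw))   # drifts toward 1.0: nasty
            print(evolve(god_rewards_kindness))    # drifts toward 0.0: kind

            The loop is identical in both runs; the only thing that changes is which behaviour the environment rewards, which is all I mean by saying it could just as easily select for altruism.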


          2. I don’t think aggressively selecting only for survival, without preference for what survives as long as it does survive, is as meaningless as you suggest. That said, this isn’t something I really have a passionate interest in debating.

            My original point was just to concede that if someone “grew” AIs or other entities using something along these lines, what resulted could be very dangerous. (Which, aside from the ethics, seems like a very good reason not to do it.)


  3. While most of this argument comes down to rhetoric and buzzwords on both sides, there is one part where I must agree with Hawking and the others. Preparation is key. While these negative outcomes are not likely, I would not like to have to say that we did not anticipate and prepare for them. I agree with you about the legislation. Politicians should not be trying to solve this problem. Rather, I say that the programmers and researchers involved need to gauge and prepare for the potential risks.


    1. The problem is that I’m not sure anyone really knows what it would mean to “prepare” for it. I think the probability of accidental runaway AIs is infinitesimal. If it does happen, I’m not sure what exactly we could do about it. It’s a little bit like preparing for an extraterrestrial invasion. It’s astronomically unlikely, but if it does happen, any preparations we could make would be like monkeys preparing to resist a real estate project.


      1. I think it’s quite unlike an extraterrestrial invasion in a couple of respects.

        1. There seems to be no real physical reason preventing AI, whereas the light-speed limit on travel makes an ET invasion much more unlikely.

        2. If AI happens, the way it happens will be under human control and so we have some influence over the outcome. ET is the ultimate deus ex machina, completely out of our hands and completely unpredictable.


  4. Nice article, SAP, but I’ll only start worrying when someone demonstrates that artificial intelligence is possible. I’m certainly not going to worry about machines developing intentions.


