This essay by three physicists (Stephen Hawking, Max Tegmark, and Frank Wilczek) and one computer scientist, Stuart Russell, seems to be getting a lot of attention. It keeps popping up in my feeds and in various venues.
With the Hollywood blockbuster Transcendence playing in cinemas, with Johnny Depp and Morgan Freeman showcasing clashing visions for the future of humanity, it’s tempting to dismiss the notion of highly intelligent machines as mere science fiction. But this would be a mistake, and potentially our worst mistake in history.
Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.
As I indicated in one of the comment threads a few weeks ago, when physicists start talking about consciousness or artificial intelligence, I often find it cringeworthy. It's a good example of how brilliant scientists, speaking outside their specialty, often have little more insight than the rest of us. I'd feel a bit better about this essay if I got the impression that its authors had talked with many working AI researchers, and perhaps some neuroscientists.
I’ve already written my own essay about why I’m not particularly worried about an AI revolt. We tend to equate intelligence with a self-valuing agenda, a survival instinct. But that doesn’t come automatically. It only arose in us through billions of years of evolutionary programming. We’re about as likely to accidentally develop malevolent AIs as your local garbage dump is to spontaneously evolve into Godzilla.
No, an AI’s instincts will have to be painstakingly programmed by us, and any accidents are more likely to leave it nonfunctional than malevolent. I do think there is a real danger of unintended consequences from machines ardently following their programming, but that danger already exists with today’s computers and machines, and we haven’t destroyed ourselves yet. If anything, AI could arguably lessen that risk, since it would give machines better judgment in implementing their directives.
My concern about essays like this, aside from the anxiety they cause, is that they might lead politicians to decide they need to legislate AI research, to put limits and restrictions on it. All that would accomplish is causing the US and Europe to cede such research to other countries without those restrictions. Legislation might eventually be needed to protect artificial life forms, but we’re still a ways off from that.
Am I completely sanguine about the dangers of AI? No, but I’m not completely sanguine about the dangers of any new technology. Personally, I’m far more worried about what we’re doing to the environment, and about runaway population growth, than I am about AIs turning on us.