We’re finally starting to see some pushback against the AI (artificial intelligence) alarmism that has been so prevalent in the media lately. People like Stephen Hawking, Elon Musk, Max Tegmark, and many others have sounded the alarm. Given my previous post from last night, I think these alarms are premature at best and generally misguided.
Now, Rodney Brooks, of Roomba fame, has a post up telling people to chill about AI.
Recently there has been a spate of articles in the mainstream press, and a spate of high profile people who are in tech but not AI, speculating about the dangers of malevolent AI being developed, and how we should be worried about that possibility. I say relax. Chill. This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.
…In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.
The science is in and accepted on the world being round, evolution, climate change, and on the safety of vaccinations. The science on AI has hardly yet been started, and even its time scale is completely an open question.
And this Edge discussion, titled “The Myth of AI”, is getting shared around a lot. I found it a bit long-winded and rambling, but it expresses a lot of important points.
About the only thing I disagree with in these posts is how much they emphasize how far away we currently are from having AGI (artificial general intelligence), as opposed to the specialized AI we have today. It’s totally true that we are very far away from an AGI, but I think offering people only that reassurance leaves out the main reason they shouldn’t freak out.
As I’ve written about multiple times, the fear of AI is the fear that it will have its own agenda, similar to how we and other animals typically have our own. But our agenda is largely shaped by hundreds of millions of years of evolution. AIs aren’t going to have that history. The only agenda they will have, the only desires, impulses, etc., will be the ones they are engineered to have. The chance of them accidentally acquiring the self-actualization agenda that most animals have is infinitesimal.
This is easier to conceive of if we call AIs “engineered intelligences”, whose main agenda will be an engineered one, in contrast with “evolved intelligences”, whose main agenda is typically survival, procreation, and anything that promotes those goals.
Of course, we might eventually have the ability to build an AI to have an agenda similar to ours. But if we do that, and treat them as anything less than a fellow being, I think we’d deserve whatever happened next. Luckily, we have no real incentive to design machines that would hate what we want them to do. We have every incentive to design machines that will love what we want them to do. As long as we do that, the danger from AI will be minimal.