HuffPost has an article up with quotes from various people on the dangers, or non-dangers, of artificial intelligence. They include the usual suspects: Elon Musk, Stephen Hawking, Bill Gates, etc. Most of them express concern about the dangers. But I think Neil deGrasse Tyson’s is the only answer from this group worth listening to.
Some people in the comments are saying things to the effect of, “Lack of emotions is what we’re afraid of,” but I think they’re mistaken. They’re not afraid of an emotionless being; they’re afraid of a being with only selfish emotions.
As I’ve written about before, emotions are the instincts, the programming that evolution codes into living organisms. Without that programming, we wouldn’t be motivated to do anything, just as an AI wouldn’t be. An AI’s motivations are going to come from whatever programming we give it. As long as we’re not idiotic enough to program animal emotions into it, it won’t have them. Without those emotions, what would motivate it to take over the world?
Now, I’m sure some researcher is going to build an isolated AI with those emotions, just to see if it will work. But those types of AI are unlikely to have much market appeal. Most of the AIs in pervasive use will be those whose primary motivations are to fulfill whatever purpose we engineered them for.
Are there dangers with AI, particularly in the realm of unintended consequences? Sure. But they’re the dangers of any powerful technology, and we’re already living with them. Ask anyone who has ever had to do a software update on a heavily used computer system. The chief dangers in that realm aren’t from systems that are too intelligent, but from ones that aren’t intelligent enough.
Incidentally, being an expert in computer technology or in theoretical physics does not make one an expert in how minds work. Most of the “experts” worried about the dangers of AI lack expertise on at least half the equation. It’s why predictions for achieving a human-equivalent artificial intelligence are always 20 years in the future. Those predictions are often right about the technological details, but wrong about what will be needed to achieve human-level intelligence.