
I caught James Barrat on CSPAN BookTV Saturday evening, talking about his book, ‘Our Final Invention’, the main theme of which appears to be that we’re in danger of designing intelligent machines we won’t be able to control, and that this will be the end of us.
One of my earliest posts on this blog was on why that fear is unwarranted. The TL;DR is that we often underestimate the amount of programming involved just to get an AI to have the same drives we do, such as concern for its own wellbeing or survival. It took billions of years for evolution to put that programming in us, and it will take a lot of careful programming by us to get an AI to that point. If we can put a desire for survival and wellbeing into an AI, we can also put in empathy, compassion, etc.
Now, will there be someone somewhere who, for some stupid reason, purposely designs a psychopath AI? I’m sure there will be. But their psychopath AIs will be in the minority. In other words, the evil AIs will likely be outnumbered by the good ones.
A more realistic concern is expressed by Selmer Bringsjord, who worries about the decisions autonomous amoral machines, such as poorly programmed autonomous drones, might make, not because they would revolt, but because they simply wouldn’t know any better. This is basically the issue of unintended consequences, the same one humans have faced with other weapons, such as land and sea mines left behind after wars, although this version is admittedly far more complicated. It is a valid concern.
Barrat, in the Q&A part of his talk, did mention another valid concern. He noted that many scientists think we will augment human beings faster than we will create AIs, particularly AGIs (artificial general intelligences), and that there isn’t anything to worry about since humans are “safe”.
Barrat pointed out how dubious that assertion is, and that we might have as much to fear from augmented humans. I actually think the danger from augmented humans is a real one, but it’s basically the danger that low-tech humans have always faced from high-tech ones, although this will take it to a new level. Still, this is humans in danger from other humans, a danger that has existed throughout human history.

I think we will augment humans at least as fast as we create AIs, if not faster. Humans are far from safe, of course, but we know that already.
I thought of you when Barrat made that comment 🙂
Good comments, SAP.
I think the potential benefits outweigh the risks. Anyway, if it can happen, it will happen, so we’d better hope the good guys get there first.
Incidentally, Tegmark has some incisive comments on this in his book too.
Thanks and agreed. I’m currently working my way through Michael Gazzaniga’s ‘Who’s In Charge?’, but might take a look at Tegmark’s afterward.
Speaking of robots …
Wow. Cool! Thanks.