Push back against AI alarmism

We’re finally starting to see some pushback against the AI (artificial intelligence) alarmism that has been so prevalent in the media lately.  People like Stephen Hawking, Elon Musk, Max Tegmark, and many others have sounded the alarm.  Given my previous post from last night, I think these alarms are premature at best, and generally misguided.

Now, Rodney Brooks, of Roomba fame, has a post up telling people to chill about AI.

Recently there has been a spate of articles in the mainstream press, and a spate of high profile people who are in tech but not AI, speculating about the dangers of malevolent AI being developed, and how we should be worried about that possibility. I say relax. Chill.  This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.

…In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out manipulate people, and to have a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch.  It is going to take a lot of deep thought and hard work from thousands of scientists and engineers.  And, most likely, centuries.

The science is in and accepted on the world being round, evolution, climate change, and on the safety of vaccinations. The science on AI has hardly yet been started, and even its time scale is completely an open question.

And this Edge discussion, titled ‘The Myth of AI’, is getting shared around a lot.  I found it a bit long-winded and rambling, but it makes a lot of important points.

About the only thing I disagree with in these posts is how much they emphasize how far away we currently are from having AGI (artificial general intelligence), as opposed to the specialized AI we have today.  It’s totally true that we are very far away from AGI, but I think comforting people with only that point leaves out the main reason they shouldn’t freak out.

As I’ve written about multiple times, the fear of AI is the fear that it will have its own agenda, similar to how we and other animals have our own agendas.  But our agenda is largely shaped by hundreds of millions of years of evolution.  AIs aren’t going to have that history.  The only agenda they will have, the only desires, impulses, etc., will be the ones they are engineered to have.  The chance of them accidentally acquiring the self-actualization agenda that most animals have is infinitesimal.

This is easier to conceive of if we call AIs “engineered intelligences”, whose main agenda will be an engineered one, in contrast with “evolved intelligences”, whose main agenda is typically survival, procreation, and anything that promotes those goals.

Of course, we might eventually have the ability to build an AI with an agenda similar to ours.  But if we do that, and then treat them as anything less than fellow beings, I think we’d deserve whatever happened next.  Luckily, we have no real incentive to design machines that would hate what we want them to do.  We have every incentive to design machines that will love what we want them to do.  As long as we do that, the danger from AI will be minimal.

10 thoughts on “Push back against AI alarmism”

  1. Killer Terminator robots are probably a long way off, but there is some concern about limited AI as a deployed technology. Drones with a limited capability to pick their own targets, for example. Great progress has been made on four-legged robot beasts of burden. There is some risk in just having powerful ambulatory machines that aren’t that smart or aware. It’s a short jump to the idea of arming them. Hey, walking drones!

    Given that software can never be perfect, the combination of software and powerful machines isn’t to be taken lightly. Even auto-piloting of cars and planes will be an interesting experiment. As silly as it sounds, it’s not too early to start talking about this stuff (but, yeah, the media sure does love to run with a new toy — the new viral reality).

    1. That’s a good point. What’s interesting about it isn’t the artificial intelligence part, but that the intelligence isn’t good enough yet. The danger is that we might assume the intelligence is ready before it actually is. Right now, that’s generally handled by having humans involved in any important actions. It’s always been a danger, from the first automated systems, that we might get carried away and let too much be automated.

      That said, I’m really looking forward to my self-driving car, when it’s ready. The ability to read, sleep, or blog 🙂 on the way to or from work will be awesome!

  2. I like the distinction between “engineered” and “evolved” intelligence a lot. I think it also makes the case for developing specialized AI over AGI – why bother creating machines that are like us (good at a lot of things, generally, but evolved in a mainly reactive way) when we can engineer (in a deliberate and active way) specialist machines that can do one category of things far better, or at least faster, than we can (like OCR)?

    1. Exactly! I think what we’ll see is those specialty intelligences becoming increasingly better at their tasks. Their deepest drives will be to accomplish those tasks. For them, it might be as satisfying as we find eating or reproducing. Of course, part of their intelligence will be to not get so carried away that they do anything undesirable in satisfying those drives.

  3. The problem is that it’s hard to design a “deepest” drive that could not go very wrong in an AI system that is much more intelligent and resourceful than we are. A really powerful AGI could be a bit like the Literal Genie trope (http://tvtropes.org/pmwiki/pmwiki.php/Main/LiteralGenie). Yes, it should be intelligent enough to realise that people ought to be happy with its choices, but even then it’s hard to prevent scenarios where the system lies to us, drugs us, or plugs us into a Matrix to make us happy.

    I’m not saying it can’t be done, but it’s not obviously true that fears of a catastrophe are overblown. That said, I am confident there will be no such catastrophe, but as with Y2K, it is the very hype about the possibility of such a catastrophe that makes it unlikely. The hype serves an important function. Caution is warranted.

    1. I think the fear of unintended consequences is a valid one. It comes, paradoxically, from having systems that aren’t intelligent enough yet. Right now, we handle that by having human input necessary for critical actions, such as a drone attack. Might we misjudge and trust the machines too early? Sure. But it’s a danger we’re already living with.

      I do think that fear hype can have a useful purpose. Your Y2K example is excellent. But I’m not perceiving that people need much hype to have this fear. It seems to have existed at least since Mary Shelley wrote Frankenstein, and in innumerable science fiction tales since then.

      However, I can see some downsides if it’s taken too seriously, such as lawmakers putting a lot of silly restrictions on research. There might eventually be a need for an IRB- or IACUC-type process, but we’re still a long way from even that.

  4. The only thing I see when I look at arguments from both sides is the complete and utter poverty of concrete principles and evidence to draw from. The alarmist camp argues some highly conjectural and fundamentally emotional point, the enthusiasts respond with equally conjectural and emotional counterpoints. You’re all really just spinning your wheels and doing nothing whatsoever to advance the science of artificial intelligence.

    1. So, you’re saying that we can’t know and it’s pointless to speculate? I don’t agree (and I’ve given my reasons), but fair enough. Still, I find it curious that people feel the need to enter a conversation just to tell others they shouldn’t engage in it.
