xkcd: Why Asimov put the Three Laws of Robotics in the order he did

Source: xkcd: The Three Laws of Robotics

The Three Laws from the Wikipedia article:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Not sure I’d agree that the third xkcd scenario inevitably leads to Killbot Hellscape. That’s fortunate, since it’s probably the closest to what will actually be built. Indeed, it’s already somewhat the case with existing systems (well, minus the third law).

Personally, I’m wondering about the wisdom of even having the third law (the robot should protect itself), except strictly in support of obeying humans.  As far as I can see, if there are ever any flaws, any unintended loopholes in the first two laws, the third law can only cause trouble.  This comes down to not creating survival machines, particularly enslaved ones.

Critics of the Three Laws often point out that the devil is in the details.  What exactly does it mean to avoid harming humans?  Or if the orders from humans are contradictory, which ones should take precedence?

In reality, the Three Laws seem more like guidelines for engineers, who would have to work on the details of what they mean in each context.  Of course, eventually the engineers will likely be robots.
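As a purely illustrative sketch of what such a guideline might reduce to, the precedence encoded in the ordering of the laws can be modeled as a lexicographic comparison: any First Law violation outweighs any Second Law violation, which in turn outweighs any Third Law violation. Every boolean flag below is a stand-in for an unsolved engineering problem (what counts as harm, an order, or danger), and the names are mine, not Asimov’s.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # stand-in for the hard "what is harm?" question
    disobeys_order: bool = False  # stand-in for interpreting human orders
    endangers_self: bool = False  # stand-in for recognizing danger to itself

def choose(actions):
    """Pick the action whose law violations are lexicographically smallest:
    a First Law violation outweighs any Second Law violation, which in turn
    outweighs any Third Law violation. That is the precedence the ordering
    of the laws encodes."""
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

# A robot ordered into a burning building: obeying endangers itself, refusing
# disobeys a human. Because the Second Law outranks the Third, it obeys.
options = [
    Action("enter the fire as ordered", endangers_self=True),
    Action("refuse the order", disobeys_order=True),
]
print(choose(options).name)  # -> enter the fire as ordered
```

Swap the order of the tuple in the key and you get the comic’s alternative orderings, Killbot Hellscape included.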


23 Responses to xkcd: Why Asimov put the Three Laws of Robotics in the order he did

  1. James Pailly says:

    I seem to remember the logic behind the third law was that robots are expensive pieces of equipment, so you don’t want them to damage themselves if they can avoid it. For me, a lot of the fun of Asimov’s stories was seeing how the three laws come into conflict with each other, and how robo-psychologists help resolve the robots’ “mental health” dilemmas.


  2. Steve Morris says:

    Self-driving car: first law is impossible to achieve in every situation.


    • Good point. It raises the question of how a robot is supposed to respond to something like the Trolley Dilemma.


    • Speaking of…


      • Steve Morris says:

        Very interesting. Technology is forcing us to address ethical conundrums explicitly, and to discuss them in public.

        If I drive a car around a bend and suddenly find a group of children crossing the road, I am faced with a split-second decision. After the event, I must try to justify what I did, and people may well be sympathetic to whatever action I took, provided that I was not speeding or under the influence of alcohol. But the actions of a self-driving car can be discussed and planned in advance. The problem is not “don’t harm humans” but “how to minimise the harm?”
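
        In code, that framing might look something like the toy sketch below. The manoeuvres and numbers are entirely made up for illustration; a real planner is nothing this simple.

        ```python
        # Toy sketch of "minimise the harm", not any real vehicle's logic.
        # Each candidate manoeuvre carries a hypothetical estimate of
        # collision probability and the number of people put at risk.
        candidates = {
            "brake hard in lane":  (0.9, 3),
            "swerve onto verge":   (0.3, 1),
            "swerve into traffic": (0.6, 2),
        }

        def expected_harm(manoeuvre: str) -> float:
            p_collision, people_at_risk = candidates[manoeuvre]
            return p_collision * people_at_risk

        best = min(candidates, key=expected_harm)
        print(best, expected_harm(best))  # -> swerve onto verge 0.3
        ```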


        • In the absence of some kind of regulation, I suspect most cars are going to favor their passengers, if for no other reason than they’re the ones owed a fiduciary duty and would have the best chance of suing the manufacturer. Which raises the disturbing possibility that expensive cars may do better at it than cheap ones. (Of course, that’s already the paradigm we live in, but with the cars currently only being passive tools, for the most part.)


          • Steve Morris says:

            It would be hard to justify programming a car to self-destruct and deliberately kill its passengers. A more likely outcome is that the car would do its best to stop, or avert danger to other road users, even when that is known to be impossible due to braking distances, etc.
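
            A back-of-envelope calculation shows how quickly braking distance becomes the binding constraint; the friction and latency figures below are assumptions, not any manufacturer’s numbers.

            ```python
            G = 9.81        # gravitational acceleration, m/s^2
            MU = 0.7        # tyre-road friction coefficient (assumed: dry asphalt)
            T_REACT = 0.2   # assumed sensor-to-brake latency in seconds (humans: ~1.5 s)

            def stopping_distance(speed_kmh: float) -> float:
                """Reaction distance plus braking distance: v*t + v^2 / (2*mu*g)."""
                v = speed_kmh / 3.6  # km/h -> m/s
                return v * T_REACT + v ** 2 / (2 * MU * G)

            for kmh in (30, 50, 80):
                print(f"{kmh} km/h -> about {stopping_distance(kmh):.0f} m to stop")
            # 30 km/h -> about 7 m, 50 km/h -> about 17 m, 80 km/h -> about 40 m
            ```

            At 80 km/h, anything closer than about 40 m simply cannot be avoided by braking alone, however the software is written.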


  3. ratamacue0 says:

    "if the orders from humans are contradictory, which ones should take precedence?"

    Mine.


  4. Wyrd Smythe says:

    One problem with the first law is identifying humans in the first place.

    Interesting question about driverless cars. It’s going to get really interesting once the first fatality occurs. Can the cars even distinguish between obstacles that are human as opposed to animals or inanimate objects? Can they distinguish (per the trolley problem) numbers of humans?


    • Reportedly they’re doing a good job detecting bikers and pedestrians (admittedly I’m basing that on what Google chooses to reveal), but I wonder how they do if a small child or animal runs into the road.

      I think all of the major players know that if that first fatality happens too early, it will set the adoption back for decades. I suspect these things are going to be far more conservative than even the most conservative driver ever dreamed of conserving.


      • Wyrd Smythe says:

        My question is: are they merely detecting objects, or discriminating them as bikers or pedestrians? There are certainly many things to discriminate.

        As for being conservative, as the trolley problem discussion gets at, there are double-bind situations that present only bad choices. How the system picks among such choices might be very tricky.


        • From what I understand, they “know” the difference between them, at least at an expected behavioral level. Although somewhere I read (or watched, I can’t remember) that one of the cars, stopped behind a biker at an intersection, kept getting confused by the way the biker was fidgeting while waiting at the light. The car kept thinking the biker was going forward, started to follow, then realized what was happening and stopped itself, repeatedly. That of course disturbed the biker, although no harm was done. I’m sure the engineer along for the ride made sure the incident was noted and addressed.
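
        One plausible fix for that sort of flip-flopping (my guess, not anything Google has described) is simple hysteresis: only act on a predicted intent once it has persisted for several consecutive frames, so brief fidgets never trigger motion.

        ```python
        def debounced_intents(raw_intents, hold_frames=4):
            """Yield the intent to act on for each frame; only switch after the
            raw prediction has been stable for hold_frames consecutive frames."""
            current, candidate, streak = "stopped", None, 0
            for raw in raw_intents:
                if raw == current:
                    candidate, streak = None, 0   # prediction agrees; reset
                elif raw == candidate:
                    streak += 1                   # candidate persists
                    if streak >= hold_frames:
                        current, candidate, streak = raw, None, 0
                else:
                    candidate, streak = raw, 1    # new candidate appears
                yield current

        # A fidgeting biker: short "moving" blips are filtered out; only the
        # sustained run of "moving" frames finally changes the acted-on intent.
        frames = ["stopped", "moving", "stopped", "moving", "moving",
                  "moving", "moving", "moving", "moving", "stopped"]
        print(list(debounced_intents(frames)))
        ```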


  5. Wyrd Smythe says:

    And now a word from an AI researcher…


    • The devil is definitely in the details, and the three laws are really little more than thought experiments. But by this guy’s reasoning, no goals for any system are worth thinking about because there would be details to work out. That’s about the typical quality I’ve seen from his other videos.


      • Wyrd Smythe says:

        “But by this guy’s reasoning, no goals for any system are worth thinking about because there would be details to work out.”

        I think you’re reading way too much into what he said. He was specifically talking about the difficulties of defining “human” and “harm” which are, indeed, extremely difficult to define even for us humans, let alone doing it in code!

        But I have designed systems with goals, so I know it’s trivially true that they’re worth thinking about.

        “About the typical quality I’ve seen from his other videos.”

        ❓ Are you saying good or bad?


        • I’ve developed my share of systems. Occasionally I had people carping that we weren’t ready to develop system X because DETAILS, yet we always succeeded. I’m not saying they don’t sometimes have a point, just that it’s virtually never the showstopper they portray it to be.

          “Are you saying good or bad?”
          I haven’t been impressed with them, at least not the few I’ve seen.


          • Wyrd Smythe says:

            “…just that it’s virtually never the showstopper they portray it to be.”

            We (programmers) have all been in that position, but that’s not what he’s talking about in the video.


          • Well, obviously my impression was that that was indeed what he was talking about. I just rewatched portions of it and that impression remained. I’m afraid I’ll have to leave it at that; I don’t really want to go over the video enough to debate it.

