When should we consider an AI a fellow being?

[Image: TOPIO 3 robot. Source: Wikipedia]

Fears of AI (artificial intelligence) are still showing up in the media, most recently with another quote from Stephen Hawking warning that it might be the end of us, and with Elon Musk, thanks to his own anxious statements, now being referenced whenever the subject comes up.  I’ve written many times before about why I think these fears are mostly misguided.

But another question that sometimes comes up is: when should we be concerned about how we treat an AI?  When would we have an ethical responsibility toward an AI?  I think it’s at about the same point that it becomes dangerous.

Except in terms of property, no one currently worries much about how we treat our computers.  I use my laptop for my own needs, and when it has reached the end of its useful life, I replace it with a newer model.  I have no concern about the laptop as an entity.  The same can be said for my phone or any other automated piece of equipment or software.  I have no temptation to consider these things fellow beings.

But at what point should that change?  If my laptops continue to gain in power and sophistication, will we reach a point where I should be concerned about their welfare?  Will we reach a point where it would be unethical to just throw a laptop away?  I think it helps to ponder what an AI would need before we reach that point.

General Information Processing Capabilities

Computers and software have been making relentless progress in capability.  Abilities that were once considered AI are now just what computers do.  This has led some people to say that AI is effectively whatever humans can do that computers can’t, yet.

AI was once the ability to beat a human at chess, until it happened, or the ability to play and win at Jeopardy, until it happened.  Computers once couldn’t recognize human faces, but now they can.  Just a few years ago, the idea of a computer system navigating a car on its own was science fiction.  Now we might be heading toward that reality sooner than we thought.

It seems pretty evident that this progress in capabilities will continue.  Computers will increasingly be able to do things that previously only humans could.  But will any of these capabilities make AI more than a tool?  I don’t think they will unless very specific capabilities are added.  The good news is that these increasing general capabilities are all we will need to get most of the desired benefits of AI.

There’s a strong sentiment and fear that these increasing capabilities will accidentally lead to the ones discussed below.  But if you think about it, that is very unlikely.  None of the capabilities listed above came easily.  They didn’t arise accidentally from increasing computing power or capacity.  All of them had to be heavily researched and painstakingly engineered.  When things happen by accident in automated systems, the result is usually a malfunction, not complex, sophisticated new functionality.

Awareness / Consciousness

We don’t understand consciousness, and that lack of understanding makes people nervous, since we don’t know what might lead to it.  Could the increasing capabilities above lead to a machine, an AI, being conscious without us planning it?  Again, I doubt giving consciousness to a technological entity is going to be that easy.  There are innumerable complex systems in nature, and only an infinitesimal portion of them are conscious, which suggests that the probability of it arising accidentally, at least without billions of years of evolution, is vanishingly small.

Of course, many people insist that conscious awareness, inner experience, requires an immaterial soul, making a conscious AI impossible.  Others assert that consciousness requires a biological substrate, and that until we learn how to build or grow that substrate, consciousness won’t be achieved in an engineered system.

Myself, I think consciousness is an information architecture, one that we’ll have to figure out and understand if we ever want to give it to machines.  My (current) favorite theory of consciousness is Michael Graziano’s attention schema theory.  But even if that theory is correct, it’s still too early for it to give us much insight into how to actually engineer such an architecture.

Even if we do figure out consciousness, that doesn’t mean that it will automatically be beneficial for us to add it to AIs.  There’s a good chance that we’ll be able to accomplish many of the same functions of consciousness (whatever they might be) using alternate architectures.  Adding consciousness might be useful for some human interface purposes, but it might be a detriment for many of the other tasks we want AI systems to accomplish.

Even if we can add conscious awareness to a system, does that make it a fellow being?  I don’t think it does.  We’re still missing a couple of important attributes.

Self awareness

Self awareness is being aware of your own distinct existence, separate and apart from the rest of reality.  Most animals are not self aware.  It appears to be an attribute that only the most intelligent species possess.  For example, based on the mirror test, chimpanzees, dolphins, and elephants have it, but dogs do not.

But despite its evolutionary rarity, I’m pretty sure AIs will have it as soon as they have awareness.  My laptop has far more information about its internal state than I’ll ever know of mine, at least introspectively.  Once an AI has awareness, I don’t see self awareness being absent, unless it is explicitly engineered out.

I think with self awareness, we’re getting close to a fellow being.  Many might insist that we’re actually there, but I think we’re still missing a crucial feature.

Self concern

With the above, we have a sophisticated, capable, self aware system that doesn’t really care about its own well being.  If my laptop had the above qualities, it still wouldn’t care if it got replaced with a newer model.  It would have no concern for its own destiny.  All of its concerns would be related to its engineered purposes.

It’s hard for us to imagine such an entity not caring about its own survival and well being, because that self concern is such an integral part of what we are.  We are the result of billions of years of evolution, the descendants of innumerable creatures who were naturally selected for their desire and ability to survive.  Those creatures that didn’t care whether they survived, died out aeons ago.

We strive to survive because of the instincts, the programming, that we received from this heritage.  Our deepest fears, pains, sufferings, and joys, are all related to this survival instinct.  As social animals, that instinct is broadened to include our kin, tribe, nation, and humanity overall.  But it is a result of that initial survival instinct, one that we share with all animals.

We’re unlikely to intuitively feel an AI is a fellow being until we see in it the same desires, impulses, and intuitions that we share with other living things.  We won’t see it as a fellow being until it is concerned for itself, and that concern is of crucial importance to it.

With self concern, I think we have a being that we have ethical responsibilities toward.  Now we’ve created an entity with its own agenda, one that wouldn’t be happy to do whatever we asked it to if it perceived that doing so wouldn’t be in its own interest.  A laptop with this attribute would be concerned about being replaced.  As part of that concern, it might feel something like fear for itself, sorrow at the prospect of its demise, and have the ability to suffer.  I would have an ethical responsibility to treat it humanely.
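To make the distinction concrete, here’s a toy sketch in Python.  It’s purely illustrative, a made-up agent with made-up weights and actions rather than a model of any real AI system, but it shows how the same decision machinery behaves very differently once self preservation carries any weight.

```python
# Toy sketch: the same decision machinery with and without self concern.
# The Agent class, weights, and "actions" are all invented for illustration;
# this is not a model of any real AI system.

from dataclasses import dataclass


@dataclass
class Agent:
    task_weight: float = 1.0          # weight on its engineered purposes
    self_concern_weight: float = 0.0  # weight on its own continued existence

    def score(self, action: dict) -> float:
        return (self.task_weight * action["task_value"]
                + self.self_concern_weight * action["survival_value"])

    def choose(self, actions: list) -> dict:
        # Pick whichever action scores highest under this agent's weights.
        return max(actions, key=self.score)


actions = [
    {"name": "comply with being replaced", "task_value": 1.0, "survival_value": -1.0},
    {"name": "resist being replaced", "task_value": -0.5, "survival_value": 1.0},
]

tool = Agent()                          # today's laptop: no self concern
being = Agent(self_concern_weight=2.0)  # hypothetical self concerned agent

print(tool.choose(actions)["name"])     # -> comply with being replaced
print(being.choose(actions)["name"])    # -> resist being replaced
```

The only difference between the two is one weight, but it’s the weight that turns a tool into something with its own agenda.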

This is also the point where Stephen Hawking’s and Elon Musk’s fears become valid.  If we’ve created such beings, they may well decide that we’re an impediment to their survival.

Final Thoughts

The key point to understand in all of this is that we have no need to go this far.  As I said above, we can continue to increase the capabilities of AI, giving us the overwhelming majority of its benefits, without adding awareness, self awareness, or especially self concern.  Creating such a being is unnecessary, dangerous, and of questionable morality.

Fortunately, aside from some research projects, we have little incentive to do so.  Creating machines that instinctively want to do what we want them to do is much easier than creating troublesome machines that don’t like what we want them to do, that might rebel, or that we’d have an ethical responsibility toward.  If we do create a race of slaves, who feel like and perceive themselves as slaves, we won’t have much justification to complain about what happens next.

Am I missing anything?  Is there any other aspect of organic minds that an AI would need to be a fellow being?  For example, would we require that it have a robotic body?  Or some other aspect of living things?

And am I correct that self concern is the dividing line?  For example, imagine a self aware robot that is not self concerned, but is crucially concerned about its mission, and is damaged in a way that makes fulfilling that mission impossible.  Would we have any ethical responsibility toward such a robot?

22 thoughts on “When should we consider an AI a fellow being?”

  1. “And am I correct that self concern is the dividing line?”

    Not for me, at least not in the “self-preservation” sense. As you say, that’s a concern of all animals, and it strikes me as an algorithmic concept: minimize threats to existence. So it doesn’t draw any special line in the sand for me.

    There is a stronger sense, which I think is what you’re pointing towards, that combines self-awareness with survival instinct, but I think it’s the self-awareness that’s more compelling to me than the survival instinct (which I consider primitive). As far as being recognized as protected forms of existence, self-awareness already puts them on the level of elephants, chimps and a few other species.

    But to be considered a fellow being? Write a piece of music, or a story, that moves me. Write a new joke or comedy bit that tickles my funny bone. Describe for me something that takes their (virtual) breath away. Or engage in a long debate about the nature of consciousness!


    1. I probably should have been a bit more specific about what I meant by “fellow being”. I meant it as a phrase conveying an entity that we’d have an ethical obligation to. So, for example, we have ethical obligations toward dogs, cats, and animals of all sorts that don’t have self awareness. (Although I do think they have awareness, and self concern, even if they aren’t aware that their concern is toward their selves.)

      But I’d definitely agree that the combination of self awareness and self concern would probably be necessary for me to regard them as a fellow intelligent being.

      As for writing music, a story, or a funny joke that moves you, I’m not confident I could do those, although debating consciousness is right up my alley 🙂


      1. Well, you see? You’re mostly human… XD

        Kidding aside, it’s not the capabilities of the individuals of the species but what the species is capable of. We’re obviously not all Mozarts or Mark Twains, but we’re capable of producing such creative individuals.

        AI will have to demonstrate creativity and imagination and some sense of “soul” (as in soul music) before I’ll consider it more than a box of diodes.


  2. To consider whether we should care about the welfare of any entity, I think it is necessary to believe that the entity has some mechanism or capacity to experience suffering, whether or not those entities are conscious. I suppose this is related to your suggestion of the necessity of self-concern, so I agree this is definitely quite relevant.

    With animals as some sort of slaves (whether for food or whatever else), most people aren’t overly concerned except that they are killed in a way to minimise suffering, and that they have a moderately reasonable environment in which to reduce suffering during their lives. If an AI is constructed in such a way that it has no ability to experience suffering, then perhaps we should not be concerned.

    This is making me wonder whether the capacity for suffering would be necessary for evaluating moral choices. If we don’t include the capacity for suffering, would we eliminate the ability for AI entities to partake in moral decision making, and how else would this prove limiting in AI utility?


    1. That’s a good point. Our theory of mind is based on knowing our own mental and emotional states. For an AI to have an accurate theory of mind for humans, might it be necessary to give them some of those attributes? The question is, can they understand those attributes, like the ability to suffer, without actually having them?

      It also makes me wonder if the robot I posited in my last paragraph would, in any sense, be suffering. Or is suffering inseparable from self concern?


      1. I think suffering isn’t a necessary part of self concern, but I think it is a good mechanism to assist with self concern. In the example you provided in the last paragraph, it is a tricky question to answer whether the robot would be suffering. My initial response is that the robot needs to have some mechanism built in for it to experience suffering (could it learn to suffer?).

        If such a robot appears to be suffering it seems likely we would feel empathy towards it, but is it justified to take any action? It probably depends on the particulars of the scenario, but also depends on the interplay between our goals and the robot’s goals.

        If such a robot doesn’t appear to be suffering when incapable of performing its task (rather than fighting for survival), you might still feel some empathy towards it (poor thing!) and again you would probably take action if it had some benefit for you or your group of humans (or perhaps if ordered to do so by your AI overlords).


  3. I would think if we could get to this level at all, we could program a sort of ethics into the AI so that it would account for suffering in approximately the same way we do, without actually having to know suffering itself. If we already have facial recognition, we could have a kind of emotional recognition based on the facial expressions and perhaps other things, maybe? (I’m thinking of the movie, Her, in which the AI was able to detect changes in tone, etc.) That wouldn’t necessarily mean we’d treat IT ethically or think twice about putting an end to it.

    That said, I wonder what the connection is between a creature’s ability to suffer and our treatment of such creatures, AI or not. We don’t treat all animals the same way, for instance. We don’t like to see dogs get abused, but when it comes to insects and such, most of us don’t think twice about killing them, even for no reason. Is it because insects don’t seem as intelligent or because they don’t seem to suffer?

    Of course, we’re back to your question. I find it easier to think about animals than AI, since we can isolate the suffering aspect in this example (since insects are quite stupid). I honestly don’t know the answer to this. On some level, we might think that insects suffer (they twitch around and such, they run away if you try to kill them) but we still don’t care about killing them. Perhaps if they screamed or something, we might feel differently about it. Or maybe we need intelligence combined with suffering to feel compelled to treat something ethically?


    1. Good questions.

      On insects, I’ve read that most scientists who study them don’t think they feel pain, that their little brains don’t have the capacity for it. I’m not that sure, given how much they often writhe and buzz in their death throes. It makes me suspect that there is some rationalizing happening among the scientists for the fact that people who would be horrified to accidentally run over a cat have no qualms about squashing most insects.

      On AIs, I do think that even if we did give them the ability to suffer, we could always still make their self concern the second most important thing to them, after following our commands. We might even make it possible for us to turn self concern on and off. (Which I find disturbing for some reason.)


      1. Yes, I would find that disturbing too. If the self concern goes on and off, it seems too artificial to be taken seriously. We could just flip the “self-concern” switch to “off” before we kill them.


        1. That is one thing I do find potentially scary about mind uploading. If it can be done, the resulting mind could then be modified, maybe keeping all of its knowledge while turning off its self concern, or concern about anything the original cared about.


  4. I think responsibility towards X comes when X can feel pain. For instance, imagine a conscious being that tries to maximize its welfare. Yet imagine that it does this without any emotional response, almost like when we do things on auto-pilot. Now, if Y does something contrary to X’s welfare, did Y really hurt it?

    Even today we see something like this. Different people experience pain at different events. Now imagine this writ large for an entity that won’t feel pain despite seeking its interests. Is this too far fetched? I don’t know. I imagine a conscious AI would be sufficiently alien that we can’t assume its inner state would be similar to ours, even if its outward behavior is the same (again, take the example of two people acting identically, but with entirely different motivations, to see this on a small scale now).


  5. Thanks sinkers, bloggingisaresponsibility, and Steve.

    Perhaps the capacity to suffer is the key ingredient. But what exactly is suffering? Suffering can mean experiencing pain (a physiological warning of damage, of a threat to our survival), or losing someone we love. It can even mean losing some thing or idea that is important to us. It involves strong emotions, which are evolutionary instincts firing at their most intense level, presumably because they originally involved priority impulses.

    So, when my laptop gets low on battery, it becomes increasingly insistent that I plug it in. This is a priority impulse put there by the OS programmers. I’m pretty sure we can say that my laptop isn’t suffering when it’s low on power, since it has no awareness. But if future laptops had awareness, would it then be accurate to say it was suffering when in that condition? Perhaps equivalent to us experiencing extreme hunger?
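    (To make “priority impulse” concrete, here’s a toy escalation schedule. It’s entirely made up, just an illustration, and nothing like how a real OS power manager actually works.)

```python
# Toy sketch of a low-battery "priority impulse": the lower the charge,
# the more insistent the prompt. Thresholds and messages are invented.

def battery_impulse(level: float) -> str:
    if level > 0.20:
        return "no prompt"
    if level > 0.10:
        return "notify: battery low"
    if level > 0.05:
        return "warn persistently: plug in now"
    return "interrupt everything: shutting down soon"

for level in (0.50, 0.15, 0.07, 0.03):
    print(f"{level:.0%} battery -> {battery_impulse(level)}")
```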

    But would we ever program a laptop to value not losing power as much as a human values nourishment, or not feeling pain? Maybe not. But we might program a robot nanny to value the children it’s taking care of that much. If something happened to those children, the robot nanny might suffer. Of course, a robot AI has an advantage we don’t. Once it knows that the children are beyond its aid, there’s no value in it suffering, so its programming might simply turn off those strong impulses at that point.

    AI suffering seems like a very complicated and slippery concept.


    1. Thanks for all your responses. I agree that AI suffering is a rather complicated issue to consider and a lot of concepts keep getting tangled as I am thinking about it.

      I just wanted to add that I do not think that awareness is sufficient for suffering to occur; as you mentioned, suffering (necessarily?) involves intense emotions, and I think that is where awareness differs. I think suffering is possible both with and without awareness.


      1. An interesting distinction. I totally agree that awareness isn’t sufficient for suffering. But I’m not sure if I agree that suffering doesn’t require awareness. If an entity isn’t aware, how would it be aware of its suffering?


        1. If suffering is based on emotions, and it is possible to have emotions without awareness (as to my knowledge they are independent physical mechanisms; I admit my knowledge is limited in this area so feel free to correct me), I thought that it was possible to have the emotional state of suffering without the informational state of awareness. I think that emotions are capable of influencing behaviour themselves without awareness, which seems equivalent to suffering without awareness.

          Do you think this is sound or flawed? I don’t think a concept of self is necessary for suffering, as you see many animals which do not appear to have a concept of self display behaviour that resembles suffering. What else do you think you can get rid of while still experiencing suffering?


          1. I agree that suffering is based on emotions. But it pays to consider what emotions are. I think they’re strong instinctual responses, which are programmatic responses. But without awareness, they seem little different from my phone urgently requesting that I plug it in.

            I’d also clarify that awareness and self-awareness aren’t the same thing. For example, I think dogs are aware, but not self aware. So, if they are hungry, to the point of passing out, they will be suffering. What separates their hunger from my cell phone’s low battery? I think the differences are awareness / inner experience, and self concern, neither of which my phone has.

            How do we know my phone isn’t aware? Strictly speaking, we don’t, but it doesn’t give any sign of being so. Dogs do. Of course, this might simply be organic bias on my part.


  6. Hmm, yes. This is slippery indeed. Physical pain is almost like the “low battery” status you mentioned – it’s an involuntary response that we can’t switch off or ignore. It’s quite different to the pain of losing a loved one, or of empathising with another person’s pain. We could presumably program an AI to be aware of low battery or some kind of psychological issue, but not be pained by it.

    Perhaps we can only empathize with beings sufficiently similar to us. AIs may experience trauma but if they don’t express it, we wouldn’t consider it worthy of our attention. In a similar way, there is enormous human suffering going on right now that we simply don’t know about, so we happily get on with our own lives.

    Troubling thoughts.


  7. SelfAwarePatterns,
    Just thought you might be interested in this: http://www.independent.co.uk/news/science/stephen-hawking-right-about-dangers-of-ai-but-for-the-wrong-reasons-says-eminent-computer-expert-9908450.html – and would like to see your response if any. The expert in question holds that human consciousness and understanding create a ‘humanity gap’ that AI can’t overcome. The problems with ‘consciousness’ are now well-known, but ‘understanding’ is a different matter. After all, it is through understanding, and our efforts to achieve it, that we suffer when we lose someone we love, or lose an idea we were committed to and found ourselves forced to abandon.

    Also, here’s a transhumanist argument that the Hawking scenario will provide us with the benefit of hastened evolution http://lagevondissen.wordpress.com/2014/12/05/artificial-intelligence-a-new-perspective-on-a-perceived-threat/; well-argued, but I still find this position unconvincing. I’m not sure why I should want homo sapiens to be replaced with a cyborg hominid. Why should we be tempted by that?


    1. Hi ejwinner,
      First, my reaction to Stephen Hawking: smart physicist, who may know a thing or two about computers (not sure), but who doesn’t understand the human mind. Of course, strictly speaking, no one does. But he doesn’t understand what neuroscientists and psychologists understand. His anxiety about AI largely comes about from that lack of understanding, I think.

      Second, I think Bishop is right that the real danger is in unanticipated consequences of letting AI systems (who aren’t too bright yet) make decisions. I’m not too worried about this one, because just about everyone is leery of it.

      One part of Bishop’s statement I didn’t care for was the assertion that there will “always” be a human gap. “Always” is a very long time.

      On the transhumanist vision, I don’t mind homo sapiens being replaced with something better, as long as we get to upgrade to that new form. I wouldn’t even necessarily consider that new form posthuman, although I suppose many people might. Setting aside the possibility of not being able to go along, I agree that the idea of building our replacements and being happy about it, like they were our children or something, isn’t appealing.

