What would it mean for a machine to suffer?

Image credit: Holger.Ellgaard via Wikipedia

One of the dividing lines I often hear in discussions about whether we should regard an artificially intelligent machine as a fellow being is, does it have the capacity to suffer?  It’s an interesting criterion, since it implies that what’s important is that there be something there for us to empathize with.  But it raises an interesting question.  What would it mean for a machine to suffer?

First, what exactly is suffering?  The Buddhists and Stoics thought this through millennia ago.  Suffering is about desire, or more accurately, unsatisfiable desire.  We desire scenario X, but are faced with scenario Y.  When we can’t fulfill our desire to bring about X, the continued discrepancy leads to suffering.

For example, say I’m experiencing pain, electro-chemical signals from some part of my peripheral nervous system that my brain’s body model indicates could mean injury.  I desire (intensely desire) not to be injured.  In the short term, the discrepancy between the signals indicating injury and my desire for there to be no injury motivates me to move in such a way as to avoid it.  For instance, if I receive signals from my hand that my brain’s model interprets as a burning sensation, I move my hand.

We might say that I’m suffering during the period between noticing the discrepancy and rectifying it, but that’s not what we generally mean by the word “suffer.”  If after I move my hand, I continue to experience a burning sensation, and it lasts long enough to be considered chronic pain, then we’re getting to a situation that we might commonly consider to be suffering.

I’m receiving signals of damage, but I desire not to be damaged.  Consequently, portions of my brain induce a state of wanting to correct the situation.  But it can’t be corrected.  Intellectually I might realize this, but the more primal levels of my brain do not.  Despite the intellectual knowledge, I can’t just stop the urges and desires from the lower level aspects of my mind.  Hence, I suffer.

The above example is for physical pain, but it applies to any scenario that is at variance with my primal desires.  A friend gets hurt.  I desire that my friend not be hurt.  But it has happened and can’t be changed.  Nonetheless I continue to intensely desire that my friend didn’t get hurt, leading to mental anguish, aka suffering.

Or perhaps I experience some severe loss of social status: a mate leaves, my reputation is damaged for some reason, something causes my friends and/or family to ostracize me, etc.  All of these scenarios can be at variance with deeply held primal desires.  Even if I know intellectually that they can’t be fixed, the emotional part of me won’t know, at least not for a period of time.  During that period, we could call my experience of the discrepancy, “suffering.”

If you think about the stages of grief as they’re often described in the Kübler-Ross model – denial, followed by anger, bargaining, depression, and finally acceptance – it makes sense to see this as our emotional selves coming to accept something our intellectual selves may have understood very early on: perhaps our brainstem and limbic system coming to accept what our thalamo-cortical system already understands.

I’m reminded of something a friend told me at the funeral of one of his children.  When I asked if there was anything I could do for him, he replied, “No, there’s nothing that can be done.  When something like this happens, you have no choice but to put in your time.”  In other words, to put in the time suffering grief, coming to terms with the new reality.

Of course, the Stoics and Buddhists claim that there is something you can do.  You can change your desires.  As I understand it, both philosophies include techniques for attempting to do this.  I have no idea what the success rate is on these endeavors, but it seems like some desires are easier to change than others.  For example, I can probably let go of a desire for, say, a job I didn’t get, or maybe even for a mate I couldn’t attract, much more easily than I can change my desire not to experience chronic pain, hunger, or other threats to survival.

Still, the insight that suffering is intimately associated with desires is a powerful one.  We won’t always succeed in banishing desires that cause suffering, but being aware of the relationship can help.

(It’s interesting to note that some opioids alleviate pain by disrupting this relationship: they work not so much to inhibit the painful nervous signal as to make you not care that you’re feeling it, essentially removing or lessening the desire not to be in pain.)

So, what would it mean for a machine to have this experience?  Does it actually make sense to ask this question?  It implies that the machine would have layers of functionality similar to humans, that would be in differing, possibly contradictory states.  But why exactly would we build a machine like that?

Wouldn’t it be more likely that a robot caregiver is highly motivated to take care of its patient right up until that patient is gone?  Once the patient is gone, once the robot is sure they are beyond its ability to help them, is there any utility in the robot still receiving that motivation, that unsatisfiable desire?  Along the same lines, once a robot realized that it was damaged, wouldn’t it make sense for it to simply log and then ignore the damage signals, at least until an opportunity for repair arose?
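
The log-and-defer behavior imagined here could be sketched in a few lines.  This is purely illustrative, not any real robotics API; the class, method, and limb names are invented:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("robot")

@dataclass
class DamageHandler:
    """Log a damage signal once, then ignore repeats until repair is possible."""
    disabled_limbs: set = field(default_factory=set)

    def on_damage_signal(self, limb: str) -> None:
        if limb in self.disabled_limbs:
            return  # already logged: no ongoing "alarm" state, no suffering analogue
        log.warning("Damage detected in %s; marking unusable.", limb)
        self.disabled_limbs.add(limb)

    def on_repair_opportunity(self, limb: str) -> None:
        if limb in self.disabled_limbs:
            self.disabled_limbs.discard(limb)
            log.info("%s repaired and restored to service.", limb)

handler = DamageHandler()
handler.on_damage_signal("left_arm")
handler.on_damage_signal("left_arm")   # repeated signal is simply ignored
handler.on_repair_opportunity("left_arm")
```

The point of the sketch is the early `return`: unlike a biological pain loop, nothing keeps firing once the damage has been acknowledged.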

The discrepancy in humans results from the way we evolved, from adaptive emotions arising through random mutation and then being selected for.  Evolution isn’t a precise engineer.  Having our emotional desires be able to quickly adjust to logical realities probably never provided a strong survival advantage, so it was never selected for.

But for machines, it seems more likely that we’d design their intellectual and primary programming (their version of desires) to take into account their logical understanding of reality.  Now I could see us being very conservative about when we want a robot nanny, automated nurse, or self-driving car to come to this type of conclusion, but it doesn’t seem like it would be months or years afterward, as it often is in humans.

This raises the question of whether these machines would ever be entities we could really empathize with, if we’d ever see them as fellow beings.  Even if we eventually come to have a perfect understanding of the neuroscience of suffering in humans and animals, would it make sense to impose that on artificial intelligences?  It would seem like a cruel and unnecessary move.  I don’t doubt someone will do it as an experiment, but on a mass scale, it seems like an unproductive strategy.

Unless of course there are aspects of this that I’m missing.  Could suffering perhaps serve a purpose in automated systems that I’m not seeing?

This entry was posted in Mind and AI. Bookmark the permalink.

47 Responses to What would it mean for a machine to suffer?

  1. Steve Ruis says:

    Let’s say there is an AI whose job it is to run four manufacturing machines. And the AI’s connection to one of the four is disrupted and it cannot deal with the problem itself, so it sends a message to whoever/whatever effects repairs. The entity that effects repairs (human or robotic) doesn’t respond. So the AI just sends repeated requests down channels it has been programmed to use and nothing happens.

    If we wanted the AI to take things into its own “hands” then we might program it to make more and more efforts to fix its problem. We might consider this goad to greater efforts as pain. Over time the pain “ramps up” and the AI is goaded to use its own creativity to lessen it.
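
    The “ramping up” described here might look something like this escalation loop (a hypothetical sketch; the channel names, thresholds, and transport function are all invented for illustration):

```python
# Toy sketch: each failed repair request raises an internal urgency level,
# unlocking broader, more disruptive channels -- a crude analogue of pain
# intensifying over time. All names here are invented for illustration.

ESCALATION_CHANNELS = ["maintenance_queue", "supervisor_page", "plant_alarm"]

def request_repair(machine_id, send, max_attempts=6):
    """Escalate until a request succeeds; return the urgency level reached."""
    for urgency in range(max_attempts):
        # Higher urgency -> costlier, more disruptive channel.
        channel = ESCALATION_CHANNELS[min(urgency, len(ESCALATION_CHANNELS) - 1)]
        if send(channel, machine_id, urgency):
            return urgency
    return max_attempts  # gave up: the "desire" remains unsatisfied

# Simulated repair service that only responds once the plant alarm sounds.
def stubborn_send(channel, machine_id, urgency):
    return channel == "plant_alarm"

level = request_repair("machine_3", stubborn_send)  # escalates to urgency 2
```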

    Suffering can be programmed … but can it be felt? Aye, there’s the rub.

    • Indeed. I’m reminded of how my phone or laptop gets more insistent as its battery charge gets close to critical.

      But this raises the question, what is “felt” or “feeling”? My elbows are resting on the desk as I type this. I feel them against the desk. The pressure against my forearms and elbows triggers nerves in my skin, which transmit electrical signals to my spinal cord and up to the brain, where my brain assimilates the information into its model of both my body and the environment.

      If a robot did that, would it make sense to say it was “feeling” its elbows (or appendages or whatever) against the desk? Or does it make sense to say my computer is “feeling” each keystroke as I type on the keyboard? If not, why not?

  2. keithnoback says:

    The characterization of suffering as unrequited desire misses the mark.  That’s not suffering; it’s just flawed insight.
    Suffering is an enforced loss of motivation.  How do you give your machine motive to lose?

    • Hariod Brawn says:

      Surely suffering is, as Mike says, ‘mental anguish’, Keith? It isn’t physical pain per se, and it isn’t lack of intentionality (‘motivation’) alone, is it?

      • keithnoback says:

        Though the two are wrapped up in each other, I think that there is a meaningful distinction between motive and intention.
        If I am cold, then there is a whole raft of factors which prompt me to form my particular notion of cold, before I am able to reflect upon it. These are somewhat different than my regard of coldness as an intentional object. Maybe like the motor which turns the grinder making the intentionally inexistent sausage.
        For instance, if I am cold while standing at the base of an ice climb, my experience of that coldness is motivated by the circumstance. I desire to be warm. I won’t get to be warm. But I don’t suffer.
        My tolerance of that coldness transcends the power of delayed gratification, too. I may feel horribly cold, but I am horribly cold in context. The sensation exists within my field of consciousness, and is part of my ongoing consciousness project.
        The same cold, when experienced while buried in an avalanche comes with suffering, because there is no context to shift about around the sensation. It becomes my field of consciousness and, having crowded out all else, cannot move – and so I cannot move.
        That’s suffering. Not fear. Not unrequited desire. Suffering.
        The other characterization is basically an eliminative view and thereby demeans suffering. It allows the Stoic to gaze down upon others’ suffering with a satisfied understanding, while he deploys the elimination of desire as a pat psychological trick which allows him to shift from contextualizing his discomfort to enduring it more readily.

        • It sounds like you’re saying that suffering can never be voluntary. If you’re experiencing pain or discomfort because you elect to experience it, then it isn’t suffering.

          I can see the distinction, although I’m not sure it matches common use of the term “suffering” or “suffer”.  How often do people say they “suffer” through their workout?  Is that just metaphor or hyperbole?  Maybe.  But then I don’t know too many people who exercise just because they enjoy it.  Most people do it because they feel they need to do it.

          I also think about military officers who end up in situations where they make decisions that cost some of their men’s lives. Technically, their grief, feelings of guilt, and other anguish were something they chose, although in reality the situation demanded it of them.

          It seems like the voluntary vs involuntary distinction isn’t a sharp one.

          I do have to admit that I find the attitude of some Stoics annoying.  Stoicism has its insights.  But the attitude that you’re only suffering because you can’t get your expectations right can be carried too far.

          • keithnoback says:

            Yes. Hyperbolic metaphor, in fact. Just like I love the smell of coffee. I don’t really love the smell of coffee. It isn’t so much a voluntary/involuntary distinction as it is a functional one.
            It doesn’t really matter how you get to the suffering. To offer an apophatic definition: Suffering is that which must be endured.
            I am a weirdo, but I like to run. I don’t enjoy it the way that I enjoy a piece of chocolate, but I am motivated to run.
            “Man is not motivated by pleasure, only the English-man.” – Nietzsche’s response to Hedonism. I think he was right. His characterization of motive was more accurate.

        • I think it’s possible to endure suffering but actually come to like it because you know it’s meeting some longer term goal.  Exercise is the prime example.  If I’m good at it and it’s a normal practice for me, the discomfort of it might eventually take on a sort of pleasure through a Pavlovian association.

          The same is true of black coffee.  For decades I drank it and would have said that I enjoyed it.  It was always bitter, but I associated that bitterness with a desired effect.  But after giving it up for a few years, I can’t stand it without sweetener and cream, although I know if I endured black coffee for a while, I’d eventually grow to like it again.  The taste never changes, just the association my emotional self makes with it, which I think is the overall point you’ve been making.

          In some ways, this acquiring of a taste, this acclimatization, is our more primal selves becoming convinced that a sensation it initially considers noxious is a longer term good. It seems like a similar journey to the one we go through with grief, of the primal self coming to terms with something the intellectual self already knows.

          Thinking this through, I’m starting to see how someone can become a masochist. That’s a disturbing thought.

    • Keith,
      I wonder if you could elaborate on what you mean by “enforced loss of motivation”.  On the face of it, it sounds like something closer to despair than to suffering.

  3. Steve Morris says:

    One of the problems in discussions of God is how he could allow evil into the world. Perhaps the true question is how a God could permit suffering. If we are creating an AI in a world that we know contains evil (whatever that means), could it ever be moral to allow that AI to suffer?

  4. Hariod Brawn says:

    Leaving aside the fact that I consider the notion of non-biological replicants to be so wildly hypothetical as to be absurd (forgive me, Mike, but you may recall this is my view in any case), then there perhaps may be a purpose in designing in emotional anguish. As higher primates, our own emotional states are in part governed by empathic readings of others. [see: Frans de Waal] Taking a care-giving situation, of the infirm or elderly, then what comforts us and aids healing is not simply the ministering of medication, the attending to our bodily comfort, the monitoring of our condition, and so forth, but also the palpable sense of being cared for in the psychological and empathic sense of the phrase. It’s a vital part of the care-giving equation, without which we, in some felt sense, become dehumanised ourselves – a mere organism to be sustained.

    I wonder if there needs to be a hierarchy established in the machine such that it understands the distinction between preferences and desires. If it can’t distinguish between the two (as healthy, self-aware humans can), then a heated emotional quality gets added unhelpfully to the former, and the machine’s task fulfilment becomes impaired. It may seem a subtle distinction, but I think it’s worth making. The Buddhist goal of desirelessness doesn’t connote a state beyond any opting for preferences, even if those preferences can be substituted or their indulgence postponed. So, preferences give us a largely non-emotive means of accommodating innate (pre)dispositions, and which the machine itself must surely have too if it is to be (effectively) a human replicant.

    “Even if we eventually come to have a perfect understanding of the neuroscience of suffering in humans and animals, would it make sense to impose that on artificial intelligences?” – Well, we ‘impose’ it on our offspring! 😉

    • Steve Morris says:

      I haven’t heard Buddhists talk about preference vs desire before. Isn’t desire just a stronger version of preference?

    • Understood on your skepticism of non-biological intelligence. I’ll leave that one alone since it’s a discussion all in itself.

      I totally understand and agree with the rest of your second paragraph, but I think we have to back up and remember that all of that has value because we suffer. If we didn’t suffer, the need for a lot of it seems like it would evaporate.

      On hierarchies, I definitely think they would need to be there. We have our own complex hierarchies of preferences and desires.

      On the preference / desire distinction that you’re making, would it be accurate to say that preferences are scenarios we want without deeply visceral intensity, while desires are scenarios we want with that intensity? Just making sure I understand the distinction. If so, certainly preferences are easier to change than desires.

      “Well, we ‘impose’ it on our offspring!”
      Hmmm. Well, nature imposes it on our offspring, but I suppose you could argue that by deciding to have children, we’re essentially imposing it on them. Of course, if we don’t have children, there’s no one to avoid imposing on.

    • Sorry, I missed your and Steve’s exchange on preferences and desires, and see that you had already answered that question.

  5. john zande says:

    The Nuffield Report on Suffering describes suffering as: A negative emotional state which derives from adverse physical, physiological and psychological circumstances, in accordance with the cognitive capacity of the species and of the individual being, and its life’s experience.

    Provided it has a level of autonomy, I think a machine can meet all these requirements.

    • Interesting. I hadn’t heard of Nuffield until now. Thanks!

      I think the key question I’d have for their statement would be, adverse to what? For living organisms, it’s understood that it’s adverse to their survival, wellbeing, and other instinctive needs.

      But a machine wouldn’t have any of that (unless we engineer it to be there). So would seeing their engineered goals unfulfilled constitute “suffering”? Particularly if they don’t have a lower level aspect of themselves continuing to fire intense motivation impulses that can’t be fulfilled?

      • john zande says:

        A machine can experience damage (ruined processors, loss of memory) and still function, so it can be argued that it is capable then of recognising (experiencing) negative circumstances. Can we equate this with something akin to anxiety, or even fear? Perhaps. I guess it depends entirely on its processing power.

        • I think the distinction is that I can’t turn off pain or sorrow, even if I’m in a situation where they’re not useful.  For example, if I’ve just broken my arm, it’s adaptive for me to feel it breaking so I know that serious damage has happened.  But if I’m in a situation where it might take a while to get it tended to, I can’t stop feeling the intense pain.  It will continue to cause an aroused state in my mind, even though I can’t act on it, interfering with my ability to think clearly.

          If a machine sustains damage, it makes sense for its central processors to receive signals about that damage.  But once it understands that it is damaged, and has marked that limb as unusable for the time being, its processing from that point until repairs can be effected would be relatively free of the ongoing alarm and resource depletion that biological entities experience.  Unless of course we engineer it to work like biological entities.

          • john zande says:

            All true, but if the damage was sufficient to affect its capacity to perform tasks, then a bottleneck of undone work would build up, and could we not call this mounting backlog a cause for anxiety?

          • john zande says:

            We could even quantify this anxiety by observing the varying priority the machine assigns to each unfinished task by the order it deals with the backlog when it can.
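
            This quantification could be made concrete with a small sketch (hypothetical; the tasks, priorities, and the “anxiety” metric are invented for illustration):

```python
import heapq

class Backlog:
    """Toy model: "anxiety" as the priority-weighted size of unfinished work."""

    def __init__(self):
        self._heap = []      # (negated priority, arrival order, task name)
        self._counter = 0

    def add(self, task, priority):
        # heapq is a min-heap, so negate priority to pop urgent tasks first.
        heapq.heappush(self._heap, (-priority, self._counter, task))
        self._counter += 1

    def anxiety(self):
        """Sum of outstanding priorities: grows as important work piles up."""
        return sum(-p for p, _, _ in self._heap)

    def work_one(self):
        """Deal with the highest-priority task in the backlog first."""
        _, _, task = heapq.heappop(self._heap)
        return task

b = Backlog()
b.add("restock parts", 1)
b.add("patient medication", 5)
b.add("status report", 2)
first = b.work_one()   # the highest-priority task is handled first
```

            Observing the order in which `work_one` drains the queue is exactly the kind of externally measurable prioritization being described.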

        • We might call the prioritization of resources a type of suffering.

          I guess where things might be different is when the robot concludes that it will never be able to complete its tasks.  It might patiently keep on the lookout for an opportunity to eventually complete those tasks, but it doesn’t seem like it would remain in an aroused, higher-energy state, since that would be useless.  A human, by contrast, might stay in that state for some time, assuming the tasks are something they care deeply about, even if they have no foreseeable opportunity to actually do them.

  6. Tom W. says:

    This is a tough post to respond to – I’ve tried three times and still am not quite satisfied with what I’m trying to convey, but I have to let it stand and hope you catch my drift:

    Chosen randomly (don’t have to watch the whole thing, only a minute or two):

    The airplane is suffering, calling-out to its caretakers “woop woop!”

    Ok, maybe not such a good example, but one nevertheless:
    One can be rendered utterly oblivious to the suffering of others – just look at those who work in the meat industry and mass-poultry farms.  Until we recognize suffering in all living creatures, I’m not so convinced we’ll recognize it in machines.  My point?  “Speciesism” – a ‘new-fangled’ word that puts a finger on a unique behaviour we’re capable of displaying: the idea that anything too different from us is somehow unable to suffer as much as we can, and so, relatively speaking, ‘doesn’t really suffer’.
    The trouble with machines, whether ‘truly’ AI or not, is that if we built them then we have the most ‘legitimate’ reason (the most excusable excuse) to deny the fact that they suffer.  (I’m reminded of Monty Python and the Holy Grail, when the knights leave Camelot disappointed: “It’s only a model…”)

    Some will say a machine is suffering, and those who deny that machine’s suffering will say of the others that they’re “just projecting” or some other quip.

    It’s like that philosophical thought-experiment where I tell you I’ve got a fire-breathing dragon in my garage, and you come see and say it isn’t there, so I tell you it’s an invisible fire-breathing dragon. You say you can’t hear it. I say it’s because it’s sleeping very quietly (this fire-breathing dragon turns invisible when it sleeps as a protective measure it evolved). You stomp through my garage and say you didn’t bump into it. I tell you I never claimed it was a big dragon, that you merely walked around it, etc. It could go on and on. (better version here? http://www.godlessgeeks.com/LINKS/Dragon.htm)

    The idea is that the one who affirms and one who denies machine suffering could be on either side of the ‘dragon’ argument.

    So while I can’t state my final position on whether or not machines could ever suffer, I will state that I don’t think such a phenomenon is a ‘good measure of sentience’.
    (Sorry, I realize now I don’t have much on the ‘utility’ of suffering in machines – but imitating Nature has served us well in many areas. If there’s a reason for ‘natural suffering’ then there may be an unseen ‘utility’ to artificial suffering too?)


    • Thanks Tom. Always appreciate your thoughts.

      Wow, I’ve never seen that movie so that video sequence was intense. No way I wasn’t going to watch the whole thing after the first minute.

      I definitely agree that there’s a lot of speciesism when it comes to recognizing, or failing to recognize, suffering. For instance, most people don’t think roaches or similar insects can suffer. All I can say is that when I spray one, it sure seems to suffer before it dies, to the extent that I usually try to quickly put them out of their misery. I don’t doubt that their suffering is less developed than ours, but it sure seems to be there.

      It seems like for us to recognize suffering in a machine, it will need to have desires and drives similar to ours, along with an inability to just let those desires go when they’re unsatisfiable. I do think it will eventually be possible to build a machine like that, but I’m having trouble seeing much pragmatic use for one. (The one exception might be machines designed to be as human-like as possible, which might include at least appearing to suffer when an actual human would.)

      • Tom W. says:

        “which might include at least appearing to suffer when an actual human would.” I rest my case.

        Yeah that movie was intense, and I haven’t seen it either – as I said, I just searched for ‘in flight engine failure scene’ and picked one where there was some semblance of ‘frantic’ beeping from the plane.

        Just watched another episode of “The Walking Dead” and funnily enough there’s the ‘simulated suffering’ of the actors, the ‘actual suffering’ of their characters, and the utter lack of suffering of the zombies. Quite a bit to parallel this discussion. If the series had been matrix-like with a sort of robotic armageddon – or ‘Terminator’ style even – the differences would have been minimal: either way, the ‘inhuman’ would be lumbering ‘sensationless’ creatures. A lack of suffering does de-empathize us from them certainly.
        So maybe, like why actors ‘fake’ suffering, robots faking suffering could have some kind of ‘social’ purpose. But again, that’s ‘fake’ suffering. I wonder: is ‘real’ suffering an unknowable-because-properly-subjective qualia?

        • “I wonder: is ‘real’ suffering an unknowable-because-properly-subjective qualia?”

          Eventually this gets to the problem of other minds. How do we know that everyone but us isn’t a philosophical zombie? We can’t really, not with absolute certainty. We take it as highly probable that other people have conscious experience because they look and act so much like us, and animals give off enough signals for us to conclude that they are at least sentient (as opposed to sapient).

          But ultimately, if a non-sentient creature had, as an adaptation, the ability to fake sentience to garner sympathy from us, we may never be able to know the difference. Even if we learn the neuroscience of our sentience, we can’t be sure another creature didn’t get there by a different architecture (see cephalopods).

  7. I’m with you on this. I don’t see why we’d program suffering, as such, assuming we could. I can see that a great part of what makes us human is the ability to suffer, but I don’t see any reason why we’d need to have that in AI. It could be that suffering is inextricably linked to other human qualities that we might want—I can see that argument—but suffering as such seems, as you’ve pointed out, sometimes unreasonable and unnecessary to our survival.

    That said, I can’t really see the benefits of hard AI creation, so I’m a bit biased. Cool robots. Let’s end it there. I’m not convinced of the possibility of hard AI either, but I keep an open mind on that.

    • Thanks Tina. I do think hard AI will eventually be possible, but it would require that we understand how human and animal minds work far more thoroughly than we currently do (which could be centuries down the road).

      I’m open to the possibility that it might turn out to be impossible, but it would require that the human mind operate in some manner that will forever be beyond technology, such as some variation of substance dualism or according to some unique physics, quantum or otherwise. But I think the probability that this will turn out to be the case is slim.

      That said, I totally agree that if hard AI is possible, by the time we know how to do it, we’ll probably have little incentive to create it, at least on any mass scale. I think we’d be more motivated to enhance our own minds, maybe replace failing portions of our brain, and eventually move our entire mind to a technological substrate.

      • I can see hard AI working in combination with biological components rather than having something computer-like, machinery in a metallic sense. 🙂 But who knows. Maybe I’m just being materialistic.

        Mind enhancement sounds pretty good…I could use that, for sure!

        I saw some Sci-Fi TV show about memory enhancement, but I can’t remember the name of the show. It was a series with entirely separate stories, almost like mini-movies, and if I remember correctly, there were six episodes available on Netflix at the time. Anyways, the episode I saw involved getting a brain enhancement that allowed you to access your remembered experiences in a sort of video playback. The plot revolved around a paranoid guy who felt his wife was cheating on him. She didn’t have the enhancement, but he did, and so he used that to access their arguments and scrutinize her every minute gesture, every aspect of her behavior. Not good, as you can imagine.

        • There are people who say that a mind requires a biological substrate. It’s possible. Moore’s Law appears to be petering out, so continued performance and capacity improvements will require exploration of radical new architectures, something that hasn’t had to happen for several decades. We may find that building something able to run a mind requires “wetware”, a substrate similar to evolved brain tissue.

          Maybe I’m being overly optimistic, but I tend to think we would eventually be able to reproduce that substrate. It may be centuries, millennia, or maybe even millions of years in the future, but there shouldn’t be anything fundamental preventing it. At that stage, we’d be talking more about engineered life rather than what we think of as robots.

          That TV show sounds like ‘Black Mirror’. Someone at work told me about it and I’ve been meaning to check it out. He described the episode you mentioned and another where a woman whose mind has been uploaded into a house computer is forced to anticipate and cater to the needs of her original biological self. Reportedly, Netflix has ordered a new full season of episodes.

          • Tom W. says:

            Hi Rung2diotmasladder: That show is called Black Mirror (https://en.wikipedia.org/wiki/Black_Mirror_%28TV_series%29). Very dark collection of stories. Hah! You beat me to it Mike!

          • Black Mirror—that’s it!

            I can see the engineered life, but I tend not to hear about that as much as robots or some sort of computer. I can imagine raging debates over whether the “wetware” counts as artificial or biological.

          • Definitely you mostly hear about metal robots. But I think that’s a relatively short term view. There’s a reason life evolved using carbon, oxygen, and hydrogen. Those elements are far more common in the universe, more readily available, than metallic ones. What would a more advanced civilization use?

            “Artificial” is a bit of a misnomer to me. It implies that artificial intelligence wouldn’t be real intelligence. I tend to think the real distinction may eventually be between evolved versus engineered intelligence, although over time I could see that distinction becoming blurred.


  8. agrudzinsky says:

    I agree with Tom W. Humans must be able to identify with the suffering being in order to empathize. The being must be similar to humans – have a similar appearance, facial expressions, etc. The less a creature resembles a human, the less empathy we feel. Mammals have bodies, limbs, facial expressions. We can empathize with a suffering dog or a horse. But who can feel empathy for a starfish, for instance? If I accidentally step on a slug, I see it shrink, apparently in pain. But my empathy does not go further than an intellectual recognition. I don’t feel compelled to ease its suffering. The airplane in the video does experience signals that could be called “suffering” by logical definitions, but I doubt anyone can truly empathize with an airplane or a car.


    • I think you’re right, although I think behavior is important as well. I actually can have a little empathy for slugs. They flee or curl up into balls when attacked, and have other behaviors that intuitively make me feel that something is having an experience, however limited. Although I’ll admit that my empathy for them is much lower than for mammals. I can’t say I have any empathy for starfish.

      A few years ago, there was an experiment where human test subjects were allowed to play with cute robots. Afterward, the subjects were asked to smash them. They refused. Eventually one person did, but only after the researchers threatened to destroy all the robots unless at least one was smashed.


      • Tom W. says:

        Reminds me of a seminal article I’d read in the New Scientist (seminal for me, that is, it turned me on to robotics and electronics when I was uh, 17 apparently – the issue date):
        I tore out the pages and still have them somewhere. That article awoke my passion/drive for physically realizing AI as opposed to some intangible software thing.
        Anyway, in that article Mark Tilden tells a story about how his little ‘critters’ would freak out his Mom, and he had to stuff them all in a box and put them under his bed before she came to visit his place. She swore her last words would be “No! Stop! Back!” Hahah.
        More seriously though, this relates to the discussion in that simple ‘behaviours’ actually (one could say) generate empathy in the observers. Another story was about a very ‘dumb’ robot that Tilden made and tested for the military in a minefield. It was designed to step on the mines. It had really long ‘elbows’ so it could continue through the field if it got flipped over. Anyway, 1, 2, then 3 of its legs got blown off, and it dragged itself along on its last leg, and the soldiers watching this ‘demonstration’ actually asked Tilden to ‘put it out of its misery’.
        Empathy is a tricky thing: How much of it is projection onto ‘the other’? How much of it is ‘genuine’ recognition of phenomena in ‘the other’?


        • Empathy is definitely tricky. It looks like our instincts for it can be hijacked fairly easily. Determining whether an entity can suffer, or is suffering, is a difficult matter. But I think the criterion I laid out in the post still applies. Did the mine clearing robot have any desire to survive beyond its mission to clear mines? I doubt it. That lack of desire, plus its ability to completely compartmentalize the data of its damage and continue to function, makes me think “suffering” is the wrong word. (This is before even getting to the issue of consciousness.)

          That said, I might feel differently if I actually watched it struggling to reach the next mine, my empathy and sympathy instincts hijacked by what looked like a creature in agony.


          • Tom W. says:

            I don’t think one could even argue that the mine-clearing robot had any desires whatsoever if given a minute to consider how it concretely operated (the ‘brain’ is just a delayed pulse going around a loop, which triggered each leg to move in a clockwise gait: front-right, rear-right, rear-left, front-left, rinse, repeat). But the fact that it kept moving after having lost 3 limbs was probably quite impressive to the human soldier witnesses.
            Heck, cartoonists and animators explore the realm of sympathy quite often in some of their creations – with some very surreal shapes and creatures yet they have to be ‘sympathetic’ somehow. Or even look at the very basic ‘bit’ from Tron – what elements/traits were used to make it sympathetic for the viewers?


          • You just made me think of the way Disney often portrays animals in its kid movies. ‘Honey I Shrunk the Kids’ has an ant come up and lick someone in a very puppy-like manner. Elements/traits for sympathy indeed!


      • agrudzinsky says:

        I find these very experiments with morality somewhat inhumane. But this seems to confirm my thought. People can empathize with something looking like a human or some “cute” creature that we can relate to.


      • agrudzinsky says:

        BTW, to feel empathy, the thing does not have to “process” any signals. It’s enough for the thing to just resemble a human in appearance. People don’t feel comfortable tearing up dolls or mutilating corpses that obviously cannot suffer or feel any pain or make any “me scared!” sounds.


        • Not sure I really see it with dolls, but I think mutilation of corpses is considered desecration because it feels too much like attacking the person (or pet or whatever) that they once were, but worse since they’re now incapable of defending themselves. A lot of this is simply our emotional selves being slow to let go of our feelings for the deceased, in other words, grief. (On the flip side, if the deceased was hated, desecrating their body emotionally feels like a blow against them, even if they never feel it.)


  9. Tom W. says:

    Talk about serendipity! I just came across this:

    (haven’t read it yet, but thought it might be pertinent!)

