Emotional versus intellectual attributions of consciousness

Click through for the full-sized version and the red button caption.

via Saturday Morning Breakfast Cereal.

This SMBC reminds me of a concept that I’ve been trying to find a way to express, and a brief comment here seems like a good opportunity to do so.  We’ve had a lot of discussions about exactly when we might start to consider an AI (artificial intelligence) a fellow being.  This is a philosophical question with no right or wrong answer.

One of the things that’s become apparent to me over time is that there are two answers to it.  The first is the emotional one, which this strip satirizes.  We come pre-wired to see things as fellow conscious beings.  Many see this anthropomorphizing tendency as the basis for beliefs in ghosts, spirits, demons, gods, and other supernatural entities.  We often intuitively extend it to things like storms, cars, and existing computer systems.  In experiments, people have been reluctant to destroy cute robots after playing with them for a while, apparently because they intuitively felt the robots were conscious entities.

I recently listened to an interview with the director of Ex Machina, the new AI movie, who said he knew he wouldn’t have a problem convincing audiences that the AI in the movie was sentient.  He knew that emotionally, they’d be predisposed to accept it as such, at least within the make-believe framework of the movie.  (Having actress Alicia Vikander‘s lovely face on the AI probably helped tremendously.)

Of course, intellectually we know that things like storms, cars, and cute robots aren’t conscious systems.  Even though at times we emotionally feel that they are, we don’t intellectually give ourselves permission to regard them as such.  (At least most of us in the modern developed world don’t.)  I think this intellectual threshold, where we give that permission, is the second answer.  And, as before, it remains a philosophical threshold.

The other thing this strip brilliantly points out is that we have to be careful of being too guarded with that intellectual permission, too skeptical.  It’s the same intellectual skepticism that once allowed people to regard animals as not being conscious, and then to feel okay about mistreating them.

I think we’re still a long way from having a sentient conscious machine, but as we get closer, we have to be on guard against setting the standard too high.  We don’t want to find ourselves making statements like the one in the last caption.

19 thoughts on “Emotional versus intellectual attributions of consciousness”

  1. It’s a strange feature of being human that we’re at once a part of this world and yet think we can be separate from our awareness of it; conscious to unconscious. When Ava first draws something, she seems to have seen her own “wetware” by (transcending?) thought alone, and later takes inspiration from the trees displayed in her room, such that even though she’s not conscious of the world outside her room the way people are, she can somehow see more regardless.

    Also, I don’t think the problem is becoming too skeptical, but rather forgetting that meaning is deeply emotional. If we try to see ourselves as reality views us, clinically, so to speak, then we end up seeing no meaning within, either, which seems starkly counterintuitive to our desire to know ourselves.

    Anyway, the ending caused me to think that Ava wasn’t actually self-aware, because of how she ended up at the intersection. If she were indistinguishable from a human being, she would likely have acted differently with Caleb when they last saw each other, because her attachment would be sympathetic to his – when people care about each other, they can’t simply switch that off once it has satisfied their goals. But it turns out that what Nathan had said earlier was likely true, which makes the movie all the more interesting.

      1. I figured you hadn’t seen it yet, so I tried not to spoil much of anything important.
        The movie is certainly unique by today’s standards, and I’m sure you’ll enjoy it, so I look forward to your review when that happens.

  2. This brings me back to a thought I had about one of your Turing Test posts. In this internet age, how can we know whether we are interacting with conscious entities or machines? I relate to you as if you were a human, even though I have never met you, never seen a photo of you, and don’t even know your gender for certain (although the name Mike makes me think you are male – yet it might be short for Michaela or something). In short, I have constructed a mental model of who I think you are, and I relate to that mental model as much as to your actual self.

    Our ability to attribute consciousness to abstract entities has great value, it seems, and the fact that we do the same to toy robots, trees, mythical beings, etc., appears to be a worthwhile trade-off.

    1. Hi Steve, good point; someone once told me there are fewer than a hundred WordPress bloggers in the world, and the rest is a put-up job so the company can eventually be sold for $1bn. 😉

    2. You have a mental model of me just as I have a mental model of you. (Attention schemas according to that theory of consciousness.) Of course, it’s possible for us to have mental models of minds that don’t exist (mythical beings, etc). The difference between the two is a mental switch we throw based on our theory of how the world is “out there.”

      On the trade-off, I think that, evolutionarily, a huge part of our minds is a socializing engine, most of which is devoted to modelling other minds. It’s a big part of the reason for our high intelligence, and it’s why we, as a species, can form far larger social groups than other primates. And that’s before we even get to modern civilized societies.

  3. This is close enough to the topic Tina raised recently about AI Ethics. I wrote a long, rambling, thinking-out-loud comment there that ends up being a good testament to my current feelings on AI. The link is at the bottom if you want the long version. The short version, I think, boils down to two things:

    Perhaps what we really revere in humans is our ability to create new things and to ask “Why?” (I didn’t know this, but a YouTube video I was watching last night said that even apes trained to use sign language never ask questions. They seem to have no “theory of mind” regarding others.)

    Humans have demonstrated their value over time, and that value is tied to our creativity, imagination, approach to science, and our thirst for answers. Maybe that’s what makes us valuable.

    The goal of research AI is to create beings with similar value. (The goals of practical AI, such as car navigation, are quite different.) The over-arching question seems to be: will they be partners (such as Commander Data on Star Trek), or will they be merely tools?

    Tina’s post invokes dogs, and dogs are unique, I think, in crossing that tool-partner divide. Horses, on the other hand, are tools. (Some tools are very valuable, and some — at least apparently — experience suffering, so ethical considerations apply.)

    My bottom line is that, if AI demonstrates creativity, imagination, and a thirst for knowledge, I’d be inclined to consider that a partner.

    http://philosophyandfiction.com/2015/05/01/ai-ethics/comment-page-1/#comment-1978

    1. It seems like there are a couple of thresholds here. One is the one you identify, when we have a fellow intelligent being. As I’ve indicated before, it’s largely a matter of how we choose to define intelligence, as well as terms like “creativity”, “imagination”, or “thirst for knowledge”.

      But the other threshold is when we have a fellow sentient being. A horse is not a fellow intelligent being, but it is a fellow sentient one. If I drive my car until it runs out of gas and fail to maintain it properly, I’m guilty of poor property management. If I ride a horse until it drops from lack of rest or nourishment, and fail to take care of it properly, I’d be arrested for animal cruelty. People feel affection for both cars and horses, but generally don’t intellectually give themselves permission to regard their car as a fellow being.

      What’s the difference? A horse is currently far more complicated than a car, but as self-driving cars continue to advance, that may eventually change. But I doubt that development alone would cause us to see cars as sentient. As I indicated in my comment on Tina’s post, I think the difference is in the programming, the instincts. If cars started caring about their own survival, showing fear and a capacity to suffer, then we’d probably decide they were sentient, and would start regarding mistreatment of them as inhumane.

      On your point about the difference between building a fellow being or a tool, I doubt anyone would want a self-driving car to be a fellow being, so it seems unlikely that we’d ever give it that programming. Indeed, for most purposes, we want our tools to act like tools, and at most emulate a fellow being.

      I think an entity can be intelligent without being sentient. Just because sentience precedes intelligence in animals doesn’t mean it needs to in machines.

      1. “As I’ve indicated before, it’s largely a matter of how we choose to define intelligence, as well as terms like ‘creativity’, ‘imagination’, or ‘thirst for knowledge’.”

        Yes, and I think one difference between us is that you’re much more of a relativist than I am. I do believe we can find fairly good, reasonably objective, definitions of creativity, imagination, and a thirst for knowledge.

        Intelligence, though, is a general term more prone to “depends on what you mean.” (Which is why I didn’t mention it at all in my comment… too prone to definition!)

        “If I ride a horse until it drops from lack of rest or nourishment, and fail to take care of it properly, I’d be arrested for animal cruelty.”

        In our culture at this time, that could happen, but that’s not been a universal view even within our culture. On the flip side, there are those who feel horse-racing is cruel and should be stopped. Some even feel any animal “servitude” is morally wrong.

        These, to a large extent, are social mores (some cultures consider dogs food; some consider beef not a food). It may be that, as we come to understand life better and better, they will become strong moral issues. Many already feel they are, but their position is, in part, a strictly emotional one.

        I think a key distinction is that dogs, horses, cars, and many other things are tools rather than partners. (As I said before, dogs do cross that line to some degree, more so than any other animal.) As such, there are a variety of ethical considerations, from property-rights concepts to the ethical treatment of living beings.

        It may well be that society needs time to figure out how a new thing fits into the bigger picture. A friend of mine, in a recent discussion, said he suspects we’ll get it wrong before we get it right. History would suggest he has a point!

        “A horse is currently far more complicated than a car,”

        Complicated in what sense? We can’t build a horse from parts, true, but we’ve been rearing them for a long time, and people who’ve worked with them a long time understand them very well.

        Is a laptop more or less complicated than a horse? How about the entire internet? Complexity is certainly a factor, no mistake. We don’t fret at all over killing bacteria, and few trouble themselves over insects. Many (myself, for instance) see birds and fish as food, not pets. Most of us eat pigs, cows, and sheep (and deer), and it may be that we don’t eat horses because they don’t taste that good. We do currently feed other animals horse flesh.

        At some point a culture draws a line: Nah, too cute to eat. (Or too sacred or too human or too useful or too something.)

        There is also the idea of suffering. Something that is capable of suffering, or seen as suffering, seems to demand an empathetic response, although that response varies considerably among the population.

        “On your point on the difference between building a fellow being or a tool, I doubt anyone would want a self driving car to be a fellow being,”

        Absolutely! But when discussing AI ethics, we’re not talking about machines that are clearly tools. There’s nothing very interesting in that discussion. It only gets interesting when talking about potential partners — machines that could arguably deserve — if not demand — parity.

        “I think an entity can be intelligent without being sentient. Just because sentience precedes intelligence in animals doesn’t mean it needs to in machines.”

        Which takes us back to trying to define intelligence. As we debated at length recently, there is an unresolved question about how much sentience and intelligence may be intertwined or to what degree “emotions” (such as desire) might come along with it.

        You don’t seem to agree that’s really possible whereas I do. Let’s just leave it at that! 🙂

        1. You’re right that I’m a shameless relativist when it comes to definitions. I think it’s interesting that you go on to make the case that our intuitions about sentience are relativist. Not that I disagree with your points along those lines. Consciousness is in the eye of the beholder. All we can do is speculate about when our culture may conclude it’s there.

          “You don’t seem to agree that’s really possible whereas I do. Let’s just leave it at that!”
          Well, I’d be happy to leave it alone, except that’s a strawman version of my view. My perception is that our actual disagreement on this is that you think it’s more likely than I do, with neither of us holding simplistic absolutist positions. If so, I’m good with leaving it at that.

          1. “I think it’s interesting that you go on to make the case that our intuitions about sentience are relativist.”

            I absolutely acknowledge that many things are considered relative to their societies. That doesn’t mean I think that’s right or that I agree with it!

            To the extent you’re a “shameless relativist” I guess I’m a “staunch objectivist.” I believe in objective truths, plus I think they’re important.

            “Consciousness is in the eye of the beholder. All we can do is speculate about when our culture may conclude it’s there.”

            Which is a lot of what we’re doing here, and such speculation can lead to stronger ideas — even somewhat objective ones. The Turing Test is one stab at a semi-objective approach. I mentioned the elements of creativity and imagination and curiosity as possible signs of higher intelligence.

            I’ve joked (but been somewhat serious) about how when a machine writes a song or novel that engages and moves me, then I’ll believe it’s truly intelligent.

            When I said, “You don’t seem to agree that’s really possible whereas I do. Let’s just leave it at that!” You replied:

            “Well, I’d be happy to leave it alone, except that’s a strawman version of my view. My perception is that our actual disagreement on this is that you think it’s more likely than I do,…”

            Isn’t that what “really possible” versus “not really possible” means? You’ve known me a while… do you really think I’d see your position as simplistic?

            That said, I did get the impression you didn’t give what I was saying much credence. For instance:

            “I think curiosity might be a useful trait for an AI, depending on its purpose, but I don’t think I’d agree that it’s an essential one.”

            “We obviously can’t rule out the possibility that curiosity is emergent at some level of intelligence, but I see no real evidence for it.”

            “But I’m not inclined to think it’s probable without some evidence.”

            Obviously we have no evidence regarding AI, but I thought I made some pretty good arguments, particularly with regard to curiosity. You didn’t seem to acknowledge them other than “can’t rule out the possibility,” so I was left with a strong impression that you didn’t think much of those arguments.

          2. “I believe in objective truths, plus I think they’re important.”
            As do I. Of course, one of those objective truths is that people often struggle to recognize the difference between what is objective truth and what is their strongly held preference.

            Thanks for assembling those quotes. Since you see them as evidence for how you stated my view, it seems clear we’re still not quite understanding each other. I’m really not interested in more debate on the merits here, only clarification. Here’s what I perceive to be the positions we’ve discussed (you may see more).
            1. Intelligence requires sentience and/or other attributes you list.
            2. Sentience and/or other attributes may inevitably emerge from intelligence.
            3. 1 and 2 are possible but unlikely. But we could elect to have intelligence and sentience in a machine.
            4. It is impossible for intelligence and sentience (and/or other attributes) to coexist in machines.

            I perceive you to be at 2. (Please correct me if I’m mistaken.) I’m at 3, although you seem to think I’m at 4. In fact, I see 4 as self-refuting, since intelligence and sentience obviously coexist in the biological machines we call humans.

          3. “Of course, one of those objective truths is that people often struggle to recognize the difference between what is objective truth and what is their strongly held preference.”

            Indeed. The difference I was pointing out is the weight and value assigned to subjective versus objective views as well as the potential for discovering them.

            “I perceive you to be at 2. (Please correct me if I’m mistaken.)”

            That depends a little bit on how you see emergence. In the sense that intelligence doesn’t have these attributes and gives rise to them, I’m more something like 1.5 rather than either 1 or 2.

            I suspect that these attributes, such as curiosity, may be intrinsic to intelligence. Not emergent so much as part of the package. (If you mean “inevitably emergent” then, 2 is basically correct.)

            “I’m at 3, although you seem to think I’m at 4.”

            Nope. I immediately identified your position as 3. 🙂

  4. Hi ‘SAP’, I’ve been looking at this post off and on for more than a few days and watched a few movies to see how I might feel about the various nonbiological characters. I look forward to ‘Ex Machina’ – thanks!

    (nice diversion from reading myself blind too)

    1. Hi amanimal. I’m looking forward to Ex Machina myself. Just discovered today that Amazon will probably have it for streaming in June.

      I’ve been reading a lot of sci-fi short stories lately. I’m finding them a nice diversion from the work stuff I’m having to read right now.
