Zombies discussing philosophical zombies

Click through for the full-sized version, and for a philosophical explanation if you’re not familiar with David Chalmers’s and Daniel Dennett’s positions on philosophical zombies.

Philosophy Humans – Existential Comics.

I can’t say I’ve ever been too impressed with the idea of a philosophical zombie.  I could maybe see a zombie existing that behaves identically to a human being, but whose internals are simply designed to fool people.

But the classic concept is a zombie that is identical to a human being in every way, right down to the neurological structure of the brain, but isn’t conscious.  In my mind, that concept only makes sense within the framework of substance dualism, of belief in a non-material soul.  If you have that belief, then the philosophical zombie is a meaningful concept to you.  If you don’t have that belief, then I can’t see how the concept has any coherence.

Perhaps the silliest part of the concept is the idea that, because we can conceive of it, it must exist.  Well, I can conceive of dragons, poltergeists, and perpetual motion machines, but I feel pretty comfortable that none of those things exist.  I think the entire history of humanity demonstrates beyond all question that we can conceive of all kinds of impossible things.

23 thoughts on “Zombies discussing philosophical zombies”

  1. Ha! That’s Chalmers (pre-haircut) and Dennett (pre-heart attack) in the cartoon. Hasn’t Chalmers moved on to Panpsychism and the like now, or does he still bang on about zombies?

    1. Good question. I know Massimo Pigliucci recently responded to a paper Chalmers wrote supporting the idea of mind uploading, which didn’t strike me as the Chalmers of old, assuming of course that I’ve understood his long-standing positions.

  2. Chalmers, at least, doesn’t argue that, because we can conceive of it, it must exist, so much as that the concept is logically coherent. He readily says a “zombie world” isn’t likely in reality. He uses the example of a “mile-high unicycle”: a logically coherent concept, albeit probably not a practical one. For example, you mention dragons. They don’t exist in our world (as far as we know), but are they a logically incoherent concept? Several SF writers have painted realistic portraits of “real” dragons (even a recent episode of Grimm gave them an interesting physiological basis), so they don’t seem logically incoherent.

    His point is that, if you disagree with the idea of a mile-high unicycle or a zombie world (or dragons), the onus is on you to provide a precise explanation of what makes them logically incoherent. Chalmers suggests the idea of everyone in China participating in simulating your brain, one neuron per person (a toy sketch of the setup follows at the end of this comment). They use radios to link up, and each person perfectly simulates the function of their given neuron.

    Chalmers asks whether a “group consciousness” arises from this (it’s a similar thought experiment to the “Chinese Room” one — whether the functional process can give rise to experiential consciousness). When the Chinese people charged with simulating the visual cortex do their thing, is there any experience of “seeing red” and, if so, where is the seat of that experience?

    That said, like you, I’ve looked askance at the idea of philo-zombies since I first heard of it. The thing that strikes me as missing from the description is the evolution of such a society over time. I can, just barely, accept the idea as coherent as presented at any given instant, but not as a system that evolves over time. Art, for one example, seems highly based on our experience of reality, so I question how a zombie world could have art. Or gourmet food. Or music and dance.

    (This post reminded me that I need to get back to Chalmers’ The Conscious Mind. I set it down a while ago, got distracted, never got back to it, and it slipped out of mind, so thanks for the reminder!)
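
    For concreteness, here is a toy sketch (in Python) of the China-brain setup mentioned above. It is purely illustrative: the wiring, weights, and threshold are made up, and real neurons are vastly messier. Each “person” takes the numbers radioed from the people upstream, applies their single neuron’s simple rule, and radios the result downstream.

        # Each "person" simulates one neuron: weight the incoming radio
        # messages and broadcast 1 if the total clears a threshold, else 0.
        def person_simulating_neuron(incoming_signals, weights, threshold=1.0):
            total = sum(s * w for s, w in zip(incoming_signals, weights))
            return 1 if total >= threshold else 0

        # Three "people" wired in a tiny chain: the first two feed the third.
        person_a = person_simulating_neuron([1, 0], weights=[0.6, 0.9])
        person_b = person_simulating_neuron([1, 1], weights=[0.7, 0.5])
        person_c = person_simulating_neuron([person_a, person_b], weights=[0.8, 0.8])

        print(person_c)  # Scale this up to a brain's worth of people: is there
                         # anything it is like to be the overall process?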

    1. Thanks for the clarification on Chalmers. In truth, I conflated Chalmers’s “can exist” with “exists.” My bad. Of course, this depends on what we mean by “can.” It may well be that a mile-high unicycle can exist, but I can’t see that a philosophical zombie can, unless some variation of substance dualism turns out to be true.

      My answer to the Chinese Room is that the entire room is the mind, a group consciousness, which, when I offer that to someone impressed by the CR, usually leads to an argument about the definition of “know,” as in who or what “knows” Chinese. Mary’s Room is a little more interesting. I think Mary lacks knowledge of the sensory experience of red until she leaves the room and sees it. I don’t think either of these thought experiments proves any of the ontological assertions some people make from them (i.e., disproof of physicalism).

      Despite liking his attitude, I’ve only read the occasional article by Chalmers. I’ve never tried any of his books. I’ve always found his thoughts about consciousness, when I’ve been exposed to them, more visceral than logical (my attitude toward Searle is about the same). But your insights and the fact that he apparently takes mind uploading seriously are making me reconsider.

      1. It may be that even “can exist” is too strong for what Chalmers is getting at. His only claim is that the idea is not logically incoherent. Those who find it so must account for the lack of coherence. (As I wrote earlier, for me it’s the idea of zombies over time. I think much of life is the way it is because we experience, so even if one willed a zombie world into existence, I can’t see it evolving as ours would.)

        Your answer to the CR… a characteristic of our consciousness is our ability to report on it, to report our experiences. If the CR does have an emergent “consciousness” capable of experiences, shouldn’t it be able to report those experiences? There seems to be no mechanism for that.

        What Mary’s Room tries to establish is that an experience of redness is a new fact in Mary’s existence and, hence, the facticity of experience. It’s also used to demonstrate that a perfectly detailed account of experience (Mary’s perfect expertise in visual systems and color) cannot substitute for, or even lead to, the experience itself. I’m not sure it can really be used as any proof beyond that.

        If Chalmers really does believe in the idea of mind uploading, that’s a huge disappointment to me. It seems at odds with my impression of his stance. Even more reason to finish his book!

        1. Hmmm. If you asked the room for a description of its experience, why couldn’t it respond? You might say it’s just the person doing the responding, but they couldn’t do it in Chinese without the manuals, any more than I could describe my experience without drawing on the language semantics stored in my brain (a toy sketch of that manual lookup follows at the end of this comment). The only difference I can see is in where the language information is stored.

          Don’t get too disappointed in Chalmers yet. I haven’t actually read his views on the matter. I’m basing it on Massimo Pigliucci’s anti-MU response to Chalmers’s position. https://selfawarepatterns.com/2014/10/21/massimo-pigliuccis-pessimistic-view-of-mind-uploading/
          (Although this just reminded me that I have the book with Chalmers’s essay in it. Maybe bedtime reading tonight.)
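
          To make the earlier point about the manuals concrete, here is a toy Python sketch of the rule-book lookup. The “rule book” entries are made-up placeholders rather than real Chinese; the point is only that the person’s task is purely mechanical.

              # A made-up "rule book": incoming symbol strings mapped to replies.
              RULE_BOOK = {
                  "symbols_for_greeting": "symbols_for_greeting_reply",
                  "symbols_for_how_do_you_feel": "symbols_for_i_feel_fine",
              }

              # The person inside just looks up and hands back whatever the
              # manuals dictate, with no grasp of what either side means.
              def room_respond(incoming_symbols):
                  return RULE_BOOK.get(incoming_symbols, "symbols_for_default_reply")

              print(room_respond("symbols_for_how_do_you_feel"))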

          1. So the question is, what would be responding? What if you asked the CR, “How do you feel today?” What would respond? What would have an opinion on its own feelings at that moment? What is it that emerges from the process that can report experiences and internal emotional state?

          1. Ah, well, ya can’t win em all. 🙂

            As we’ve said before, at least this is one question we will resolve one of these days (and possibly not even terribly long from now). I just hope I live long enough to see it!

        2. Oops, my bad, it’s not actually the same paper as the chapter in ‘Intelligence Unbound: The Future of Uploaded and Machine Minds’, but it looks like a very interesting picture of his views on the singularity and AI.

        3. “So the question is, what would be responding? What if you asked the CR, “How do you feel today?” What would respond? What would have an opinion on its own feelings at that moment? What is it that emerges from the process that can report experiences and internal emotional state?”

          I think this can be answered in multiple ways, depending on perspective. I could say the room is responding. I could insist that the person inside the room is responding, but if they are responding in Chinese, then they’re responding in a way that they couldn’t outside of the room.

          If you asked me the same question, who would be responding? Of course, I myself would be responding, but “I” am composed of several modules and processing centers. You could say that my intellect was responding, but it would be unable to do so without my language centers.

          If I were a split-brain patient, it would be the left side of my brain that responded. The right side would still be there and would have heard the question, but without the developed language centers of the left hemisphere, it could only respond non-verbally.

          I think those who see the Chinese Room as significant are recoiling from the reality that the borders of a mind are not as firm as we’d like to believe.

          1. No, but these examples (the CR and the philo-zombies) get at the idea of experience. You are mostly at the point of waving your hands and suggesting that something experiences, but since we don’t understand the nature of experience, we have no real understanding of how the CR could “experience” anything.

            There is a spectrum. Would we agree that my thermostat has no “experience” of reality, that it merely reacts per its “programming”? (A toy sketch of that kind of purely reactive rule follows at the end of this comment.)

            As the system becomes more complex, those on your side of the equation suppose that, at some point, there arises (as they say) “something it is like” to be that system. (Versus, there is not “something it is like” to be my thermostat.)

            Those on my side have a lot of trouble seeing exactly where that line could be. We have a suspicion that it may not involve mere complexity, but of course, we’re in the dark about WTF is going on here as much as anyone.
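
            Going back to the thermostat for a moment, here is a toy Python sketch of that kind of purely reactive rule (the setpoint and readings are invented for illustration). It maps a temperature reading straight to an action; there is nothing in it that could report how anything “feels.”

                # A purely reactive rule: reading in, action out, nothing else.
                def thermostat(current_temp_c, setpoint_c=20.0, hysteresis_c=0.5):
                    if current_temp_c < setpoint_c - hysteresis_c:
                        return "heat_on"
                    if current_temp_c > setpoint_c + hysteresis_c:
                        return "heat_off"
                    return "no_change"

                for reading in (18.0, 19.8, 21.2):
                    print(reading, thermostat(reading))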

          2. I’d agree with the thermostat not having experience, at least not to any degree that we’d be tempted to call it that.

            But once a portion of a system has experience (such as a communication room with a human in it), does it make any sense to say that the system overall doesn’t have experience?

            One objection I could have some sympathy for is that the Chinese Room is not an enduring system. It lasts while the person is in it, changes if a different person replaces them, and ceases to function if no one is in it. (This criticism also applies to many group-consciousness entities.)

            An answer to that objection might be that a human brain is constantly changing, and eventually ceases to function. It’s a time-delimited pattern, just a longer-lasting one than the Chinese Room pattern. Does the length of time matter? Suppose someone stays in the room for a couple of days. At that point, the pattern is lasting longer than many insects live. Do insects have experiences? I don’t know.

            All of which is to say that I don’t see that we have to solve the experience problem to regard the CR as having experience, unless of course we want to do it without a human in it; in that case, I would definitely agree that our lack of understanding of inner experience becomes an issue.

          3. You know, I’m not sure we’re perceiving the CR the same way. In my view, the person is largely irrelevant — changing the person shouldn’t change the room’s behavior at all. Obviously, lacking a person “kills” the room, but the point of having a non-Chinese speaker in a Chinese Room is to remove the personality and experience of that person.

            The person is just the “energy” (for lack of a better word) that drives the room. The person’s task is strictly mechanical.

            I’ve been thinking that something like Siri, or even Google itself, seems to be approaching the CR (and getting fairly close).

            Google, for example, is a mechanized process one can query for information. I assume Chinese speakers use Chinese. I also assume Google doesn’t experience anything emergent — it’s just a mechanized process that reacts according to its programming.

            At what point might something like Google begin to “experience” reality and be able to report its feelings?

            (I just asked Google: “How are you feeling today?” It just gave me a bunch of links. 🙂 )

  3. The zombie argument is circular. It states that “we can imagine a human-like being that acts like a human, but without consciousness.” At that point, the assumption that this is possible, that consciousness is something ephemeral layered on top that could just as well be missing, is built into the argument. Later on, it is derived as a result. So the dualism that comes out at the end was put in at the beginning.
    What if the premise is wrong, and a system like us must intrinsically be conscious? Then the whole argument collapses. It is a flawed argument.

  4. This is unexpectedly delightful. While I’m not actually familiar with the conversation in question, this does sound remarkably similar to my in-class discussions about artificial intelligence and empirical idealism. This is roughly how most of my student debates devolve, usually just in time for the end of class, when I run out of moderating steam.

      1. Whew… I don’t feel so dense now. I couldn’t make sense of that argument. I get the sense Nannus is right; something feels circular about it. But I can’t even figure out what the terms are, so I just don’t know.
