SMBC: Let’s ask the aliens to explain consciousness

Today’s Saturday Morning Breakfast Cereal comic is pretty good, and related to our recent discussions.

Click through for the original to see the hovertext and Red Button bonus caption.

How would you have responded to Zorkrang’s initial question? (Assuming you weren’t more concerned about being naked and experimented on by an extraterrestrial.)

50 thoughts on “SMBC: Let’s ask the aliens to explain consciousness”

  1. • If you have to ask, you already know.
    • If you’re asking, that’s the answer.
    • Pay no attention to your thoughts about thoughts, about you thinking about thinking, about questioning your existence with a thing that must exist for you to question.

        1. I thought I had already replied to this but it must have dropped somewhere.

          To explain in more detail.

          Consciousness is the model of the world. When you – or Zorkrang and the human – are in a room (or anywhere else), your consciousness has a model of the room or wherever you are. You are interacting with the model of the room, not the room in itself. So the phenomenal room is consciousness as much as memories, private thoughts, and other seemingly internal mentation. It is what you are experiencing even if it appears external to you. It doesn’t really matter if the model of the room is an exact match to the actual room (if that means anything); you are still interacting with the model, not the actual room.

          Consciousness permeates not just the internal world but the external world too. When Zorkrang asks if consciousness is in the room, he has already acknowledged he is having the experience of a room.

          1. It looks like the spam folder ate the first one for some reason. I’m checking it daily, but usually only once a day. If a comment disappears and you don’t want to wait, let me know.

            So because any gesture would be made from within a conscious world model, pointing anywhere would be pointing at consciousness. Nice. (Although I’d say we’re interacting with the room, with the model guiding how we interact with it.)

            But suppose Zorkrang replied that he does have a functional world model, which includes a model of us pointing, but it only involves predictive modeling, selective focusing mechanisms, and utilization for decision making. If he asked what “phenomenal” meant, how could we respond?

  2. Does the human literally have a chip on his shoulder?

    I’d split “consciousness” into multiple dimensions, like awake-asleep, access-nonaccess, and – this one’s different because the second of the pair is not generally unconscious – introspective-extrospective perceptions. (According to spell-check, I just invented the word “extrospective”.) Then I’d just point to examples. The man’s confusion is probably about the introspective stuff.

    1. Not sure on the shoulder. I was more wondering what’s sticking out of his butt.

      I’m not sure if it’s the same concept you’re reaching for, but there is the word “exteroceptive”, which refers to perception of the external world, as opposed to “interoceptive”, perception of the inside of the body. Although maybe what you mean is both of those together. Any sharp distinction can get complicated because the same mechanisms used for introspection are thought to be used for theory of mind perception of others.

      1. No, not exteroceptive/interoceptive; even interoceptive senses like thirst are subject to introspection revealing complexity that in some sense “shouldn’t” be there. For example, you can feel something that you might call thirst even when your body is not water-deprived. You could probably train somebody to recognize the difference between some instances of merely-subjective-“thirst” and genuine cases of being water-deprived. In which case, you could construct an analog of my thought experiment of the hot-adjusted hand and the cold-adjusted hand sensing the temperature of a new object.

        If you thought that sensations were singularly purposed to detect particular external (exteroceptive) or internal (interoceptive) features of the world, then the fact that a single real-world temperature feels different to the two hands is a “fault” of our biology. Likewise the fact that you can “feel thirsty” when you have plenty of water on board. But if you did think what this paragraph began with, then maybe biology is smarter than you are.

  3. “I can’t really define it Zorkrang except to say that it is that source from which I launch every question, statement, command, or exclamation as I am doing right now when I say I can’t really define it. That is, it’s that source that convinces me that I am.”

      1. Yes, Jeremy’s caption 4 is a very good answer. And we’re not talking about the brain any more than talking about digestion is talking about the stomach.

          1. I knew that’d be your response. Yup! Consciousness is what the brain does. Consciousness is enough heavy lifting for my brain; I don’t need any more magic, and that does not make me a functionalist. But it may explain my flirtation with Putnam’s liberal functionalism.

  4. Pre-covid I was visited by an advanced mobile AI who was on the hunt for consciousness (it claimed not to be conscious) and asked me to define it. I said “Here’s an example often trotted out by philosophers: when we sustain damage to parts of our bodies, we often have an experience – a feeling, a phenomenal quality – that we call pain. When we are physically harmed, various neural and biochemical pathways are activated that produce behaviors such as withdrawal from the damaging stimulus and learning to avoid it – all things you are capable of, apparently. But, in addition, we have an experience that we want to avoid: pain. Now, looking at the neural goings-on you wouldn’t (nor can we) see pain, but believe me, it’s real enough, at least for us. And we very much care about not undergoing it.”

    https://naturalism.org/philosophy/consciousness/pain-vs-propensities-conversation-with-a-zombie

    1. Hey Tom,
      Good to see you here. It’s been a while. Don’t know if you remember, but we used to talk on Peter Hankins’ Conscious Entities blog.

      Nice dialogue! I do wonder if an AGI robot would recognize in itself what it hears humans talking about. Of course, that would depend on just how much like us we made them.

      1. Nice to meet again! I wonder at what age the concept of experience as an inner private phenomenon first arises for us as we grow up, and whether it ever arises for those in certain cultures. It seems obvious to most of us that we can’t point to consciousness as we can a chair but that it’s as *real* as a chair – hence the commonsense mental/physical divide. Perhaps it’s the arrival of simulated perceptual episodes for a system that would make consciousness as an inner (not pointable at) phenomenon real for it – something it would have a name for and talk about. Seems to me an AGI at our level would have to have such episodes. That this smart alien doesn’t have a concept of conscious experience is thus counterintuitive to me.

        1. A lot depends on what we mean by “consciousness”. If we’re talking about functional awareness of self and the environment, then he obviously has to have that. But if we’re talking about phenomenal consciousness with all the theoretical commitments that imply non-physicality, then I could see him not having that concept. It might depend on intuitions unique to our nervous system.

            Agreed, so I can see the alien might have a functionalist notion of consciousness but, like the illusionists, deny it has any phenomenology (what my zombie visitor did in the dialogue). But not to have *any* concept of internal processing that can be carried on independently of sensory input seems to me unlikely. Such processing seems a necessary capacity for a creature to be as smart as the depicted alien, and being that smart it would know it has that capacity and call it something.

          2. What would “functional awareness” of self and environment be like without phenomenal consciousness?

            In particular, how would it qualify as “awareness”? Isn’t this the p-zombie you say you don’t believe in?

            “It might depend on intuitions unique to our nervous system”.

            One of my cats injured a paw on a fence or a cat fight (not sure which). The cat limped, flinched and hissed when the paw was touched. Was the cat only “functionally aware” of the injured paw or did he experience pain in the paw? Did the cat’s intuition (mistakenly) tell him the paw was injured? Or, did his phenomenal consciousness correctly represent reality?

          3. I think the more important question is what does phenomenal consciousness (in the Block / Chalmers sense) add that’s missing from functional awareness? Functional awareness provides everything needed for cognition, decision making, self-report, etc. What do we mean here by “phenomenality”? If just appearance, then that’s covered in the functional account. If we mean intrinsic, ineffable, metaphysically private, and infallible, then why should we think that exists?

            I don’t think there’s anything different between us and a functional zombie. In other words, you can’t get the functionality without the impression of phenomenality. If you take a realist attitude toward phenomenality (again in the Block sense) then as far as you’re concerned, I think we’re all zombies with internal models saying we’re more.

            I think your cat experienced functional and normative pain. Hope he’s doing better.

          4. The problem I’m having is that “awareness” as in “functional awareness” implies phenomenal experience. So I can’t even guess what “functional awareness” actually is since the way you use the term it is an oxymoron. Why not just say measurement, detection, reaction, or something other than “awareness”?

            To the paw example. A functional zombie cat would register some data in its brain about the paw then remind itself to limp, flinch, and hiss if somebody touches the paw. Its model of the world does not require that it feel pain and, in fact, the feeling of pain would be added, unnecessary overhead like all phenomenal experience would be. The actual cat would feel pain in the paw and limp, flinch, and hiss if somebody touches the paw because its model, which includes pain, is phenomenal experience.

          5. Stipulating that words like “awareness” or “pain” can only refer to phenomenality (in the philosophical sense that few people understand) is simply begging the question. It’s like a Ptolemaic astronomer refusing to consider that words like “Earth”, “planet”, or “star” might mean different things from what they’re familiar with. If you’re unwilling to consider the alternative, then I can’t see anything I do with the words making a difference.

          6. Just tacking the word “awareness” onto what is essentially a measurement doesn’t make it awareness. If it could then all sorts of things like thermostats and motion activated sensors would be aware. It really is just an indication that we find it difficult to speak of mentality in terms that don’t already have mentality built into them in some way.

            No, I’m not willing to consider tripping a switch as awareness or that consciousness is nothing more than some set of magical algorithms.

          7. Putting “magical” in front of “algorithms” implies consciousness requires magic. Normal algorithms obviously don’t have any magic, so the assumption seems to be we’d have to add some.

            But we don’t learn to do stage magic by learning to do magic. We learn to do stage magic by learning to do illusions. If we pre-decide that what’s happening on the stage is real magic, then we’re going to reject any explanation involving the actual techniques a practicing magician might discuss. No technique, no recipe, no algorithm will suffice to produce real magic.

            But illusions of magic don’t require actual magic, just the right techniques, the right causal processes.

          8. Without specifying the causal processes (for consciousness), your statement is meaningless. I could even agree with it and we are miles apart in our views on this topic. Is there anything in the world that happens without the right causal processes?

            You argue against a view I don’t have – that consciousness is “intrinsic, ineffable, metaphysically private, and infallible”. My view is that consciousness is a physical model that represents the external world, the relationship of the organism to the world, and the interaction of the organism with it. Its peculiar qualities arise because the model works with living organisms – structures which dynamically renew and self-modify.

          9. Getting at the causal processes is what the posts on various functional theories are about. It’s why I explore global workspace, attention schema, predictive coding, higher order thought, and other theories. Expecting all that to be laid out in a single comment arguing for the overall approach seems pretty unrealistic.

            I’m onboard with what you describe as at least part of the answer. But if you’re not arguing for phenomenal consciousness in the Block non-functional sense, then I wonder what led to this particular discussion.

          10. Causality is a difficult topic by itself. If A causes B, which causes C, would it be correct to say that both A and B cause C? And what if X causes A and Y causes X? Eventually, as Julian Barbour has pointed out, everything causes everything.

            Part of the disconnect might be that I think consciousness can be both an illusion and causative. It isn’t an either/or. To use the desktop interface analogy – the desktop icon for a file is not the actual file but it is causally connected to the actual file in such a way that actions on it also cause actions on the actual file. The icons of consciousness are there to simplify reality sufficiently that an organism can act on the information it has. Consciousness forms a physical system that participates in the causal chain.

          11. Definitely causality can be difficult. For example, in your sequence, suppose B is something like a logical OR gate, with A and D as inputs. If A is on, it causes B to be on, which causes C. But if A hadn’t happened, but D instead, then the sequence in that instance would’ve ended up being D causing B causing C. But suppose both A and D happen, then to which should we attribute C? In a firing squad, who actually kills the prisoner?

            The disconnect, I think, might be from a misunderstanding of what illusionists say. Even the most hard core proponents (Dennett, Frankish, etc) don’t say the illusion isn’t causal, or isn’t mostly beneficial. Dennett himself uses the desktop interface analogy, calling his view one of a “user illusion”. The UI icons are very useful, as long as they’re used for their designed purposes. It isn’t until someone tries to use the icons to understand the internal workings of the computer that they become misleading.
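            The overdetermined OR-gate case above can be made concrete with a small sketch (a toy model with made-up names, not anyone’s actual theory of causation): a simple counterfactual test finds that neither A nor D is individually necessary for C when both fire, which is exactly the firing-squad puzzle.

```python
# Toy causal chain: inputs A and D feed an OR gate B, and C fires iff B is on.
def gate_b(a: bool, d: bool) -> bool:
    return a or d

def effect_c(b: bool) -> bool:
    return b

def counterfactually_necessary(inputs: dict, flip: str) -> bool:
    """Did `flip` make a difference? True if C occurred but would not
    have occurred had that one input been off."""
    altered = dict(inputs, **{flip: False})
    return effect_c(gate_b(**inputs)) and not effect_c(gate_b(**altered))

both_on = {"a": True, "d": True}
print(counterfactually_necessary(both_on, "a"))  # False: C still happens via D
print(counterfactually_necessary(both_on, "d"))  # False: C still happens via A
print(counterfactually_necessary({"a": True, "d": False}, "a"))  # True: A alone mattered
```

            On this naive counterfactual test, neither shooter in the firing squad individually “caused” the death, which is why overdetermination is a standard problem case for such accounts.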

          12. Jim,

            It’s one thing for a functionalist to take a narrow slice of the world and call it the “whole” of reality, but it’s quite another to incorporate contradictions and paradoxes as an acceptable feature of that world view. Isn’t this dichotomy just another version of dualism?

            But in contrast, if one sees phenomenal consciousness with all of the theoretical commitments that come with the physicality of a quantum system, then one can take that quantum leap, grow up, mature intellectually and quit whipping a dead horse.

  5. I find the ongoing discussion highly stimulating and intriguing, but unfortunately I can hardly keep up with it: I don’t have any ready answers and need a long time to figure out my positions.

    The answer to a question “What is X?” describes X by other entities A1, A2, etc., which are assumed to be known. One can now continue to ask “What is Ax?”, but at some point one must stop; otherwise one ends up in an infinite regress or a circle.

    There is nothing we know about more directly than consciousness. A definition of consciousness by the phrase “the thing that it is to be me” or “having subjective experience”, assumes that this phrasing is already understood. But to those who do not understand the concept of subjective experience, these paraphrases will be meaningless.

    One might resort here to the German philosopher Arthur Schopenhauer, who wrote:

    That which knows all things and is known by none is the subject. Thus it is the supporter of the world, that condition of all phenomena, of all objects which is always pre-supposed throughout experience; for all that exists, exists only for the subject. Every one finds himself to be subject, yet only in so far as he knows, not in so far as he is an object of knowledge. But his body is object, and therefore from this point of view we call it idea. For the body is an object among objects, and is conditioned by the laws of objects, although it is an immediate object. Like all objects of perception, it lies within the universal forms of knowledge, time and space, which are the conditions of multiplicity. The subject, on the contrary, which is always the knower, never the known, does not come under these forms, but is presupposed by them; it has therefore neither multiplicity nor its opposite unity. We never know it, but it is always the knower wherever there is knowledge. (WWI Vol.1, 1,§2)

    1. Thanks Karl.

      The statement that there’s nothing we know more directly than consciousness can be read in a couple of ways, and the difference seems important. One is that there’s nothing we know better than the impressions introspection gives us. The other is that introspection is a reliable source of information on the mind. The first seems undeniable. But cognitive science doesn’t appear to have left much room for the second to be true.

      Thanks for the Schopenhauer quote. I’m probably missing a lot in it, but it seems like the question is how much the knower really knows itself.

        Well, it depends on what kind of knowledge of the mind it is about. Cognitive science tells us a lot about what accounts for the functional role of mental states within ourselves. Here, essentially, the question is about how the brain performs a task – how it distinguishes stimuli, integrates information, produces verbal reports of experience, and so on. As soon as neurobiology indicates suitable mechanisms and shows how these tasks are accomplished, these issues are settled.

        However, cognitive science tells us nothing about why these extremely complex processes of information processing are connected with subjective experiences. One does not yet know how pain feels if one can only state what generates it and what it causes.

        In your post about Chalmers’s theory of consciousness you too acknowledged that “After all, we only ever have access to our own subjective experience. Everything beyond that is theory. Maybe we’re letting those theories cause us to deny the more primal reality.”

        In the 18th century, the German philosopher Gottfried Wilhelm Leibniz put it this way in his famous mill parable:

        “[…] supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.” (Monadologie, §17)

        1. It is true that science has made a lot of progress on functionality, and continues to do so. And that it has made no progress on anything beyond that functionality. The question is what lessons to draw from that.

          One is that we just continue to be stymied by this mysterious, indescribable, unanalyzable, metaphysically private, and infallible version of experience, and should therefore seriously investigate property dualism, panpsychism, idealism, or new physics.

          But the other possibility is that this other thing is an artifact of limitations in our perceptual and introspective functionality, a distortion in the self reflective mirror we have to assess our internal processing. Given all the cognitive science that already demonstrates that introspection is an unreliable source of information, this seems to me like a scenario we should thoroughly explore before the exotic stuff.

          Don’t get me wrong. If it turns out that the exotic stuff is the only path, or the most likely one, I’m willing to go there. (See my posts on quantum mechanics.) But I think dismissing the more grounded options is a mistake.

          In terms of that quote from my post on Chalmers, I think the subsequent paragraph is important.

          Perhaps. In the end, all we can do is build theories about reality and see which ones eventually turn out to be more predictive.

          I think the mistake Leibniz makes, one just about everyone starts out making, is assuming that the components of the mind will include mind-stuff. But the components of a cell aren’t alive. They are what collectively makes life. Likewise, explaining consciousness means reducing it to its non-conscious components. If you’re trying to understand consciousness and you still have a part left labelled “consciousness”, your work isn’t done yet.

          1. But there is a difference. Reductionistic explanations deal with the observable behavior of physical objects, coming down to problems in the explanation of structures and functions. Complex processes can be explained by reduction to elementary processes following physical laws in a more elementary realm. Explaining life this way does not present irreconcilable difficulties of conceptual understanding, since it can be analyzed plausibly in terms of function; all that needs to be explained is how a system can reproduce and interact with its environment.

            It is different with the question of consciousness. For even if we knew everything about the facts of physics, and even the facts about dynamics and information processing in complex systems, the appearance of conscious experience would still remain unexplained. The point is that explaining consciousness is not just a matter of explaining structure and function.

            What Leibniz’s mill parable attempts to elucidate is that once we understand how the structures and processes in the brain factory work, we still do not know why the enlarged being the brain belongs to is an experiencing subject with an internal perspective. One might answer that it is the factory as a whole that gives rise to consciousness, and not any particular part in the factory, but this does not necessarily follow from the physical processes in the brain factory. Brain mechanics, on the other hand, runs like clockwork: it can be explained by physical causal relations without resorting to mental vocabulary.

          2. Karl,
            One thing I’d ask you to consider is why you think consciousness is different. Most of us understand that our outer senses are limited and can be wrong. But the idea that our inner senses might suffer from the same issue seems to be fiercely resisted. However, if those inner senses are just as subject to illusion and misdirection as the outer ones, then any explanatory gap that seems to arise has a straightforward explanation.

            David Chalmers, of hard problem fame, has himself recognized what he calls the meta-problem, the problem of why we think consciousness is such a hard problem. He admits that if the meta-problem can be solved without reference to phenomenal consciousness, then phenomenal consciousness is redundant. Of course, being Chalmers, he also considers a lot of other options. http://consc.net/papers/metaproblem.pdf

            That still leaves functional consciousness of course, which seems undeniable, but also presents no intractable difficulties.

            Someone recently pointed out to me that solving consciousness requires questioning our assumptions. I agree with this. But the most central assumption we should question is the intuitive judgments we reach about the problem.

          3. As for your reply below, you have misunderstood me. Saying that there is a difference referred to the fact that explaining the emergence of consciousness does not succeed in the same way as explaining how life comes about.
            Certainly I do not deny that our inner senses can deceive us and do not provide us with certain knowledge.

    1. But what is it to do those things? Maybe to perceive is to build a predictive model that is error-corrected with sensory input, to feel is to have an automatic draft evaluation used by reasoning systems, and to know is to have a predictive model with a high degree of accuracy.

  6. Zorkrang’s question was “What is consciousness?” (not “What does consciousness do?”). Consciousness has perception, feeling, and knowledge of the doing.

      1. Yes. “Self”, “consciousness”, “being”, “existence”, “that which is”, etc. can be used as different words that designate the same “thing”.
