Do qualia exist? Depends on what we mean by “exist.”

The cognitive scientist Hakwan Lau, whose work I’ve highlighted several times in the last year, has been pondering illusionism recently.  He did a Twitter survey on the relationship between the phenomenal concept strategy (PCS) and illusionism, which inspired my post on the PCS.  (I meant to mention that in the post, but it slipped my mind.)  Anyway, he’s done a blog post on illusionism, which is well worth checking out for its pragmatic take.

As part of that post, he linked to a talk that Keith Frankish gave some years ago explaining why he thought qualia can’t be reduced to a non-problematic version that can be made compatible with physicalism.  The video, which has Frankish’s voice but only shows his presentation slides, is about 23 minutes.

In many ways, this talk seems to anticipate the criticism from Eric Schwitzgebel that illusionists are dismissing an inflated version of consciousness, one that Schwitzgebel admits comes from other philosophers who can’t seem to resist bundling theoretical commitments into their definitions of it.  He argues for a pre-theoretical, or theoretically naive conception of consciousness.

Frankish discusses the problems with what he calls “diet qualia”, a concept stripped of the problematic aspects that Daniel Dennett articulates in his attempted takedown of qualia, a conception that in some ways resembles what Schwitzgebel advocates for.  But Frankish points out that diet qualia don’t work: any discussion of them inevitably inflates to “classic qualia” or collapses to “zero qualia” (his stance).

Just to review, qualia are generally considered to be instances of subjective experience.  The properties that Dennett identified are (quoted from the Wikipedia article on qualia):

  1. ineffable; that is, they cannot be communicated, or apprehended by any means other than direct experience.

  2. intrinsic; that is, they are non-relational properties, which do not change depending on the experience’s relation to other things.

  3. private; that is, all interpersonal comparisons of qualia are systematically impossible.

  4. directly or immediately apprehensible in consciousness; that is, to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.

Illusionists usually point out that qualia can be described in terms of dispositional states, meaning they’re not really ineffable.  For example, the experience of red can be discussed entirely in terms of sensory processing and the various affective reactions it causes throughout the brain.  And doing so demonstrates that they’re not intrinsic or irreducible.

Privacy can be viewed in two senses: as a matter of no one else being able to know the content of the experience, or of no one being able to have the experience.  The first seems like just a limitation of current technology.  There’s no reason to suppose we won’t be able to monitor a brain someday and know exactly what the content is of an in-progress experience.  The second sense is true, but only in the same sense that the laptop I’m typing this on currently has a precise informational state that no other electronic device has, a fact that really has no metaphysical implications.

There’s a similar double sense for qualia being directly or immediately apprehensible.  In one sense, it implies we have accurate information about our cognitive states, something that modern psychology has pretty conclusively demonstrated is often not true.  In the second sense, it says that we know our impression of the experience, that we know how things seem to be, which is trivially true.

So, seen from an objective point of view, qualia, in the sense identified by Dennett, don’t exist.  The failure of the diet versions can therefore seem very significant.

But I think there’s a fundamental mistake here.  The dissolving of qualia in this sense happens objectively.  But remember that qualia are not supposed to be objective.  They are instances of subjective experience.  This means that the way they seem to be, their seeming nature is their nature, at least their subjective nature.

Of course, many philosophers make the opposite mistake.  They take subjective experience and think its phenomenal nature is something other than just subjective, that its objective reality is in some way obvious from its subjective aspects.  But all indications are that the objective mechanisms that underlie the subjective phenomena are radically different from those phenomena, which is what I think most illusionists are trying to say.

Put another way, qualia only exist subjectively.  But they only need to exist subjectively to achieve the status of being instances of subjective experience.

And they only need to exist that way to be subjectively ineffable and subjectively irreducible.  Yes, the processing underlying qualia can be described in objective terms, but much of that description will involve processing below the level of consciousness, meaning that it won’t be describable from the subjective experience itself, or reducible from that experience.

Looking at it this way allows us to accept qualia realism, but in a manner fully consistent with physicalism.  In other words, there’s nothing spooky going on here.  In many ways, this is just an alternate description of illusionism, but one that hopefully clarifies rather than obscures, and doesn’t seem to deny our actual experiences.

Of course, a hard core illusionist might insist that subjective existence itself doesn’t count as really existing.  Admittedly, it comes down to a matter of how we define “exists.”  In other words, we’re back to a situation where there is no fact of the matter, just different philosophical positions that people can choose to hold.

Unless of course, I’m missing something?

91 thoughts on “Do qualia exist? Depends on what we mean by ‘exist.’”

  1. Schwitzgebel > Frankish, and Lormand >> Frankish. Over-inflating a concept leads to catastrophic failure. Inflate your tires properly and they will give you a safe, smooth ride.

    There are certain situations – for example when we feel the same bucket of water with a hand that has been cold and with another hand that has been warm – where we get different sensations from the same external reality. There are other situations where we get the same subjective sensation from different realities. That’s all we need to identify qualia. Privacy, in some strong sense? Reliability? Those are at least partly empirical questions. Some amount of verbal stipulation may also be required, but that is best postponed until after the empirical questions have been thoroughly explored.


    1. Thanks for the Lormand link! I hadn’t heard of him, but I’ll give it a read when I get a chance.

      If by privacy in the strong sense, you mean the content never being available to outsiders, I think science is already nipping at its heels. There have been various experiments where brain scans allowed researchers to make very simple proximal predictions about what someone was perceiving, although it’s very early days, and the content gets progressively more abstract and more difficult to map as we move away from the primary sensory regions. I do think we have plenty of empirical evidence to rule out strong reliability. Of course, that doesn’t mean either of those don’t exist in the weaker tautological sense I noted in the post.


  2. As you can well imagine, my preference is for something far more exotic. I rather enjoy Donald D. Hoffman. In this field we still have the luxury of believing more or less what we wish to, even if, as non-materialists, we may bring down the wrath of Dismal Dennett fans upon us…


    1. I know that would be your preference. At the risk of being labeled a DDF (dismal Dennett fan), I think that, in terms of believing what we wish to, to the extent that’s true, it’s untestable, but to the extent it is testable, it’s far more constrained than a lot of philosophers imply. Put another way, no one can disprove property dualism, panpsychism, or idealism, but no one can demonstrate them as propositions with meaningful consequences.

      Sorry! I’m unable to resist my inner skeptic. 🙂


  3. I agree with where your thinking is heading on this. Qualia are subjective in the sense that they exist to the thing that is experiencing them. Therefore instead of trying to argue whether qualia objectively exist from the viewpoint of an external observer, it is more useful to think about the nature of the conscious entity that subjectively deems qualia to exist, what criteria it uses to determine what exists to it and what it can do with that information. That in turn places requirements on the content and structure of qualia.


    1. I think that’s the thing. We can either say they exist or don’t exist, but the interesting question is why we experience them as existing. What is the cognition that leads to that step? It’s largely the point that Lau finishes on in his post.


  4. I think that we have not managed to leave square one quite yet. What we can say about qualia is that they seem to exist, just as the part of consciousness of which we are conscious seems to us to exist. It has been stated that we have no access to what is really happening in the universe (to the true reality of it) but only to a user interface that allows us to twiddle the “controls” without incurring any significant damage or effect. If so, then qualia may just be our user interface into the underlying blips and bleeps of our neurons and glia. Maybe it will one day be possible to break our programming sufficiently to understand just how our consciousness is really programmed.


    1. I don’t think investigations are that paralyzed. Cognitive neuroscience is making a lot of progress. The trouble is fitting that progress into our pre-scientific ideas of how things are, and qualia lie at the heart of that tension. It’s why it’s so easy for some philosophers to conclude they’re just not there, and for others that they’re beyond science.

      But I like the user interface metaphor. I think it gets at a productive way to look at them.


  5. How does one “argue for” … “a pre-theoretical, or theoretically naive conception of consciousness”? One is of course free to insist upon the self-evidence of the phenomenality of subjective experience. Adducing any reasons, however, for insisting upon that position will traffic in theory, however ingenuously naive. If there are no objective measures for individuating discrete instances or discrete states of subjective experience, nor for operationally distinguishing one such instance or state from another, then even a minimalist embrace of qualia realism is sadly underdetermined.


  6. Hi Mike,

    I used to think qualia as Dennett conceives of them might not exist, but some diet version of qualia might be recoverable.

    I no longer think so. I think attempting to recover any concept of qualia inevitably leads to confusion and incoherence. It is at best a label for a common cognitive illusion that introspective people (and only people — not animals) are liable to fall for.

    I think your mistake is where you think illusionists are making their mistake by failing to acknowledge that “This means that the way they seem to be, their seeming nature is their nature, at least their subjective nature.”

    The mistake is that to a strict illusionist, “the way they seem to be” is an empty linguistic construction identifying nothing. To go a bit meta, while there might seem to be a way they seem to be, the claim of illusionism is that this is mistaken. To see red is just to see red (in the same way that a robot might). It is not to experience a quale corresponding to redness. Humans fall for the illusion of qualia where robots and animals might not, only because we are prone to introspection and unnecessary conceptualising. Animals and robots just perceive sense data and don’t worry about questions of “what it is like”.

    But not even I would speak so in ordinary circumstances. Such tortuous meta-statements about consciousness probably do more to confuse than enlighten. So I think the best way to illustrate the central claim of illusionism is by reference to the inverted spectrum thought experiment.

    As I’m sure you know, the idea here is that there could be some difference between how you experience colours and how I experience colours, and yet we could never know, because we cannot express our qualia. Where you experience green and call it “green” I might experience what you would call “red”, and yet I would agree with you that it is green, because that’s the word I use for this experience.

    The claim of illusionism is that this hypothesis is not even false — it is incoherent. There is nothing to compare between your experience and mine, because the very concept of qualia is incoherent and meaningless. All we can say is that you see green and I see green, and presumably there is some analogy to be drawn between your mental state and mine. There is no “what it is like”. The danger of talk of “what it seems to be” is that it elides this point by making “what it seems like” a synonym for qualia.

    “In many ways, this is just an alternate description of illusionism”

    So I suspect this is not correct, although it depends. If you agree that the thought experiment is confused and meaningless, then you are an illusionist. If you think it is coherent, you are not.

    “of no one being able to have the experience. The first seems like just a limitation of current technology”

    I don’t think so. To an illusionist, there is no content to the experience that could be transferred or read, while to a phenomenalist the experience is genuinely ineffable, so there is no way to infer it from objective facts.

    To the illusionist, there is only the functional state. While we could say “the experience just is the functional state”, that would mean that the only way to communicate or read an experience from one mind to another would be to put the receiving mind in the same functional state, which essentially means becoming a copy of the original mind. So Nagel was right in a sense — we can never know what it is like to be a bat, without becoming a bat at least. But then we’re not “we” any more, so it’s still not “we” doing the knowing, it’s just another bat knowing what it’s like to be a bat.


    1. Hi DM,
      “The mistake is that to a strict illusionist, “the way they seem to be” is an empty linguistic construction identifying nothing. ”

      So, you’re zeroing in on a specific part of the language and perhaps missing the main point. Let me put it another way. Let’s say that it is an illusion. Is the illusion experienced? If so, then how is it different from the subjective experience itself? We might distinguish real subjective experience (RSE) from illusory subjective experience (ISE). What makes ISE different from RSE? What would we have to add to ISE to make it RSE?

      “To see red is just to see red (in the same way that a robot might). It is not to experience a quale corresponding to redness.”

      I suspect there’s some definitional variance here, but I think your statement, even for a strict illusionist, is wrong. To experience red is more than just to have the visual sensory input of red. It’s also to have a galaxy of associations triggered. For example, red is vivid not because of anything having to do with that part of the spectrum, but because of our affective reactions to it. So, you may not want to call it “qualia”, but it does feel like something to see red. (Unless perhaps you have a brain injury leaving you an akinetic mute.)

      On inverted qualia, I agree that it’s not a coherent thought experiment. It assumes our sensory experience of color can be separated from our memories of color and all the dispositional states that come from it. But that just means that qualia don’t exist separate and apart from the information processing, that they are information.

      You might say that’s not qualia. But then you’re saying that qualia are something more than just instances of subjective experience, that some non-physical property is an essential part. Many dualists, panpsychists, and idealists would agree with you. But my challenge to all of you is to define exactly what you’re then talking about.

      “If you agree that the thought experiment is confused and meaningless, then you are an illusionist. If you think it is coherent, you are not.”

      So, I agree the thought experiment is wrong. And I agree with the illusionists ontologically. I just think their terminology is counter-productive. Keith Frankish seems to spend most of his time trying to explain exactly what he means, and Dennett equivocates in ways that come across as vacillation. I don’t consider that a productive way to communicate, so I’m averse to the illusionist terminology. I think it’s obscurant and pointlessly polemical.

      “that would mean that the only way to communicate or read an experience from one mind to another would be to put the receiving mind in the same functional state,”

      We actually communicate our experiences all the time. And as I noted in the post, it’s not that hard to imagine future technologies that allow us to determine the content of an experience.

      But I think what you mean here is that the only way to have that experience is to become a copy of the original mind. I agree. But as noted in the post, that’s the same as saying that the only way for my laptop to hold the precise information state of whatever device you’re using to read this is for it to become a duplicate of that device. It sounds profound and metaphysical when we talk about it between us and bats, but it’s really the same thing. (And really just as meaningless as inverted qualia.)

      All of which is to say, I think we’re on the same page. The difference is I don’t think the illusionist terminology is productive. We can accomplish the same thing by pointing out that subjective experience is a construction of objective mechanisms that look very different from that experience, and do without people thinking we’re telling them they have no experiences.


    2. Qualia relate to our own internal mental experience. That puts them inside the strange, dark, complex world of neural firings and chemical fluxes in the brain, unique to each individual. That doesn’t make them not real, or an illusion. It just means that their point of reference, the dimensions in which they operate are crisply grounded in which sets of neurons fire in what patterns and with what future neural consequences, in this particular brain. That is every bit as solid a grounding as is real world physics of an individual physical object. The mental processes running in the brain are trying to make sense of their own operation, and partially succeeding.


  7. Hi Mike,

    Good points, and I agree that we are probably not that far apart.


    “Let’s say that it is an illusion. Is the illusion experienced?”

    I think there is some confusion about what is the illusion. The illusion is not exactly the perceptual/sensory illusion that you are experiencing red. The illusion is the cognitive illusion that qualia is a meaningful concept. So it’s an illusion the way the Monty Hall problem is an illusion — a trap for thinking that humans tend to fall for.

    “To experience red is more than just to have the visual sensory input of red. It’s also to have a galaxy of associations triggered”

    For a human. Not for a robot, presumably. So the functional states associated with your seeing red are more complex, agreed. But it’s still just a functional state, is the point.

    “But my challenge to all of you is to define exactly what you’re then talking about.”

    Keith Frankish does a pretty good job in the video you linked. He discusses Zero Qualia, Diet Qualia, and Classic Qualia, and argues that Diet Qualia reduce to either Zero Qualia or Classic Qualia. If you’re really on the same page with what Frankish actually believes (as opposed to what he says) then your qualia are Zero Qualia, which is to say “that which disposes us to judge that we are experiencing classic qualia”. But most people would say those are not qualia at all. If those are qualia, then philosophical zombies have qualia, which is an oxymoron. I think it’s better to say that qualia are nonsense, and there is nothing to distinguish us from philosophical zombies (which is equivalent to saying philosophical zombies are not in fact conceivable).

    “We actually communicate our experiences all the time.”

    I don’t think we do, or at least we can’t know if we do so successfully. All we can do is refer to experiences we hope are common. Communication relies on that commonality. It is not possible to communicate a novel or unfamiliar experience, except by imperfect analogy to other common experiences.

    “it’s not that hard to imagine future technologies that allow us to determine the content of an experience.”

    Sure, technology could in principle infer objective facts about the content of a mind — “This bat is aware of a moth it is detecting by ultrasound — it is located approximately 20 metres away at 20 degrees left of center.”

    But I find it impossible to imagine a technology that would successfully communicate unfamiliar experiences or sensations about what it is actually like (except perhaps by altering the receiver’s brain to match the subject’s), because to do so would require qualia to be contentful, and they aren’t.

    “do without people thinking we’re telling them they have no experiences.”

    I don’t think I’ve said that, and neither has Frankish. We do have experiences, but all there is to experiences are functional states. Experiences have content in this sense, but they do not have qualia (apart from zero qualia). Talk of qualia or raw feels or “what it is actually like” is confused insofar as it incorrectly presupposes that there is any content there to talk about or be communicated.


    1. Hi DM,
      In this discussion, I think it might help to make a distinction between the subjective and the objective. Frankish is right in terms of the objective, but I think the terminology he uses for the subjective is counter-productive.

      Subjectively, qualia, as instances of subjective experience, exist and are subjectively ineffable and subjectively irreducible. (As well as private and reliable in the trivial sense discussed in the post.)

      However, objectively they don’t exist. What does exist is sensory and affective processing, and the generated content accessible to the rest of the brain, along with introspective judgements about that content, all of which contribute to the system’s model of itself, a model that is effective but not accurate.

      Now, we can insist that the objective reality is all we should talk about. We can even do so provocatively, by saying that subjective experience, including instances of it, is an illusion, which gives people the impression we’re saying they don’t experience what they think they experience, and then leaves us spending a lot of time explaining that’s not what we meant.

      (I get the urge to go this route. A lot of philosophers (Nagel, Jackson, Chalmers, etc.) have conflated the subjective with the objective and concluded qualia can only be explained non-physically. But countering one bad move with another doesn’t equal effective communication.)

      Or we could just say that the subjective is not a reliable guide to the objective.

      On robots, no modern robot (that I know of) has affective states, so none process sensory information that way. But that could change in the future. If we have systems with guiding impulses that the system, after simulating consequences, can selectively override, we will have affects, including affects associated with sensed states.


      1. Seems to me a Roomba has affective states. It has guiding impulses [move around the floor in a pattern which covers as much as possible while avoiding obstacles] which, after simulating consequences [power is at 2%, so will run out soon] can be selectively overridden [time to stop wandering around and go plug in].



        1. I don’t think that counts. The Roomba has one reflex overriding another reflex. For it to be an affect, it would need to make use of the impulses in prediction. (Its own predictions, not those of its designers, although admittedly that’s a blurry line.)

          It’s worth noting the criteria Feinberg and Mallatt use for establishing affect:
          1. global operant learning
          2. value trade off behavior
          3. frustration
          4. self delivery of analgesic or rewards
          5. approaches reinforcing drugs/conditioned place preference

          (No, I’m not certain what 5 is either.) I’m not sure we’d want to see 3-5 in machines, but 1 and 2 strike me as very functional capabilities that, if present, might indicate some kind of affect.

          For a Roomba, 1 might be it learning that due to the uncertainties of its current environment, it should seek power at 10%. 2 might be making an exception because it knows it’s almost done and knows it’s near its charging station.


          1. 3 might happen if someone drops a backpack in front of the charging station.

            But I don’t see how learning is necessary for affect, so I’m not convinced 1, 4, and 5 are necessary. To me, affect is a response that has global effects, specifically, effects on other behaviors which are competing to, um, behave. The roomba example would be better if it showed specifically different options of behavior, like general cleaning mode vs. spot cleaning mode vs. light cleaning mode. Which one happens depends on current priorities, and low power would affect all of them.

            As for the blurry line between its own predictions vs. its designer’s, I would say that line is so blurry that it’s pointless to make that distinction. So stop it, unless you want to defend it. En garde!



          2. “3 might happen if someone drops a backpack in front of the charging station.”

            Do you mean “frustrate” in this instance as impede? Or would a Roomba rev up its little motors and act out from a backpack being in its way?

            “But I don’t see how learning is necessary for affect,”

            Strictly speaking, it isn’t. It’s just that learning shows that more than just a reflex is at work.

            “To me, affect is a response that has global effects, specifically, effects on other behaviors which are competing to, um, behave.”

            That seems a bit vague. I think what’s missing is that the response serves as input into an action planning system. (Even if the planning is about what to do in the next second or so.)

            “As for the blurry line between its own predictions vs. its designer’s, I would say that line is so blurry that it’s pointless to make that distinction. So stop it, unless you want to defend it. En garde!”

            Are you saying that evolution determined what I’m going to choose for lunch today? Seems strange since I haven’t even figured that out myself yet. Just because there are blurry borders doesn’t mean there isn’t a coherent distinction.

            For a simple animal, the optimal response to a particular situation might be 55847596, but evolution might only provide 55847 and leave the organism to figure out the rest based on its experiences and perception of the situation. For a human, the response might be 78634990405847385050483740505678 with evolution only providing the 78634. Genetics constrains the choices, but underdetermines the final result.


          3. Frustration is the recognition that a goal is not being accomplished. If the recognition leads to global consequences (like shutting down some or all other systems until goal is reached), then that’s an affect.

            Of course I’m not saying evolution determined what you would have for lunch today. Likewise, the Roomba designers did not determine that the Roomba would stop exactly here today (because otherwise it would fall down the stairs). I’m saying affect, and consciousness for that matter, are about what things do, not who designed them.



          4. On defining frustration, what’s important is how the researchers testing for it defined it (emphasis mine).

            Among the vertebrates, only mammals and birds show behavioral contrast (degraded behaviors that last for some time after frustration, behaviors such as agitation, aggression, leaving, or stress).

            Feinberg, Todd E. The Ancient Origins of Consciousness: How the Brain Created Experience. The MIT Press. Kindle Edition.

            “If the recognition leads to global consequences (like shutting down some or all other systems until goal is reached), then that’s an affect.”

            The typical usage of “affect” is to refer to the conscious experience of an emotion, feeling, or mood. These do affect subsequent actions (hence the name) but there’s an evaluative volitional aspect that I think you’re overlooking. Without it, I’m not sure how we have anything more than a reflex.


          5. The “conscious experience of emotion” is Dennett’s “then what happens”. It’s the interoceptive experiences that follow whatever generated the systemic effect, the experience of “degraded behaviors that last for some time after frustration”. I see the “evaluative volitional aspect” as one of the effected behaviors, namely making the instigating episode more memorable and tied to a good or bad result.



      2. Hi Mike,

        I don’t know what “subjectively exists” in this context really means if not “illusory”. If something which does not objectively exist appears to exist to the subject, then that’s an illusion. That’s just what the word means. Perhaps your talk of “subjective existence” would go down better than talk of illusions. Perhaps not. Either way we’re probably going to alienate and confuse people. I think that it’s just one of those things that’s hard to communicate. Almost ineffable, I’d be tempted to say!

        What I think a robot is lacking to “experience qualia” is the cognitive capacity to reflect upon and second-guess its own cognition and sensory apparatus. I think the affective states, triggered associations, etc. are a red herring. Although these certainly play a role in our experience of sensations, I don’t think they’re required for what we would perceive as qualia. For instance, someone with total amnesia who has no associations would still presumably report experiencing qualia. Even if on medication which numbed affect, I would say.

        An agent that just responds to sensory input without philosophising about it does not fall for the qualia illusion, and so has no qualia of any sort (because the only concept of qualia that really makes sense is zero qualia — that which disposes us to report experiencing qualia). I wouldn’t say it doesn’t have experiences or is not conscious (depending on the agent), but I would say that it doesn’t have any sort of qualia problem.


        1. Hi DM,
          Definitely, as I noted in the post, this can be seen as just another way of describing illusionism. Does it go down better? It seems to with most of the non-illusionists I talk with. Of course, to a non-physicalist, it’s still insufficient, but that’s going to be the case either way.

          Presumably you’re not saying it’s impossible in principle for a robot to experience qualia, just that modern robots, or any we’re likely to engineer soon, won’t? Even if we want to regard subjective experience as an illusion, there’s no reason in principle a robot couldn’t experience that illusion, one physical computational system reproducing the capabilities of another one.

          Someone with total amnesia and no associations would be unable to perceive anything (perception is prediction, using learned and innate associations), or be able to communicate anything about it (language is a galaxy of associations). I’m not sure it’s meaningful to talk about them having any experiences.

          That’s not to say that someone can’t be dramatically impaired as far as those associations go. I mentioned akinetic mutes above, people with injuries that largely preserve their sensory associations but few, if any, affective ones. It’s an interesting debate whether they are actually phenomenally conscious. My view is there’s no fact of the matter. Consciousness is in the eye of the beholder.

          “I wouldn’t say it doesn’t have experiences or is not conscious (depending on the agent), but I would say that it doesn’t have any sort of qualia problem.”

          Ok, if they have experiences, then presumably they’d have individual instances of those experiences. And those experiences would be from its perspective and dependent on the way it processes information, in other words, subjective. So we’d have instances of subjective experience. What is the difference between that and qualia, which are normally defined as instances of subjective experience?

          Are we just talking about an objection to the word “qualia”? Or are we talking about objections to the theoretical commitments some philosophers have heaped on them?

          Liked by 1 person

          1. Hi Mike,

            Your way of talking may go down better, but it may be insufficiently radical. I worry that people who accept your way of talking have not had their intuitions challenged to the extent that they need to be, and are left more or less where they started — with an incoherent model of consciousness.

            > Presumably you’re not saying it’s impossible in principle for a robot to experience qualia

            Depends what you mean by experiencing qualia. As an illusionist, I would say that nothing at all can experience qualia. If you mean that a robot could fall into the cognitive illusion where it believes itself to be experiencing qualia — then sure, that could happen with a sufficiently advanced robot.

            > Someone with total amnesia and no associations would be unable to perceive anything (perception is prediction, using learned and innate associations)

            I don’t think I agree. I think someone can have total amnesia and be able to perceive things just fine without associations to memories or much else.

            > What is the difference between that and qualia, which are normally defined as instances of subjective experience?

            Qualia are normally defined as ineffable and private and all that, and furthermore they’re not simply subjective experience but the “raw feel” of a basic sensory perception. Without all that, all we have is “that which disposes us to judge we experience qualia”. If something is not equipped to make such judgements, then there is no sense in which it experiences qualia, simply because it doesn’t perceive itself as experiencing qualia. If it weren’t for all our human confusion about qualia, there would be absolutely no need to introduce the concept of qualia if we were only interested in understanding the consciousness of non-human animals. All such an agent has are the remaining functional sides of sensory perception that it is equipped with. That may be enough to constitute a genuine, rich, conscious experience (at least in my view) but qualia simply aren’t in the picture.

            Liked by 1 person

          2. Hi DM,
            Well, hopefully you know me well enough to know I don’t have a problem challenging people’s intuitions when it’s called for. But challenging people’s intuition, in and of itself, isn’t necessarily a virtue. Panpsychists and idealists challenge intuitions too.

            I think this comes down to the old dilemma about whether we should consider a table to exist, because we know it’s only a collection of atoms. We could say the table is an illusion, but if so it’s a very useful one.

            You seem to be doing some hair splitting between qualia and experience. I’m not going to repeat the points I made above, but I think you need to carefully consider your definitions in this area.

            Frankish, to his credit, doesn’t do hair splitting. He regards phenomenal consciousness as illusory, period. In his view, there is only access consciousness. (Although note the lack of clarity in the title of his Aeon essay, a common sin from the illusionism camp. (And yes, I know the common excuse that editors choose titles, but it’s a myth that writers have no say-so in them.))
            As I noted above, objectively I think he’s right. But I don’t think simply denying phenomenal consciousness is productive. I think the right way to think about it is as the subjective side of access consciousness, access consciousness from the inside. In that sense, every aspect of it can be objectively studied.

            Liked by 1 person

          3. I would say it’s worth challenging intuitions if those intuitions are leading people astray. I really feel that (as with quantum mechanics) if the truth about qualia/consciousness doesn’t seem profoundly unintuitive to you then you probably don’t understand it correctly. You and I have similar pictures of what is going on (not fully in agreement, but close enough for present purposes), but I’m saying that your explanation of that view may go down easier because it is not being understood correctly. Perhaps I’m mistaken in this, it’s just a worry I would have.

            > You seem to be doing some hair splitting between qualia and experience.

            No illusionist really wants to say that consciousness/experience don’t exist. We do want to say that qualia don’t exist, because “qualia” has a very specific meaning. There would be no need for the word otherwise. The term has a technical philosophical meaning and was coined to express the idea of an ineffable private content to the “what-it-is-likeness” of some experience. It’s jargon. Most people don’t know what it refers to. It does go by other names (“phenomenality”, etc.), but it’s only this concept that illusionists want to eliminate. Not experience or consciousness itself. So that hair needs to be split, or else people will think that we want to say that consciousness does not exist. If Frankish does not adequately split it, then I would hardly say it’s to his credit.

            I don’t much like the title of the Aeon essay either.


            Liked by 1 person

          4. When it comes to intuitions, I actually have bigger fish to fry, namely that there’s any fact of the matter for a lot of these things. You stated that qualia have a precise technical definition. I don’t think that’s true. The SEP article on qualia discusses at least four conceptions of it, including Dennett’s version. This seems to be true of most consciousness concepts. A lot of the arguments are definitional disputes in disguise.

            The fact is, consciousness is a protean concept, a hazy and inconsistent collection of capabilities. It can be defined succinctly, but only at the cost of precision. And it can be discussed with precision, but only with controversy, including systems we intuitively think shouldn’t be included, or leaving out systems we intuitively think should be.

            We process information in certain ways. As a social species, we intuitively look at other systems and project our way of processing onto them. When we’re looking at each other, that’s plausible (albeit far from error free), but the further we look away from systems like us, the less of a fact of the matter it becomes. In short, we use the word “consciousness” to refer to systems “like us”, but “like us” will always be an indeterminate notion.

            Under this viewpoint, the question of whether a machine can be conscious is a meaningless one. The only thing that matters is what capabilities the machine has or doesn’t have.

            Still think I’m insufficiently challenging people’s intuitions? 🙂

            Liked by 1 person

          5. Hi Mike,

            I agree with most of what you say here, but I still think we need to be able to say that qualia don’t exist. We can do so while saying that consciousness does exist. I think this is the best of both worlds — challenging intuitions re qualia while reaffirming that we are conscious.

            I concede the point that I over-hyped qualia as a clearly defined term. It really isn’t. But on the other hand I don’t see that much difference between the core ideas in the four usages in the article — they’re being used in different contexts and with different theories, but to me it seems that the meaning overlaps enough to support my point that the term “qualia” is more specific than “consciousness” or “experience”.

            Your strategy of saying that qualia exist subjectively may work also but I personally don’t favour it because (1) the idea of subjective existence here is questionable and (2) it may leave people confused and wondering about meaningless questions such as:

            1) whether the qualia they subjectively experience are the same or different from the qualia other people subjectively experience
            2) why the subjectively experienced quale of “Red” is like _that_
            3) … or whether the qualia could potentially have been different
            4) … or whether we need to posit new physical laws to account for how _these_ qualia correspond to _those_ functional states (perhaps resorting to panpsychism).

            These questions need to be “unasked” (in the sense of Zen and the Art of Motorcycle Maintenance). That is what is achieved by denying the existence of qualia. If you prefer to talk of subjective existence, I think you need to make this point clearer.

            Anyway, again, I think that no matter what way you want to go, this is going to be a tricky thing for people to swallow. You may be right that there are serious problems with the strategy of Dennett/Frankish, but I think the same is true of yours.

            Liked by 2 people

      1. I’m afraid I don’t have a technical or precise definition.

        But on functionalism (and illusionism, which is a variety of functionalism), all that matters for consciousness and cognition is what is going on from a functional point of view in the brain. In other words, the physical state of whatever chemicals or neurons you have swishing around or firing ought to be in principle abstractable out to a functional substrate-independent state, the functional state.

        So the functional state is just whatever is going on in the brain that would need to be reproduced in some other substrate to reproduce the same information processing activity and (we would say) experience. When I say qualia don’t exist and all there is is the functional state, I mean that when you have reproduced the functional state required for the functional/information processing side of what the brain is doing, there is nothing else required to produce the same experience.

        Liked by 1 person

        1. Fair enough. Consider the following suggestions related to your ongoing discussion with Mike.

          I suggest separating the concepts of experience and qualia. I suggest saying that some objective process (yet to be determined) constitutes an experience. A quale, then, is, objectively, something associated with that experience. As a specific example, the quale of a “red” experience is objectively somehow different from the quale of a “blue” experience. Now whatever objectively constitutes that difference, it seems to be subjectively available. As was said above, the subjective description comes down to an ineffable “raw feel”.

          Now I think, based mostly on Twitter discussions, Frankish and illusionists (like me) would say that experience happens, and it has qualia (in the sense I describe above), but conclusions about what qualia *are* based on subjective access are illusory. The subjective illusion is that a quale is a property of some “thing”, in the same way that color seems to be a property of an object. Seeing a quale as a property of a thing suggests that other things might have that same property, and so that same raw feel. So it would seem, subjectively to me, that my “red” might be the same or different from your “red”, even though the objective explanation of experience and its qualia says, no, that’s not really coherent.

          I’m going to attempt an objective explanation of subjectivity in a response to Wyrd, so you can look there if you’re interested.



  8. “Put another way, qualia only exist subjectively.”

    For me, that says it all. That’s what needs explaining. How any system can have subjective experiences of any kind. Deflate it all you like, it still defies any explanation.

    Liked by 2 people

    1. Obviously this is an area where we disagree since I think there are numerous plausible functional answers. You’ve noted before that you’d be prepared to accept some variant of dualism as the answer. Are there any others you’d see as plausible?

      Liked by 1 person

      1. I’m agnostic and have no idea. But “plausible” and “maybe” and “perhaps” are guesses, not answers or explanations.

        MAYBE subjectivity is what it’s like to be a sufficiently complex system, but we can’t account for that currently.

        MAYBE dualism of some kind obtains, but we don’t know that either.

        My bottom line: if physicalism is true, then there’s an explanation. Has to be, right?

        Liked by 1 person

        1. For physicalism, there does have to be an explanation for any concrete precise question. But I’m not sure the question, “How any system can have subjective experiences of any kind,” qualifies.

          Crossing the subjective / objective divide seems to require clarifications. Are we talking about the generation of the content? Why those contents are internally irreducible? Or are we talking about the utilization of the content? Perhaps the automatic reactions? And then the utilization of those reactions (new content) in scenario simulations? Or why there even is a system generating and utilizing the contents?

          Or something else? If so, then what precisely?

          Liked by 1 person

          1. Why is there “something it is like” to have a working brain?

            Unless you veer into some form of pan[proto]psychism, presumably there is only “something it is like” for brains, not any other system we know.


            I think that’s a pretty simple, very concrete precise question. It’s the answer that’s driving everyone nuts.

            Liked by 1 person

          2. Actually, I think the “something it is like” phrase is just as vague, really just a synonym of the earlier language. It only seems precise until we try to agree on what it means. One interpretation: why do brains operate the way our brains do? Of course, they only do to varying degrees, and all for evolutionary reasons.

            Liked by 1 person

          3. “I think the ‘something it is like’ phrase is just as vague,”

            I dunno, man. I’m starting to see that sort of response as a defensive reaction to a challenging idea.

            To me, the “something it is like” to be human is a universal we all share. It’s only vague because it’s so all-encompassing and varied. How precise can anyone be about something so big it includes the taste of caramel and the pain of being alone? Love is equally difficult to define and easy to describe cases of. Most primal things are.

            “One interpretation: why do brains operate the way our brains do?”

            Talk about vague. In that sentence we can replace [brains] with anything that has a function. Why does anything operate the way it does?

            Specifics: On the one hand, all the systems for which (as far as we know) there is nothing it is like to be that system. On the other hand, the systems for which there is. And the determining factor seems to be the property: Has a brain.

            So far it’s been a hard problem. Along with a few others: What made the Big Bang? What is time? How did life start?

            But if physicalism is true, then there do exist answers to those questions. It’s possible the answers are beyond puny humans to find, but under physicalism they have to exist. (And they’re not at all dumb questions. More like fundamental ones.)

            Liked by 1 person

          4. “It’s only vague because it’s so all-encompassing and varied. How precise can anyone be about something so big it includes the taste of caramel and the pain of being alone?”

            Like any other problem, we do it by breaking it up into manageable chunks. In truth, Chalmers has already solved the hard problem, or at least pointed toward how the solutions will be arrived at. He did it in 1995, in his paper that coined the term, in the paragraphs immediately prior to that coining. By insisting that there’s some distinction between the manageable chunks and the overall whole, he preserved the appearance of an intractable mystery.

            Of course, he didn’t originate that problem. Gilbert Ryle addressed it in 1949, pointing out the category-mistake of a visitor to Oxford, touring the lecture halls, cafeterias, and offices, meeting faculty, students, etc, then asking, “But where is the university?” No matter how much extra touring or information was provided, the visitor could steadfastly insist that university-ness had not been accounted for. Looking at all the “easy” problems, but then insisting there’s still a separate overall hard problem, is the same category-mistake.

            Liked by 1 person

        2. “My bottom line, if physicalism is true, then there’s an explanation.”

          Physicalism is true, but only within its own context; and that context is derived from reality/appearance metaphysics. RAM is the ultimate arbiter; therefore, RAM is the explanation, not physicalism.


          Liked by 1 person

          1. I can’t agree with that, either. A principle I live by is: “It’s not a dumb question if you don’t know the answer.” (Also useful: “Dumb questions are much better than dumb mistakes.”)

            Liked by 1 person

        1. “Because there is no answer.”

          There is an answer alright; but that answer can only be derived from asking the correct question. And the first question should be this: What comes first in hierarchy?


          Liked by 1 person

    2. “How can any system have subjective experiences of any kind?”

      Maybe a system doesn’t have a subjective experience when it steps forward mechanically from a current state to a next state by taking account of external inputs.

      Maybe it does have a subjective experience when it also uses knowledge of where it is, where it has been, and where it could go in its space of possible states, and which of those states have good and bad consequences for it.

      Perhaps this could be stated as: subjective experience is actionable knowledge of one’s own position in valenced state space.
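
      This definition could be made concrete with a toy model. A minimal sketch in Python, with all names and the tiny state space invented for illustration: a purely reactive agent maps stimulus directly to action, while the “subjective” one consults a map of where it is, where it could go, and which of those states have good or bad consequences for it.

```python
# Toy sketch of the distinction above (all names and values hypothetical).

# A tiny state space: states and their valence (good/bad consequences).
VALENCE = {"safe": +1, "exposed": -1, "fed": +2, "hungry": -2}

# Transitions: (state, action) -> next state.
TRANSITIONS = {
    ("hungry", "forage"): "exposed",
    ("hungry", "hide"): "hungry",
    ("exposed", "eat"): "fed",
    ("exposed", "hide"): "safe",
}

def reactive_step(state, stimulus):
    """Mechanical step: external input maps directly to an action."""
    return "hide" if stimulus == "predator" else "forage"

def reflective_step(state):
    """Step using actionable knowledge of the agent's own position in the
    valenced state space: pick the action whose reachable next state has
    the best consequences for the agent."""
    options = {a: nxt for (s, a), nxt in TRANSITIONS.items() if s == state}
    return max(options, key=lambda a: VALENCE[options[a]]) if options else None
```

      On this sketch, `reflective_step("exposed")` prefers eating (valence +2) over hiding (+1): the action is chosen by the agent’s knowledge of its own position in the valenced space rather than by the stimulus alone.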

      Liked by 1 person

          1. The joy of playing with puppies. The awe of looking at Mt. Everest. The taste of cheese enchiladas. The smell of baking bread or fresh cookies. The sound of a David Gilmour guitar solo. The excitement of taking off in a plane. (The thrill of jumping out of one.) The humor of a good joke you’ve never heard. The fear invoked by a good horror movie. The satisfaction of finishing a really good book. The relaxation after a Jacuzzi. The pleasure of dancing or running. The stagefright of public speaking. The nervousness of a first date. What if feels like to figure something out or learn something. The colors of a rainbow or prism. The peace of dust motes floating in sunlight.

            “Subjective experience” is primal, irreducible and probably impossible to define, but it can be described by reference to what we all experience daily.

            Liked by 1 person

    3. Okay, ready to take a shot.

      Let’s say that an experience is an objective physical process. Let’s say that this process can be diagrammed as Input—>[mechanism]—>output. Let’s say that the output of this process is available as input to several other processes. So we have:

      Input1 —> [mechanism1] —>
      Output1 —> [mech2] —> Out2
      Output1 —> [mech3] —> Out3
      Output1 —> [mech4] —> Out4

      Okay so far?

      So we can consider the “system” to be all of the mechs together. Objectively, we can watch all of the pieces. But when we talk about the system as a subject, we are considering only the outputs of which the system is capable. As objective observers we can conceptualize the Output1, recognize that it is the causal result of seeing an apple of a specific color, shape, size, etc., at a specific position in space. But the subject only has its outputs. Out2 might be a name or phrase (“that apple there”), Out3 might be initiation of reaching out for the apple (which plan is currently actively suppressed by other mechanisms), Out4 might constitute a memory. Suppose all of this is recurring, so it’s a dynamic state of “seeing the apple”.

      So for a system to “have” an experience is for the system to have outputs in response to an input (Output1), where that input (Output1) constitutes a kind of representation. The quale of that experience is the object of the representation (the thing represented). The subjective “raw feel” of the experience is a reference to that quale.

      Well, it’s a start.
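
      The fan-out diagrammed above can be written down directly. A minimal sketch in Python, with the function names and the apple example carried over from the comment and all details invented for illustration:

```python
# Toy sketch of the fan-out above (all names hypothetical): one mechanism
# produces a representation (Output1), which several downstream mechanisms
# consume in parallel.

def mechanism1(input1):
    """Produce a representation of the stimulus (Output1)."""
    return {"kind": "apple", "color": "red", "position": (2, 3)}

def mech2(rep):
    """Naming output: a phrase referring to the represented object."""
    return f"that {rep['color']} {rep['kind']} there"

def mech3(rep):
    """Motor-plan output: a reach plan, currently actively suppressed."""
    return {"plan": "reach", "target": rep["position"], "suppressed": True}

def mech4(rep):
    """Memory output: store the representation."""
    return ("memory", rep)

def system(input1):
    out1 = mechanism1(input1)  # Output1, available to all downstream mechs
    return mech2(out1), mech3(out1), mech4(out1)
```

      The point of the sketch is that the subject only “has” Out2, Out3, Out4, and so on; the objective observer is the one who sees Output1 and all the mechanisms producing and consuming it.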



      1. “Let’s say that this process can be diagrammed as Input—>[mechanism]—>output.”

        Find me a process that can’t! As I mentioned to you a while back, your I→[M]→O is well-known to any analyst as the IPO model. As the “P” in the middle indicates, this universal generic model applies to pretty much any Process.

        So, yeah, okay so far. 😉

        “So for a system to ‘have’ an experience is for the system to have outputs in response to an input (Output1) which input(Output1) constitutes a kind of representation.”

        The computer I’m using, the dog I’m dog-sitting, and I, all “have outputs in response to an input” and we all have representations of some model of reality embedded. All three of us are processing information, and all three of us seem to have qualia as you’ve described them. (If I’m wrong on that last, what’s the difference?)

        I’m as confident as I am of anything that there is something it is like to be me. And I’m pretty sure there’s something it is like to be Bentley (the dog). But I’m also pretty sure there is nothing it is like to be my computer.

        Why the difference? What in your description applies to me and Bentley, but not the machine?


        1. “Why the difference? What in your description applies to me and Bentley, but not the machine?”

          That instead of:

          I>[M]>O

          we have

          (I and “I>[M]>O”) >[M]> O
          (I and “I>[M]>O”) > [new M]

          The “I>[M]>O” here is a representation of your own input, processing mechanisms and output, flattened into the present and sent back round as if it was a sensory input.

          That means you get to know about your own situation, and (last line) get to modify your own processing mechanism M. The conscious you is the evolving M, the content of consciousness is (I and “I>[M]>O”), and subjective experience is what happens when the two meet.
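
          One way to make this loop concrete is a toy system that flattens its own last “I>[M]>O” triple and feeds it back round as if it were a sensory input, using it to modify M. A minimal sketch; the numbers and the particular reset rule are invented for illustration:

```python
# Toy sketch of the recursive scheme above (all names hypothetical): the
# system's own last (input, mechanism, output) triple is flattened into a
# self-model and fed back as input, and it can modify the mechanism M.

class ReflectiveSystem:
    def __init__(self):
        self.gain = 1            # the mechanism M's modifiable parameter
        self.self_model = None   # flattened "I>[M]>O" from the last step

    def mechanism(self, i):
        """M: the current processing mechanism."""
        return i * self.gain

    def step(self, i):
        # Effective input is (I and "I>[M]>O"): the external input plus
        # the self-model from the previous step.
        o = self.mechanism(i)
        # Last line of the scheme: the self-model modifies M itself.
        # Here, if the last output ran too high, reset the gain; else grow it.
        if self.self_model and self.self_model["output"] > 10:
            self.gain = 1
        else:
            self.gain += 1
        self.self_model = {"input": i, "gain": self.gain, "output": o}
        return o
```

          Fed the same external input repeatedly, the system’s behavior still changes over time, because part of what it responds to is its own record of what it just did.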


        2. What’s the difference between you (and Bentley) and the machine? Mostly scale, but also organization. Representations in the machine are mostly digital, and the hardware is set up to be multipurpose, so the mechanisms to recognize specific inputs get repurposed for multiple operations, whereas the mechanisms in you to recognize specific inputs are mostly permanent (relative to the machine). The multiple mechanisms in you have pretty much simultaneous, parallel access to the representation in question, whereas the computer would have to swap in and out the structure for each mechanism that might respond to the representation.

          But just in terms of scale, the number of mechanisms you have to produce the representations is astronomical compared to your current machine.



          1. Which, if either, is more significant, scale or architecture?

            Is there an “event horizon” where a system crosses from not having subjective experience to having it?


        3. No event horizon. Organization is key, but the organization includes a functional aspect. So the organization of an “experience”, in my understanding, looks like this:

          Input1 —> [M1] —> O1 —> [M2] —> O2

          where M1 has the purpose of creating a representation, O1, and M2 has the purpose of responding to the representation. M1 and M2 will have internal processes, but the organization of those internal processes is irrelevant, so those internal processes will be vastly different between you and the machine, and only slightly different between you and Bentley.

          So when people refer to an internal model, they’re essentially referring to M1, the mechanism which generates the representation whose “object” is the “thing modeled”.

          But the question is about subjectivity. In each case, you and machine, there is a subject, namely the combination of all the M1’s and M2’s, as well as all of the M0’s that lead to the input, and M3’s, etc. So what is the subject’s relation to the “experience”? The subject cannot see any of the [M]’s. The subject can only produce outputs, including new [M]’s maybe. You certainly are set up that way, but probably not the machine as currently programmed. These new [M]’s can represent (via their output) that original experience, and so they can reference the experience, but only by pointing at it, saying “that experience”. And the only difference between that experience and other experiences will be the purpose of the [M1], which is the object being represented by the O1.

          So the significant difference between you and the machine is that you generate mechanisms to refer to the experience, and the machine, as currently programmed, does not.

          BTW, for folks following at home, another term for the M1’s described here is unitrackers.



          1. “No event horizon.”

            So at what point do scale and organization cause the system to have subjective experience?

            “Organization is key, but the organization includes a functional aspect.”

            Organization is the key? But you just said:

            “Mostly scale, but also organization.”

            [bolding mine] So… both equally? Plus functionality? None of which are specified or are, at best, vaguely specified. This is all hand-waving until you have hard data. Which was my original point.

            I go back to what I said earlier:

            MAYBE subjectivity is what it’s like to be a sufficiently complex system, but we can’t account for that currently.

            Other than to assert you think it happens.

            You’re packing a lot of unknowns into the “I”, “M”, and “O” of this. Block diagrams just don’t cut it. It’s a nice way to lay out the putative organization and flow, but what always matters is how those blocks are implemented.

            More to the point, the original question: Why there is something it is like to be those blocks. All current evidence suggests that’s not the usual case.

            (Does it help if I clarify that I don’t see Chalmers’ “hard” problem as at all mystical or deeply philosophical? It’s just a problem science finds very hard to figure out. A bit harder than most.)


        4. Wyrd, I was answering the question why there is a difference between you and the machine. The most obvious difference is scale. Like the difference between a snowflake and a glacier. At bottom, they’re the same (frozen H2O).

          There is also a difference in organization between you and the machine, but that difference is below the level I think is important. Both you and the machine (assuming proper programming) have the same organization which is necessary for “experience”, but again, you have that organization on a vaster scale, and you have additional mechanisms to do things with it.

          I know that you think the lower level organization is important, and all I can do is disagree. I think everything that needs to be explained can be explained without reference to anything going on at lower levels. My explanation may not be clear (I’m working on it as we discuss), and it may not be intuitive, but that doesn’t mean it isn’t accurate.



          1. “Like the difference between a snowflake and a glacier.”

            You mean how one looks pretty on my coat sleeve while the other shapes landscapes? 😉

            [Assuming physicalism] Obviously organization, scale, and function are what we’re talking about, but (as I keep saying) we lack specifics. We do not have a blueprint, yet. All we have are bits and pieces of the puzzle.

            “I think everything that needs to be explained can be explained without reference to anything going on at lower levels.”

            I find that to be an extraordinary claim. But it’s your muse, so I wish you all the luck!


  9. Successful coordination of interpersonal action seems to be the strongest argument for shared experience—for a shared Manifest Image. Some, however, might say that a mutual grasp of communicative and referential intent suffices for such success, independent of any communal bathing in qualia.

    Liked by 1 person

    1. It does seem like the communication can have references in it, but those references ultimately have to be de-referenced in the recipient’s conscious experience. So if I talk about the pain of a backache, and the person I’m talking to has never had a backache, their understanding will be limited.

      Liked by 1 person

  10. Apparently a crime has been committed here Mike. The work of cognitive scientist Hakwan Lau, and soft scientists in general I presume, is being hindered by means of standard philosophical debates. On one hand we have Chalmers implying that something funky is going on with “qualia”, then on the other we have Frankish taking the major step of stating that it doesn’t even exist (supposedly just to clarify that nothing funky is actually going on with this qualia stuff that he’s bizarrely stating doesn’t exist). Yes Eric Schwitzgebel is trying to free our soft scientists of this burden by means of his “Inflate and Explode” argument, but to no avail. I believe that only the acceptance of a full metaphysical principle will help free Hakwan Lau and his kind of this sort of thing. (In an apparent cry for help the man even admitted “philosophy is often above my pay grade”.)

    In order to hopefully put this tragic mess behind us, I believe that we’ll need to divide science into a club that never entertains supernatural notions, as well as a club that does. Educated specialists could then take their pick. And given the potential for certain elements of reality in our strange world to indeed function beyond causal dynamics, why would some scientists decide to never entertain this sort of thing? Pure pragmatism. They’d observe that to the extent that the causal dynamics of this world fail, nothing exists for them to even potentially understand. This is my single principle of metaphysics, as well as the first measure that I’d use to help our soft sciences finally begin hardening up.

    Liked by 2 people

    1. It’s always alarming when you see a scientist being influenced by philosophy you disagree with. (I have the same feeling when I see scientists influenced by Searle’s pseudological nonsense.) If it makes you feel any better, illusionism is really philosophy’s way of dealing with what science has been finding since the 1960s. In other words, illusionism isn’t constraining science, the results of science are influencing philosophy. (Of course, the influence goes both ways and is ongoing.)

      I also wouldn’t worry too much about Lau in particular. He’s repeatedly expressed the view that scientists shouldn’t have slavish devotion to any theory, model, or philosophy. It might prevent them from going where the data leads. Along those lines though, I suspect he wouldn’t be interested in your proposal.

      Liked by 2 people

      1. Two guns blazing, eh Mike? 😀

        See if you can square the following two apparently contradictory assertions. You agree with the premise of Searle’s Chinese room thought experiment, as well as my own “thumb pain” modification (and by this I mean that you agree with the conclusions we’ve presented of your position, namely that “understanding Chinese” and “thumb pain” would result from the associated lookup table processing), and yet you also consider our observations “pseudological nonsense”? How can you both agree with such hypothetical situations and consider what you agree with “nonsense”?

        What this suggests to me is that you’ve invested in a position that you didn’t previously realize was so precarious. Thus now you’re understandably dismayed regarding our clarifications of the implications of that initial belief.

        On Lau not having a slavish devotion to any theory, model, or philosophy, that fits in well with what I’ve just said. The full implications of the theory that phenomenal experience exists by means of processed information alone, and so regardless of what mechanisms do the processing, are something that needs to stand up to scrutiny. Even if science can’t confirm that lookup table processing is able to produce phenomenal experience, what else in nature is observed to function so generically regarding material dynamics? Is it not “convenient” to explain the existence of phenomenal experience in a way that has no such precedent?

        Note that a “two clubs” approach to science would only occur to the extent that scientists find this functional. Today scientists have no formal principles of metaphysics, epistemology, and axiology for guidance, which are the very subjects that philosophers instead preside over. But if a respected group of professionals (and perhaps they’d mainly be scientists, such as Sabine Hossenfelder) were to come up with some ideas that seemed productive, this surely might help our soft sciences harden, not to mention physics.

        Regardless, my interpretation of Hakwan Lau’s post was that he’d love to sidestep this “illusionism” business (which he instead tagged “illusionishm”). If he had the option of forgoing all that to instead join a group of scientists who never permit supernatural dynamics to factor into their deliberations, I currently suspect that he’d gladly accept.

        Liked by 2 people

        1. Eric, I didn’t mean to have any guns blazing. Just laying out my thoughts on what you said.

          I think Searle’s Chinese room is pseudologic because 1) the thought experiment is ridiculous, so ridiculous that no one takes it seriously, not really, because it would involve Searle taking centuries or millenia to respond. Taking it seriously destroys the intuition that the conclusion is based on. And 2) the conclusion is not based on logic, but on an intuition, a preexisting bias.

          It’s the sicko argument, which goes, “Only a sicko would reach any other conclusion. You’re not a sicko, are you?” In Searle’s case, he positions it as only a fool would reach a different conclusion than he does. That’s all he has: emperor’s new clothes. It isn’t a real logic argument. It’s fake logic wrapped in peer pressure, pseudologic. It only works if you already share Searle’s prejudices. (If it makes you feel any better, this philosophical thought experiment is far from the only one that’s little more than rhetoric for a bias. I put Mary’s room, zombies, and many similar exercises in the same category.)

          Searle didn’t stipulate lookup table functionality, at least in his original thought experiment. If he had, that would have just made the scenario even sillier since lookup tables quickly scale to unfeasible, and eventually physically impossible, proportions.

          “what else in nature is observed to function so generically regarding material dynamics? ”

          Evolution frequently solves functional problems in a variety of manners. Respiration and circulation among organisms happen through a wide variety of mechanisms.

          Photosynthesis, for instance, can happen in at least three different ways.

          Prior to the evolution of nervous systems, single-celled organisms had their own mechanisms for transmitting information from sensory apparatuses to motor systems.

          And that’s before we get into things like heart replacement pumps, pacemakers, replacement joints, and other already existing technology for supplementing or supplanting biological functionality.

          I think a better question is, what else in the universe has functionality that can only be performed one way or with only one type of material?

          On Hakwan Lau, I’m not going to get into a debate about what he would or wouldn’t accept, particularly since he has an online presence and could just be asked. (Granted I did my part in bringing us down this path. Sorry, my bad.)

          Liked by 2 people

      2. Mike,
        What we’re talking about here is not an actual experiment which would thus need to be practically doable and then attempted. It’s a thought experiment and so is instead about exploring the conceptual implications of a given idea. And surely the idea that something as amazing as phenomenal experience exists through processed information by means of any medium with the capacity to do such processing, should be assessed in this manner at minimum before being accepted. What does and doesn’t this idea suggest? It makes no difference that John Searle can’t practically get this stuff done. What matters is conceptually staking out what a given idea means so that we might effectively assess it.

        So if Superman were given symbols on paper which represent what goes to my brain when my thumb gets whacked, and then were to use this to properly write out other symbol inscribed sheets of paper just as fast as my brain accepts and produces information, would something then feel what I do? Of course not. However we run this thought experiment, a ridiculous result emerges. Why? Because it should take more than just processed information itself to create phenomenal experience. Causal dynamics of this world mandate that the medium which information animates should matter as well. So beyond my brain processing whacked thumb information, it should also be animating mechanisms which produce what I feel. I can’t think of a better culprit for this than the electromagnetic radiation associated with neuron firing. So if those sheets produced by Superman were fed into a phenomenal experience producing machine set up to animate them, thumb pain should indeed result, though not otherwise.

        I appreciate the examples that you’ve provided regarding how different mechanisms can effectively do the same sorts of things, though I don’t consider them to apply. Notice that in all of these cases there are mechanisms that do exactly what they’re supposed to do given associated physics. What we’re instead looking for is a case where the mechanisms don’t matter as long as the information is properly processed. My perception is that all computational output requires associated mechanisms in order to be realized, that is except for what’s being proposed regarding phenomenal experience.

        Let me emphasize that it’s not my goal to attack you here, but rather help fix a vast system that I consider flawed in many ways. I’m sure you know that I respect you tremendously. And of course on this issue you’re in the mainstream while I’m the radical. You’ve done a great deal to help me understand what’s believed in academia, and I’m deeply grateful for this ongoing service.

        On the potential that Lau would be sympathetic to the notion that this illusionism business is a waste of time, and certainly given my plan to segregate science into “natural” and “whatever” factions, yes I should explore this. I’m always looking for people with similar perceived interests, so thanks again!

        Liked by 2 people

        1. Eric,
          The plausibility of the experiment details matters because the putative conclusion is that it’s implausible that the room has understanding (or consciousness). If you posit an implausible scenario to drive a conclusion of implausibility, I can use that argument for any known computation. I could say that Pac Man is impossible because, sitting in a room with pencil and paper, I can’t produce the necessary functionality.

          On bringing in Superman, that’s fine as long as we’re clear what Superman is doing, going through the instructions hundreds of billions of times faster than Searle possibly can. Or we could look at it as Superman doing the work of a hundred billion Searles. At that point, we’re back to just the intuition that computation can’t produce understanding / consciousness.

          Your “Of course not,” is not a logic step, but simply a statement of strong inclination. The conclusion only appears to follow for you because you already have that intuition.

          “What we’re instead looking for is a case where the mechanisms don’t matter as long as the information is properly processed.”

          There are many different types of devices you can use to browse this web site, with varying architectures and mechanisms. In principle, it’s possible to build such a device with mechanical switches, or even simpler mechanisms. The brain is an information processing organ. To argue that its functionality can’t be replicated by another information processing system of sufficient capabilities, you only need to identify what, besides information processing, it’s doing (aside from physical processes to support the information processing). It’s worth noting that the etymology of “information” is to input forms into the mind (in-form-ation), so this shouldn’t be a radical proposition.
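          That multiple-realizability point can be made concrete with a toy sketch (my own illustration, not anything from this thread): the same function realized by two unrelated mechanisms, direct arithmetic versus a precomputed lookup table, with identical input-output behavior.

```python
# Toy illustration of multiple realizability: one function, two mechanisms.

def add_by_arithmetic(a: int, b: int) -> int:
    """Realize addition via the processor's adder (Python's + operator)."""
    return a + b

# A precomputed table for small inputs: a completely different mechanism.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_table(a: int, b: int) -> int:
    """Realize the same addition via pure table lookup."""
    return ADD_TABLE[(a, b)]

# On their shared domain, the two mechanisms are functionally indistinguishable.
assert all(add_by_arithmetic(a, b) == add_by_table(a, b)
           for a in range(10) for b in range(10))
```

          From the outside, nothing about the outputs reveals which mechanism produced them, which is the sense in which the processing, rather than the substrate, fixes the function.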

          “My perception is that all computational output requires associated mechanisms in order to be realized, that is except for what’s being proposed regarding phenomenal experience.”

          No one is asserting that the computation happens in isolation from the environment, including the body. Even intermediate computing stages have output to subsequent computing stages.

          “Let me emphasize that it’s not my goal to attack you here,”

          I don’t take it that way. Hopefully it’s clear that we’re attacking ideas, not each other. Generally, when I feel like a discussion is turning personal, I move on. Life is too short and friends too sparse to spend time in those kinds of discussions.

          Liked by 2 people

      3. Mike,
        I didn’t quite grasp how you’re able to use an implausible situation to invalidate the existence of Pac Man.

        Regardless, I see that I shouldn’t have said “Of course not”. If phenomenal experience does exist by means of processed information alone, then if Superman properly processes the right symbol inscribed paper to then produce the right second set of symbol inscribed paper, he’d thus also create what I know of as thumb pain. (That it’s hard to say what would experience it doesn’t matter — something would.) So even though I implied that this perspective wasn’t on the table, from the premise that phenomenal experience exists as information alone, it is on the table.

        As for me, I suspect that another step would be required in order for Superman to produce thumb pain in such a way. I believe that the information he creates would need to animate mechanisms associated with phenomenal experience — possibly the sort of electromagnetic radiation produced by neurons.

        Beyond any implausibility associated with information alone being responsible for phenomenal experience, there’s also a more objective element to my concerns. We haven’t yet identified any other computer output which exists as processed information alone. All else seems to require mechanism-based existence (as is the case for Pac Man). This, I think, is why it seems so strange for any medium at all which does the proper information processing to thus display phenomenal experience. It doesn’t seem natural for something “of this world” to exist so generically.

        Liked by 1 person

        1. Eric,
          I could have been clearer on the Pac Man thing. What I meant is that it can be used to argue that Pac Man isn’t computation. Suppose we were future archaeologists who discovered an old Pac Man arcade machine, figured out how to power it, and discovered its functionality.
          future-Pinker: Clearly this is a computational system.
          future-Searle: Not so fast. If I were in a room attempting to produce its functionality with paper and pencil, it would be ludicrous to suggest I could produce that.
          f-P: That’s an invalid comparison. The time scales are all wrong. It would take you hours, days or weeks* to complete even the simplest operations.
          f-S: Suppose I’m Superman. Then I can do it at the same speed as the machine.
          f-P: Then, assuming appropriate I/O peripherals, you’d be producing the functionality of Pac Man.
          f-S: Ludicrous!

          * Pac Man is a far simpler system than modern programs. I think it ran on a 1 MHz processor with 16k of RAM or something. So it could be done in much shorter time scales than any putative AI system.

          “We haven’t yet identified any other computer output which exists as processed information alone.”

          The phrase “processed information alone” isn’t really coherent. (Except perhaps platonically, but that’s not relevant here.) It’s a strawman that no one is arguing for. All computation has output, even if only to subsequent computational steps. Remember, all information processing is physical, so the output of step 1 is a physical product which is input to step 2. That is always true, 100% physical, 100% of the time.

          Liked by 1 person

          1. “future-Pinker: Clearly this is a computational system. future-Searle: Not so fast.”

            Unfortunately for the analogy, unlike consciousness, computation is well-defined, so it can be objectively determined that a Pac Man arcade machine is a computer.

            When it comes to computation, time-scale is 100% irrelevant. A process that takes one million years to compute Pac Man is still computing Pac Man. (Superman makes no difference to Searle’s original idea.)


          2. Stepping around the whole “computation is objective” assertion, I would note that if we’d never seen the technology before, and had to make our determinations solely by observation, it would be far from straightforward.


          3. What level of analysis are you allowing? If the workings of the mechanism itself can be inspected, then computation is pretty clear given the logic circuits, the clock, the fetch-decode-execute cycle. In this case they would also have the output — the functionality — to see exactly what’s going on.

            As you pointed out, those old Pac Man machines weren’t that complicated.


          4. I’m assuming a situation where things have changed so much that the mechanisms are no longer obvious. (Maybe most of the computing is now quantum and/or neuromorphic.) If that’s not removed enough, we could imagine unearthing an alien computer; its function might not be at all obvious. It’s easy to just assert it would be clear, but I’m thinking for someone no longer familiar with the standards and protocols (or if they’re utterly alien ones), there might be enough room for controversy. (Particularly if it was first found by people who developed a religion and a mysticism around it.)

            Even under the best circumstances: Could a Neuroscientist Understand a Microprocessor?


          5. Well, again, what level of understanding and analysis are you allowing? Are we dealing with rational intelligent actors with reasonable tools?

            It kind of boils down to whether your “future archaeologists” have discovered computing or not.

            Computation is a form of abstract mathematics, and so it is universal. Based on our experience of sample-size one, it’s a game-changing discovery akin to fire and the electron.

            So absolutely, any society that’s discovered computing and has reasonable technology (voltmeters, logic probes) would recognize a Pac Man game as a computational device. Assuming they have a notion of games, then its purpose might be apparent, as well.

            OTOH, if you dropped off a (battery-powered) Pac Man machine in the Middle Ages, they wouldn’t know what to make of it.

            Liked by 1 person

      4. Thanks for the clarification Mike. Maybe we can finally straighten this out! I’m simply demanding the same condition that Pinker is afforded regarding appropriate output peripherals for “thumb pain”. The idea that Superman can produce this with pencil and paper alone is simply unnatural. But if his code were provided to a machine that was set up to animate it, then something would indeed feel what I know of as “thumb pain”. So what is it that neuron function animates in order to create phenomenal experience? Is this not a plausible question for a naturalist to ask?

        Liked by 1 person

        1. Eric,
          In the case of Pac Man, we can clearly identify the inputs (controllers) and outputs (screen, speakers). Can you clearly identify the output of thumb pain? If you mean that the system has motor output so it can hold its version of a thumb and cry out in pain, that seems like an odd stipulation. If you mean the physiological interoceptive loop, that’s at least plausible, although it could also be handled with a virtual construct. (As could, for that matter, any I/O.)

          If you mean that the thumb pain is being output to something other than the brain, then once again, I think you have to demonstrate that this other thing exists. Until then, it’s just unanchored speculation, searching for a reason to have a mysterious something, a special sauce, rather than dealing with the evidence we have.

          I would again note that there is an output, physical output, from the affective processing that identifies pain. The anterior cingulate cortex produces output utilized by the frontal lobes in decision making. In the room, that would equate to one subsystem outputting to another. Does that qualify for your requirement? If not, then again, we’re back to the hunt for the special sauce.

          Liked by 1 person

      5. Mike,
        I think we’re pretty close here so I’d like to keep things simple and make sure we don’t mess this up. You asked if I can “clearly identify the output of thumb pain”, which I take as “where it goes”, like to speakers or a screen. But by that point it would already exist. The question that I’m currently considering is by what means does thumb pain exist? This is the “How?” of phenomenal experience, and as you know it’s often said to exist as a “hard problem”.

        I’m saying that if the brain, or Superman with pencil and paper, or anything else, causes what I know of as “thumb pain” to exist by means of nothing more than information processing, then these will be supernatural events. But if instead such processing is permitted to animate various specialized mechanisms for producing phenomenal experience (and perhaps electromagnetic radiation would yield such a mechanism), then the existence of phenomenal experience could instead exist by means of causal dynamics of this world.

        Liked by 1 person

        1. [or … maybe … if the processing animates any mechanism which makes use of the information, that is all there is to the experience. The subjective/phenomenal aspect of that experience is just the system’s referencing that information.]


          Liked by 1 person

          1. Okay James,
            So while some would say that Superman could create what I know of as thumb pain by quickly following instructions to turn symbol inscribed paper into other symbol inscribed paper, and I would say that he could do so if the results were fed into a machine that was set up to convert that information into associated phenomenal experience, is your position that any mechanism which is able to make use of this information will thus produce what I know of as thumb pain? I’d think that if somewhat different mechanisms were able to interpret Superman’s symbol inscribed paper, they should produce somewhat different experiences when the physics doesn’t quite align for perfect replication. Similarly, two different computer monitors can provide somewhat different images by means of the same information.

            Though we do seem somewhat square above, you also said this:

            The subjective/phenomenal aspect of that experience is just the system’s referencing that information.

            To me the “referencing” term doesn’t quite do justice to what I consider to happen. For example, you might remember an embarrassing situation in which the memory itself also causes you to feel present embarrassment. I suspect that here the past chain of neuron firing also fires again somewhat given their relative strength, though not just by “referencing” that past experience. Instead I suspect it’s by animating mechanisms that create that sort of feeling once again.


        2. Eric,
          I hate to spoil things since you think we’re close, but your statements make me think we’re actually quite far apart. I don’t want to keep repeating the points I’ve already made, so I’ll bow out here, at least until or unless the opportunity comes up to make a new point.

          Liked by 1 person

      6. Mike,
        Well it’s not exactly that I’ve considered us close, but rather that my case that the powers that be are inadvertently pushing a supernatural position, seems to have gotten quite strong. I should test these arguments out on others, as well as give you some time to find holes in them. Not that big shots like Pinker and Dennett are counting on you to save them from humiliation by the likes of me!

        Liked by 1 person
