The function of qualia

Often when I mention that I’m a functionalist about the mind, someone references the Stanford Encyclopedia entry on functionalism. Strange to say, but I’ve never gone through that entire entry. This week I poked around a little in it, mostly in the objections section. Most of the objections either strike me as more a consequence than a problem, or seem strained and contrived.

But the last objection the entry covers, the problem of qualia, I’ll acknowledge is different. I can understand why people see it as a problem.

From section 5.5 of the article:

Functionalist theories of all varieties — whether analytic or empirical, FSIT or functional specification — attempt to characterize mental states exclusively in relational, specifically causal, terms. A common and persistent objection, however, is that no such characterizations can capture the phenomenal character, or “qualia”, of experiential states such as perceptions, emotions, and bodily sensations, since they would leave out certain essential properties of those experiential states, namely, “what it’s like” (Nagel 1974) to have them.

This is, of course, another description of David Chalmers’ hard problem of consciousness, or Joseph Levine’s explanatory gap. Philip Goff often characterizes this as a divide between quantities and qualities. Science can handle quantities but not qualities. And this is by design, since Galileo excluded qualities to make science work. Science can’t deal with the problem of consciousness, Goff says, because its foundations exclude the subject matter. All of which resonates with this objection to functionalism.

The issue, I think, is a failure to think about what perceptual qualities actually are. Many philosophers seem to take them as something irreducible and fundamental. To ask what they might be or do seems to violate some norm. Even illusionists seem to buy into this, at least up until they dismiss qualia. But I think it’s important to ask Daniel Dennett’s hard question: “And then what happens?” Put another way, what causes qualia, and what do they cause? What is their role in the causal chain?

We can begin to get at this by looking at what happens when the experience of that quality is missing. Consider the perennial example of color. A person with red-green color blindness typically can’t see the “74” in the image below, instead usually seeing “21”. A completely color blind person won’t see any numbers at all.

So, absence of some or all of the normal experience of color comes with functional costs. In this example, an inability to discriminate between the reddish and greenish circles. In the world, it means being less able to discriminate between different types of objects, such as between kinds of flowers, or to tell what traffic lights are indicating. For a monkey in the wild, it might mean losing the ability to recognize ripe fruit, which tends to be yellower or redder than unripe fruit. This is often thought to be why primates have better color discrimination than most mammals.

So the experience of color seems to fulfill a functional purpose, allowing a seeing organism to make important discriminations and associations, and to utilize affordances in its environment.

But, some philosophers ask, can’t we imagine a situation where two people’s perceptions of color are inverted? Maybe my red is your green, and we’ve spent our entire lives using the words “red” and “green” to mean different experiences. In other words, when you see grass, you see my version of red, but call it “green”. Maybe for you my red is the soothing color of much of nature, and my green the attention-grabbing one of blood and roses.

A closely related issue is our inability to describe a color to someone born blind. We can describe colors in terms of their associations, such as yellow being associated with the sun, bananas, or egg yolks. But, the usual point is, we can’t describe yellow itself, or any other color.

Which leads to the question, doesn’t that indicate a non-functional ineffable aspect in the experience of color?

The answer depends on whether there’s more to red, green, or yellow than the associations and reactions they generate in us, whether there’s some “mental paint” separate and apart from all those relations. If there isn’t, then the answer is no; the functional structure and relations provide a full account. It certainly doesn’t feel like that’s the case, but it wouldn’t be the first time strong intuitions led us astray.

The question is, if associations and reactions are all that’s there, why do we feel there’s more? It might have something to do with many of these associations and reactions being below the level of consciousness, although many of the resulting impressions aren’t. That produces an experience of mental paint whose origins we can’t introspectively trace.

There is an intense debate in anthropology on whether humans in different cultures see the same colors. If colors are just learned associations, then it seems plausible that different cultures see different colors. Consider a rainbow. In reality it’s a smooth spectrum of wavelengths with no stripes except for the ones our nervous system interprets to be there. Someone with a different background might literally see a different rainbow.

The question is whether they really see different colors, or simply divide up the spectrum with different labels. How much does language reflect the consciousness of the native speakers? And how much does that language in turn affect their consciousness?

One response to the color relativists is that we do all seem to have innate responses to particular colors, such as red having the attention-grabbing effect it does. Similar responses actually show up in other primate species.

But this is complicated by the fact that we all have different ratios of photoreceptor cells in our retinas that are sensitive to short, medium, or long wavelengths of light. And our ability to discriminate colors reportedly diminishes with age. Which seems to indicate there are variations in the signals and processing between individuals, and even within the same individual over time. This seems particularly borne out by the infamous “dress” incident a few years ago, where people looked at the same photograph but saw different colors, with many adamant that the version they perceived was the right and true one.

All of which seems to point in the direction of color being described by the reactions and associations it generates in us. In this view, colors are categorizing conclusions of our visual system. Some associations and reactions may be innate, but many are learned. And the “mental paint” of our visual field is the collection, the pattern, of those conclusions.

But what about our intuitions from the possibility of inverted qualia? Consider an alternate scenario introduced by other philosophers. What if my pain is your pleasure? Every time I use the word “pain”, I’m referring to the sensation you know of as pleasure. The question is whether this is a coherent scenario. If pain doesn’t cause the same reactions in you that it does in me, is it meaningful to call it “pain”? Likewise, is it meaningful to talk about a green that has all the reactions of red in your nervous system?

If the answer is no, then all qualia are categorizing conclusions of our nervous system, with no sharp distinction between them and the conclusion my laptop reaches when it scans my face for login. As a functionalist, that’s my conclusion. At least for now.

What do you think? Are there reasons to believe in non-functional aspects of qualia that I’m missing? Or other reasons to doubt the functional account?

86 thoughts on “The function of qualia”

  1. And here I thought the relative color hypothesis had been put to bed. Quite some time ago I read that studies showed that when we perceive colors, our mental/brain responses are identical to the input. Basically, when we are told red is red, that is because it is red . . . to our eyes/brain.

    Of course, someone could be taught falsely that green is red, but that is a different question. And, of course, color is a mental construct which is unique to each species experiencing it because the apparati we have are so very different. For example, insects with compound eyes can see many, many more colors than we do, and some can see no color at all. Human eyes with human brains attached see the same things, no matter what we call them. That these functions sometimes decay with age should not come as a surprise to us.

    1. New evidence does have a tendency to reopen old debates. That and issues with the previous evidence that might have seemed conclusive before. The article I linked to goes into the reasons why color universality isn’t the obvious conclusion it might seem. Myself, I tend to think the reality is somewhere in between simplistic universality and complete relativism. There is an innate aspect to color perception, but also a strong learned aspect.

      Anyway, to say red is red is somewhat to ignore the fact that red doesn’t exist in nature. I used to think we could map colors to particular wavelengths of electromagnetic radiation, but that turns out to be more complicated than it seems, as color illusions and “the dress” incident show.

  2. We’ve talked about this so many times over the years that you could probably write my comment yourself. 🙂

    I know you don’t agree with Nagel, but I think he nailed it. There are functional aspects behind our sensorium (such as rods and cones in the case of vision), and there are functional aspects behind our reactions and behaviors in response to sensation, but in between is the experience itself, and that is what we don’t understand and what Chalmers called “the hard problem” — the “something it is like” to be a human being.

    It might be the case that a sufficiently complicated machine will convince us that there is “something it is like” to be that machine, but until we accomplish building one, it remains an assumption.

    1. We have discussed it many times, although I did try to work in some items in the post I hadn’t covered before.

      My biggest issue with Nagel is he doesn’t specify exactly what he means. Of course, the usual reason is that it’s ineffable. The problem with ineffability is that it might be ineffable because there’s some mysterious reality we just can’t describe. Or it may be ineffable because our feeling that it’s there is wrong. Interestingly, as I’ve noted in my writings about Chalmers’ views, the difference may be operationally neutral, with the experiential aspect an epiphenomenal metaphysical glaze, something we can choose to regard as there, like platonic abstract structures, or choose not to.

      One thing writing this post reminded me of was that interesting conversation we had a couple of years ago about the monkeys that, through genetic treatment, had some of the M-cones in their retina converted to L-cones. At the time we mulled whether they really saw a new color, or were just able to detect shades of the previous colors. I now think there’s no firm fact of the matter. As their nervous system rewired new associations, I think it’s possible the new discrimination might have increasingly seemed more striking to them, until it did reach the point where we’d say it was a new color. Of course, there’s no way to really know. As one of the scientists said, all we can know for sure is they gained new capabilities.

      1. Would it be helpful to not see it as ineffable so much as foundational? A definition uses foundations to define a concept, but if that concept is already bedrock, we can only point to it and say, “it’s like that!” Nagel is speaking of something that is shared by all humans yet is as individual as we are, so there can be no precise definition.

        I haven’t thought about those monkeys in a long time. I suppose it depends on what we mean by experiencing a “new” color. My color-blind friend got a pair of those glasses, and he reported seeing purple flowers like he’d never seen them before. It was, from his description, a more vivid experience but not a new one. (It was an eye-opener for me that he really couldn’t see the numbers in those color-blindness tests. To me they’re as clear as day.)

        1. I can see the foundational label as maybe applying to the lowest level of perceptions. In that sense, I can’t describe yellow, I can only point to examples of it. But I think there’s a difference between subjectively fundamental and objectively fundamental. There is substantial processing involved in reaching the conclusion of yellowness. It’s just not anything we have introspective access to. The yellowness is the limit of our introspection. The wiring doesn’t exist for us to reach any earlier conclusions. But objectively we know they’re there. The question is, is there something in addition to all that, something besides all the conclusions, besides the processing that concludes the category of color?

          I’ve read varying accounts from people who got those glasses. There are a bunch of dramatic YouTube videos of people overwhelmed by the experience. But I’ve also read people who give accounts similar to your friend’s. There are more vivid distinctions, but they’re not the overwhelming thing they’re often portrayed to be. Those glasses work by blocking certain wavelengths between the L and M ranges. Apparently the issue for many red-green color blind people isn’t so much an absence of a particular cone, as too much overlap in the ranges they’re sensitive to. It makes me wonder how distinct the cone categories actually are, even in fully functional people, and what would happen if someone designed glasses to make portions of their range more distinctive for us.

          Interestingly enough, while I can see the “74”, I wouldn’t say it’s clear as day. So I may have less color discriminating ability, at least in that range, than you do.

          1. What I meant by foundational was the whole “something it is like” to be human. Color is a tiny aspect of that, but in terms of that it would be the in-the-moment experience of seeing and experiencing yellow. Machines can detect it, but the question is whether that is qualitatively different from experiencing it. (Goff, I thought, makes a good point about quantities versus qualities in science.)

            Yeah, those dichromatic glasses are notch filters that, as you say, create some space between the red and green cones (which, as you know, overlap quite a bit in contrast to the blue cones). The experience the glasses provide may depend on the nature of the color-blindness. It’s a good point about what those with normal color vision might see. I should ask my buddy if I can borrow them. My guess is things wouldn’t look that different.

            I think I do have acute color discrimination. Light and color have always fascinated me. In high school I was “the theatre lighting guy” and I’ve done work with still and motion photography. Once I took a test where you used a slider to match a color patch to a given one, and I was always right on the mark.

          2. I’d say there is more to experiencing a quality than just detecting it. I think it involves the entire system analyzing those qualities in ways that, I’d agree, machines are still a long way from doing.

            I know a guy who has tetrachromacy, four types of cones, and he’s exceptional at identifying colors. Supposedly it’s pretty uncommon in men, but your acute color discrimination makes me wonder if you might have it. (I have no idea how my friend learned he had it.)

          3. Hey! It’s been a while since I’ve commented! I hope everyone is well.

            I apologize if this has been answered elsewhere and I haven’t seen it. But what baffles me about qualia is “how” they are produced. You said to Wyrd: “I’d say there is more to experiencing a quality than just detecting it. I think it involves the entire system analyzing those qualities in ways that, I’d agree, machines are still a long way from doing.” What physical laws of nature that we currently know about explain how our analysis of those qualities produces the qualia? For example, we can use physics to help explain the detection process, i.e., photons hitting the retina and the energy transformations involved in the vision process, etc. But what laws of physics currently can be used to explain the production of the image I “see” when looking at my computer screen right now?

          4. Hey Eric,
            Good hearing from you. Hope you’re well too!

            If the descriptions I’ve given in the post and this discussion aren’t clicking, you might want to check out a post I did last year. I’ll warn you, it gets into neuroscientific weeds. But if you want the down and dirty physical explanation, it’s where I think you’ll need to begin.

            Perceptions are dispositions all the way down

            It’s worth noting that the explanation will probably never completely feel obviously right. But scientific explanations often don’t. Anyway, hope it helps.

    2. I think the problem with Nagel’s “something it is like” is that it refers to the subjective view of an information processing system. So asking “what it’s like to be a bat” is asking you to compare your subjective view as an information processing system with what you imagine to be the subjective view of the bat’s information processing system.

      I think it’s a reasonable way to look at it, but only if you understand what you are doing. The hard part is understanding that while your subjective view is a result of the physical stuff happening, that view is not dependent on that specific stuff.

      *

      1. I always read Nagel’s point as there being no way to imagine the “something it is like” to be a bat, although it would exist for bats. But, yeah, by extension, it’s absolutely about our subjective experience.

  3. There’s a lot to comment on in this and this relates to a post I’ve had underway for a few weeks but haven’t completed. Maybe your post will encourage me to finish it. (I may have a different and shorter post up today. The longer one will still need more time.)

    A few items.

    I’ve never been particularly impressed with the quantitative/qualitative differentiation. Colors ultimately map back to quantities, like litmus paper maps to pH. It actually can be reasonably precise.

    The squirrel monkey experiment where they modified green cones to react to red has always suggested to me that the qualities of qualia must originate close to the sensory neurons themselves. In other words, the formerly green cones must be actually producing different neural streams when they send their red information upstream to the brain. It isn’t just the brain identifying that the information is coming from green cones; otherwise, modifying the green cones to react to red would not have resulted in any change in sight capability. However, generally, I would think there must be some commonality in the color input; otherwise the brain might interpret it as a sound.

    That brings up the issue of what accounts for the distinctive characters of the qualia. We always like to use color, but I guess there are likely hundreds of qualia. There’s not only the qualia associated with external senses but also internal monitoring neurons as well. Even something as relatively simple as smell can be mapped across multiple dimensions of acidity, sweetness, etc., as well as intensity and direction. If my observation about the squirrel monkey experiment is correct, then the distinctive character of each quale must also originate close to the sensory neurons themselves. The brain assembles the qualia into coherent wholes by sensing the activity of the sensory neurons themselves.

    1. I’ve always considered qualia in terms of pattern recognition. If I have three separate pattern recognition units, say one for cats, one for dogs, and one for screwdrivers, the difference between those is qualitative. My choices for qualia (literally, which type) have to be one of those or none. You can also answer questions of what it is like. Cats are usually going to be more like dogs than screwdrivers, but not necessarily. If your pattern recognition is binary, as opposed to giving percentage chances, then nothing will be more like anything else.

      It’s just that things get fun when the number of pattern recognition units (unitrackers) is in the millions.
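
      Here’s a toy Python sketch of what I mean, with invented categories and feature weights (no claim about real neural wiring):

        # Each "unitracker" reports a graded confidence rather than a binary yes/no.
        PROFILES = {
            "cat":         {"furry": 0.9, "four_legs": 0.9, "metallic": 0.0},
            "dog":         {"furry": 0.8, "four_legs": 0.9, "metallic": 0.0},
            "screwdriver": {"furry": 0.0, "four_legs": 0.0, "metallic": 0.9},
        }

        def recognize(stimulus):
            """Score every category against the stimulus features."""
            scores = {}
            for label, profile in PROFILES.items():
                overlap = sum(min(stimulus.get(f, 0.0), w) for f, w in profile.items())
                scores[label] = round(overlap / sum(profile.values()), 2)
            return scores

        def likeness(a, b):
            """'What it's like' comparisons as overlap between category profiles."""
            features = set(PROFILES[a]) | set(PROFILES[b])
            return sum(min(PROFILES[a].get(f, 0.0), PROFILES[b].get(f, 0.0)) for f in features)

        print(recognize({"furry": 0.9, "four_legs": 1.0}))              # cat and dog score high
        print(likeness("cat", "dog") > likeness("cat", "screwdriver"))  # True

      If recognize returned only yes/no, the likeness comparison would collapse, which is the point about binary recognition above.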

      *

      1. Cats and dogs, I think, are somewhere up the line from qualia.

        I’m speaking at a more basic level: for example, how edge detection, motion detection, and color detection get combined in an image.

        1. But these are all parts of the same thing, at different levels of hierarchy. Edge detectors are pattern recognizers, as are motion detectors. So you’re already invoking some of the millions of units which mammals have, each one associated with a quale in its own right. We just don’t pay much attention to most of them.

          *

    2. Looking forward to that post. Or posts!

      I’m with you on the qualitative vs quantitative distinction. I mentioned it because Goff repeats those points a lot. And it helps to remember that “quality” is another word for “quale”, and qualia are qualities. I think Eric Schwitzgebel is right that the issue is that philosophers saddle this simple concept with a lot of dubious theoretical baggage.

      I actually was thinking about that squirrel monkey experiment when writing this post, and mentioned it to Wyrd above. I definitely think the discriminations that result in color perception begin in the retina. But it’s complex. The cones all map to ganglion cells, with lots of cross talk in between. And the pattern of ganglion firings is the information that gets sent to the brain.

      But I think the perception of color involves more than that. The visual cortex arrives at early conclusions, which then trigger associations in many other regions. I think a color is that initial conclusion plus all those associations. These associative firings provide feedback to the visual cortex and affect the distinctions it subsequently makes. It’s the galaxy of those associations that makes us regard certain colors as distinct while others are just shades of the same color. (Think of the distinctions between red, orange, and yellow, all involving different signaling from those ganglion cells.)

      I think it’s why even though the monkeys’ retinas had the new L-cones, it took several months before they could functionally distinguish between the colors. Their nervous system had to notice the differentiations and learn the new associations, some of which were probably latent innate ones that had never been triggered. Of course, they were doing this long after their initial development, so a lot of that was probably irretrievably atrophied. It may be why the males who had the treatment were never as good as the females at distinguishing colors. Although we don’t know how much they might have improved after the study period.

      I think the right way to think about the distinctive character of qualia is as categorizing conclusions. It’s why we can sometimes disagree about a color. Or can come to see things like the pain of stretching, the bitterness of coffee, or other things as pleasurable when we initially just found them unpleasant or painful. The conclusions are always subject to being updated.
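
      As a purely illustrative Python sketch, with invented thresholds, associations, and feedback weights (a toy, not a model of the visual system), the structure I have in mind looks something like:

        # A color as an initial categorizing conclusion plus the associations it
        # triggers, with associative feedback nudging later conclusions.
        ASSOCIATIONS = {
            "red":    ["blood", "ripe fruit", "stop"],
            "orange": ["fire", "autumn"],
        }

        bias = {"red": 0.0, "orange": 0.0}  # feedback from past associative activity

        def perceive(wavelength_nm):
            # Initial conclusion: a crude categorization of the incoming signal.
            score = (wavelength_nm - 590) / 40 + bias["red"] - bias["orange"]
            label = "red" if score > 0.5 else "orange"
            # The conclusion triggers associations, which feed back and shift the
            # category boundary next time: the conclusion stays subject to updating.
            bias[label] += 0.01 * len(ASSOCIATIONS[label])
            return label, ASSOCIATIONS[label]

        print(perceive(620))  # ('red', ['blood', 'ripe fruit', 'stop'])
        print(perceive(605))  # a nearby wavelength gets a different categorizing conclusion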

      1. I’m thinking in terms of a distinction between “raw” qualia and “assembled” qualia. Assembled is what results when motion, color, edge, etc. detection is brought together to form an image, for example. At that point, memories and learning can become involved to form an assembled image.

        The brain needs to assemble the “raw” into a coherent image. Even if we see red at some level, for example, when it becomes assembled it might be that edge detection overrides the red and the object or line appears gray. From a wave computation standpoint, this could be thought of as a stronger wave overlaying a weaker one during assembly.

        So I’m not fundamentally disagreeing with your point that many parts of the brain can come into play in forming the final image, with involvement of learning and memory.

        Of course, males are never as good with color as females anyway. 🙂

        1. James, I think you’ve hit on an important point. And what I’m going to say is, I acknowledge, pretty counter-intuitive.

          First, I’m assuming when you say “raw” qualia, you mean something conscious. If so, I actually don’t think that exists. I think it’s “assembled” gestalts all the way down. The lowest levels of perception we have introspective access to are already heavily assembled. They are conclusions reached at a certain stage of the processing. We have access to that and higher level conclusions, which is why we can focus on the tree, its bark, the wood, its color, shape, etc.

          There is, of course, a raw signal which comes in from the retina, but based on everything I’ve read, we wouldn’t recognize it. It’s far below the level of consciousness. Even the signal from the retina isn’t really raw. The brain gets the retina’s analysis of the image impinging on it rather than anything camera-like. We can’t focus on the high acuity fovea image that is constantly shifting due to eye saccades, with the hole from its blind spot and increasingly blurry periphery. I don’t think we can focus on edge detection. Or the factors involved in concluding a particular color is present. It seems like the earliest levels of processing in the visual cortex are out of our introspective reach.

          Of course, we could define “raw” to mean the boundary of those lowest level conclusions we have introspective access to. But there’s nothing special about that boundary other than it’s the earliest one we can access. It holds conclusions built on lower level conclusions, and enables higher level ones.

          I know I’m definitely not as good at colors as the women I know, and many men.

          1. Your assumption is wrong. I don’t think of “raw” qualia as conscious. I don’t think it works anything like a camera. What I am saying, however, is that the “quality” of qualia – what distinguishes red from green, sight from sound, smell from touch – might originate at the sensory neuron level. We need to account for why visual qualia come from the eyes and audio qualia from the ears. Why don’t we see music or hear color? This question usually gets overlooked because, with our visual bias, we almost always start (and end) with color when discussion of qualia arises.

            New-born ferret brains rewired so that visual input went to where auditory input is normally processed developed fully functional visual processing capability. Clearly backend processing is required to assemble anything, to see anything. However, presumably the ferrets did “see” rather than “hear” with their rewired auditory centers. So I speculate that the raw neural feed must contain information that causes its representation to be visual, not auditory.

          2. Hmmm. The first thing that occurred to me when reading this was synesthesia, where information from one sense affects the other.

            More generally, I have to admit I’ve usually assumed the normal processing locations are factors of what they’re connected to. But the brain is highly plastic during development, with many “critical periods” for sensory processing to develop correctly. So I could see if you rewired new-born brains they might end up organizing in a manner to be functional under that arrangement. (I suspect things would turn out far less well if it was done to adults.) But what the ferret actually experiences is an interesting question.

            This reminds me of something I read years ago, about scanning the brains of blind people. They navigate the world through sounds and feel. What was interesting is that when they thought about the layouts of somewhere like their house, similar regions in the brain lit up as they do for a sighted person. This despite the fact that the visual cortex usually gets recruited for other processing in those situations.

            I seriously doubt there is a visual tag, an auditory one, etc, for sensory streams. I think the brain learns to make sense of the world in any way it can. Which could be radically different if some senses aren’t there, or are jumbled. But I’ll fully admit I haven’t studied this question.

          3. Well, you see my point, I think.

            Maybe there are no tags in the sensory streams, but that leaves the seldom explored (maybe I’ve missed it?) topic of why the different senses “feel” so differently and how the brain decides to render vision one way and hearing another.

            Synesthesia did occur to me but it seems more like a mix-up during what I’m calling the assembly phase of processing. A different, but slightly similar, sort of mix-up during assembly happens, for example, in the fake arm experiments where different senses supply different information. There is no raw feed of something touching the arm, but the brain, from learning and memory, supplies the feeling of touch based upon a determination (?) that vision is more reliable than touch.

          4. BTW, there would not necessarily need to be explicit tags in the streams. If the data coming from sensory neurons were sufficiently different, then that might logically lead to the brain rendering the information in a different way. That would change the question to how or in what way the information coming from different senses is different.

            It does occur to me that it is possible that the different renderings actually might reflect something about reality. Electromagnetic radiation “feels” differently from acoustic vibrations for humans. For bats or blind humans, maybe acoustic vibrations are seen.

            It might also be that the renderings need to match their functional purpose. 🙂

  4. As usual, your post was most interesting. I’m a functionalist just like you, Mike. I’ve encountered so often the claim that knowing what it’s like to be something or someone is necessary for positing a conscious state. If that were true, then we’d all be justifiably solipsists. I no more know what it’s like to be you than you know what it’s like to be me. I’m the only one who really knows what it’s like to be me. And yet I possess consciousness and I assume you do too. I have no problem imagining an AI system that “knows” what it’s like to be its own system (providing it can read its own source code or machine code) but may or may not know what it’s like to be me, and be as conscious as me (providing sufficient functionality minus specific biological functions has been programmed). Color perception is a trivial case of qualia. Some people find certain things beautiful while others don’t. Some fear certain things while others don’t. Some love what others hate and vice-versa. All may be assumed to be conscious. All know how they feel themselves. So what? How is that necessary for maintaining the state of consciousness?

    1. Thanks Mike! I agree with just about everything you say here.

      I would note that an AI wouldn’t necessarily have to have access to its own code, just as we don’t have access to our own processing at the neural level. That’s not to say it might not have access to a copy of its code, just not direct access to the currently running copy inside it. I think like us, an AI system would likely have limits on its ability to introspect. It might observe something like the “explanatory gap” between what it can directly introspect and what it can read in its engineering diagrams. Although unlike us, those diagrams should be accurate and complete, since it would be an engineered system, allowing a full reconciliation currently unavailable to us.

  5. I would suggest that the problem with qualia is that the very notion involves an experiencer experiencing an experience, if you see what I mean. 🙂 Hence retorts to any attempted functionalist explanation often go along the lines of: “You are evading the obvious problem: why does my experience of red colour present itself to me as such vivid redness?” Which is how one slips into the rabbit hole of dualism, by thinking of an experience as some “it” that presents itself to “me”. But there is no such it separate from me. The quale of redness *is* me perceiving red colour – there is no “presentation” involved.

    1. I agree. A lot of this comes from the idea that somewhere in the brain there’s a division between a presentation and an audience. It’s the theater of mind metaphor, what Dennett calls the Cartesian Theater. It fits most people’s intuitions about what’s happening. It’s a hard notion to shake. But I agree it’s a deeply problematic one.

      1. Hmm. Part of the problem may be that there is good reason to think that there is a presentation, or rather, a representation, which does require an audience of at least one, an interpreter that bases action on the representation. And in some cases there may be something like a screen, i.e., a medium that can present more than one possible representation, depending on its input. I’m specifically thinking of Eliasmith’s semantic pointers.

        1. I’ve long been uncomfortable with the word “representation”, although I’ve struggled to articulate exactly why. I think you’re flushing it out. The embedded “presentation” implies everything you’re saying. It’s why I prefer words like “model”. Of course, you could say the systems that utilize the model are having it “presented” to them, but that seems like it would involve a whole bunch of micro-presentations throughout the system, the same way a computer’s hard drive “presents” its data to the CPU.

  6. Ask GPT-3 about yellow and you’ll get a sonnet or poem or illustrative description of sunflowers, lemons, Big Bird, and taxis. Does it experience this “mental paint”?
    Inspect a newborn’s optical and neural stimulus when presented with a wall of yellow and you’ll get a genetically coded response of non-threat / neutral calmness.
    Bolt the latter to the former, feed GPT a map of labels to those neural signals, and what? It’ll be conscious, aware of some Van Gogh’esque happiness, or more likely, a melancholy despite its programming?
    Would we be able to tell it no? “You are not immersed in some cheerless sentimental journey comprised of prior experience, or the data distilled from such experiences, all caused by your exposure to a gloriously sunny day”? I’d wager its interpretation, given enough capacity to blend a trillion nuanced signals, would indeed be consciousness.

    1. From everything I’ve read about GPT-3, it’s just selecting text based on algorithms that don’t involve anything like a world model. It doesn’t sound like it passes the Turing test, even the light version commonly administered. Of course, if you bolt enough of the newborn to it, you could get a conscious entity, but I think mostly because the newborn is supplying the relevant functionality.

      That’s not to say machine consciousness isn’t possible. But I do think the designers will need to at least be aiming in the right direction, engineering something that constructs and uses self and world models. If part of that construction means categorizing objects according to the wavelengths of light they reflect, I think that system might experience yellow, or at least do processing similar to what we do when we experience it.

      1. Wanted to say a couple things about GPT-3. I think the associations between words do constitute a kind of world model. Just not an especially reliable one. In one of the Turing tests Luciano Floridi participated in, his gotcha question was “If we’re shaking hands, whose hand are you holding?” I put this question to GPT-3, and on (I think) the third try I got “Why yours of course. What are you on about?” My point is that there is a whole lot of information about the world in all that text. I think the next development will/should be adding something that can do fact checking internally on GPT-3’s output, possibly causing GPT-3 to try again before presenting its output.
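
        Something like this hypothetical loop, where generate and fact_check are toy stand-ins rather than any real GPT-3 or Cyc API:

          import random

          FACTS = {"Paris is the capital of France."}  # toy internal knowledge base

          def generate(prompt):
              # Stand-in for the language model: sometimes right, sometimes not.
              return random.choice(["Paris is the capital of France.",
                                    "Lyon is the capital of France."])

          def fact_check(claim):
              return claim in FACTS

          def answer(prompt, max_tries=3):
              candidate = generate(prompt)
              for _ in range(max_tries - 1):
                  if fact_check(candidate):
                      break                     # passes the internal check
                  candidate = generate(prompt)  # try again before presenting output
              return candidate                  # may still be unchecked after max_tries

          print(answer("What is the capital of France?"))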

        *
        [ultimately, we need to work up the unitrackers that will help with this]

        1. I think you might be vastly underestimating what’s involved in the “something that can do fact checking” part. Along those same lines, it’s figuring out how all those unitrackers relate and enable actions that’s the hard part.

          1. Maybe I’m underestimating, but I’d bet the Cyc (GOFAI) software could be used to this effect today. As for unitrackers, hard maybe, but that’s where we should be working. But I can see most of it in my head, so it shouldn’t be that hard.

            *
            [this is actually the point where I usually find someone’s already been working on this for a couple years]

          2. Working in IT, one of the things I learned a long time ago is that things that seem simple and clear in our head become much more complex and problematic when we actually try to write them down, or explain them to someone else.

    1. Associations, I think, describe things at a certain level. But the higher levels of those associations matter. As well as which associations a system starts out with, which it learns, and how that learning happens. An undifferentiated mass of associations that aren’t organized for predictions would be lacking important functionality. Maybe another way to look at it is that the associational networks need to have the right constituents, and that part seems far from trivial.

      1. A multi-dimensional hierarchy, self-organizing for a variety of optimization algorithms, could account for and describe the matrix of associations in a way that mimics consciousness. And if it can mimic consciousness, who’s to tell the difference?

  7. For me qualia are richer than you describe. Alongside the discrimination of sensory information are the possibly good and bad implications for me, the actions I could take to improve the outcome, and the things I could pay attention to next to support those actions, as well as the possible outcomes (sensory, pleasure, pain) of having taken those actions. Implicit in all of that is a model of me.

    Furthermore this representation of qualia is actionable, and leads to my actual selected action and attention sets, and so to the next set of qualia…and when the next information set is constructed, it includes a record of what I did and could have done, and any surprising elements of the outcome, so a sense of time continuity emerges from it.

    I find it useful to think of this as a whiteboard of actionable information. The question to work on is then: what information needs to be on that whiteboard at each cognitive cycle to support what we do (functionally) and what it feels like (phenomenally). Subconscious mechanisms crank through the data on the board to select action and attention sets and to update the whiteboard every few 100ms. It is the model of self, and relation of self to world via action, attention and outcomes that gives rise to the sense of self. The information representing this self is on the whiteboard, and the consequences of it (action and attention selection) play out.
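
    To make the proposal concrete, here’s a minimal Python sketch of the whiteboard as a data structure; every field and update rule is invented for illustration, so this is the shape of the claim rather than a brain model:

      whiteboard = {
          "percepts": [],       # current sensory discriminations
          "implications": [],   # possible good/bad outcomes for "me"
          "action": None,       # what was just done (supports time continuity)
          "surprise": 0.0,      # mismatch between expected and actual outcome
      }

      def sense():
          return ["red berry ahead"]  # stand-in for perception

      def evaluate(percepts):
          # Implications for the self-model, with crude values attached.
          return [("approach", 0.8), ("ignore", 0.1)] if percepts else [("explore", 0.2)]

      def cycle(board):
          """One ~100 ms cognitive cycle: update the board, select an action."""
          board["percepts"] = sense()
          board["implications"] = evaluate(board["percepts"])
          action, value = max(board["implications"], key=lambda av: av[1])
          board["surprise"] = abs(value - 0.5)  # toy stand-in for prediction error
          board["action"] = action              # the record feeds the next cycle
          return board

      for _ in range(3):
          cycle(whiteboard)
      print(whiteboard)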

    1. Hey Peter,
      I actually agree with everything you say here about qualia. I just wasn’t explicit enough for you to see it. I referred to the associations and reactions involved in the qualia, but just didn’t go far enough in describing what that entails. I did note that the question is what causes qualia, and what do they cause? And you give a good account of their downstream effects.

      The whiteboard concept you describe sounds very similar to global workspace theory, which I think provides an overall framework, but has to be filled in with a lot of other stuff, such as Anil Seth’s predictive coding theories, and Michael Graziano’s attention schema, among many others.

      1. To my mind the global workspace theory is horribly misnamed. It seems to imply that there is some place in the brain which acts as the global workspace. Baars himself offered somewhere a much better analogy — that of a tannoy, a public address system making some information available to whomever it may concern. For me, Halligan and Oakley, in their “Chasing the Rainbow: The Non-conscious Nature of Being” get it right in taking that line of thought to its logical conclusion. To quote from their overview of their position:

        “In our account, we take this argument to its logical conclusion and propose that “consciousness” although temporally congruent involves no executive, causal, or controlling relationship with any of the familiar psychological processes conventionally attributed to it. In particular, we argue that all “contents of consciousness” are generated by and within non-conscious brain systems in the form of a continuous self-referential personal narrative that is not directed or influenced in any way by the “experience of consciousness” (which we will refer to as “personal awareness”). In other words, all psychological processing and psychological products are the products of fast efficient non-conscious systems.”

        Ironically, it suggests that there is indeed a “presentation” in a non-Cartesian theatre. Consciousness is a kind of mirror in which our non-conscious minds can contemplate themselves in order to achieve a better internal coordination. Just like an eye cannot see itself except in a mirror, we need consciousness to contemplate ourselves. It seems to me that this is what Dennett means by his (also unfortunately named) “user interface”.

        1. A lot depends on how we choose to define “consciousness”. The view you’re describing seems like the consciousness=introspection one. There’s something to be said for it. Without introspection, we wouldn’t even have the concept of consciousness. This fact impressed me for about a year or so a while back.

          But what pulled me back is that we’re often not introspecting, yet still consider ourselves to be conscious. And when patients with brain injuries lose their ability to self reflect, but can still communicate and navigate the world, most of us still consider them conscious, at least to some degree.

          But in the end, consciousness lies in the eye of the beholder. I can’t say the introspection version is wrong, only that my view is a bit more liberal than that.

  8. Great post as usual. But I always get completely frustrated with you. Is it me or you who sees this consciousness/qualia thing completely the wrong way? Since you are a good writer and manage also to think about interpretations of quantum mechanics, unfortunately I have a feeling it is me.
    But to me you can’t just say qualia do something. Somebody with colour-blindness is not physically identical to somebody with no colour blindness. The fact that somebody with colour blindness can’t recognize red fruit is not because he is missing red qualia. It is because he is missing out on some cones. If you want a human or a robot to find red fruit, you give him the right cones or photo receptors. That’s the only way to fix it. Then the question is, why does a human have qualia and a robot (probably) not. But this seems to me our main difference. You always try to make qualia some functional causal aspect of information processing, and I can imagine that same information processing going on without qualia. But I always enjoy reading your posts, and maybe one day you will write it so clearly I understand the fault in my reasoning. 🙂

    1. Thanks Oscar! Sorry to hear about your frustration. It’s definitely not my aim.

      Here’s something to consider. Imagine we have someone who can discuss their experience of red, blue, pain, and all the rest. Now, we do something that removes their qualia, but preserves all functionality. In other words, we turn them into a philosophical zombie. Can they still discuss their experience of red, pain, and the rest?

      For me, this is an implausible scenario. It seems to require some type of completely epiphenomenal extra ingredient in the mix. It would mean that when we talk about the redness of red or the painfulness of pain, it isn’t the actual redness of red or painfulness of pain that’s causing us to talk that way. It implies a sort of psycho-physical parallelism, Leibniz’s pre-established harmony.

      The thing is, if you accept that as reality, I don’t know of anything I can provide to convince you it isn’t the case. But I also don’t know of anything you can provide me to convince me it is the case. Similar to platonic forms, it seems to be something we can choose to accept or not.

      I don’t know if that makes things less or more frustrating. I hope it helps.

  9. As a kid, I actually thought red was green and green was red. I’d learned the words wrong, that’s all. I have a very clear memory of being extremely upset when I found out about my mistake. That’s probably not super relevant to this discussion, except as an example of how language muddles things up for us. Even if we experience the world in exactly the same way, linguistic confusion could make it seem like our experiences are vastly different.

    1. That reminds me of something that happened sometime around first grade. I went to school in a shirt that, I think, was bluish-green. We were supposed to follow our teacher out of the auditorium, but I got distracted. Then realized my class was leaving without me and ran after them. The lady running the event (can’t remember what it was) called after me from her podium to come back, but I ignored her and joined the class.

      She later sent a kid to our class asking for “the boy in the green shirt” to come back. (Maybe worried I went with the wrong class?) I remember looking at the kid and saying, “My shirt is blue,” but in that instant not being sure myself whether it was green or blue, and wondering if I was about to be called out on it. Apparently the kid wasn’t sure either, because after some confusion, he ended up accepting that I wasn’t the one he was looking for. It was sort of my “these aren’t the droids you’re looking for” moment.

      1. I see I confused the issue by using the mirror metaphor. No, the suggestion is not about introspection — as you say, that’s a non-starter. The idea is that consciousness is a kind of clearing house of salient information — some internal, a lot of it external. To anthropomorphise: Um, guys, that stripiness in the grass there… could higher decision centres engage, please? To which higher decision centres may post a request: All sensory modules, please attend: a tiger? And the olfactory centre may respond: Nothing to report. To which higher levels may post: Which way is the wind blowing? Etc…

        Not introspection, but more efficient use of information e.g. in that it is public, so may elicit a response from unexpected quarters (Memory archive: It’s that stripy rock again!)

        IIRC, H&O also suggest that by being approximately serial, the flow of information through that clearing house may greatly assist in communication with other humans, which is by its very nature also approximately serial.

        1. Hey Mike,
          Sorry, not sure why the spam filter gave you so much grief. I passed the original message rather than the final shorter one. If you’d prefer any of the others, just let me know.

          Anyway, thanks for the clarification!

          1. Thanks. I was confused and the system was confused, claiming I was an unknown user and asking me to log in, and when I tried to log in, telling me I was already logged in. Plus it was totally unclear whether my efforts to post were actually going in for moderation or not. Anyway, good to see it sorted out. That post is fine as is. Thanks again.

  10. It seems to me that a major problem here might be taking a very complex evolved dynamic, such as human visual images, and then trying to work backwards to the basic nature of the qualia behind them. I like to instead begin with a very basic non-evolved hypothetical qualia and then propose complex dynamics such as human vision as an evolved product. Consider the following account:

    Surely long ago there were organisms with brains that functioned non-consciously, somewhat like our computer controlled machines function non-consciously today. So just as some of our robots have cameras, microphones, and so on to serve as input information from which their algorithms produce output function, various organisms must have also functioned this way — camera eyes, microphone ears, chemical analysis for nose and mouth areas…

    Theoretically these biological robots must have stopped progressing somewhere however. I suspect that under more open environments, novel situations often couldn’t be met with sufficient enough programming.

    Next imagine brains doing something that inadvertently creates qualia and therefore the rise of sentient experiencers. I suspect this would be in the form of the right neuron-produced electromagnetic radiation. Here something would feel bad/good on the basis of associated brain physics. Furthermore imagine the experiencer inherently having at least some memory of past qualia, in the sense that the neuron firings which create a given experience should be more apt to occur again in some approximation of the original. So not only should it have present personal interests, but a past to temporally consider in some sense.

    Next imagine this experiencer being able to slightly affect organism output function. In EM radiation form this is theoretically possible through ephaptic coupling. Though mainly epiphenomenal so as not to wipe them all out quickly, perhaps some of these experiencers would affect the organism positively, given the mentioned programming problem (which is an inability to effectively deal with more open circumstances, a problem also displayed by current robots). Furthermore with success new experiencers should tend to be given more and more information. Existing nose chemical analysis might incite neurons to fire so that EM radiation associated with “smell” results. Or the light detected by an eye might add EM fields of phenomenal space and color. Thus the full human gamut might evolve one day.

    Though I’ve tried to state this scenario simply, I do grasp how involved this must seem. Could evolution really do all that? I think so. Furthermore if this is right there should be evidence of it. Conversely popular theories in general might ask far less of evolution, though I think by means of a magical step.

    In any case to answer Daniel Dennett, I believe that qualia exist essentially as a motivating value dynamic — good qualia bring a good existence and bad the opposite. Furthermore it would seem that phenomenal information may be provided in this form as well. While our taste often leans heavily on value, for example, our vision often leans heavily on information.

    1. In terms of whether to take the bottom up approach, from dynamics to qualia, or the top down one, qualia to dynamics, hopefully they would reconcile at some point. But if they don’t, I would favor the bottom up approach, mainly because introspection is unreliable. Doesn’t mean our introspective impressions don’t eventually have to be accounted for, just that we shouldn’t automatically assume the implications of those impressions do as well.

      On EM fields, you might want to check out James Cross’ latest post. He highlights a study you’d probably find interesting. I remain unconvinced about EM fields being substantively causal, but if they are, I remain unconvinced that the conscious vs unconscious line would fit neatly on the EM vs neural processing one. But these are ultimately empirical questions and maybe I have an update coming at some point.

      On answering Dennett’s question, those answers sound good to me Eric. The main thing is that qualia aren’t something in addition to the functionality. They are a crucial part of it. Once we accept that, they become something science can study. If we hold out for some kind of extra non-causal ingredient, then it becomes an unfalsifiable metaphysical add-on. Keith Frankish yesterday pointed out to me that’s the heart of illusionism. (A distinction I think he’d do well to clarify whenever he writes about it.)

      1. I don’t think Eric or I have been suggesting that qualia are “some kind of extra non-causal ingredient”. I may not be following your comment about Frankish, but I would think the heart of illusionism is that qualia are exactly “some kind of extra non-causal ingredients”.

        1. James, I didn’t necessarily mean to imply that about you or Eric. It was just some extra commentary while agreeing with Eric’s point. Although it is worth noting that few people see themselves as advocating for that, but to the extent they accept the idea of philosophical zombies, that’s essentially what they’re arguing for.

          Interestingly, Frankish tweeted this this morning.

          1. Maybe I don’t get Frankish.

            The stuff Mary doesn’t know about is color or more broadly qualia.

            How is that an illusion?

            If it doesn’t exist at all, then what is there even to discuss? Why do people stop at red lights and go at green ones?

            If we assume it exists in some way, is it physical?

            If it is physical, can it be caused and can it be a cause? If not, why not.

            If it is not physical, then it must be supernatural.

            The entire position seems nonsensical.

          2. What he’s saying is illusory is that extra non-causal ingredient. Here’s another tweet (in response to me).

            As I’ve noted before, I think he’s right ontologically, but using the word “illusion” just invites endless misinterpretation.

          3. What Mary does not know is merely how her specific, unique central nervous system’s (CNS) response to a particular sensory stimulus she has never experienced would fit in and compare with responses she has experienced. There is nothing non-physical about it, but because it includes detailed knowledge of the (current) state of her CNS, it is not covered by “knowing all there is to know about colour perception”. NB, Frank Jackson, who invented Mary in 1982, has since recanted and no longer accepts that there is anything non-physical involved.

          4. The issue with Mary’s room, like so many of these thought experiments, is the premise. No one really imagines her acquiring “all the physical information”, which would include all the information about her own nervous system. But the metaphysical implications only apply if she truly has “all the physical information”, which no one really thinks is possible. If she only has some practical subset, then the implied conclusion about physicalism is moot.

            It’s actually the epiphenomenal nature of what Mary supposedly wouldn’t know that convinced Jackson to recant.

          5. The problem I see is that when Mary leaves the room she does learn something new about color but the knowledge is something physical in her nervous system. If it is physical, it can be caused and probably is caused by the nervous system. If it is physical, it could also cause or be a partial cause of something physical. So in what way would it be an illusion or epiphenomenal?

          6. If Mary truly has all the physical information and then learns something new, then what she learns must be non-physical and non-causal. That’s the version Jackson recanted.

            The version you’re describing is Mary only having practical book knowledge. (Which is really what most people actually imagine when they think about this thought experiment.) In that case, it’s trivially true that she learns something new, but has no implications for physicalism, nor any requirement to be epiphenomenal or illusory.

          7. It is in principle impossible for Mary to have *all* physical information, because that information includes the state of her current CNS, and knowing that state changes it in a manner unpredictable to Mary. Unpredictable, because no system can have a complete and correct model of itself. Hence the premise “If Mary had *all* physical information” is necessarily false, and therefore the putative conclusion is only true in the vacuous logical sense of an implication being true when its premise is false.
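
            A toy sketch of that self-model point, in Python (purely illustrative; the class and names are hypothetical, not anything from the thread): recording a model of your own state changes the very state the model was supposed to capture, so the model is stale the moment it exists.

            ```python
            # Hypothetical toy "system" whose state is just everything it has recorded.
            class System:
                def __init__(self):
                    self.state = {"knowledge": set()}

                def snapshot(self):
                    # A "complete" self-model would have to describe this state.
                    return repr(self.state)

                def record(self, item):
                    # But recording anything (including a self-model) alters the state.
                    self.state["knowledge"].add(item)

            s = System()
            model = s.snapshot()          # model of the state at time t
            s.record(model)               # storing the model changes the state...
            print(model == s.snapshot())  # ...so it is already out of date: False
            ```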

          8. If the information about color in her nervous system is physical, then it is impossible to have all the physical information about color without having seen color.

          9. Depends on your definition of “physical”. But if it’s utterly non-causal, philosophically epiphenomenal, then I don’t see how we could ever know about it. It might be those intrinsic non-relational non-structural properties of matter Goff and other philosophers say exist.

          10. “But if it’s utterly non-causal, philosophically epiphenomenal, then I don’t see how we could ever know about it”.

            The view explains why ghosts and spirits are so hard to detect. They must be non-causal too for the most part. 🙂

            Seriously, this view puts qualia and consciousness right there with ghosts and spirits. Maybe it gets generated by matter, apparently caused by it, yet it is utterly undetectable otherwise, since it cannot react with anything in the material world. If it could be detected, then it would be causal.

          11. “But the metaphysical implications only apply if she truly has ‘all the physical information’.”

            No – they don’t apply even then. Earlier, you mentioned Joseph Levine on the “explanatory gap”. I had to refresh my memory. Turns out my vague memory was right: Levine is one of the good guys. As Wikipedia puts it, “He agrees that conceivability … is flawed as a means of establishing metaphysical realities.” Exactly. For very closely related reasons, explanatory gaps do not establish metaphysical gaps either.

          12. It seems like two issues are being conflated here. One is the explanatory gap, for which I agree with what you said. The second is what it means if Mary has, in a Laplace’s demon sense, all the physical information about her nervous system, yet still learns something new. I do think the second would have implications, but since I think it’s also impossible, they seem moot.

          13. This is a wrinkle sometimes missed in philosophical debates. If something is possible in principle, yet is also as a matter of principle impossible in practice, then it is de facto impossible in principle. Discussions based on a failure to grasp this are really just hot air. (I nearly said as meaningful as discussions about angels dancing on the tip of a needle, but for all I know, that particular scholastic dispute might have been just a matter of scholastics attempting to grapple with the concept of infinity.)

          14. Right. I think once we understand what the premise entails, the thought experiment is done. But I often receive a lot of pushback for making those kinds of points. Apparently you’re not supposed to drag practicality into these kinds of things.

          15. What would it mean to have all the physical information about your nervous system?

            Surely, you can never have complete information about what your nervous system might experience in the future. The problem is that knowledge in your nervous system isn’t fixed but changes based on experience.

          16. It seems uncharitable to interpret “Mary has all the physical information” as making her Laplace’s demon, even if that is the most obvious literal reading. Neither you nor I would say that one needs *that* much information to determine what experiences a person has, though we are both physicalists.

          17. Perhaps. I used the “Laplace’s demon” line to emphasize we’re not just talking about conventional book knowledge here, not if the thought experiment is going to have the implications Jackson originally claimed.

    2. To me the key thing about neurons is that they react. In a sense, all neurons are sensory neurons. The ones we usually think of as sensory neurons react to stimuli external to the nervous system (although possibly internal to the organism). Other neurons react to other neurons.

      The most primitive forms – possibly neurons in sponges – may have a simple capability to react to something like invading bacteria, triggering actions to clear the bacteria out. The path between detection and reaction is short, possibly through a single neuron. As more complex decision-making becomes advantageous, more neurons evolve to sit between the detection and reaction. The additional neurons enable more nuanced and flexible responses.

      The problem arises as the number of neurons increases: integrating the decision-making becomes exponentially more difficult. Communication also becomes problematic. We are working with relatively slow communication mechanisms and hundreds of different types of input, as well as the need to integrate memory and learning along with sensory input. Brain rhythms provide part of the solution for keeping everything in sync, but they probably aren’t sufficient by themselves, so consciousness evolves, possibly appropriating the EM field already present as a side effect of brain electrical activity to create a higher level of integration of sensory input, learning, and memory.
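
      To put rough numbers on that scaling problem (a back-of-envelope sketch of my own, not an actual neural model): the number of possible pairwise links grows quadratically with neuron count, while the number of possible neuron coalitions grows exponentially.

      ```python
      # Back-of-envelope: how coordination demands scale with neuron count n.
      for n in (10, 100, 1000):
          pairs = n * (n - 1) // 2  # possible direct pairwise connections: O(n^2)
          print(f"{n} neurons: {pairs:,} possible links, 2^{n} possible subsets to coordinate")
      ```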

      1. I reckon that nested brain rhythms with top-down (or inside-out) feedback provide the organism-wide coordination James mentions, and that it is not necessary to invoke an EM field effect beyond the ‘normal’ operation of neurons and their connections at synapses. The different brainwave frequencies (gamma, alpha etc.) correspond to individual neuron firing rates, stacking up over layered processing to the overall frequency at which the whole brain can play a part.

        Here’s an extract from a note I wrote about how the speed of language gives clues to this:

        “The speed at which we are able to speak and be understood falls within quite a tight range, even across different languages, with typical values of:
        English: 6 syllables per second
        Mandarin: 5 syllables per second
        Spanish: 8 syllables per second

        This corresponds, in round numbers, to about 100 ms (thousandths of a second) per syllable, around 300 ms per word and around 200 words per minute.

        This closely relates to the timing of a single cognitive cycle (the basic timeslot over which the mechanisms of consciousness repeat), which is also around 300 ms. This is made up of 3 steps:

        0-100 ms perception
        100-200 ms understanding
        200-300 ms action selection.

        My interpretation of this is that the syllable is the smallest unit that we are able to separately perceive and recognise, or produce when speaking, as a distinct entity, while a word is the fundamental unit of conscious understanding.”
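
        As a quick sanity check on those round numbers (my own arithmetic, not part of the quoted note): the stated rates work out to roughly 125–200 ms per syllable, and 200 words per minute is exactly 300 ms per word.

        ```python
        # Convert the quoted speech rates into per-unit timings.
        rates = {"English": 6, "Mandarin": 5, "Spanish": 8}  # syllables per second
        for lang, sps in rates.items():
            print(f"{lang}: {1000 / sps:.0f} ms per syllable")  # 167, 200, 125

        print(60_000 / 200)  # 200 words per minute -> 300.0 ms per word
        ```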
