What is a non-functional account of consciousness supposed to be?

I’m a functionalist. I think the mind and consciousness are about what the brain does, rather than its particular composition, or some other attribute. Which means that if another system did the same or similar things, it would make sense to say it was conscious. Consciousness is as consciousness does.

Functionalism has some advantages over other meta-theories of consciousness. One is that since we’re talking about functionality, about capabilities, establishing consciousness in other species and systems is a matter of establishing what they can do. But it does require accepting that consciousness can come in gradations. And that “consciousness” is not a precise designation of which collection of functionality is required. So it means giving up primitivism about consciousness, accepting that rather than a single natural kind, it’s a hazy collection of many different kinds.

It’s worth pausing to be clear on what functionalism is. It’s about cause-effect relationships. These relationships can, in principle, be modeled by Ramsey sentences, a technique David Lewis adapted from Frank Ramsey, which models a causal sequence, or entire structures of those sequences. (Suzi Travis has an excellent post which includes an introduction to them.) At the heart of the entire enterprise are these cause-effect relations.
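
For a rough illustration (a toy sketch of my own, not Lewis’s actual formulation), take a one-line “theory” of pain: tissue damage causes pain, and pain causes wincing and avoidance. Ramsifying it means replacing the mental term with a bound variable and existentially quantifying over it:

\[ \exists x \,\big[\, \mathrm{Causes}(\mathit{damage}, x) \;\wedge\; \mathrm{Causes}(x, \mathit{wincing}) \;\wedge\; \mathrm{Causes}(x, \mathit{avoidance}) \,\big] \]

“Pain” is then whatever state x occupies that causal role, regardless of what it’s made of.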

Of course, cause-effect relations are themselves emergent from the symmetrical (reversible) structural relations of more fundamental physics. Causes and effects attain their asymmetry due to the Second Law of Thermodynamics, the one that says entropy in an isolated system never decreases. So another way to talk about functionalism is in terms of structural realism. Ultimately functionalism is about structural relations. (Something it took me a while to appreciate after discovering structural realism.)

Over the years, I’ve received a lot of different reactions to this position. Not a few aren’t sure what functionalism is. Some are outraged by the idea. Others equate it with behaviorism. (Unlike behaviorism, functionalism accepts the existence of intermediate states between stimulus and response.)

But occasionally someone responds that the idea is obvious and trivial. I think this response is interesting, because I basically agree. It is trivial, or it should be. I only started calling myself a functionalist because so many people insist that the real problem of consciousness isn’t about functionality.

Philosophers have long argued for a version of consciousness that is beyond functionality. Ned Block, when making his distinction between phenomenal and access consciousness, while admitting there were functional notions of phenomenal consciousness, argued for a version that was something other than functionality (or intentionality, which is also relational). And David Chalmers argues that solving the hard problem of consciousness isn’t about solving the structure and relations that science can usually get a handle on.

Anyone who’s known me for a while will be aware that I think these views are mistaken. But I have to admit something. Part of the reason I’m not enthusiastic about them is I don’t even know what a non-functional view of consciousness is supposed to be.

I understand old school interactionist dualism well enough. But in that case there are still causes and effects. It’s just that most of them are hidden from us in some kind of non-physical substrate. But the interaction in interactionist dualism should be detectable by science, and hasn’t been, which I think is why many contemporary non-physicalists gravitate to other options.

It’s when we get to views like property dualism and panpsychism that I start to lose understanding. We’re supposed to be talking about something beyond the functionality, beyond structure and relations, something that could be absent without making any difference in functionality (philosophical zombies), that could change without change in functionality (inverted qualia), or is in principle impossible to observe from any perspective other than the subject’s (Mary’s room). It’s not clear to me what exactly it is we’re talking about here.

This view has epiphenomenal implications, that consciousness is causally impotent, making no difference in the world. It’s interesting that the arguments to avoid this implication inevitably sneak functionality back into the picture. One option, explored by David Chalmers in his book The Conscious Mind, is that consciousness is causality, which strikes me as a very minimal form of functionalism. Another, one Chalmers favors, is the Russellian monist notion that consciousness, or proto-consciousness, sits in the intrinsic properties of matter, and is basically the causes behind the causes, which, again, seems to amount to a form of hidden functionalism.

But these arguments aside, it’s still unclear what exactly it is we’re talking about. It’s frequently admitted that no one can really say what it is. However, it’s typically argued that we can point to various examples to make it clear, such as the redness of an apple, the painfulness of a toothache, seeing black letters on a white page, the taste of a fruit juice, imagining the Eiffel tower, etc.

The thing is, all of these examples strike me as examples of functionality. Redness is a distinction our visual system makes, making something distinct and of high salience, among other likely functions. A toothache obviously is a signal of a problem that needs to be dealt with. Black letters on a white page is pattern recognition to parse symbolic communication. The taste of a drink conveys information about that drink (good=keep drinking, bad=stop and maybe spit out). And remembering past experiences or simulating possible new ones, like imagining the Eiffel tower, has obvious adaptive benefits.

I’ve read enough philosophy to know the usual response. That I’m identifying the functional aspects of these experiences, but that the functional description leaves out something crucial. My question is, what? Of course, I know the typical response here too. It’s ineffable. It can’t be described or analyzed. Ok, how do we know it’s there? Each of us supposedly has first person access to it. But I just indicated that my own first person access seems to indicate only functionality. Impasse.

So I’m a functionalist, not just because I think it’s a promising approach, but because I really don’t understand the alternatives. Could I be missing something? If so, what?

76 thoughts on “What is a non-functional account of consciousness supposed to be?”

  1. Sigh. Sadly, much as I would like to believe otherwise, you are probably right. Does this also leave you denying free will? Presumably. Again, that’s probably right. Again, sigh.


    1. I’m a compatibilist, so I think our will is free enough for social responsibility to be a coherent and useful concept. But libertarian free will? I’m not sure how coherent a concept that is, or what benefit someone who doesn’t buy an omnipotent deity would see from it.


  2. I read your consciousness articles (with minimal comprehension as my IQ seems to be slipping as I slide toward my grave) and I am guessing you might be a good person to ask whether anyone has parsed out the development of consciousness as, as I like to view it, a dance with evolution. Obviously brains are involved, but physical bodies have to provide the platforms necessary to stimulate evolution in the direction of consciousness. Think of apes and monkeys, with physical structures like opposable thumbs, under environmental stresses like the retreat of forests and the expansion of flat grasslands that allow seeing farther and thus provide a pressure toward walking upright, etc.

    We seem not to be the only conscious species, but we do seem to be the most conscious species, so mapping out the various dances with evolution toward consciousness by many animals may be telling. (I have never accepted that dolphins are as conscious and intelligent as they are claimed to be because their environment doesn’t seem to provide as many environmental stresses as land animals receive.)


    1. I’ve been working on an extended piece that includes the evolution of consciousness. The bits and pieces, I think, extend back to single cell organisms, but it doesn’t really come together into something that seems like consciousness until we have the development of vision and hearing, brains, and a spatial and temporal mapping of the environment. This is found in hippocampal structures in vertebrates and similar structures in cephalopods and arthropods.


    2. Several thinkers discuss consciousness from an evolutionary viewpoint: Antonio Damasio, Joseph LeDoux, Daniel Dennett, and some teams including Todd Feinberg and Jon Mallatt in their Ancient Origins of Consciousness book, and Simona Ginsburg and Eva Jablonka in The Evolution of the Sensitive Soul. (Warning, the last book is long and technical.)

      I often talk in terms of a functional hierarchy.

      1. Automatic responses and fixed action patterns
      2. Models of the body in an environment.
      3. Causal models.
      4. Recursive models of the above.

      Humans have all 4. Some animals have glimmers of 4, but language really gave humans selection pressure to develop it more. 3 is pretty sophisticated in mammals and birds, but seems limited in fish. There’s debate about how much it’s in invertebrates, at least other than cephalopods. 2 seems present with distance senses (sight, hearing, smell). It’s very easy to see examples of 2 and think they’re the same as 4, but we have to always be on guard against both anthropomorphism and anthropocentrism.

      Like biology overall, consciousness doesn’t make much sense to me except in the light of evolution. But that’s coming from a confirmed functionalist.


        1. Depends on how you define “consciousness”. It seems clear some mammals have varying degrees of metacognition without language, but it also seems clear they’re pale imitations of what humans have. And in human evolution, increasing sophistication in language and those models may have happened together in a feedback loop.

          But who knows where the evidence might eventually swing.


  3. I was first introduced to the idea of a “functional definition” by my 2nd-year logic professor, Dr. Anil Gupta. He only mentioned it in passing, but I was immediately struck by its good sense. Why would anyone define a pen as a series of small concentric tubes containing ink, when they could define it as an instrument used for writing?

    We can apply functional definition to any number of things, including consciousness. A functional definition of a toothache might be that it is a feeling that causes someone to express pain physically and verbally, take an analgesic, make an appointment with a dentist, and so on.

    So why would anyone define a toothache as a complex of neural activities? And yet out of the gate you define mind and consciousness as “what the brain does.” Wouldn’t it make more sense, functionally speaking, to define it in terms of what a person does?


    1. The good thing about functional definitions is it keeps us open to alternate implementations. I think about the old American railroad industry, which dominated the economy a century ago. But they defined themselves as being in the railroad business, instead of in transportation, and so missed the shifts as roads improved and air travel became cheaper.

      I focused on the brain because it’s the only organ we know is indisputably necessary for consciousness in a person. A person in a coma can have an abscessed tooth, but there won’t be a toothache without a working nervous system.

      But we could use a whole person as a starting point. The main thing is it focuses on what the system does, or can do. And if another system can do the same things, then we have to think carefully if we want to say that system isn’t itself conscious. Of course, we can debate about exactly which of those functions are necessary for the label “conscious”.


      1. Someone could say of a pen, “I focus on the tubes and the ink because they are the only physical things we know that are indisputably necessary for its function as a writing instrument.” There’s nothing wrong with that, but the focus is now on a non-functional account of the pen — an account in terms of what it’s made of and how it works, rather than what it does.

        You ask what a non-functional account of consciousness would look like. My answer is that it would look like an account in terms of what it’s made of and how it works, rather than what it does. It would focus on the brain rather than on experience.

        An engineer legitimately concerned with how a pen works would be hard pressed to explain, in terms of tubes and ink, what is meant by “writing.” He would be starting from the wrong ontology. In the same way, a focus on the brain is not the way to get at the functionality of consciousness. For that, one must begin with experience, the very basis of consciousness. I trust the engineer who struggles to explain writing in terms of the indisputably necessary components of a pen would not suggest in the end that, because writing is not found among them, it must be a fiction.


        1. I would say “how it works” and “what it does” are simply the same type of analysis at different scales. What a person does depends on how their body, including their brain, works. And how their brain works depends on what the various subsystems do. In both cases we’re talking about cause-effect relations, just at different levels of analysis.

          “How it’s made” can come into that analysis. I noted in another conversation that nothing in biology makes sense except in the light of evolution. I think that’s true of consciousness as much as anything else. If for example we want to understand colors, we have to understand the survival advantages they provide for an organism, and that often means examining its evolutionary history.

          So I think we want to conduct our analysis at all levels. But it seems at every level, cause-effect remains relevant. At least as far as I can see.


          1. “How it works” and “what it does” are related analyses, but not the same. You can know what something does without knowing how it works. Examples abound: an airplane, or a large language model.

            More to the point, before you can understand how something works, you need to know what it does. I think it’s safe to say this; I can’t think of any counterexamples. So to understand how consciousness works, you first need to understand what consciousness does. A functional definition begins at a higher level and proceeds to the details. In understanding how something works, you start to understand what the parts do. But then you are engaged in a functional definition of the parts.

            All of this is only to express my bemusement at the term “functionalism” as used in consciousness studies. A true functionalism would pay more attention to the appropriate level of ontology. The function of colour is not the function of rods and cones, and can’t adequately be explained in those terms.

            “How it’s made” is a different question again. You can understand how something is made without having any idea what it does. There used to be a TV show called “How It’s Made,” and if you tuned in during the middle of a program, you sometimes had to guess what was being made, even though you could see how it was being done.


  4. The alternative is pretty obvious. Consciousness is something physical that we don’t fully understand. The fact we don’t understand it doesn’t mean we can dismiss it. The mere fact that we both talk about “red” and understand what the other person is saying is evidence of its existence.

    What I can never understand about functionalism is how we know when we have identified the “function.” It’s endlessly subjective and non-scientific, which you correctly point out when you say consciousness is in the eye of the beholder.

    I noticed recently that some scientists managed to trick the eye with lasers into seeing a color never before seen. What was the “function” of the brain generating that color?


    1. Right. We can always dismiss a theory we dislike by saying we just don’t understand it yet and holding out for something better. But what would that prospective theory be if not something in terms of cause-effect or structural relations? What is a hypothetical description of such a theory?

      Identifying the full functionality in a biological system is never simple. And evolution is a messy engineer that never has to worry about understanding what it comes up with. But we can gradually build up a picture of the cause and effect relations. I think color, for instance, is best understood within the framework of our evolutionary affordances. It’s why other species, with different affordances, perceive the world very differently.

      I’m waiting to see if other labs can reproduce “olo”. But we have to remember that a function doesn’t necessarily have to be adaptive. Functionalism is compatible with spandrels.


      1. Maybe you could explain how the “functions” are derived in some manner more objective than opinion or feeling?

        Neurons aren’t performing “functions.” They are reacting to information from other neurons, neurotransmitters, and probably even the brain infrastructure and passing information to other neurons. That’s their function.

        Some of that information comes from neurons reacting to the external world. Reacting and passing information on – that must have been what the V1 and V2 neurons were doing when “olo” was seen.


        1. “Maybe you could explain how the “functions” are derived in some manner more objective than opinion or feeling?”

          Are you thinking of “function” in some sense other than cause-effect relations? If not, then why wouldn’t standard scientific methods of observation and experiment work?

          Right, there are the functions of individual neurons, and there are functions of neural circuits and regions. We won’t find color, for instance, in any single neuron. Color is more likely a long and complex causal chain including what objects reflect what wavelength of electromagnetic radiation, the relevance of those objects in our evolutionary history, and the associations that are triggered by the particular stimuli.


          1. Let’s take “pain” for example.

            It would be easy to say “pain” is to let you know something is damaged, therefore ease off or help it heal. There’s a “function.”

            But let’s look at additional evidence.

            1- People with various diseases – fibromyalgia, for example – feel pain when there is no apparent damage.

            2- People feel emotional “pain” with no apparent damage. Tylenol will even relieve it.

            3- People with extremely severe damage may feel no pain.

            Do all of these become exceptions, additional functions, or something else? We can’t dismiss them as the “pain function not working correctly” so we don’t need to consider them. If we are multiplying “functions,” how do we know when we have all the correct ones?

            To understand why or how pain is produced, you can’t just look at “functions.” You have to get down to how the neurons are working, how they are connected, and what types of emergent turbulent structures develop when pain is felt.

            Ironically, I think, you are proposing a theory of reduction to function, while I am looking at a reduction to physical matter and forces.


          2. I think we have to make a distinction between functionalism and teleofunctionalism. The first is just concerned with causal processes. The second focuses on the adaptive role of those causes and effects.

            I think we would expect the lion’s share of functionality to be adaptive, at least in healthy organisms. But aside from the possibility of spandrels, things often don’t work right. All organisms eventually break down. Or they can be in environments that their processes aren’t optimized for.

            So the adaptive role of pain, an assessment/reaction normally made from signals from the peripheral nervous system, is normally to motivate activity that mitigates threats to an organism’s health. But we can have the reaction without the signal, or the signal without the reaction. These may not be adaptive, but they’re still causal.

            I’m totally onboard with studying neurons, as well as neural circuits and overall brain regions. But when we’re studying them, we’re trying to learn what causes what effects.

            That said, you can be a functionalist without being a physicalist. I noted some possibilities in the post. But if you think it should all be discoverable, then that seems inherently physicalist.


          3. “But we can have the reaction without the signal, or the signal without the reaction. These may not be adaptive, but they’re still causal.”

            Right, but what’s the function of “reaction without the signal, or the signal without the reaction?” If anything that is caused in the brain is definitionally “functional,” then functionalism can explain everything, like elan vital, but it doesn’t provide a theory for anything. It’s unfalsifiable.


          4. So, with what I said above in mind, are you asking about the causal effects or the teleonomic purpose? The causal effect of a reaction without the signal is obviously to impair activity. The evolutionary purpose is likely to inhibit activity so an animal can recuperate. But as I noted above, when it’s not working correctly, that purpose isn’t being served, and lives can be ruined as a result.

            Functionalism is a philosophical theory, a meta-theory, that focuses on theories about causal processes. Is that falsifiable? I guess it depends on your stance toward non-causal…whatever. It’s probably more precise to say that non-causal theories aren’t falsifiable, but of course that makes the converse unfalsifiable as well. This is the triviality argument again. As I noted in the post, it should be trivial. But people argue against it. My point is I don’t understand what they’re arguing for.


          5. What were you talking about when you wrote this?

            “The thing is, all of these examples strike me as examples of functionality. Redness is a distinction our visual system makes, making something distinct and of high salience, among other likely functions. A toothache obviously is a signal of a problem that needs to be dealt with. Black letters on a white page is pattern recognition to parse symbolic communication. The taste of a drink conveys information about that drink (good=keep drinking, bad=stop and maybe spit out). And remembering past experiences or simulating possible new ones, like imagining the Eiffel tower, has obvious adaptive benefits.”

            That seems teleonomic to me. Maybe that’s where the confusion comes in.

            If we want to talk about how “red” arises from signals from specialized cells in the eye that pass through multiple layers of neurons to appear to us eventually as “red,” that’s a causal chain of effects that, I think, would be valuable to talk about and study. The problem comes when you try to abstract the “red” out of red and begin to talk about salience or other purported “functions.”


          6. Those examples are teleonomic. As I noted above, it seems reasonable to expect most of the effects in a healthy organism to be adaptive. Maybe I should have thrown in a maladaptive example, but no set of examples will be exhaustive, and I was trying to get the general idea across.

            “The problem comes when you try to abstract the “red” out of red and begin to talk about salience or other purported “functions.””

            Why is that a problem? How is it different from biologists looking at the effects of a working heart or a liver and trying to figure out the adaptive functions they fulfill? We know natural selection can’t work on epiphenomena. Something like color perception makes a lot more sense when looked at in evolutionary terms.


          7. It’s fine to look at adaptive functions, but that doesn’t allow a heart surgeon to use a screwdriver and wrench to fix a heart like a mechanic might use them to fix a gas pump. It doesn’t make the mechanical pump and biological pump the same in any sense other than that they both pump something. Taking apart the mechanical pump and examining its workings does not tell us much, if anything, about how the biological heart works, even if some of the physical principles are the same.

            So, sure, the heart acts like a pump to distribute oxygen, nutrients, immune cells, and such around the body like a mechanical pump. How does that offer any account whatsoever of how the heart works?

            You are asking for a non-functionalist account for something for which there is no functionalist account.


          8. You just gave a partial functionalist account of the heart. Is it complete? No, but it’s useful for a lot of purposes. (Would we have pacemakers without it?) If the standard is that only an exhaustively complete account is a functionalist account, then we can’t provide one even for the systems we engineer and build.

            So I think we have partial functionalist accounts for hearts, livers, stomachs, and yes, even brains. We’re constantly learning more. But what we’re learning is functionality.

            Now, someone comes up and says that our attempt to understand the causal processes of the heart is misguided, because we’re not studying its non-functional nature. All I’m asking is what that could even mean?


          9. “Now, someone comes up and says that our attempt to understand the causal processes of the heart is misguided, because we’re not studying its non-functional nature. All I’m asking is what that could even mean?”

            I’m not saying that, so I don’t know or care what it means.

            I’m saying that knowing the heart works as a pump doesn’t add to our understanding of its causal processes because many types of devices with different causal processes can perform the function of a pump. Seeing the heart as a “pump function” creates a black box that serves to hide the substrate-specific causal processes that make the heart work.


          10. “I’m not saying that, so I don’t know or care what it means.”

            People are saying it about consciousness, which is my whole point.

            Whether we want to black box particular functions to hide the substrate details or not depends on what we’re trying to accomplish. If we’re trying to reproduce the functionality somewhere else, the black box might be ideal. If we’re trying to understand the intricacies of the system being studied, then black boxing it might be self-defeating. But it would all be functional understandings.


  5. The ability to analyze the phenomenon begets the ability to analyze the ability.
    To what effect? Self-deification? Bah! These questions are the fallout of our attaining a level of recursive analysis.


  6. I agree with you 100%. We are not likely to know whether an AI or AGI system is conscious until we have identified the functionality of human consciousness. I’m not saying that we have to understand human consciousness in order to program artificially conscious systems; just that if we want to compare what we’ve programmed to the consciousness to which we all have access, we’ll have to understand the functionality of our own consciousness. But we could start from scratch, programming the desired functionality of consciousness that common sense tells us: perception of the external and internal worlds, associating specific perceptions with general concepts (~thoughts), assigning values (~feelings/emotions) to perceptions (~visual/audio/tactile/homeostatic) and concepts, storing and accessing perceptions, concepts, and values in short and long-term memory, assumptions (~axioms/beliefs) and derivations (~theorems), assignment and execution of goals (~behaviors), limits on specific behaviors (~ethics/morality), self-programming (~self-actualization), etc.

    We can add, subtract, multiply, or subdivide further according to our tastes.


    1. AI researchers have been trying to build a mind from first principles for decades. Taking cues from the brain has helped. It’s not clear developers would have considered connectionist networks without neuroscience providing information on them.

      Of course, it goes both ways. Neuroscience occasionally takes ideas from AI research on what to investigate in the biology. So although they’re very different fields (be wary of an expert in one opining about the other) a lot of progress happens from cross pollination.

      I think you’re right that we’ll continue lurching forward, with breakthroughs in one area helping the other. All with a lot of people continuously declaring that they’re absolutely not the same thing, and that acting like they are is a grave error.


  7. Suzi’s article actually made me swing back against functionalism a good deal, whereas I had thought I was more or less some kind of functionalist. The input-output model skips over what’s happening in the middle, which I think is the really interesting part. I’ll quote from my comment there:

    The toaster example seems to me like a good argument against functionalism — it discards most of the causal structure we should be interested in! Eg we could create a tiny language model that responds to very short pieces of text, then create a program that produces the exact same outputs for every possible input, storing each input and output in a big table it simply refers to each time. In terms of their input and output they’re identical, but I think it’s clear they’re not doing the same thing. We could further imagine a “Swamp program” where the output values in the table were determined randomly, and just by extraordinary luck happened to be the same as the language model and its imitation program. The *how* is, I think, extremely relevant.

    [Thanks for introducing me to the “Swamp Man” thought experiment btw]

    That example was about a language model, but we could do the same thing with a (seemingly) conscious mind. Or with any functional part of a mind.
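
    To make the contrast concrete, here’s a toy sketch in code (hypothetical and purely illustrative, nothing from the actual post):

# Toy illustration: a tiny "model" versus a lookup table with identical input-output behavior.
from itertools import product

def tiny_model(text: str) -> str:
    """Stand-in for a small language model: it computes its reply."""
    return "yes" if text.count("a") % 2 == 0 else "no"

# Precompute a reply for every possible 3-character input over a tiny alphabet.
table = {"".join(chars): tiny_model("".join(chars))
         for chars in product("ab", repeat=3)}

def lookup_program(text: str) -> str:
    """No computation of the answer at all: it just retrieves a stored reply."""
    return table[text]

# Identical input-output behavior across the whole input space,
# but very different internal causal structure.
assert all(tiny_model(s) == lookup_program(s) for s in table)

    The two agree on every possible input, yet only one of them is doing anything we’d want to call processing.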

    It makes me think of Occasionalism – the idea that God alone is the only real cause, and that God just chooses to make created substances correlate in such a way that it seems as though they had real causal powers on each other. Would you say that a God intervening in such a way as to produce the same functional inputs & outputs, down to the subatomic level, would be sufficient to constitute consciousness?

    I think consciousness comes down to some kind of causal structure, but the how of the function is at least as important as the what. (While experience more broadly is causality itself, experienced from a particular POV, with equal ontological validity to every other POV)

    “The thing is, all of these examples strike me as examples of functionality. Redness is a distinction our visual system makes, making something distinct and of high salience, among other likely functions. A toothache obviously is a signal of a problem that needs to be dealt with.”

    I think you’re making a mistake when you try to analyse it, especially because your analysis is trying to apply a third person perspective to the situation. A toothache does indicate a problem, but we experience it simply as pain. Redness is a distinction our visual system makes, but we would feel redness much the same even if we had no idea we even had a visual system. We can imagine being ghosts, without any visual system yet still somehow able to see red, which demonstrates that the two concepts are not simply identical, even if they may be so in fact.

    A helpful way to think about it is via empathy. If someone is in pain, we don’t just take an external perspective that there is a signal that there’s a problem, we try to take their point of view and imagine feeling their pain.


    1. I think when being concerned about what’s in the middle, we have to remember that functionalism is not behaviorism.  It does take into account intermediate states.  So the question is at what level of detail we have to capture the individual causal relations.  My answer is it has to be at a level where the system itself couldn’t tell the difference. 

      Of course, we might imagine a technology that could observe your behavior for your entire life, and when you die construct a simulation of you.  In principle, it might even be indistinguishable from you for friends and family.  I’d even say it was conscious and could well believe itself that it is you.

      But I think most people would agree it would not be you from original-you’s perspective.  For that, it would need something shaped from your private thoughts throughout that lifetime of behavior.  On the other hand, if the technology monitored your brain state for the same period and constructed a simulation based on it, then the question seems more of a judgment call.

      So maybe the real concern here is capturing cause-effect relations at the right level of detail?

      I think we definitely have to take the first person perspective into account when looking at this.  But it seems like a mistake to only take that into account.  That’s basically taking only one perspective, particularly one with a lot of limitations.  (We didn’t evolve to have accurate introspective models of our mind.)  It seems like a dead end for an investigation.  And what we call a “third person” perspective is actually a synthesis of first person perspectives, each one able to add more reliability to the theoretical picture.

      Of course, when we’re feeling pain, we don’t consciously process the causal chain.  We may not know why we’re feeling the pain.  But we evolved the ability to feel pain for reasons, and that feeling usually motivates action that might not happen without it.  (Consider that we usually are motivated to resolve pain fairly quickly, but often not to resolve bad eating habits without a lot of discipline.)

      You’re welcome on Swampman.  Although I’m reluctant to put too much weight on profoundly low probability scenarios.  If I ask you how babies are made, and you give me the usual account, but I then object with swampbaby created by a lightning strike, are there really grounds to doubt the usual account?


      1. “My answer is it has to be at a level where the system itself couldn’t tell the difference.”

        What do you mean by this? At a glance, it seems like it requires consciousness, which would be an issue for something that’s meant to be part of our criteria for determining if it is conscious or not.

        I think the difficulty is, what is the function of consciousness? What does consciousness do? If it’s merely what enables certain forms of behaviour, then I think we are indeed back to behaviourism. But from an evolutionary point of view, it seems clear to me that behaviours are the ultimate functionality. If we had evolved the lookup table simulacra of consciousness, we would be able to get the same evolutionary benefits without the same consciousness.

        Whether a simulation of me would be conscious is a slightly different question. I think it depends on the mechanisms it uses to simulate me. If it worked via look-up table, I don’t believe it could be conscious.

        I think functionalism is right that it’s about causal structure, and I also think it’s (probably) right that it’s not about fine-grained precise causal structure at the lowest level. But how can we say it’s about function if we can’t say what the function of consciousness is?

        “I think we definitely have to take the first person perspective into account when looking at this.  But it seems like a mistake to only take that into account.  That’s basically taking only one perspective, particularly one with a lot of limitations.  (We didn’t evolve to have accurate introspective models of our mind.)  It seems like a dead end for an investigation.”

        We certainly shouldn’t exclude third person accounts, but my point was that you were missing what a non-functionalist idea of consciousness might mean because you were jumping too quickly to identifying the 1st person explanandum with the proposed 3rd person explanans. In order to see what a non-functionalist account might look like, you need to stop and take in the 1st person explanandum on its own, and see how it is not conceptually identical to the 3rd person explanans. You can think one without thinking the other. It’s like how you and I both know that water is H2O, but we can see that the experience of water is distinct from the idea of H2O, and so we can imagine how people might have different theories about water.

        “Although I’m reluctant to put too much weight on profoundly low probability scenarios.  If I ask you how babies are made, and you give me the usual account, but I then object with swampbaby created by a lightning strike, are there really grounds to doubt the usual account?”

        I think this is missing the point of thought experiments. The point is not that these things might ever actually occur, or to use them as statistical evidence against the normal case, but to draw out the full implications of our concepts and beliefs, good or bad. And impossible/unlikely thought experiments have been so crucial to the development of philosophy and science, we really must take them seriously.


        1. “At a glance, it seems like it requires consciousness, which would be an issue for something that’s meant to be part of our criteria for determining if it is conscious or not.”

          Certainly if we try to explain consciousness using consciousness, that would be an issue.  But criteria for consciousness that requires consciousness?  What other criteria should we use?  More specifically, I was referring to whether the system can detect the difference.  If I change out my computer’s monitor, but in a way that doesn’t require any new drivers, then nothing’s changed as far as the OS and computer are concerned. There are aspects of the system the OS can’t monitor, and that definitely is true with minds and brains.

          “What does consciousness do?”

          Things we use “consciousness” to refer to do a lot.  That’s the problem.  It can’t be boiled down to just one thing.  We can talk in terms of models of the body and environment, simulating past and possible future scenarios, the system recursively modeling its own operations, and many other things.  But which one of those amounts to consciousness depends on which definition of “conscious” we’re working with.

          “If we had evolved the lookup table simulacra of consciousness, we would be able to get the same evolutionary benefits without the same consciousness.”

          The problem is that for any system that has to handle a large range of stimuli, the lookup tables quickly escalate into physical implausibility.  Evolution had to find shortcut optimizations.  The integrated nature of the mammalian cortex is the cumulative result of those shortcuts.  Put another way, we’re heavily optimized search engines.  But the search happens below levels we can introspect.  So it seems like we just have primal experiences.
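
          A quick back-of-envelope illustration of that escalation (rough numbers, just to convey the scale):

          \[ \underbrace{256^{1{,}000{,}000}}_{\text{possible 1-megapixel, 8-bit grayscale images}} \;\approx\; 10^{2{,}400{,}000} \;\gg\; 10^{80} \;\approx\; \text{atoms in the observable universe} \]

          So a table with one entry per possible visual input alone could never physically exist; any real system has to compress, generalize, and search instead.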

          “In order to see what a non-functionalist account might look like, you need to stop and take in the 1st person explanandum on its own, and see how it is not conceptually identical to the 3rd person explanans.”

          What would you say is present in the 1st person explanandum that is missing from the 3rd?

          “I think this is missing the point of thought experiments.”

          Thought experiments have their uses, but I think people have a tendency to take too much from them.  Philosophical thought experiments are primarily intuition clarifiers.  They can show the limits of our intuitions, where they break down, or become contradictory.  But too often they’re simply propaganda for a partisan set of those intuitions.  

          When it comes to reality, they can help in developing hypotheses.  Or help in explaining theoretical concepts.  But I think taking them as authoritative in the same way actual experiments are is a mistake.


          1. “But criteria for consciousness that requires consciousness? What other criteria should we use?”

            The criteria should imply consciousness, but it cannot take it as a required starting point. Otherwise we’re not explaining anything.

            “More specifically, I was referring to whether the system can detect the difference.”

            I think this is the right line of thinking, but I’d probably not say “detect”, as it again seems to be presuming some degree of real awareness (although it’s fine if we accept a little panpsychism). But if we replace it with, “is affected by the difference”, I think that works.

            Like we might hypothetically replace part of a (very slow) digital computer with a clockwork equivalent and the rest of the computer wouldn’t be affected. I suppose the trouble here is that we’re again working with mere input-output pairs, because we’re taking the “perspective” of the remaining digital computer. In that case, the vast look up table would again be functionally equivalent, because we’re looking from the outside of the change (even though still inside the computer). We could ask whether the computer as a whole is affected, but then the answer is clearly “yes” — it’s been radically changed by having part replaced with clockwork. We need to look at the structure of its internal self-effects. We have to consider its “functionality” from its own “perspective”.

            The fact that lookup tables are impractical is beside the point. If a theory of consciousness assigns animal-level consciousness to a lookup table, however large, I think that’s an extremely bad sign for it.

            “What would you say is present in the 1st person explanandum that is missing from the 3rd?”

            1st person experience itself. It might be implicit in the 3rd person account, like how the nature of water is implied by the full account of H2O. But that needs to be demonstrated still. If we are to jump from 1st person experience to 3rd person functions, we need some explanatory bridge capable of bringing us back.

            Although… Going back to what I said above about looking at its “functionality” from its own “perspective”, and looking at the structure of the internal self-effects, that may point to the solution. The explanans isn’t 3rd person functionality, it’s 1st person functionality. I suppose this is kind of what you were saying in your initial statement that “the system itself can’t detect the difference”?


    2. “the how of the function is at least as important as the what.”

      Hear hear!

      “We can imagine being ghosts, without any visual system yet still somehow able to see red, which demonstrates that the two concepts are not simply identical, even if they may be so in fact.”

      You just reinvented the Phenomenal Concept Strategy. Which is a core point in defending a non-spooky, yet non-functionalist view. Great minds think alike, and so do I. 😉


          1. I’m not a fan of the PCS, but it could be seen as part of an a priori account of consciousness. That said, I’ll agree it’s not in the spirit of analytic functionalism.


  8. Awesome post! Great explication of functionalism. And you’re this close [holding finger and thumb *very* close] to getting it [“it” being my position … ahem]. You describe yourself as a functionalist. Now you only have to precede that word with another: computational.

    The key here is information, by which I mean correlation, aka mutual information. You use all the right words in your descriptions of the functions of perception, specifically distinction, signal, pattern recognition, information, symbolic communication. These words are all about (mutual) information. Information is the property which is not physical, but necessarily depends on the physical. This is the property which is an affordance for function. This is why you can’t look at neurons firing and say “that’s seeing red” without knowing where the inputs to those neurons came from. The information itself is causally impotent, but it explains how the particular arrangement of physical causation was arranged for an adaptive benefit, aka goal.
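
    For concreteness, the standard quantitative version of the correlation I’m pointing at is Shannon’s mutual information (just the textbook definition, nothing specific to this account):

    \[ I(X;Y) \;=\; \sum_{x,\,y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)} \;=\; H(X) - H(X \mid Y) \]

    It is zero exactly when the two variables are statistically independent, and positive whenever the state of one carries news about the state of the other.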

    *

    [the word “computational” may be unfortunate to the extent that people automatically think about computers as opposed to information processes, but what’s a better alternative? Pattern-recognitional functionalist? Maybe informational functionalist.]


    1. Thanks. But I’m largely already there in terms of computational functionalism. I just didn’t invoke the c-word in this post because it tends to attract a whole other level of controversy, and it didn’t seem necessary for the point I was making.

      My way of looking at computation is that it’s logical processing. Logic, to me, is an account of causation. Logical processing is causation within a tight range of energy levels, typically as low as possible since there is waste heat and other costs to consider.

      Mutual information, correlations, is definitely part of it. But as we’ve discussed before, when I think about what makes that information mutual, it always seems to involve a relation between the two ends, a relation that comes into being due to a shared causal history. (Except in very low probability Swampman type scenarios.)

      So yes, mutual information, also known as shared causal history, or more fundamentally, shared structural relations.


  9. Cool. But I guess my point is that, if you’re trying to figure out why some people are non-functionalist, the answer is that they’re seeing the informational aspect without realizing it’s physically and functionally derived.

    *


  10. I’m confused about your views on functionalism since I don’t quite get how it is compatible with a mechanistic reductive causality, and that’s what I see as the real problem, not the functionalism, per se. I can’t see agency in this picture at all.

    So that’s what’s missing: a top down causality beginning with the form. We’ve talked about this before, I think. You have to know what consciousness as a whole is for in order to define the functions of its parts. The same is true of a car engine. If you don’t know what an engine is for or what a car is for, you’ll just see a mess of stuff and you won’t be able to describe how things are supposed to work without knowing the goal. What’s needed to make sense of functionalism is a holistic understanding of consciousness. This is, I think, what people are getting at in saying “consciousness comes first”. So what is consciousness for? As you might put it, what, exactly, does it do? Can that be defined without taking the conscious individual into the larger environment? What about the society and culture? What about the entire world? The whole picture, whatever that is, needs to come first, massive though this undertaking might seem, before the parts can be described as functions. This is harder than putting together a jigsaw puzzle without knowing what the picture is supposed to be. In that case you already know the basic idea.

    What’s needed for this:

    “…A toothache obviously is a signal of a problem that needs to be dealt with…The taste of a drink conveys information about that drink (good=keep drinking, bad=stop and maybe spit out). And remembering past experiences or simulating possible new ones, like imagining the Eiffel tower, has obvious adaptive benefits.”

    …Is an explanation of how information processing gives rise to values such as these. You can’t discuss good and bad or other experiences as mere mathematical or causal structures. I suspect the values we have thanks to our own experiences are getting projected here, then get excluded later in the description of what makes a causal function.


    1. On the issue you’re seeing on the relationship with reductive mechanistic causality, remember, functionalism is all about cause-effect relations.  

      I don’t see the issue with agency, but I suspect it’s because we’re using different senses of “agency”.  For me, if a system has an agenda and foresight, and can make decisions in that light, agency is present.  But that’s not the agency of libertarian free will. If you’re looking for that, you won’t find it in functionalism, or likely any physicalist theory that doesn’t wave around quantum indeterminacy.  (I don’t think you’ll find it there either, but many do.)

      I don’t think we want to confine ourselves to any one approach, top-down, bottom-up, side-to-side, etc.  We want to do whatever allows progress, approaches that start at the psychological level, others at the neural, some at the molecular level, and everything in between.  The hope is that the approaches can eventually meet and reconcile.  From everything I’ve read, we’re making progress, but there’s obviously still a long road ahead.

      But I think the only thing science can do is explore structure and relations, whether it be the composition of biological systems, or their functionality.  Philosophers (or scientists doing philosophy) will have to make the case for which combinations of functionality amount to consciousness.  Strictly speaking I can’t see a  fact of the matter on this.  Our reconstructed understanding of consciousness will eventually be whatever version future generations take up.

      “You can’t discuss good and bad or other experiences as mere mathematical or causal structures.”

      Why not?  I’ll grant it’s not intuitive, but I think we have to stay flexible.  Values are learned or evolved, and I think both can be explored reductively.  Not that it’s easy, but I think we should be cautious about ruling out approaches.


  11. Experiences exist and happen, in enormous quantity, without any reference to any sort of function whatsoever. Of course a theorist could choose to take on the labor of reliving or thinking through as many of these experiences as he can remember, which would be a tiny subset, and trying to abstract out some ‘function’ associated with each. In the living glow of the sheer sharp experience, these abstractions mortify a sensitive consciousness. One can see that they subsist purely out of a prior materialist commitment and strain credulity. Consciousness must be approached artistically in order to be (better) grasped, not abstractly. For it is artistic in quality and nature. In my experience, it is exactly people who have a deep entanglement with academic philosophy who will have the hardest time dropping the abstraction impulse and experiencing the experience. To souls not so afflicted, they actually appear to be deficient in cognition in some way. One can call oneself a functionalist, but I would say, no you are not a functionalist — although you are pretending with strenuous efforts to be one. This can seem harsh or rude, but it is not meant to be at all. And I have observed years of the reciprocal form of opinion from abstractionists — who simply repeatedly demonstrate that they initialize all discussions on the matter with the insistence that what I am speaking about does not exist, and please now follow my assumptions from there. If you want to talk about consciousness, at least try to touch on the subject. As Jaron Lanier put it, only a Daniel Dennett could come out with a tome entitled “Consciousness Explained” which isn’t even about consciousness.


    1. I have no objection to consciousness being approached artistically. But it seems like the purpose of art is to create an emotion in the audience. Nothing wrong with that. I’m a fan. I’m just also interested in trying to understand what’s behind what the artist describes.

      If you think I’m not a functionalist, then what would you say a functionalist is? What attribute would I be missing?

      Along similar lines, if you think I’m not addressing the actual problem of consciousness, then show me where I’m wrong. What is your alternate theory?

      Of course, if your theory is something irreducible, indescribable, unanalyzable, and undetectable by anyone other than the subject, then all I can say is I don’t know what this is describing. Maybe that’s due to some deficiency on my part. But my deep suspicion is that it more reflects the sentiment you express here:

      “In the living glow of the sheer sharp experience, these abstractions mortify a sensitive consciousness.”

      Dissections can be very mortifying, but often the only way to learn is to dissect.


  12. Non-functionalist accounts would look different depending on the definition of “functionalism” involved. As Ignatius said (but I’ll rephrase it a bit), if what consciousness does is defined by certain types of behavior, then functionalism is a kissing cousin to behaviorism.

    And that’s a good definition of functionalism, in my view. It’s historically appropriate. The behavior in question is, roughly, what psychology (including animal psychology) studies.

    On that definition, it’s pretty easy to construct a non-dualist, non-functionalist approach to consciousness. It would involve looking for a natural kind to explain each type or aspect of consciousness, where a natural kind could be specified by structure or material composition (if these provide good explanations). Far from excluding causal significance, such a search would be driven by the actual causation of paradigmatic examples of consciousness.

    We readily distinguish between “internal combustion” engines and “electric” engines, even though looking at two different cars propelled by these engines, there is no functional difference that anyone cares about. Or if there is, it can be engineered away, without replacing the engines.

    So that is one version of “functionalism”. On the other hand, sometimes you seem to want to define “functionalism” in a way that basically equates it to Ontic Structural Realism. But then “what consciousness does” could include ridiculously fine-grained details of neural processing. Spell out “what consciousness does” with long enough Ramsey sentences, and you could make it so that only brains, and never silicon, could count as producing consciousness. After all, some things that consciousness does are make certain patterns on fMRI scanners, others on CAT scanners, etc. And these depend on the composition of the brain.

    I think that is a bad definition of “functionalism”. It’s ahistorical and needlessly confusing.


    1. I think consciousness is about how not what. By analogy, a robot with human abilities might have the “internal combustion” consciousness (if we want to insist on calling it that). And real biological people have the “electric” consciousness. They can do behaviorally the same things but they are powered in completely different ways.


    2. We can see the difference between functionalism and behaviorism by the fact that two systems can have identical behavior with different underlying functionality. Functionalism accepts mental states, albeit as intermediate causal steps, but that acceptance is what separates it from behaviorism. It is true that analytical functionalism has lineage from logical behaviorism, but it’s a successor theory, not the same one.

      You can see the distinction in the early material on functionalism, from people like Hilary Putnam, D.M. Armstrong, and David Lewis. All of them delineated their view from behaviorism, which was much more prominent when they were writing. Equating the views isn’t historically accurate.

      There are many functional distinctions between internal combustion and electric engines. Without even getting into their internal differences (which are causal), they require different inputs and have different ranges, and have different causal effects (noise vs quiet running, environmental impact, etc.) The only way they’re the same is if we narrow the specific functionality we’re looking at, but then we’re not even taking a behaviorist view of it; we’re taking something even narrower.

      I don’t see the issue with equating functionalism with structural realism, as I did explicitly in the post. Chalmers also does it in his Reality+ book, so I’m not being eccentric here. Certainly the level of analysis is an issue. But that’s an issue for any physicalist theory of consciousness. You can always take a strawmannish version of it and then declare the whole thing hopeless. But it seems more like dodging than engaging with the proposition.

      Like

      1. Just realized that those last couple of sentences were snippy. Sorry.

        And it is true that assessing the level we should be looking at has a lot to do with behavioral results. But much of that behavior includes self report of mental states, something behaviorists generally eschewed.

        Liked by 1 person

      2. I agree that functionalism is a major step beyond behaviorism. That’s why it’s good that there’s a different term for it. And as long as you’re defining “behavior” to mean talk about mental states, plus desire-like behavior and fear-like behavior, plus sleep and wakefulness and so on, you’re not going to the new level that I was worried about.

        Note that there is a wide-open sense of “behavior” in which physicists talk about the behavior of electrons in a magnetic field, and so on. If one wanted to define a “functionalism” wherein all that stuff counted as “behavior” then I think “functionalism” would be the wrong term.

        As long as you don’t go to a level where all microphysical activity counts as behavior, I think it’s quite easy to identify a physicalist view of the mind which isn’t functionalist. I think I just did that above. (And yes, electric cars make less noise, but engineers are deliberately adding noise to them. There is no reason the noises couldn’t be like combustion engines, other than that it’s not sufficiently futuristic and cool.) We could argue about whether that’s a good stance to take – but it’s clear that there is a place to stand there.

        Liked by 1 person

        1. Right. Behaviorism focused on external behavior, and then only a subset at that, excluding self report as valid data. Functionalism is broader, accepting report as well as action. And I don’t think functionalism would exclude internal activity as evidence, though really only if it had previously been associated with report.

          I do think science has to stay flexible though on what level might be required. Biology doesn’t have clean levels of abstraction. In some cases, fundamental physics may be relevant. Certainly how photons interact with opsin proteins in the retina is relevant to our perceptions of color and resulting behavior.

          But I agree that insisting that everything has to be exactly like it is in a human brain down to fundamental particles, without an explanation of what causal factors are unique, isn’t functionalism. That’s more mind-brain identity theory.

          Like

  13. “I don’t even know what a non-functional view of consciousness is supposed to be.”

    I share this frustration but I am not convinced functional descriptions tell the whole story. The most important word here may not be ‘consciousness’ but ‘know’. What does it mean to know something? Our inability to grasp the non-functional aspects of consciousness could be related to what we accept as valid knowledge. Perhaps we should begin with a functional model for the knowing process.

    Liked by 1 person

    1. Interesting point. My (current) take on knowledge is that it’s information that is reliably more predictive than information that is mere belief. “Reliably” is important to distinguish knowledge from one-time lucky guesses.
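      A crude way to picture the “reliably” part (purely a toy sketch, with invented numbers and scenarios): over many trials, information with a real connection to the outcome stays ahead of a guesser, even though the guesser can win any single trial.

```python
# Toy illustration of "reliably predictive" vs. a one-time lucky guess.
# The 0.8 signal quality and trial counts are invented for the example.
import random

random.seed(0)

def trial():
    outcome = random.choice(["rain", "sun"])
    # The informed predictor gets a noisy but real signal of the outcome
    informed = outcome if random.random() < 0.8 else ("sun" if outcome == "rain" else "rain")
    lucky = random.choice(["rain", "sun"])  # no connection to the outcome at all
    return outcome, informed, lucky

def hit_rates(n=10_000):
    informed_hits = lucky_hits = 0
    for _ in range(n):
        outcome, informed, lucky = trial()
        informed_hits += informed == outcome
        lucky_hits += lucky == outcome
    return informed_hits / n, lucky_hits / n

print(hit_rates())  # roughly (0.8, 0.5): only the first is reliably predictive
```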

      This can also get into what we mean by “information”. My take there is something like a snapshot of causal processing. Such processing always shares a history with other processes and so has correlations.

      Liked by 1 person

      1. Defining knowledge as reliable and predictive information is a good start. It is interesting you refer to information as “like a snapshot of causal processing”. How do such snapshots get taken? This is exactly what I meant by “process of knowledge generation”. Is it possible to describe this process using reliable and predictive information in the first place?

        If we could come up with such a description, a model for knowledge generation using what we know about the evolution of knowledge-capable life forms, we could perhaps go on to claim “there isn’t anything left to know about consciousness apart from functional descriptions”. If not, questions will remain about the phenomenal content of experience, despite the usefulness and reliability of functional models.

        Liked by 1 person

        1. “Snapshot” may not be the best word. But consider the contents of a book. The patterns in the book are the result of causal processes (writing, printing, etc). If read, the book will have causal effects in the world. But while it’s sitting on the bookshelf, it’s just static, a snapshot.

          Or consider something like the stretches of DNA we call genes. They result from an evolutionary process, and their effects either enhance their long-term prospects of replicating or inhibit them. But while they’re not actively being used to create RNA, they’re something of a snapshot.

          The same for tree rings. They’re the result of the tree constantly growing throughout the seasons, but the pattern is essentially a snapshot until someone like us looks at them and uses them to determine the age of the tree.

          Brains evolved to be uniquely dense collections of these causal effects. They enable an organism to respond to things much broader than their immediate stimuli, to longer term patterns than what might be immediately present.

          Hope that makes sense. Totally open to any critiques, or suggestions to alternatives to “snapshot.”

          Liked by 1 person

          1. Fully on board with evolution of patterns in nature. But isn’t there an explanatory gap between patterns evolving in nature & human observers being able to comprehend them?

            Knowing involves the observer becoming aware of patterns & underlying causal relations. We could think of “knowing” itself as another function, but then what is the mechanism involved? How do we go from complex brain mass and electrochemical gradients to observers & awareness? What would a functional model for “knowing” look like?

            Liked by 1 person

          2. Above we talked in terms of knowing being prediction. If that’s right, then we already have engineered systems which know things.

            How are we able to predict things? Effects from the environment impinge on our sensory organs (photons, air waves, chemicals, physical contact, etc). These result in a cascade of electrochemical signals to the brain, which spread out from early sensory regions triggering numerous associated concepts as predictions, which recurrently feed back to the early regions, causing either a resonance (confirmation) or disruption (error signaling). The associated concepts are the ones built up over a lifetime of sensory experience.
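            To make that loop a bit more concrete, here’s a deliberately toy sketch (the associations, feature codes, and match/error split are all invented for illustration, not a model of any actual neural circuit): input is reduced to crude features, the features trigger an associated concept as a prediction, and the prediction is compared back against the input, yielding either confirmation or an error signal.

```python
# Toy sketch of the predict-and-compare loop described above.
# All names and mappings are invented for illustration only.

# Associations "built up over a lifetime of sensory experience"
learned_associations = {
    "red_round": "apple",
    "green_round": "lime",
}

def perceive(sensory_input):
    # Early processing: reduce the raw input to a crude feature code
    features = f"{sensory_input['color']}_{sensory_input['shape']}"

    # Spreading activation: the feature code triggers an associated
    # concept, which acts as a prediction about what's out there
    prediction = learned_associations.get(features)

    # Recurrent feedback: compare the prediction against the input
    if prediction and sensory_input.get("label") == prediction:
        return ("resonance", prediction)   # confirmation
    return ("error", prediction)           # disruption / error signal

print(perceive({"color": "red", "shape": "round", "label": "apple"}))
# ('resonance', 'apple')
print(perceive({"color": "red", "shape": "round", "label": "tomato"}))
# ('error', 'apple')
```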

            Of course there remain vast gaps in understanding on how this occurs. But they’re scientifically tractable.

            Unless of course there’s something more vital missing from this sketch?

            Like

  14. Have you ever thought about this: do colors have an appearance? Obviously, they do. The very fact that colors have any appearance at all is part of our subjective experience.

    If you build a robot that can recognize colors, it won’t experience colors the way we do. It will just detect that something is, say, green, and react accordingly. But for us, there’s a private, subjective “movie” going on — colors feel a certain way when we perceive them. That’s what we mean when we talk about qualia.

    That’s also why the inverted spectrum idea doesn’t really apply to robots. We wouldn’t ask whether robots “see” colors differently, because they don’t see colors at all — they just process signals. Sure, you could set up two robots with reversed color mappings — adjusting their sensors so that each internal code points to the opposite color — but as long as their behavior is the same, it wouldn’t make any real difference from their perspective.
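    Just to make that setup concrete (a toy sketch with invented labels, not a claim about how any real robot works): swap the internal codes at both the sensor stage and the response stage, and the outward behavior comes out identical.

```python
# Toy illustration of the "reversed color mapping" setup: two controllers
# use opposite internal codes for red and green, yet behave the same.
# All names here are invented for the example.

RESPONSES = {"stop": "brake", "go": "accelerate"}

class Robot:
    def __init__(self, color_to_code, code_to_action):
        self.color_to_code = color_to_code    # sensor mapping
        self.code_to_action = code_to_action  # internal code -> behavior

    def react(self, color):
        code = self.color_to_code[color]
        return RESPONSES[self.code_to_action[code]]

# Robot A: red -> code 0, green -> code 1
robot_a = Robot({"red": 0, "green": 1}, {0: "stop", 1: "go"})
# Robot B: mappings inverted at both stages, so behavior is unchanged
robot_b = Robot({"red": 1, "green": 0}, {1: "stop", 0: "go"})

for color in ("red", "green"):
    assert robot_a.react(color) == robot_b.react(color)
    print(color, robot_a.react(color), robot_b.react(color))
```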

    The very fact that colors look like something to us shows that there’s an aspect of consciousness that goes beyond pure function.

    Liked by 1 person

    1. Have you ever thought about what has to happen for colors to look like something? I have, a lot. I know the standard answer for a lot of people is that nothing happens, that it’s just fundamental. To me saying something is fundamental is just a statement that we don’t want to explore it. But I do want to explore it.

      You say that the difference between us and a robot is that the robot is just detecting the color and reacting. That’s true, to an extent. But our experience of a color begins with a similar detection and reaction. It’s just that the reaction leads to further detections and reactions, an entire chain, or for a neural network, a galaxy of chained detections and reactions, a cascade, that triggers all the associations we inherited or learned over a lifetime, including assessments of what it means for us, whether it’s something desirable or to be avoided.

      This all happens quickly and outside the reach of introspection, so we’re only aware of the results, the impressions and the feeling (assessment conclusions). Those results, if you think about it, are themselves functional, even if only viewed from a first person perspective.

      Why does red look the way it does for us? Why does it stand out more than green or blue? Why is walking into a completely red room unnerving? Why are stoplights and error messages red?

      The answers, more than likely, can be found in our evolutionary history. We can see it by watching other primates and noting how important ripe fruit is to their survival. Most mammals can’t distinguish well between green and red. The reason is that red isn’t important for them. But it is important for primates, who spend a lot of their time looking for ripe fruit (which tends to be reddish and yellowish).

      So the real difference between (contemporary) robots and us is that our causal chain for processing colors is much vaster. But that’s a difference in degree.

      Unless of course I’m missing something.

      Like

      1. You’re still missing something important. I agree that, functionally, color perception in humans and robots differs in the complexity of processes—color recognition in the brain triggers a cascade of reactions. But that’s not the core issue. Complexity doesn’t explain why colors have an appearance for us, while they don’t for robots. I can’t point to this “something” directly, but thought experiments like Mary’s help highlight it.

        Consider: “What does red look like to a robot?” or “What does red look like to a dog?” The first question doesn’t make sense, not because a robot has fewer reactions, but because it lacks subjective experience. I’m not saying nothing has an appearance to a robot; a chair has an “appearance” as a structure. But red, as something without structure, has no appearance for a robot, while in our consciousness, it looks a certain way—despite lacking structure—thanks to its quale, in a manner indescribable to someone who hasn’t experienced it. Assuming epiphenomenalism is false, this appearance may evoke emotions, but it doesn’t reduce to them.

        In contrast, the belief that grass is green can be explained functionally as the capacity to produce reactions, like the thought that grass is green, without requiring a subjective “image” in consciousness. Color perception is different: it’s a subjective quale, incomparable to a functional belief.

        Like

        1. Without reference to synonymous phrases like “qualia”, “what it’s like”, “like something”, “phenomenal properties”, “phenomenal experience”, “sensations”, etc, how would you define “subjective experience”?

          I have my own definition, but it involves discussion of a lot of functionality (modeling the body in environment, causal models, attention, episodic memory, recursive models of other models, etc.). If none of that reflects what you have in mind, what’s missing?

          If you tell me to look inward to find it, I’ll just note as I did in the post that when I look inward, I see functionality, and understand the very act of me looking as itself functional.

          Like

          1. I’ve been trying to show what subjective experience is without leaning on familiar labels like “qualia” or “what it’s like.” For instance, I pointed out that colors have an appearance for us, unlike for robots. Toward the end of my previous comment, I also drew attention to the contrast between belief and perception.

            Beliefs — like “grass is green” — can be easily understood as purely functional states: they persist and, under the right conditions, produce effects like a thought or a verbal response. But perception isn’t like that. In perception, there is an actual visual experience — an image present in awareness — something that is simply absent in the case of belief.

            When you introspect a belief, you don’t encounter an image; you only access it through its functional consequences, like the thought that “grass is green.” With perception, however, you encounter the experience itself, not just its effects.

            That’s why seeing, unlike believing, cannot be fully captured in functional terms. Ignoring this difference risks missing what subjective experience actually is.

            Does that make the distinction any clearer?

            Like

          2. Thanks. It makes the intuition you’re using clearer: the distinction between belief and perception.

            What I’d invite you to consider is that there is a lot of stuff going on in your head that you aren’t privy to, except that for some of it, you have access to the final result, or more accurately, to the results at particular stages of processing.

            Everyone has beliefs that they don’t remember forming, but are there nonetheless. And everyone has beliefs they don’t even know they have. But those beliefs affect the decisions they make, how they think, and their overall behavior.

            Now comes the hard part. A perception is a belief. Or perhaps more accurately it’s a huge collection of beliefs, a rich hierarchy and network of beliefs, beliefs inferred by the sensory regions of your brain, and made available to various subsystems in a massively parallel fashion.

            You have access to the hierarchy at select points. So if you’re looking at a tree, you can focus on the lower level details, on the color, shape, and other low level gestalts. Or you can focus on the type of tree it is, a higher level of inferred belief.

            Not knowing how this all works, it’s easy to conclude that there is something special and mysterious about perceptions. But the reality is your brain does an enormous amount of work, work you’re never aware of (unless you have a brain injury), just the results.

            I’d say it’s beliefs all the way down, but since a belief is a disposition, a reaction, it might be more accurate to say it’s dispositions all the way down. I did a post on this a while back. https://selfawarepatterns.com/2021/04/24/perceptions-are-dispositions-all-the-way-down/

            So there’s no sharp categorical distinction between beliefs and perceptions, just the amount of information accessed at the same time.
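            To gesture at the structure I have in mind (a toy sketch with made-up levels and labels, not a model of the visual system): think of a perception as a hierarchy of inferred beliefs that can be accessed at different levels.

```python
# Toy sketch: a perception as a hierarchy of inferred "beliefs",
# accessible at different levels. The levels and labels are invented
# purely for illustration.

tree_percept = {
    "level": "object",
    "belief": "oak tree",
    "parts": [
        {"level": "gestalt", "belief": "green canopy",
         "parts": [{"level": "feature", "belief": "color: green", "parts": []},
                   {"level": "feature", "belief": "texture: leafy", "parts": []}]},
        {"level": "gestalt", "belief": "brown trunk",
         "parts": [{"level": "feature", "belief": "color: brown", "parts": []},
                   {"level": "feature", "belief": "shape: column", "parts": []}]},
    ],
}

def beliefs_at(node, level):
    """Collect the inferred beliefs available when attending at one level."""
    found = [node["belief"]] if node["level"] == level else []
    for part in node["parts"]:
        found.extend(beliefs_at(part, level))
    return found

print(beliefs_at(tree_percept, "object"))   # ['oak tree']
print(beliefs_at(tree_percept, "feature"))  # the low-level details
```

            Focusing on the tree as a whole, or on its colors and shapes, is then just accessing different levels of the same structure, which is all the sketch is meant to suggest.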

            Unless you see a problem in this description?

            Like

    2. I can’t agree with the claim that perception is just a form of belief. What gets overlooked here is a simple but important fact: there’s a kind of private stream – a movie playing just for me – made up of sights, sounds, and sensations. No one else has access to it, no matter how closely they examine my brain. They might detect signals or patterns, but they won’t see what’s in the movie.

      Beliefs can sometimes show up in that movie, but only through their effects – like a thought that pops up when someone asks, “What’s 2+2?”. The belief itself isn’t part of the stream in the same way a color or a sound is. That’s why the idea that beliefs are just functional states seems fairly convincing, even from my own point of view: I don’t directly “see” the belief, only the traces it leaves.

      But perception isn’t like that. The movie itself – the stream of colors, shapes, and feelings – isn’t something anyone else can access or reconstruct from the outside. That’s why we don’t wonder whether a robot has one. A robot might process images, label objects, or detect edges – but nothing is being shown to it. There’s no movie playing.

      Like

      1. “No one else has access to it, no matter how closely they examine my brain.”

        “The movie itself – the stream of colors, shapes, and feelings – isn’t something anyone else can access or reconstruct from the outside.”

        A lot seems to hinge on this very common view of metaphysical privacy. Can you tell me the reasoning steps you use to reach it? Why in principle is it different from the practical privacy of what my phone is doing behind the screen? How do we know this isn’t just a technological limitation? Do the neuroscience studies that are beginning to decode, from neural firing patterns, what someone is thinking have any bearing?

        Like

  15. Maybe focusing on “function” isn’t exactly the right idea to do the heavy lifting here, Mike? For example one could say that they’re a functionalist because they believe that all things function exactly as God causes them to function. You could ask them why they believe this but the answer would be the same — God mandates their belief just like everything else. Wouldn’t each of you technically be “functionalists”, even given your worldly versus their otherworldly perspectives about how things work? I do realize that this doesn’t address your question about what a non-functional account of consciousness would be, though I’m trying to go deeper than that.

    I like to focus on metaphysical distinctions instead. There are people like you and me who believe in perfect systemic causality, as well as people who believe in otherworldly souls and whatnot that violate systemic causality. Here we should all be able to agree that their side believes in magic while our side does not.

    Other than that however, the fun part is taking the beliefs of people like us who say we don’t believe in magic, and then determining which of us have accidentally adopted various magical beliefs. This can of course be difficult to sort out, and therefore science needs to enter the picture to settle the matter. I just posted something yesterday about that regarding consciousness: https://eborg760.substack.com/p/post-3-the-magic-of-computational

    Liked by 1 person

    1. Hey Eric,

      I suppose someone could say God causes everything. I wouldn’t have much credence in that scenario, but at least I can understand the proposition. Although I suspect the more we drilled into the details, the more problems would arise. But it’s always possible to construct a view that is impervious to scientific scrutiny. It’s just at the cost of being completely redundant. Most people want to resist that for their favorite propositions. It’s that resistance, coupled with the desire to avoid scientific falsifiability, that I think leads to incoherence.

      Looks like you’ve been busy. I thought I had subscribed to your substack, but it looks like I screwed up somewhere. Just rectified. I’ll check it out.

      Liked by 1 person

      1. No it wasn’t you that screwed up Mike. I picked up a second account over there because in “discussions” with their chatbots I decided that I’d need a new account so that the feature for audio transcription of my text would work. Turns out that Substack doesn’t let their chatbots honestly tell people that they need a bit of popularity for that specific feature to be enabled. I can’t see how their dishonesty on that would help them in general, but then who am I to tell them how to run their business? Perhaps in the future I’ll be able to find an aftermarket AI to recite my text and then paste it rather than read anything myself? Regardless I think you’ll agree that Substack is attracting talent. That’s enough for me to like the place.

        I can’t get over that NotebookLM podcast that, on the basis of my text, was generated for free in maybe 2 minutes! I will say that I think I wrote it well enough for algorithms to potentially get right, but the product itself still astounds me. Text that was essentially academic in nature, and so requires some experience and skill to grasp or find interesting, was broken down and converted to something that might even be conversationally appropriate for someone without such skills. I may bring that point up in my response to the wonderful comment you left me over there…

        Liked by 1 person

        1. I thought I remembered subscribing after you had set up the new account. But that’s just an example of the kind of frustrations I have with Substack. I find myself following people I don’t recall following, and occasionally not following people I thought I had. That and the service’s relentless tricks to try to get you to pay-subscribe really turn me off.

          I acknowledge that Substack is hot right now. But the above issues, coupled with the fact that comment emails didn’t work for me for months, and the only support was a useless chatbot, plus some other issues, makes me leery of relocating there. I had thought about cross-posting like Eric Schwitzgebel, but the commenting issue pushed me away. For now, I’m treating it like a social network. I’m there for the people, but the service itself is not a draw.

          I haven’t listened to the podcast you linked to. I did listen to the one Suzi did. It struck me as very polished, but lacking the viewpoint flavor of a real discussion. It seems like AI content is flooding the zone with vacuous non-content, and making it harder to find the real stuff.

          Liked by 1 person

          1. The one that Suzi did hit me about the same as you, polished but maybe too much so. It was sometimes saying things so fast that I couldn’t quite grasp if they made sense. So I just looked it up again on Notes from September 19. Yes not bad, but some of it went too fast for me to truly be moved. Would I have been extra enthused if the argument directly took the form that I favor, stated simply as “Just because we can conceive of bullshit, doesn’t add any credence to bullshit”? Probably.

            With a bit of pressure I was able to get my 21 year old son to listen to the one I just did. This stuff definitely isn’t in his domain, and no he didn’t seem to get the point. Darn!

            So then I went through a podcast from a post someone else did who thought NotebookLM got his post all wrong. No that didn’t make much sense to me either. But then would I be able to grasp his post itself? I took his premise to begin with imagining that we have brains which create a magical consciousness. Of course for me there’s no need for magic in that sense because I suspect it’s actually an electromagnetic field. Then he said imagine zombies in this world that don’t have this magic but still function the way we magical beings function. Well that turns things on its head for me because I would consider them inherently magical while the human wouldn’t need to be. I went through the post as best I could, but it seemed to meander for quite a while. At some point I scanned through to check how much further it went, and so decided that I was done. No it doesn’t surprise me that he thought the generated podcast misrepresented his position. I doubt he’d like my initial take of his scenario either.

            If my wife gives my podcast a try then I’ll let you know the results. I still suspect that it should help make what I wrote far more accessible to normal people who don’t already delve into this sort of thing.

            Also it’s been over a year since I checked in with Johnjoe McFadden, so I did that hoping that he’d take a look at my post. After many years on Twitter he dropped it in September, and clearly because Musk became (as I said to him) “one of Trump’s evil hands”. I was hoping he’d tell me that he was making public announcements somewhere else so I could stay in the loop, and if not that Substack seemed like a great community to try. No I don’t think he’ll be doing any more of that, and I doubt he looked at my post either. But as a wonderful guy he did at least reply “Thank you for your advice Eric. johnjoe”

            Liked by 1 person

          2. I saw Zimbiel’s post about that. His essays tend to run pretty long. I know I have a tendency to switch into skim mode when reading them. I wonder if that has an effect on the engine. InoReader has an AI summarizing feature, which I try occasionally, particularly for long articles. But it messes up on that post too, I think because he leans heavily on terminology he explains in previous posts, which I doubt these engines include in their analysis, even when linked to.

            I think part of my aversion to these AI podcasts is that a big part of a podcast is the social component, of learning what particular people think, and how they sell it. (Or what people like them think.) AI generated speakers throw a monkey wrench into that. Who cares what these fictional characters think?

            This can be an issue even when there are real humans doing it, if their presentation is so orchestrated and overly polished that we don’t feel the real person coming through. Making it AI generated just seems to pour gasoline on the fire.

            None of that pertains, I think, to an AI narrator. That’s a case where I’m more interested in the author’s views, and the narrator is just a vehicle. I actually tend to dislike narrators who make too much of their own interpretation of the text clear in their voice, unless it’s the author themselves. So I’d love to see AI narrated audiobooks make progress. I know Amazon has been beta testing it, but I haven’t seen any yet on the market. I think they’d sell well, particularly if they go at a lower price than the human narrated ones.

            Liked by 1 person

          3. Okay Mike, my wife gave this podcast a try. Verdict? I’m calling it a success! She just retired from a career in biotech marketing, and apparently they always need marketing hype for their products to provide intelligent media for targeted groups of scientists. She says she hears a top quality discussion there, or the very thing that her industry traditionally spends big money to get. So I don’t think I’m imagining this. But did she actually retain the specifics of my attack on computational functionalism, or even grasp what the two sides happen to be? Over just a 13 minute discussion, she probably didn’t come in familiar enough with the sorts of things that people like us think about. So I’m guessing it should be expected that she wouldn’t have retained the question of, for example, “Should thumb pain result by means of marked paper processed to more marked paper?”

            If I’m right that modern AI did a pretty good job here, then this suggests that if someone writes arguments that are focused and structured well enough, this tool might often provide a cost-free special benefit rather than the ridiculous labor and materials costs associated with traditionally producing media like this. So any of the posts that we read, including each of yours, could quickly be given this treatment by you or anyone else, simply to see what exactly results. Notice that if a given author doesn’t like what results, there’d be the question of whether the AI messed their work up, or whether their work wasn’t very good in the first place. (Or both?) Also when things do go sour, with enough experience we should get a sense of who the culprit generally tends to be. My current suspicion is that as things continue progressing, it’ll mainly be the author that’s to blame. But at least right now you seem to be leaning the other way. And you’ve probably listened to more of these than the total of three that I have. Currently each of us seems to be arguing on ironic sides, though evidence should come in as time goes on. And once you do find an extra 13 minutes of time for it, I’d love to hear your assessment of mine!

            Liked by 1 person

          4. My take is based on just my own subjective reaction to it as a listener. I think it’s been pretty well established over the years that my reaction isn’t always the common one. I went ahead and listened to it (at 1.5x because I’m impatient). It seemed similar to the one Suzi shared. Really, these are the only two I’ve listened to. (At least that I know of.)

            Personally, I don’t anticipate using these. The straight readings could be helpful on days when I have more time to listen than to read a post. But both of them take me longer to listen to than reading the actual post. I suppose the podcast might be useful for long-winded writers who can’t seem to make a point without several thousand words, although there I find the InoReader summarize feature more useful, and honestly I tend to lose interest in those blogs anyway. Too much work.

            I can understand why writers like this NotebookLM thing though. The podcast characters seem to take whatever they’re addressing very seriously, and present it without criticism. It’s a little like those late night infomercials that have the form of a news program, but one that only presents the subject matter in a positive light, as we might expect from something that is actually a commercial.

            But again, I tend to be an outlier for a lot of this stuff.

            Like
