Consciousness, illusions, and definitions

I’ve discussed many times that the word “consciousness” has a variety of meanings. But most commonly, the various meanings can be grouped into two broad categories.

One refers to some combination of functionality, typically the information processing that happens in the brain enabling an organism to take in, assess, and use information about itself and its environment, all to further its goals. Understanding this functionality is complicated. We’re at least several decades away from a full accounting, possibly centuries. However, it’s widely recognized that these collections of functionality are amenable to scientific investigation, and eventual replication in technology.

But there is also a widespread sentiment that the first category leaves out something important. It seems to omit the presentation, the mental paint of conscious experience. When we perceive the color red, it aids in our discrimination of various objects, helping us to tell blood from water, ripe strawberries from unripe ones, or to notice stop signs and stop lights, and to categorize a perceived object as being in the category of red things. But that functional description seems to leave out the actual sensation of redness.

The sensation of redness seems like something primal, fundamental, and irreducible. It also appears impossible for me to describe my sensation of red to you, or for you to describe yours to me. It apparently can only be pointed at and agreed that yes, there is red. Which implies that the sensation is something private, inaccessible from any possible third person investigation. And it seems to be something we can’t be in error about. To believe you’re experiencing redness is to experience redness.

Reviewing these attributes, this second version seems very mysterious. It’s difficult to see how any physical system could produce it. And implementing this second sense of consciousness in technology seems even harder to envision, since we can’t even imagine how biology pulls it off.

When illusionists talk about consciousness as an introspective illusion, it’s usually in reference to this second mysterious category. The question I’m pondering in this post is how to meaningfully label that category.

Georges Rey is often cited as a radical eliminativist for his 1983 paper calling into question the existence of consciousness. (Unfortunately, if that paper is online, I haven’t been able to find it.) But either Rey’s views are more nuanced than that reputation suggests, or they’ve moderated over the years. He calls the first category “weak consciousness” or “w-consciousness”, and the second “strong consciousness” or “s-consciousness.” In a 2016 paper responding to Keith Frankish’s illusionism paper, Rey speculates that, if w-consciousness is the only form of consciousness that exists, some existing simulations of pain might actually be instances of pain.

In 1995, in his paper coining the “hard problem of consciousness”, David Chalmers, following the lead of Allen Newell in 1990, proposed using “awareness” to refer to the functional category and reserving the term “consciousness” for the mysterious one. Crucially, the hard problem applies to “consciousness”, but “awareness” is only subject to the “easy problems”. Apparently he received pushback about this in the commentaries on that paper, which he acknowledges in his response, and admits that the convention probably wouldn’t catch on.

The one that did catch on was Ned Block’s distinction, made in a 1994 paper, between access and phenomenal consciousness. Block characterizes “access consciousness”, or “a-consciousness” as functional, and “phenomenal consciousness”, or “p-consciousness”, as the mysterious version. This is the terminology most philosophers seem to have adopted.

But the “phenomenal” label feels problematic, because like “consciousness”, it means different things to different people. This actually appears to go back to Block’s initial description, which left open questions on certain aspects, and apparently the commentaries he received on his paper responded to the range of possible meanings.

It seems like we can talk about both strong and weak phenomenality. I think of strong phenomenality as the full package of consciousness in the mysterious sense. Weak phenomenality is the functional appearance and only the appearance of the stronger version, not the implied reality. A strong illusionist generally insists that the word “phenomenal” is misleading for weak phenomenality, that we’re really only talking about the illusion of phenomenality. Strong phenomenal realists tend to agree. But a weak illusionist or reductionist realist is more likely to use “phenomenal” in the weak and functional sense.

Which makes the qualifier “phenomenal” ambiguous. My impression is that most philosophers tend to use it in the strong sense and most scientists in the weak one, with exceptions on both sides. It seems easy to have debates about phenomenal consciousness where the participants, working from different definitions, talk past each other.

Michael Graziano, in a 2019 paper on reconciling various theories of consciousness into a “standard model”, proposes “i-consciousness” to refer to the functional information processing version, and “m-consciousness” to refer to the mysterious experiential one. At the time, I wondered why he introduced new terms rather than using the existing ones. But after becoming familiar with the definitional issues, I can see it. Graziano’s motivation is clarity, since he’s denying the existence of m-consciousness (except as a model within i-consciousness), and wants to avoid the implication he’s denying anything else, like perception or imagination.

I like Graziano’s terms. I also like Rey’s “strong” and “weak” ones (which I sort of adopted for “phenomenal” above), but Graziano’s seem more descriptive. They also avoid the confusion of Chalmers’ “awareness” vs “consciousness” distinction, and of Block’s “phenomenal” category. (I do like Block’s “access” label though. It seems to capture something important about the dynamics.)

Although I’m a bit more inclined to just call Graziano’s categories “information consciousness” and “mystery consciousness”. Of course, these aren’t completely free of problems, since people argue about what “information” means. And I could see someone making an issue of “mystery” since the information processing aspects have plenty of remaining mysteries. Although like Chalmers’ “easy problems”, which everyone understands aren’t really easy, some interpretational charity is warranted.

What do you think? Do these labels improve anything? Or did the philosophers get it right with “phenomenal”? If so, what do you think about the strong vs weak phenomenality distinction?

98 thoughts on “Consciousness, illusions, and definitions”

  1. I think the problem would be resolved with a better understanding/definition of information and information processing. The distinction between phenomenal/access, strong/weak, hard/easy, mysterious/informational really is related to software/hardware, software being the informational description of the information-processing developed by whatever means, including evolution. Every information process has a subjective perspective, that is, a perspective relative to the informational capabilities and functions of the system.

    *
    [need/just/a/few/more/slashes]

    1. I think I catch what you mean with the software / hardware point, but the brain is so paradigmatically different from the typical technological software / hardware divide, it seems like we have to be pretty careful with the comparison. Which isn’t to say that what the brain does can’t be reproduced with software, at least in principle, just that describing the brain itself that way seems like it needs careful clarification.

      We can say every information process takes in information from a particular vantage point, but most of the existing computer systems use encyclopedic information rather than information they themselves take in from their environment. Even self driving cars seem heavily dependent on map databases. We can say the self driving car has a viewpoint, but right now it still seems like a very limited one. Still, I guess we could say the Tesla that crashed into that truck “subjectively thought” there was only sky there, at least in the “weakly” subjective sense.

      [Hey, slashes indicate comprehension, at least in this case.]

      1. I think you’re close to my understanding. You’re making good noises, but let’s see.

        The Tesla is probably the best current example, because it has (at least one) definite umwelt. It even has reporting mechanisms, which is the screen display of what it sees. The main umwelt in this sense includes external things like cars, people, trucks, traffic cones, lane markers (via a visual sense) as well as solid objects (via a radar sense), but also various proprioceptive values, like speed, charge level, tire pressure, etc. Importantly, there is no sky, no birds, no trees. There’s also no object permanence.

        The key point, I guess, is that all this discussion of umwelt refers only to the informational structure, and this structure defines the subjective viewpoint of the system. Any reference to the subjective perspective can only include those things in the umwelt.
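        To make that concrete, here’s a minimal sketch (in Python, with names and categories invented for illustration) of an umwelt modeled as nothing more than the set of categories a system can represent:

        ```python
        # A toy model: the umwelt is just the set of representable categories.
        # "Subjective" claims about the system can only range over this set.
        CAR_UMWELT = {
            # exteroceptive categories
            "car", "person", "truck", "traffic cone", "lane marker", "solid object",
            # proprioceptive categories
            "speed", "charge level", "tire pressure",
        }

        def in_umwelt(thing: str) -> bool:
            # Anything outside the set isn't hidden from the system so much as
            # inexpressible: no internal state of the system corresponds to it.
            return thing in CAR_UMWELT

        print(in_umwelt("truck"))  # True: part of the system's viewpoint
        print(in_umwelt("sky"))    # False: no such thing exists for this system
        ```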

        In your last reply you mention “weakly subjective”. I’m now trying to figure out if that term has any use given the umwelt discussion here.

        *

        1. Making the right noises is what it’s all about. 🙂

          By “weakly subjective” I meant something more or less synonymous with umwelt. The main point was in not asserting anything strongly phenomenal in the mysterious sense. Of course, I don’t think even we have that. As I see it, when people are incredulous about a self driving car, it should be because of its very limited umwelt, not because it’s missing the je ne sais quoi of mystery consciousness.

  2. One refers to some combination of functionality, typically the information processing that happens in the brain enabling an organism to take in, assess, and use information about itself and its environment, all to further its goals.

    I’m skeptical of the assumed “information processing”.

    Yes, we take in, assess and use information. But why would that require “information processing?” The idea of “information processing” seems to be something we are imposing on the brain.

    When we perceive the color red, it aids in our discrimination of various objects, helping us to tell blood from water, ripe strawberries from unripe ones, or to notice stop signs and stop lights, and to categorize a perceived object as being in the category of red things.

    To me, that seems backwards.

    Yes, we discriminate between objects, etc. But it seems to me that the color experience is how we experience the result of that discrimination. I doubt that the color experience is involved in the actual discrimination.

    As for the terminology, “phenomenal consciousness” seems okay. But I’m not sure we should say that there are two kinds of consciousness. What would it even mean for us to discriminate between parts of the world, but not be aware that we have so discriminated? Does that even make sense?

    We can see the action of discriminating. But what we see does not predict exactly how we will experience it. And that’s why people see it as a mystery. But maybe it isn’t really mysterious. Maybe it is just experiencing the result of that discriminating. And maybe that experiencing is different for different people.

    1. You don’t think that taking in, assessing, and using information is processing it? Granted, I did describe the processing as enabling those things, but I really just see them as different levels of description. A lot might hinge on what we mean by “processing”.

      On seeming backwards, I didn’t mean my description to be a rigorous one in terms of the order that things happen. In truth, I think perception happens on many levels. The lower levels enable the higher levels. Our perception of redness (conclusions reached in early visual regions) is an array of conclusions that does aid later, higher level gestalt perceptions, which can be discriminations.

      On not being aware of the discrimination, that seems like a distinction between exteroception and introspection, which both seem like functional concepts to me. I can imagine exteroception without introspection. It seems like most of the time that’s what we’re doing, except when we’re introspecting. Of course, as noted in the Blackmore post, if you require introspection for consciousness, then that may imply we’re not conscious most of the time.

      I’m in the camp that doesn’t see the deep intractable mystery. I think the limitations of introspection give us what we need to explain any explanatory gap or hard problem. Of course, we do eventually need to account for why those limitations give us the impression of that gap or problem.

      1. You don’t think that taking in, assessing, and using information is processing it?

        I see those as different.

        If I measure the height of my desk, I can be said to be taking in, perhaps assessing and using information.

        If I measure the two sides of a rectangle, and then multiply those measurements to get the area, then that multiplying is processing information. But the measuring is just getting information. You cannot process it until you have it.
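        To put that in code form, here’s a toy sketch (the numbers and function names are invented for illustration): the measuring steps only acquire values, and only the multiplication afterward processes them.

        ```python
        # Measurement acquires (or "manufactures") information; the
        # multiplication afterward processes information already in hand.

        def measure_width() -> float:
            return 3.0   # stand-in for reading a tape measure, in meters

        def measure_height() -> float:
            return 2.0   # another acquisition step, not a processing step

        w = measure_width()    # information comes into existence here
        h = measure_height()
        area = w * h           # only this step is information processing
        print(area)            # 6.0
        ```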

        My actual view is that information does not exist. That is to say, information is not a natural kind. When I measure something, then I am manufacturing information. We cannot process it until it is first manufactured.

        I tend to think of cognition as manufacturing information about the world. But it manufactures it in a form that is already useful. So it does not need to do additional processing in order to use it.

        1. [For what it’s worth, for some, like me, information processing is a specific concept from Information Theory, such that any information process could, in theory, be broken down into its component operations (COPY, NOT, AND, OR, etc.). There’s no separate thing which is “information” which moves around. (Pretty sure about that last one, but not absolutely sure.)]
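          [To illustrate that idea with a toy sketch (my own example, not from any particular text): a higher-level information process, like adding two bits, decomposes into exactly those primitive operations.]

          ```python
          # Adding two bits, composed entirely from the primitives named above.
          def NOT(a: int) -> int: return 1 - a
          def AND(a: int, b: int) -> int: return a & b
          def OR(a: int, b: int) -> int: return a | b

          def half_adder(a: int, b: int) -> tuple[int, int]:
              """Returns (sum_bit, carry), using only NOT/AND/OR."""
              carry = AND(a, b)
              sum_bit = AND(OR(a, b), NOT(carry))  # XOR built from the primitives
              return sum_bit, carry

          print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
          print(half_adder(1, 0))  # (1, 0)
          ```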

        2. In agreement with Neil, the very notion of “information” is an intellectual construction. Therefore, without a mind to construct what we refer to as information, it does not exist independently of mind.

          The bigger problem for the functionalist community is that they do not believe that mind is a separate and distinct autonomous system that emerges from its biological substrate, which is the brain. In their view, mind is simply the collective effects of what the brain does. Functionalism is a very naive and shallow point of view.

          So yeah Mike, unless or until the so-called intellectual community recognizes that “fact of the matter” there will be no progress; and no amount of circular rationality will change that “fact of the matter”.

        3. An alternate way to look at this, similar to what James says, may be causal dynamics. In that sense, energy from the environment impinges on our sensory organs, and causes cascades into the nervous system, which results in what you’re calling manufacture of information. I think it’s information before that point, just being utilized. But really it seems like more of a verbal dispute than an ontological one.

          I do think using “information” to describe these causal dynamics recognizes that the dynamics are about something. Forgoing that slightly higher level terminology forces us to talk in terms of covarying correlations with the environment and other mouthfuls.

          1. I was not looking for an ontological dispute. I would just as soon abandon ontology. I also don’t see this as a verbal dispute. It has more to do with the causal structure of cognition.

          2. Causal structure seems like ontology to me. But I see information and causality tightly intertwined. What we call information looks like a snapshot of causal processes to me. Information processing seems like distilled causality, something that seems to enable a system to react to factors in a much larger scope of time and space.

          3. Take that sample sentence “the cat is on the mat.” Ontology tells us that cats and mats exist, and therefore the important part is the question of relations — whether the cat is on the mat.

            In practice, however, the brain spends far more resources in deciding which part of the world counts as a cat, and which part of the world counts as a mat. Ontology distorts this picture by telling us to take those parts for granted. In ordinary life, that may not matter. But when we are attempting to understand cognition, this distortion is a serious problem.

          4. I’m not sure what you mean when you say, “Ontology distorts this picture by telling us to take those parts for granted.” You could mean the branch of philosophy which studies reality, which will say all kinds of things. Or a particular ontological theory, where what is said will depend on the theory.

            Anyway, my only point with the ontological reference was to say this felt like more of a definitional thing. But it sounds like you might not agree.

          5. That we study ontology leaves us with the impression that our concepts are immutable. But they aren’t. We are forever tweaking them.

            That’s the distortion that concerns me.

  3. I will try to explain what seems mysterious to me. Let’s distinguish three aspects of consciousness: the functional aspect (consciousness defined by what consciousness does), the neural base of those functional aspects, and phenomenal consciousness (the mysterious thing).

    Phenomenal consciousness is at least seemingly different from the two other kinds of consciousness. Let’s consider this with the example of seeing. Imagine a robot with a camera. This robot can bring red and green things. If the robot gets the command “Bring something red”, it recognizes the red thing with its camera. Does the robot see the color red? There is no doubt that the robot sees color in the functional sense, although it is impoverished functionality compared to the effects of visual states in people.

    I think the robot does not have phenomenal consciousness. What is that? It is something I know because I am me. It is a kind of subjective being-me: a movie consisting of seeing, hearing, dreams, emotions, etc. For example, when I see the color red, I have a subjective state which is part of this movie. It is something that goes beyond what consciousness does. Perhaps when I am in a deep sleep state without dreams, I do not exist subjectively. In that state of things I exist merely in the external sense.

    Furthermore, colors look like something to me. That look is ineffable, and I know it because I see the color. By contrast, the color red does not look like anything to the robot. The question “what does red look like for this robot?” does not make sense.

    There are many things I don’t know about robots, but those are not things I fail to know because I am not a robot.
    Even if my phenomenal consciousness is the same thing as some brain states, it is known by me in a particular way.

    It seems impossible to me that it is an illusion. I can understand that other people might not have any phenomenal consciousness although they behave as if they have it. They would have a functional illusion that they have phenomenal consciousness. However, I know my own phenomenal consciousness directly; it is not known via a functional state.

    1. Thanks Konrad. That’s a good description of phenomenal consciousness.

      But consider a few things. Suppose we gave the robot enough functionality that it could describe the visual information it’s taking in. So it has a basket of fruit in front of it. Due to its neural networks, it’s able to recognize what that basket is and the fruit that is inside. When we ask it to describe what it’s “seeing”, it can describe the apples, bananas, berries, oranges, etc, in front of it, including their colors. By what right would we deny that the robot was having a visual experience of some kind since it’s displaying similar functionality to ours?

      Imagine that we actually modeled the robot’s visual systems on the human visual system. So not only do we have similar functionality, but now we have a similar implementation, at least at some level of description. Does that make a difference?

      Finally, consider how you know about your phenomenal consciousness. Reaching the conclusion, the belief, the disposition to say that you have a movie-like experience requires mechanisms operating somewhere. But how do we assess the accuracy of those mechanisms? If we’re seeing a visual illusion in the external world, we can usually figure it out by comparing our perception with additional information. But what additional information do we have available to figure out whether our introspective impressions are accurate or inaccurate?

      It’s worth noting that in cognitive experiments, people often have little idea of the things that influence their conclusions. Or the limitations to their perceptual systems, such as the low acuity of peripheral vision. So when we do have the ability to make comparisons, those mechanisms often don’t do well.

      Just things to consider. It may not change your mind, but hopefully it can give you an idea of where illusionists are coming from.

  4. First of all, thanks for that Ned Block reference on the Wayback Machine website. I’ll have to look more into this character named Katz, who is obviously brilliant because he agrees with me (against Block) that phenomenal contents need not be intrinsically representational. Got a reference?

    Secondly, by categorizing meanings of “consciousness” into the functional / phenomenal groupings, you’re excluding a possible view. And it’s a view I hold. To wit: consciousness itself is functionally specifiable, but the particular contents need not be. Let’s distinguish several levels of consciousness. I believe you had five in some post a while back; I’m interested in three: perception, affect, and self-consciousness. And also, I’m not sure if this was on your list: aboutness or “intentionality” as Brentano called it. For all of these levels/types, the answer to “does this creature exhibit consciousness” can be determined from functional facts alone, I think.

    And yet, for perception and affect, there are differences between phenomenal types for which there is little evidence that functionalist characterization captures them. Instead, they’re probably just brain-wiring dependent. Martians may avoid certain stimuli like the plague, but that doesn’t mean those things hurt. Maybe they itch. And unlike ours, Martian itches don’t subside when scratched, but do subside when one yells and sticks the affected body part in their mouth.

    Note that by “functionalist” I don’t just mean “something that does something to the world”. The latter is a criterion of physicalism, not functionalism. No, I mean the outgrowth of and softening of Behaviorism, which allows scientists to talk about more than just Stimuli and Responses, adding as many layers in between as desired, as long as those layers are defined in terms of Stimuli, Responses, and the network of connections between the nodes in those layers and the middle ones.

    1. I think that Wayback excerpt is from this book: https://www.amazon.com/Consciousness-Function-Representation-Collected-Bradford/dp/0262524627/
      But yeah, it’s kind of frustrating with these response pieces, since we often can’t see what they’re responding to unless we have access to the other material. Although the full book might provide that. But I don’t have a copy, so not sure.

      I have a few different hierarchies of levels of consciousness. The one I think you’re talking about is the functional one, which usually is something like:
      1. reflexes
      2. perception
      3. habit learning
      4. goal directed behavior
      5. deliberative imagination
      6. introspection
      It really all fits within information consciousness. But I’m onboard with the others you mention. The broad categories discussed in this post shouldn’t be taken as excluding them.

      “there are differences between phenomenal types for which there is little evidence that functionalist characterization captures them”

      I’d be curious about any examples, at least outside of philosophical thought experiments. In general, I’m leery of assertions that something like the wiring makes a difference in experience but never leads to a difference in behavior, even if only a minute one. That seems epiphenomenal.

      I’d say if Martians itch on certain stimuli, it’s likely for adaptive / functional reasons, although the evolved functional role may not be relevant outside of their ecological niche.

      I do see functionalism as largely synonymous with reductionist physicalism, in the sense of operating according to causal (or at least interactive) principles. If we go non-reductionist, then it can definitely be non-functional physicalism.

      1. I definitely don’t see functionalism as synonymous with reductionist physicalism. It’s a response to behaviorism which tries to preserve a little something of its spirit, as I mentioned.

        Brain wiring makes very large differences in behavior, at least eventually. We live in a chaotic universe, and animals are one of its most chaotic elements. But that’s not to say that the behavior differences fall into any workable psychological natural kinds. They need not, and when they don’t, the science of psychology will not be able to classify the behaviors usefully into familiar categories like “aversive” or “aroused” or “is aware of the stimulus” or so on. The only science that’s guaranteed to be applicable is physics, but the physical descriptions would be so complicated and disjunctive (i.e., full of “OR” logic) that nobody would have time or energy to tackle them.

        1. Functionalism was a response to behaviorism within the philosophy of mind, but as Chalmers described in his book, it has much wider applicability. You’re welcome to hold a more narrow conception of it if you like, but then when you discuss it, you’re not discussing the actual position most functionalists hold.

          1. Well I haven’t read Reality+, if that’s the book you mean. But if the definition of functionalism has changed, someone should really tell the Stanford Encyclopedia:

            More precisely, functionalist theories take the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states, and behavior.

            And Wikipedia on “Functionalism (philosophy of mind)”:

            In philosophy of mind, functionalism is the thesis that mental states (beliefs, desires, being in pain, etc.) are constituted solely by their functional role, which means, their causal relations with other mental states, sensory inputs and behavioral outputs.[1]

            Or, maybe we should stick to the traditional definition and stop confusing people. If you want to defend physicalism, or reductionist physicalism, those very terms are better choices. Although “reductionism” often throws people and “physicalism without strong emergence” is more precise, if that is what is meant. I happily subscribe to physicalism without strong emergence.

          2. We’re obviously seeing different things in those quotes. I take the key point to be the causal relations requirement, that the mind is as the mind does. I think it’s a mistake to fixate on Levin’s examples in that intro (which obviously got copied in the wiki article).

            This can be seen if you look at other descriptions of it. From the IEP:

            Functionalism is a theory about the nature of mental states. According to functionalists, mental states are identified by what they do rather than by what they are made of.

            https://iep.utm.edu/functism/

            From Chalmers:

            Functionalism got started as a view in the philosophy of mind: The mind is as the mind does. But it’s applicable across any number of domains. For example, we’re all functionalists about teachers: To be a teacher is to play the role of teaching students. Teaching is as teaching does. We’re all functionalists about poisons: To be a poison is to play the role of making people sick. Poison is as poison does.

            Functionalism can be seen as a version of structuralism, where the emphasis is put more squarely on causal roles and causal powers.

            Chalmers, David J.. Reality+: Virtual Worlds and the Problems of Philosophy (pp. 428-429). W. W. Norton & Company. Kindle Edition.

            All of these describe a common idea, including the Levin quote if you read it in context.

          3. I think you’re underestimating the significance of the phrases “as teaching does” and “as poison does” in the Chalmers passage. These imply that there is a readily observable, macroscopic pattern that falls into some domain of inquiry. In the social domain, we recognize a pattern called teaching. In the medical domain, we recognize a pattern called poisoning.

            And in the psychological domain, we recognize certain readily observable patterns. The question is how far we can take this level of analysis. You can be a reductionist physicalist and still insist that, perhaps for many mental qualities (John Searle), or for a few (me), this level of analysis fails to capture some things of interest and a finer-grained analysis is needed.

  5. The mystery evaporates if you analyse the nature of the thing that is conscious systematically, something authors on the subject seem to fail to do. It is (so we are) a process of interacting states that only exists over the physical extent of the nervous system and the temporal extent of each cognitive cycle. Zooming in any closer than the interacting states in their totality dissolves the thing-as-whole that is conscious. Although it is instantiated through physical processes, analysis at that low a level doesn’t add any value.

    1. I’m generally onboard with what you’re saying. Although I’m not sure I agree with your last sentence. We should definitely keep in mind what level we’re analyzing, but I find that even analysis at the level of genetics and proteins can often provide insights.

  6. Thanks for the deep etymological presentation Mike, it was very useful. I have much to say on this topic. I agree that many of these delineations hopelessly blur together a number of concepts, with the result that sometimes two things come across as mutually exclusive when they are not meant to be.

    To start, I would say that I think we need to be careful when talking about functionalism. Functional properties are properties of ontological substrates, but they do not describe what that ontological substrate is. For example, a functional description like “x does y” is helpful insofar as it helps us pick out a referent (e.g. some component of the brain), but it doesn’t actually tell us what that referent is, only a specific type of role it plays. To do so would require that we describe the physical, non-functional, facts of that component part of the brain. It should then be immediately obvious that phenomenal states can also be functional; they are not mutually exclusive concepts. For example, it could be the case that our functional description refers to a phenomenal state, in addition to a physical state (as in overdetermined dualism) or together as one referent (as in a monism, like panpsychism).

    This is an important distinction, which I feel may have been blurred at times in our past conversations (no offense). It seemed like you held to the train of thought that a functional description, by definition, precluded a phenomenal one. But this is not at all true; rather, I read Chalmers’s delineation of the phenomenal/functional as meant to be about tagging descriptions. The ‘phenomenal’ is supposed to pick out the non-functional component of our experiences, in the same way as a physical descriptor about the composition of our brain doesn’t qualify as a functional one. But this difference in terminology was never intended to imply a mutual exclusivity. Phenomenal realists (the non-epiphenomenalists at least) like Nagel are more than happy to attribute functionalism to phenomenal states.

    This leads me to my second point, which is that you have often asked me what a functional description might be leaving out. Once we understand the above though, we can immediately appreciate that functional descriptions will create lacunae. It’s no surprise that they do, indeed functional descriptions by themselves wouldn’t fully capture the nature of our experiences even if experiences were completely physical. The question is not “whether” but rather “what” this lacuna is, and if the physical facts can fully fill in those gaps.

    After our latest conversations, I have started to shy away from the traditional descriptions of phenomenal consciousness as “intrinsic, ineffable etc.”. I view such descriptions as ‘tags’; they are meant to pick out your phenomenal states, but they don’t actually describe the states in question. The flaw in this way of speaking is that you may not, for example, think there is any intrinsicness/ineffability to your mental content, and thus end up concluding that the phenomenal component must be a very small or non-existent component of your mental content. But this is not the case; indeed, phenomenal realists mean “phenomenal consciousness” to refer to all of our mental content (hence why Chalmers originally labels it just consciousness).

    Our last conversations have helped me appreciate how harmful this framing can be. The assumption was kind of like “of course everyone would agree that our mental content is intrinsic/ineffable etc…”. However, we could just as easily reframe the debate to instead refer to the non-intrinsic, effable, facts of your experience (for people like yourself), and we would basically still be capturing the same thing. To me, phenomenal consciousness just refers to the totality of our experiences. Phenomenal consciousness can be functional; it refers to facts like “I hear a loud noise, or see a bright light, or perceive certain patterns in my visualizations…”. There may be an intrinsic component to these facts, or it may be “dispositions all the way down” so to speak. In the end, that conversation, while interesting, is a sideshow to phenomenal consciousness, which was always meant (I feel) by most phenomenal realists to refer to the facts themselves and not their intrinsicality.

    The fact of the matter is, can we account for the facts of our experiences in a physicalist programme? We can of course give a functional accounting, but that was never in doubt (which I hope this post helps make clearer). As I argued in my last reply to you, I don’t think physicalism is satisfactory, and I view Chalmers’s hard problem as really being about the purported inadequacy of physicalism. Like Chalmers, I do not believe that physicalism can explain why certain mental facts (like the way objects in my visualizations appear to me) exist, and this remains true even if such facts are really dispositional facts at bottom. Once we admit that physical descriptions of my brain are inadequate descriptors of mental content, then as far as I’m concerned, we’ve basically admitted that we have no mental content (hence the grand illusion! The greatest of all time if you think about it). Saying that there’s still a rich functional description is just beating about the bush in my opinion. Of course our mental content is supposed to be functional; the question was always, are there true facts about such content, and does it even exist?

    I hope this post helps clear up at least some of our differences Mike. Thanks.

    1. Alex,

      You are a very astute dialectician, and for the most part, I find the idealist community at large to be sincerely dedicated to unwrapping the mystery of our existence. In spite of all of your well crafted arguments, I do not feel that denying the existence of matter as “what consciousness looks like through the dissociative boundary” is an adequate, let alone intellectually mature, response to the mind/matter problem.

      The “fact of the matter” is that a physical substrate is responsible for and underwrites our own experience of consciousness. The only question that remains unanswered is: “what is matter in and of itself?” As a dedicated seeker of knowledge, if you can effectively answer that question, then the mystery will begin to reveal itself.

      Good luck with your journey…

      1. Thanks, I guess! 🙂
        I certainly wouldn’t categorize myself as an idealist Lee, although maybe you have a different conception of what an idealist is.

    2. Thanks Alex. Glad you found it useful.

      On functionalism, I think the is / does distinction works for particular levels of description. At certain levels, saying X is Y is a useful description. For example, saying that hydrogen is an element with one proton does work at the chemistry level. But if we go down to subatomic physics, we get into a conversation about what quarks, gluons, electrons, and photons are doing. Eventually we get to a level of what quantum fields are doing. We might view the fields themselves as substance, but there’s no guarantee they won’t eventually be reduced to lower level processes.

      In other words, it seems like “is” almost always reduces to “does”. Substance always seems to reduce to process (or to functionalism in the universal sense Chalmers argues for, except for consciousness). Of course, I’m coming at this from a structural realist viewpoint. I’m open to the possibility that there may be intrinsic properties, but it’s not clear how we can ever know about them, at least without introducing new physics, but then our intrinsic properties are no longer intrinsic, but relational.

      Anyway, with your discussion of “phenomenal” broadly construed to include functionality, I think phenomenality, in and of itself, becomes moot. This is why I talked about the other labels out there, to move beyond the ambiguities of that particular one, and to narrow things down to the real issue: the putative aspects of consciousness with those mysterious properties.

      I’ll grant that it’s possible in principle to dissociate functionality from physics. There’s no reason to suppose there couldn’t be non-physical causal dynamics. But that gets into what we mean by “physical”, which I think comes down to operating according to causal or interactive laws. Even if we do allow for a separate causal framework, at some point those two things need to meet and interact, unless we’re going to go in for psychophysical parallelism, which seems to take us back to epiphenomenalism.

      All of which makes me think our ability to provide a functional account is significant, and makes any putative gaps in that account worth looking for and examining.

      1. Hi Mike,

        I agree with you on the levels of abstraction, but contrary to your view, I take functional properties to be specifically focused on a particular level of abstraction. When most philosophers speak of functional consciousness, I view them as generally speaking of the high-level dispositional properties. There may be lower-level dispositional properties (if it’s dispositionalism all the way down), but such properties shouldn’t be categorized as functional, or we risk making ‘functionality’ far too capacious a term. The functional descriptor is just supposed to tag the conscious stuff, so it can’t be functionality all the way down, even if it is dispositionality all the way down.

        Therefore, I reject the picture that “with (my) discussion of “phenomenal” broadly construed to include functionality, I think phenomenality, in and of itself, becomes moot”.

        I agree that the most pressing issue which we are all attempting to address is the mysterious aspect of consciousness. Specifically, I would say that it seems that at particular high levels of abstractions, mental and physical facts converge insofar as they share a functional nature, but at lower levels of abstraction, they appear to diverge. Facts about the way visualized objects look to me don’t seem to be facts about my brain structure.

      2. In other words, I take functional properties to be specific dispositional properties that play a very specific role in our society etc… The dispositional properties of my mental and/or physical model might not be describable as playing such a role (at the societal level), and so wouldn’t be categorized as functional in my opinion. But I admit this is all semantic convention in the end. The only concern of mine about making the functional descriptor so open-ended is that at that point, functional consciousness and mental dispositions are purely synonymous terms.

        1. Hi Alex,
          “Facts about the way visualized objects look to me don’t seem to be facts about my brain structure.”

          I wonder what you’d see as an example of this. You previously mentioned imagining a table and its spatial relationships, but it doesn’t seem hard to imagine a model for that being instantiated within neural firing patterns.

          In some ways, the name “functionalism” is of limited accuracy. A more accurate one might have been “causalism”, since the chief claim is that mental states are more about what they do rather than what they are, more about their causal role than their substance. “Functionalism” implies purpose. That fits with an engineered system. It can be stretched for the teleonomy of biological systems. But biological systems also contain spandrels and adaptations that misfire outside of their evolved ecological niche. No functionalist I’ve read denies this.

          As we go down in levels of description, the relation of things like ions surging across channels starts to look more like just physics in action, but the fact that it’s embedded in a system where it has causal effects matters at higher levels that are more intuitively functional. Molecular reactions in DNA strands might be chemistry, but they’re chemistry within an overall causal framework that magnifies their effects.

          On functional consciousness being synonymous with mental dispositions, why would it be a concern if the goal is to explain the mental in causal terms?

          1. Hey Mike,

            “On functional consciousness being synonymous with mental dispositions, why would it be a concern if the goal is to explain the mental in causal terms?”

            If that’s all you mean, then I agree that a functionalist description of the mental would fully exhaust mental descriptions, provided that we adopt pandispositionalism (dispositions all the way down).

            Alternatively, we can take the original Putnam-style assertion of functionalism as being about what mental states do, instead of what they are, as asserting:

            “A and B exist. A is about what things are, B is about what things do, and mental states are about B.”

            If we later discover that A is actually dispositional as well, we don’t redefine the mental to be about A as well, but instead we define B as being about a particular type of disposition. I suspect that this is what many of the original functionalists had in mind, that they didn’t envision functional consciousness as capturing dispositions at the micro-physical level. Under panpsychism however, phenomenal experiences could refer to events happening at the micro-physical level.

            In any case, if we adopt your interpretation, along with pan-dispositionalism, then I agree that phenomenality becomes a type of functional consciousness.

            I will address in a separate post later on why I think it’s so problematic to reduce phenomenal facts to physical ones.

          2. Hey Alex,
            Pan-dispositionalism? Hmmm. It seems like a lot might depend on what we mean by “disposition”, and I’m not well read on dispositionalism. (Given this and Eric Schwitzgebel’s recent discussion, that’s something I probably need to rectify.) These -isms always seem to have commitments I can’t sign up for. But I suppose we can see elementary particles as disposed to follow the laws of particle physics. If that counts as a disposition, then sure, we could see it that way. But if we mean something more sophisticated by “disposition”, then I may not be onboard.

            Certainly functionalism’s core claim is that mental states are about what they do. But it seems artificial to consider that in isolation from what the components are disposed to do. What my brain does when a stimulus from red light is received depends on how the various neural circuits are disposed to react. It seems like those dispositions are part of the causal dynamics.

            But I feel like I’m saying obvious stuff here, so I might be missing something, particularly given my unfamiliarity with the literature on dispositions.

          3. Hi Mike,

            Sorry for the tardy reply; as promised, here is my argument for why I think that we can’t explain mental content with physical descriptors.

            But first, about pan-dispositionalism: as far as I understand it, it’s non-teleological in that particles are very much disposed to follow the laws of physics. It’s not quite synonymous with the term ‘relational’ though, since we can understand something as being relational (e.g. the number 4 being smaller than 5) but without dispositions (we haven’t talked about what the number 4 does, yet). In some sense, therefore, dispositions aren’t the opposite of intrinsicality; the real antonym of intrinsicality is ‘relational’.

            From before:
            “You previously mentioned imagining a table and its spatial relationships, but it doesn’t seem hard to imagine a model for that being instantiated within neural firing patterns.”

            The problem is that our mental facts (like facts about visualized spatial arrangements) are not true descriptors of physical states.

            Contrast this with the software analogy, where talk about code describes physical facts (at a higher level of abstraction). Such talk is meant to capture the behavior of physical objects (like numerals on my computer screen) or the behavior of objects in a program, like how the entities in a video game function. The physical facts, such as the way that numerals on my screen are displayed, or how the graphics in a game operate/appear, would directly falsify or verify the truth of the statements about software code.

            But no such physical truth conditions appear to exist that would verify or falsify facts about the way mental objects look in my visualization. Whether the proposition “tables are arranged spatially in my visualization” is true or false depends on how the tables look in my mind’s eye, and not how the physical facts of my brain turn out. Of course, there is a relation between the physical facts and my visualization being able to happen, in the same way as there is a relation between the universe existing and the facts about software being true, but the point is that my mental facts are not about the physical facts (in the same way as facts of software code are not about the universe’s existence).

            So, the difference between the software and mental/physical cases is that a clear mapping appears to exist in the former case but is absent in the latter case. Now there are a few ways that physicalists have attempted to answer this charge. The first is to simply deny that mental facts are real. Mental talk is fictional, useful maybe, but at the end of the day I’m not really visualizing tables in my mind’s eye even though I believe I am. I take this to be the approach of the eliminativists and certain illusionists like Frankish.

            The second, and more promising retort, is to argue that mental facts are real, but that they only appear to be non-physical due to some lacunae in the way we present mental propositions. This is the stance adopted by early philosophers like Davidson and Searle, who have advocated for adopting two levels of description in our discourse, to separate first person and third person discussions. Both philosophers mostly left blank what they thought this lacuna actually consisted of though.

            This work was later filled in by physicalists like Stoljar, who first adopted what is known as the “phenomenal concept” strategy. Basically, the idea is, like before, that propositions which employ phenomenal concepts are missing some important context, and that once we fill in this context, we will see that mental propositions do in fact describe physical states.

            There are many different methodologies that have been put forth with the aim of filling in such an epistemic gap; here is a great primer by Chalmers on this issue (http://consc.net/papers/pceg.html).

            I don’t have the space or time to cover all these strategies (check out the paper if you’re interested), but like Chalmers, I’m not optimistic that any of them can work.

            Briefly, as I think I mentioned elsewhere in a comment on this blog, if phenomenal propositions merely contained lacunae, then we would expect physical propositions to also include mental facts in their descriptions, not eliminate them.

            For example, if we were like blind men feeling an elephant’s body parts, and I conclude from feeling the trunk that an elephant is flexible, and you conclude that it is firm (from feeling the foot), then it seems like the actual description of the elephant would include both descriptions of firmness and flexibility, and not eliminate both. But in our case, facts about mental table color and depth are entirely absent in descriptions of the brain.

            Another attempt that I previously covered (in my reply to Mike Arnautov), was the claim that phenomenal concepts are indexical. Basically, if mental propositions were indexical then we could exhaust all the physical propositions in the world without telling us whether the mental proposition in question was true or false. In the same way, we could describe all the propositions about every time in the universe, without being able to know what time it is right now (because we need to know the context before we can be sure what proposition that last statement was referring to).

            The problem with this analogy is that indexical statements about time *do* pick out physical propositions about time, it’s just ambiguous which one they do. However, in the metaphysical case, it’s not a problem of mere ambiguity, because there seems to be no physical proposition about my brain states which might possibly pick out my mental description (because like I said, they are eliminative). So, whereas in the indexical case, it is possible that the time right now is time T13, it is not actually possible that my mental visualization is brain state Y. Hence the indexical strategy must fail.

            Anyways, I hope this helps, and I hope it helps clarify why I am so pessimistic about reducing the mental to the traditionally physical. I think we really need to be open to non-traditional alternatives like panpsychism.

          4. Hi Alex,
            Thanks for the clarification on dispositions. I actually tried to read the SEP article on the topic, but had a hard time focusing on it. It’s a very academic subject. But the point about numbers is interesting. It seems like it might be an issue for platonists, which I’m not.

            I also appreciate your detailed thoughts on the mental and physical. Of course, I disagree. 🙂

            Can anyone provide a precise mapping between a conscious experience and the physics yet? No, but assuming that won’t ever be possible strikes me as unwarranted. It seems particularly problematic in light of what neuroscientists can already do with brain scans, such as knowing broadly what type of thought you’re having at the moment, whether it’s inward focused or not, or whether you’re hearing a sentence you think is true or false.

            Chalmers classified the idea that we’ll eventually be able to explain behavior through physical explanations as an easy problem, and he included the behavior of reporting on our mental states. The idea that once we’ve explained why we talk about something in our mind’s eye, that there remains something else to explain about our mind’s eye, requires, I think, more justification.

            As we’ve discussed before, I think you have an inflated idea of what the illusionists are denying. Frankish himself explicitly does not deny an inner mental life. He only denies that (strong) phenomenal consciousness is necessary for it. From section 1.7 of his 2016 paper on illusionism:

            Are illusionists claiming that we are (phenomenal) zombies? If the only thing zombies lack is phenomenal consciousness properly so called, then illusionists must say that, in this technical sense, we are zombies. However, zombies are presented as creatures very different from ourselves — ones with no inner life, whose experience is completely blindsighted. As Chalmers puts it, ‘There is nothing it is like to be a zombie… all is dark inside’ (Chalmers, 1996, pp. 95–6). And illusionists will not agree that this is a good description of us. Rather, they will deny the equivalence between having an inner life and having phenomenal consciousness. Having the kind of inner life we have, they will say, consists in having a form of introspective self-awareness that creates the illusion of a rich phenomenology.

            But as I described in the post, the terminology here is a barrier to effective communication.

            Instead, I think I’ll just note that the only thing that needs to be true for a physical explanation is that introspection be no more reliable than any other type of perception, that it can lead to incorrect judgments and conclusions in our mind about its own operations. It doesn’t require denying all mental experience. Only that some of our judgments about it are wrong, the result of trying to use a functional self monitoring mechanism outside of its evolved role.

            Of course, the details of why we reach these incorrect conclusions do eventually need to be worked out by science. But I don’t see any deep metaphysical obstacles to that happening, unless of course some form of interactionist dualism should turn out to be true.

            The phenomenal concept strategy is an interesting hypothesis. I wrote about it a while back. But it seems motivated from a Type-B non-reductive materialist perspective, one I don’t share. (I’m in the Type-A reductionist camp.)

            As I noted before, if I thought a physical explanation was impossible, I’d be open to views like property dualism or panpsychism. But I think we need to eliminate the more grounded explanations first.

          5. Mike,

            I had no idea that you had written an article on the phenomenal concept strategy; I wouldn’t have elaborated at such length if I did. It occurs to me that we’ve put out quite the amount of content on your site. I must say that I’m glad we had that chance encounter on Emerson’s blog! I see that I still have much to learn on this subject, and I look forward to reading more of your articles, past and future.

            “Can anyone provide a precise mapping between a conscious experience and the physics yet? No, but assuming that won’t ever be possible strikes me as unwarranted. ”

            The issue with the epistemic gap between our phenomenal and physical discourse is that it’s semantic, not factive. Everything that we know about the brain tells us that certain mental facts won’t be included in descriptions of the corresponding brain states (like dispositional facts about visualizations). That’s why closing the gap must involve a phenomenal concept strategy, meaning a semantic methodology for interpreting phenomenal descriptions in such a way as to reconcile them with physical ones.

            “The phenomenal concept strategy is an interesting hypothesis. I wrote about it a while back. But it seems motivated from a Type-B non-reductive materialist perspective, one I don’t share. (I’m in the Type-A reductionist camp.)”

            Yes, you have to believe that there is an epistemic gap in the first place to adopt the position. This is also why I think that the illusionists are saying (or at least, that their position entails) that certain mental facts (for simplicity’s sake, I’m just going to stick to the facts about my imagined table) are false. Which leads me to:

            “As we’ve discussed before, I think you have an inflated idea of what the illusionists are denying. Frankish himself explicitly does not deny an inner mental life. He only denies that (strong) phenomenal consciousness is necessary for it.”

            There are, I think, two ways to interpret Frankish’s denial of phenomenal, intrinsic consciousness. Either:

            A) Apparent mental objects like visualized tables are not really phenomenal/intrinsic, but the dispositional facts about them are still true.

            B) The apparent mental objects like visualized tables, that our phenomenal descriptors are meant to tag, are not actually being experienced. The dispositional facts about such visualization are therefore false.

            It seems to me that you think Frankish is saying something like A, whereas I think he’s saying something like B. At the very least, I think his position entails B. So how do I reconcile this with Frankish’s insistence that he believes that we experience an inner mental life full of visualizations, as noted in your quote?

            Well, I think that when Frankish talks about mental “functional” qualia, he’s referring to facts at a high level of abstraction, and that when he talks of brain functional/dispositional states, he’s referring to physical facts taking place at a lower level of abstraction.

            I should point out that there are levels of dispositionality. At a high level of abstraction, talk about visualized tables is meant to capture facts like, “I am disposed to know how actual tables look”. At a lower level, talk about visualized tables is meant to capture direct mental facts like “my imaginary table A appears X percent larger than my imaginary table B”. Of course, we also have the low-level physical facts of my brain. As I earlier mentioned, the epistemic gap lies in trying to reconcile the low level physical and mental facts, even though we agree that they share high level functional facts.

            I think Frankish accepts that there are high level functional facts, and low level dispositional (or functional, to use your language) physical facts, but I think that he denies that there exist low-level mental dispositional facts (placing him in camp B). So, I read Frankish’s description of an inner mental life as referring to purely high-level descriptions, but of course that’s not what I had in mind.

            It might be argued that this is an uncharitable reading of Frankish (putting him in camp B), since it would appear to be too hardcore of an eliminativist position. But I think it’s actually more charitable than the alternative. That’s because if you don’t deny that there are low-level mental dispositional facts, then it seems like you have to acknowledge that there is an epistemic gap. This means that you have to be a non-reductive (in the epistemic sense) physicalist, which forces you to adopt a solution along the lines of the phenomenal concept strategy. But the whole point of the illusionist stance is to try to dissolve the hard problem of consciousness on its own, and so having to admit that you are also reliant on some other strategy sort of takes the teeth out of illusionism.

            Denying intrinsicality thus doesn’t get rid of the hard problem, it merely converts what we thought were low-level intrinsic facts into low-level dispositional facts. Yet at bottom those facts still need to be accounted for in our physicalist framework, and it seems like they can’t be.

            “The idea that once we’ve explained why we talk about something in our mind’s eye, that there remains something else to explain about our mind’s eye, requires, I think, more justification.”

            Sure, but note that this is just to say that we don’t need to be able to account for the low-level mental facts (only the physical facts about how we talk about this stuff) so it’s still eliminativist.

            In any case, even if Frankish does actually at heart believe that low-level dispositional mental facts are true, I maintain that this is simply contradictory to his illusionist stance.

          6. Hi Alex,
            I’m very much enjoying our conversations. Hope you are too. It seems like we’re stress testing each other’s positions, which is the best kind of conversation to have on subjects like these.

            Sorry, I should have linked to the post on the phenomenal concept strategy. Just in case you’re interested and haven’t fished it out already: https://selfawarepatterns.com/2020/02/02/the-phenomenal-concept-strategy-and-issues-with-conceptual-isolation/
            (Note that my use of “phenomenal” and “qualia” in this post should be understood in a weak / functional sense.)

            “Everything that we know about the brain tells us that certain mental facts won’t be included in descriptions of those brain states (like dispositional facts about visualizations)”

            Could you elaborate on this statement, particularly the example? It seems like dispositions are exactly what can be included in such descriptions. The resistance I usually see is from people who insist there is something more there than the dispositions.

            On A) and B), I do very much think Frankish believes A. (I know I do.) I think it pays to remember he’s a functionalist. If you’re assuming he’s denying something required by that functionality, then you’re likely not getting his view right. What he denies are the non-functional aspects.

            The discussion about levels of disposition seems to imply that you see mental dispositions as ontologically separate from the physical ones. (Which fits with a non-physical view.) But I see no reason to assume that since they’re both causal in nature. If you mean Frankish denies that there are mental dispositions ontologically separate from physical ones, then yes, both he and I would deny that. But I don’t see B entailed by that denial.

            Remember, Frankish is an illusionist. So he believes that the illusion of (strong) phenomenality exists, what many weak illusionists / realists would just call “phenomenality” but using that word in a deflated sense. That illusion / weak phenomenality seems sufficient to avoid B. Unless by “being experienced” you exclusively mean in a strong phenomenal sense, since that’s what Frankish is denying.

          7. I’m glad you are enjoying the conversation Mike; I feel the same way. Thanks for the link.

            About Frankish, I agree that he’s a functionalist “all the way down”, but I think it’s a little tricky how to parse this. Notice that my interpretation of his brand of illusionism isn’t actually incompatible with such a reductionist view, since I grant that he thinks there are both high-level and low-level (physical) dispositional/functional facts.

            “I do very much think Frankish believes A. (I know I do.) I think it pays to remember he’s a functionalist. If you’re assuming he’s denying something required by that functionality, then you’re likely not getting his view right.”

            But Frankish is not just a functionalist. He’s a physicalist functionalist, which restricts the domain of functional inquiry. That means that if the low-level mental dispositional and/or relational facts are not physical, then that won’t be capturable under the illusionist purview.

            “The discussion about levels of disposition seems to imply that you see mental dispositions as ontologically separate from the physical ones.”

            No, I see them as semantically/descriptively separate. Whether this entails ontological separation will depend on what you think a brain state is. On traditional physicalism, where all there is to a brain state is capturable with physical descriptions, then yes, I think there is an ontological separation. But on a property dualist/panpsychist interpretation, there’s more to descriptions of brain states than the traditional physical descriptions, and so they would not be ontologically separate.

            “Could you elaborate on this statement, particularly the example? It seems like dispositions are exactly what can be included in such descriptions.”

            Technically, the example I gave (Table A appearing X% larger than table B in my mind’s eye) referred to a relational fact, not a dispositional one.

            But the reason that this particular fact is not accounted for under physicalism is because, presumably, you won’t find such a fact in a description of my (traditional) physical brain state, no matter the level of detail at which you decided to analyze it. This is something that you yourself (I think) admitted. To admit this is, I feel, to concede that there exists a semantic and epistemic gap.

            I also think it’s worth analyzing this semantic gap in a bit more detail to understand how we normally bridge conceptual differences (as in the software/hardware analogies), and why the mental/physical concepts are so different.

            We might describe the way a piece of software functions with something like, “code x is implemented before code y”. It seems like such talk mixes multiple levels of abstraction. The phrase might refer to deep and detailed descriptions of functionality in the program, which are themselves broken down into low-level physical descriptors. For example, if we were talking about a graphics engine, then facts about the rendered graphics would directly tell us whether the statement was true or false. On the other hand, at a low level of abstraction, the statement might merely refer to how certain numerals on my computer screen (corresponding to the code) are arranged.

            Once we’ve exhaustively detailed all the physical facts about the hardware (e.g. how certain numerals are portrayed on my screen, how the graphics look), it seems like we’ve captured all that is meant to be captured by that statement of code. Notice that this is just a fact about the meaning of that statement.
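
            To illustrate the point about levels with a toy sketch (the function and values here are made up; Python’s standard dis module is just a convenient way to generate a lower-level description of the same program):

                import dis

                def snippet():
                    x = 1  # "code x"...
                    y = 2  # ..."is implemented before code y"
                    return x + y

                # The same ordering fact, redescribed at the bytecode level:
                dis.dis(snippet)

            The disassembly lists the instructions that store x ahead of those that store y, so the high-level ordering claim can be recovered from the lower-level description.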

            But the mental/physical case is different, and this renders the software/hardware example a disanalogy. We can start as before and note that my statement about the table could refer to an abstracted high-level function, like how I have the capability to accurately discern real life table sizes. Additionally, however, at a low-level description, my statement about tables just notes a simple relational fact about my sense data, namely that one sense datum is X% bigger than the other!

            The physicalist theory can account for the high-level abstracted statement by noting that the high-level disposition appears to be a property of some particular brain state. The problem comes when we give a low-level functional description of the representational brain state, which doesn’t match the low-level mental proposition.

            Thus, the difference with the software analogy is that physicalism really does leave out a crucial fact regarding statements like “Table A appears X% larger than table B in my mind’s eye”, namely the low-level fact.

            It would be as if you described the hardware of my computer system in great detail but ignored the display of numerals on my screen. Such a description would only capture part of the meaning of the statement concerning code (assuming that when I uttered the statement, I was also referring to the numerals that I perceived in front of me).

            “The resistance I usually see is from people who insist there is something more there than the dispositions.”

            Here I think we need to be careful. A lot of these people might reject the “dispositions/functions all the way down” view of thinking, so by definition, in their view, a functional account must leave out certain mental facts which they regard as intrinsic, even if you interpret them to be dispositional in reality. Another possibility is that such people conceive of functional discourse as being restricted to a high level of abstraction, because that’s usually how it’s portrayed in the examples given.

          8. Hi Alex,
            On functionalism and physicalism, I think I noted earlier in this discussion that while they are separable in principle, unless we’re assuming some kind of completely separate causal framework for the non-physical, I’m not sure how meaningful it is to talk about non-physical functionality. Of course, if we do make them completely separate, then the non-physical aspect becomes epiphenomenal, and so not something we can either establish or refute with evidence.

            “Technically, the example I gave (Table A appearing X% larger than table B in my mind’s eye) referred to a relational fact, not a dispositional one.”

            Yes and no. The specific description here is only relational, but that’s only because it’s discussing something in isolation. In the brain, the patterns of neural firings that are the implementation of the galaxy of conclusions about table A and table B are the results of many upstream causal effects.

            And it’s important to ask Dennett’s hard question here: And then what happens? The neural firings for this don’t exist in isolation. They have further causal effects in the system, including the ones that enable / cause you to describe it, but also other affective and associational effects of how you assess the aesthetics or other properties of the imagined tables.

            All of which is to say, it’s a mistake to think there is a presentation in isolation from the overall causal chain of processing happening in the brain, from sensory stimuli to motor output and everything in between.

            “But the reason that this particular fact is not accounted for under physicalism is because, presumably, you won’t find such a fact in a description of my (traditional) physical brain state, no matter the level of detail at which you decided to analyze it. This is something that you yourself (I think) admitted.”

            Did I? If so, maybe I wasn’t being clear. Certainly if we do a brain scan, we won’t find an obvious table inside a brain that is imagining one, or even a picture of a table. But that’s the same as saying if we examine the bytes in a jpeg or gif file we won’t find a table or picture of one, even though all the information necessary for displaying a table may be there when it’s loaded into a viewer.

            Which is to say, all the information for imagining a table is in the brain (at least one that has perceived tables before). I can’t see any reason to conclude it isn’t. But if we look at the firing of individual neural circuits in isolation, we won’t see anything tableish. We have to consider both the upstream and downstream causal effects that those circuits are embedded in to understand how they manifest as a table in your imagination. Certainly the details of that haven’t been worked out yet, but we have enough experience with neural networks that there doesn’t seem to be any conceptual barrier to broadly seeing how it will be.
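
            To make the analogy concrete, here’s a minimal sketch (“table.jpg” is a hypothetical file name, and the Pillow and NumPy libraries are just convenient tools for the illustration):

                from PIL import Image
                import numpy as np

                # The raw bytes of the file contain nothing visibly table-like.
                with open("table.jpg", "rb") as f:
                    print(f.read()[:16])  # just header and compressed data bytes

                # Yet a decoder recovers, from exactly those bytes, a pixel array
                # that a display pipeline can render as a table.
                pixels = np.array(Image.open("table.jpg"))
                print(pixels.shape)  # (height, width, 3) array of RGB values

            The table only “shows up” relative to the decoding and display machinery, which is the same role I’m suggesting the downstream processing plays for the neural firing patterns.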

            This is similar to your subsequent point about describing the hardware of the computer but leaving out the hardware of other components and how they relate to patterns instantiated within the computer. In both cases, what makes a certain pattern of activations meaningful is the system’s relations and interactions with the environment. We have to judge both sides by the same standards.

          9. Hi Mike,

            I just woke up from a late afternoon nap, so forgive me if my brain is still foggy.

            “Unless we’re assuming some kind of completely separate causal framework for the non-physical, I’m not sure how meaningful it is to talk about non-physical functionality. Of course, if we do make them completely separate, then the non-physical aspect becomes epiphenomenal, and so not something we can either establish or refute with evidence”

            I’m not advocating for assuming any ontological claim. I’m merely noting that certain mental facts seem to be true but are not covered in physical discourse. Of course, once we fully consider the implications it becomes clear that the type of conscious functionality that I had in mind can’t be physical. Thus, we either have to expand our conception of the physical (panpsychism) to incorporate these mental facts as actual parts of the physical states, or admit some type of causal interaction, or we can expand our notion of what constitutes “evidence” to allow for epiphenomenalism. The latter is advocated at times by Chalmers, who argues that we can have phenomenal evidence for epiphenomenal mental states, even if they are not behavioral. But this is slightly off topic from what we were discussing.

            “Did I? If so, maybe I wasn’t being clear. Certainly if we do a brain scan, we won’t find an obvious table inside a brain that is imagining one, or even a picture of a table. But that’s the same as saying if we examine the bytes in a jpeg or gif file we won’t find a table or picture of one, even though all the information necessary for displaying a table may be there when it’s loaded into a viewer.”

            There is a crucial and obvious difference with the software analogy, which is that your analogy only partially describes the physical system. If we actually provided a complete account of your computer system’s behavior, then you would actually describe the 2D image of a table as it appears on your monitor. Remember, your claim is that my imagined table exists as part of my physical brain, so any appropriate analogy to software should include the physical hardware (monitor) that actually instantiates the image in question.

            But my claim is that even an exhaustive physical description of my entire brain history’s behavior won’t incorporate descriptive imagery of tables, and this seems to be an undeniable fact.

            “Which is to say, all the information for imagining a table is in the brain (at least one that has perceived tables before). I can’t see any reason to conclude it isn’t.”

            Yet as I have already noted, descriptions of information leave out the most important component, which is the imagined table. In order to demonstrate that a computer “stores” an image, we would have to describe the image as it appears (say on a monitor) and then causally trace back that physical image to the storage medium. Analogously, you have to start with the imagined table, and then causally trace back the imagination to the information processing centers in the brain. Then, and only then, can you assert that such a brain region stores all the information required to imagine a table.

            The problem of course is that we can’t find the imagined table (the most important component!) in a physical description of my brain, whereas we can find the actual physical image represented by a jpeg file. Not only does this mean that a physical description leaves out the description of the table, but it also imperils your claim that “all the information for imagining a table is in the brain”, since if we can’t find the table, then we can’t causally trace back the information storage medium.

            At best, we could sketch a causal pathway from my crude drawing of my imagined table to the brain regions responsible for producing it. But note that this would only describe my brain region’s capability to physically draw and/or describe tables. Of course, that’s not at all what I had in mind when I was talking about imagined tables. The statement wasn’t “I can draw/describe tables” but rather, “I can imagine tables”.

            I wasn’t talking about my mere capacity for being able to describe tables, since that would mean that all talk about imaginary tables is purely metaphorical. In my case however, I’m not speaking metaphorically as someone with aphantasia might; I really can experience the relevant sense datum that only comes with visualization.

            In other words, I’m talking about an imaginary table, which at the moment only I am experiencing, and which doesn’t refer to any drawing or reproduction on physical paper. If it did refer to anything physical, we would expect it to refer to something in my brain, but as I already said, no such thing in my brain exists.

          10. I forgot to address this point:

            “The specific description here is only relational, but that’s only because it’s discussing something in isolation. In the brain, the patterns of neural firings that are the implementation of the galaxy of conclusions about table A and table B are the results of many upstream causal effects.

            And it’s important to ask Dennett’s hard question here: And then what happens? The neural firings for this don’t exist in isolation.”

            I wanted to clarify that I’m not denying that the above relational facts exist in a much broader complex framework of causal interactions (although depending on your ontological interpretation, you could deny this). I’m just saying that those specific facts are not accounted for in any such physical system. Going on about the complex relations to other things is of no help, because it’s the low-level mental facts which seem to be missing in the physicalist picture.

            As I said, before you go on about how the computer system might store complex information about pictures, first you must find your physical picture and track it back causally to your proposed storage medium. Talking about information being stored in the brain in relation to table imagery without first accounting for the low-level mental facts is to skip the first and most vital step. Without it you would have no justification for any of your claims.

          11. Hi Alex,
            You woke up from an afternoon nap. I’m actually a bit fried from the day, but hopefully I’ll be coherent in this response.

            On finding the physical basis of the imagined table, I mentioned above that all of the information for imagining a table is in the brain, at least for one that has perceived a table before. Let’s talk about that a bit.

            Photons hit the table and are reflected, some of which impinge on the photoreceptor (rod and cone) cells in the retina. The patterns of activation caused by these impingements cascade through multiple layers in the retina to ganglion cells, of which there are many types. Each portion of the visual field has a collection of ganglion cells that are triggered by different things: colors, motion, edges, etc. (Not all of the different analyzers are understood yet.)

            It’s the ganglion cells whose axons project to the lateral geniculate nucleus (LGN) in the thalamus. So the pattern that was activated in the retina propagates to the LGN, where the axons synapse onto neurons that project to the visual cortex, although the signaling in the LGN may be modulated by attentional dynamics.

            In the visual cortex, further analysis begins in layers, with some layers concerned with things like edges and shapes, others with colors, etc. As the signals propagate through the layers, the analysis becomes progressively more abstract, and begins to accrue location independence (as in location in the visual field).

            Eventually the signals cascade into the temporal lobe, where they converge in regions dedicated to lower-level object features. If those feature regions are activated, the signals propagate to ever higher level features, until we get to regions that activate for a table, and maybe regions that activate for a kitchen table, or a computer desk, or a workbench, etc.

            That’s kind of the classic description. But neuroscientists are increasingly convinced it’s actually more of a two way conversation. The initial signals come in, and candidate object networks light up and propagate their signals back toward the earlier sensory regions. Those predictions end up being compared with incoming sensory signals in an error correction dynamic, with the result that the candidates are generally eliminated until certainty increases on one candidate.

            Physically this looks a lot like a pattern completion dynamic. As the pattern fires and endures, with recurrent signaling between the later conceptual regions and the earlier sensory regions, the perception becomes more firm.

            Ok, I can hear you thinking. That’s perception. You’re talking about imagination. But we need to consider what imagination is. To imagine something is to activate the same neural patterns that fired during the perception, but from the top down. It’s the frontal lobes initiating the pattern. And the pattern is typically less complete since it’s not interacting with an incoming sensory stream. Which fits with most of our experience of imagined objects.

            Of course, the frontal lobes can activate these patterns in different combinations than they were ever activated from sensory signaling. Which is why we can imagine something like a pink round table a mile wide, even though we’ve never perceived one. (Or at least I haven’t.) This ability to simulate things is, of course, an evolutionary adaptation that allows us to plan for combinations of things we’ve never encountered.
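
            As an aside, here’s a crude sketch of the shape of that idea (toy vectors standing in for neural patterns; this is in no way a model of actual cortex, just an illustration of bottom-up error correction versus top-down activation):

                import numpy as np

                rng = np.random.default_rng(42)

                # Stored "concept" patterns that higher regions can activate.
                concepts = {"table": rng.normal(size=100),
                            "chair": rng.normal(size=100)}

                def perceive(signal):
                    # Error-correction flavor: keep the candidate whose top-down
                    # prediction best matches the incoming sensory signal.
                    return min(concepts, key=lambda c: np.linalg.norm(concepts[c] - signal))

                def imagine(concept, completeness=0.6):
                    # Top-down activation with no sensory stream: the reinstated
                    # pattern is partial, matching the vagueness of imagined objects.
                    pattern = concepts[concept]
                    keep = rng.random(pattern.shape) < completeness
                    return np.where(keep, pattern, 0.0)

                noisy_table = concepts["table"] + 0.3 * rng.normal(size=100)
                print(perceive(noisy_table))   # -> table
                faint_table = imagine("table")  # sparse version of the stored pattern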

            A couple of points about this description. Note that it’s dispositional all the way down and up. Antonio Damasio and some other neurobiologists do argue for a distinction between image forming regions in the early sensory regions of the brain and dispositional ones, but the more neuroscience I’ve read, the less I think the brain really has that distinction.

            Mental images do exist as neural firing patterns, but those patterns are distributed throughout the occipital, parietal, and temporal lobes. There doesn’t seem to be any clean boundary between them. What we have in the early regions are early conclusions. And what we have in later regions are later conclusions, typically either more abstract ones, or affective or action planning conclusions.

            The other thing to note is that the visual perceptual processing spreads out throughout the cortex and never converges into any one presentation area. There is a huge tangle of multiple streams, which can be combined in various ways, some of which accrue enough of a coalition to make it into working memory and language centers, and get reported.

            Nothing I’m describing here is particularly controversial. I go into more detail in this post:

            Perceptions are dispositions all the way down

            This is one of the reasons I recommend people interested in consciousness don’t just read philosophy, but also read cognitive neuroscience. It is work, but doing so opens the mind to how things like imagining a table can definitely be physical processes. So here’s another post worth checking out:

            Sources of information on neuroscience

          12. Mike,

            Thanks for the detailed description. But I still don’t agree that it accomplishes what you think it does. 🙂

            “A couple of points about this description. Note that it’s dispositional all the way down and up.”

            I agree with this. Note that when I described my interpretation of Frankish’s stance, for example, I mentioned that I thought he believes in both high-level and low-level dispositional facts.

            “This is one of the reasons I recommend people interested in consciousness don’t just read philosophy, but also read cognitive neuroscience.”

            I agree; far too many philosophers lack interest in most things science, not just the special sciences but things like physics as well. In my opinion, being well rounded in all of these domains is essential; I am trying my best of course 🙂

            Coming back to your description, I agree that learning about the brain obviously helps, but it doesn’t actually get us to the crucial description we needed, the descriptions of the relational facts about visualizations and perceptions or other phenomenal experiences.

            When we say that the brain processes visual information at higher levels of abstraction, going from V1 to V5, we presumably mean something like “the activity of these neural regions appear to be responsible/essential for certain abilities”. Abilities like the capacity to discern shape and colour when presented with visual imagery, and also to act on such information.

            The predictive processing theory is especially helpful in this regard, since top-down model generation is exactly what we expect to see take place during visualizations (although I’m not sure how useful it is to describe the frontal cortex influence as acting akin to a generative model; it might be too high a level of abstraction).

            In other words, such physical facts appear to perfectly explain our behavior, but that was never in doubt. I never questioned that descriptions of the brain help elucidate why I talk about visualizations so ably, but rather why I *experience* them. In addition to my verbal reports about visualizations, I still think it’s true, for example, that I’m experiencing table A as bigger than table B. All that takes place in my brain is causal information processing which is responsible for me verbally outputting and behaving as if I see table A as bigger than table B.

            Yet in addition to the above, I’m asking you to believe that I really do see table A as bigger than table B. As I earlier said, when I’m making such visualization claims, I’m not speaking metaphorically as someone with aphantasia might, or referring to my ability to talk about or draw tables. Rather, I’m just talking about my ability to imagine tables in my mind’s eye.

            There’s no way traditional physicalism can account for this, because the semantic meaning of what I’m talking about just doesn’t refer to brain facts as we traditionally understand them. You can either accept this and try to adopt some kind of phenomenal concept strategy to explain why I’m so far off the mark in my discourse, or just deny that my mental talk is real (which I think illusionism is meant to do). The latter would entail that relational facts like “mental table A appears bigger than mental table B” are simply wrong (hence the illusion).

            But it seems to be a bit of a dodge to skirt around the issue and say that such facts can still be true because “look our brain regions encode information of why you behave as if they do”. That’s simply not what I’m talking about when I’m making such propositional claims, and you’re just refusing to engage with what I’m really saying. You’re merely constructing an analogue proposition that physicalism can account for, but this analogue proposition isn’t what I’m actually asserting!

            Again, to reiterate my stance, I believe that:

            1. I visualize table A as white
            2. My physical brain regions encode all the information about my visualized table

            This is analogous to how:

            1b. There is an image of a 2D table on my computer screen
            2b. The physical hardware in my computer memory encodes the information about the image in 1b and, when coupled to a monitor, is responsible for producing it.

            Propositions 1 and 2 are not the same thing, anymore than 1b and 2b are the same thing. I keep telling you that I think 1 is true, and you keep reasserting “that’s great, let me tell you all the facts about 2, and you’ll see that my physicalist theory is correct”. But I never denied that physicalism could account for 2, only for 1. To say that proposition 1 equals proposition 2 is just a misunderstanding of what I’m trying to say. Again, 1 is not meant to be interpreted metaphorically, but literally. It’s the same way that 1b shouldn’t be interpreted as metaphorically saying the same thing as 2b, but as literally describing something different.

            Thus, it’s simply a mistake to assert, as you earlier did, that:

            “Certainly if we do a brain scan, we won’t find an obvious table inside a brain that is imagining one, or even a picture of a table. But that’s the same as saying if we examine the bytes in a jpeg or gif file we won’t find a table or picture of one, even though all the information necessary for displaying a table may be there when it’s loaded into a viewer.”

            That’s because my point about 1 and 2 being separate was never meant to demonstrate that 2 is false, only that 2 and 1 are different propositions. I agree that it would be a fallacy to claim that 2b is false because it doesn’t include a description of 1b.

            But all I need to do is to demonstrate that 1 and 2 are different. Once you admit that “Certainly if we do a brain scan, we won’t find an obvious table inside a brain that is imagining one, or even a picture of a table.” you’ve then admitted that the propositions refer to different things. This then means that it could be the case that 1 is true and 2 is false for example, just as it could be true that 1b is true and 2b is false (an obvious fact).

            Thus, merely accounting for the truth of 2 doesn’t guarantee the truth of 1; we would need something more. Sorry for the long post, but it’s crucial to understand what I’m actually trying to get at, and why it’s pointless to just keep on bringing up facts about 2.

          13. Sorry, I misspoke. I wrote, “This then means that it could be the case that 1 is true and 2 is false for example, just as it could be true that 1b is true and 2b is false (an obvious fact)”,

            when I meant to assert the converse: it could be the case that 1b is false but 2b is true (if there is no monitor hooked up), just as it could be the case that 1 is false but 2 is true (if I’m a philosophical zombie).

            Once you’ve admitted that a description of 2 is not a description of 1 (because you won’t find the phenomenal table in my brain, in the same way as you won’t find the image in my jpeg file), then you’ve admitted that (~1 & 2) is conceivable.

            And it’s not true that “dispositions all the way down” gives you a get-out-of-jail-free card, because as I earlier pointed out, functionalism ≠ physicalist functionalism. You previously asserted that functionalism over and above physicalist functionalism was meaningless, because it would entail epiphenomenalism (or, if it did not, then it would be physically capturable). But that’s clearly not true: panpsychism, for example, doesn’t entail epiphenomenalism but wouldn’t qualify as physicalist.

          14. To summarize:

            The crucial difference between our positions is that you think that propositions 1 and 2 are asserting the same thing, whereas I perceive them to be different. Why do I think that propositions 1 and 2 are distinct? It’s simple: the description of the relational/dispositional facts of my visualizations isn’t the same thing as the description of the relational/dispositional facts of my brain.

            This is similar to how the relational/dispositional characteristics of an image on my computer screen (fact 1b) are not the same as the facts about my computer hardware, absent a viewing port (fact 2b). This is something that you yourself have to admit when you concede that a table-like entity (phenomenal or physical) isn’t actually going to be found in the brain.

            Once we acknowledge that they are different, the next step is to ask what can account for them. Here we realize that physicalism can only account for the facts in 2, but not more. It therefore follows that it cannot account for 1. Other theories (like panpsychism) can account for more, and hence for 1. Alternatively, we can just deny 1. What we cannot do is assert that propositions 1 and 2 are meaningfully different but both true, and that physicalism is correct.

            I gather that you want to deny that propositions 1 and 2 are meaningfully different, but I don’t see how you can square this with your concession that a description of a table-like entity (which is what 1 is) isn’t found in a description of my brain. The latter claim seems to me to be identical with the former claim.

          15. The temperature of this discussion seems to have jumped. I fear any response I provide will make it worse. Seems best we take a break. There’ll be plenty of opportunities for us to take it up again.

            Thanks for the discussion Alex.

          16. No worries, it seems I must have been pretty tired last night, because my response was not nearly as coherent as I imagined; apologies if I also came off as a bit cranky. One thing I would add, to summarize and then end the conversation, is that even if we did find a literal table in our brains’ visual cortex, that couldn’t be the same thing as our visualization of a table.

            Our visualization has many unique properties; for example, it has a location which doesn’t appear to be physical (visualized table parts occupy a space in relation to each other, and to our perceptions of the world, but not in the physical world itself, so they are obviously not physical table parts).

            The question I was attempting to press is whether physicalism can account for the behavior of these mental visualizations, even though I concede that it can account for my verbal and all other kinds of physical behavior. Once we’ve given a complete accounting of my physical behavior, internally and externally, it seems like we’ve left out the behavior/functionality of certain phenomenal experiences (it doesn’t have to be visualizations, I’ve just been using them for convenience).

            I leave the conversation still unsure whether you concede that we won’t find facts like “Table A is large”, or lower-level mental facts, in descriptions of our brains. I say unsure because, while you’ve denied the concession, so far you’ve just been describing other facts, facts like what my visual cortex is doing. But this is insufficient: what the physicalist really needs to demonstrate is how the phenomenal facts are incorporated into the physical facts (or, if that’s not possible, eliminate the former), and this so far has been missing from our conversation. Facts about my visual cortex are just facts about structures which are causally relevant to my visualizations; they are not literal descriptions of my visualizations. The question is whether such a literal description exists; if it does not, then the physicalist must concede that there is no phenomenal-physical incorporation. Once she does so, I feel it’s really one short step to conclude that people are mostly wrong about their own inner mental life.

            You did briefly mention the argument that “If it’s dispositions all the way down, then any behavior you talk about will have to be physical, or it wouldn’t be meaningful (or causally potent)”, but I feel I already addressed that in my last comment.

            One last thing that I should note is that I say “table-like” out of convenience, and not to presuppose that I’m really seeing such things. It could be that I’m making a phenomenal error, for example, when I categorize my visualization as being table-like. But the important point is that we can go a level down and ask whether I am also mistaken about a color or shape being present, etc. If I’m mistaken about everything, then I think it’s fair to say that I don’t have much of an inner mental life.

            The most important thing is to focus the conversation on attempting to account for such phenomenal behavior. The argument should be “this physical behavior is the same thing as this phenomenal behavior, see how similar their descriptions are” and not “physicalism can account for physical behavior x, y, z”.

            Thanks as well Mike.

  7. Re “The sensation of redness seems like something primal, fundamental, and irreducible. It also appears impossible for me to describe my sensation of red to you, or for you to describe yours to me. It apparently can only be pointed at and agreed that yes, there is red. Which implies that the sensation is something private, inaccessible from any possible third person investigation. And it seems to be something we can’t be in error about. To believe you’re experiencing redness is to experience redness.”

    I don’t understand this. “Redness” is a sensation, a visual sensation. We can use a spectrometer to analyze the light reflected off of “red” things and find that the mix is heavy in red wavelengths and light in green. That spectrum defines what we call “red”, and it is something I can describe to you and you to me. What we have in common is a sensory apparatus.

    It is a well known philosophical discussion how you would explain a color, like red, to a blind person. My approach would be to map it onto a sense they did have, such as hearing. The example of sounds heavy and light in bass tones would supply the idea; then the subject would be exposed to sunlight. The warmth would be ascribed to part of the radiation of the sun. (The subject’s body could be rotated so they could “see” from which direction the “heat” came, as an analog to seeing with our eyes.) Other aspects of the Sun’s EMR would then be described, and the subject’s familiarity with their nonfunctioning eyes would be related to the functioning eyes of others, eyes that cannot see infrared but can see red, etc.

    If a blind person can grasp, even if poorly, the concept of “red,” how “primal, fundamental, and irreducible” can it be?

    1. I’m completely sympathetic to this reaction. That paragraph was me describing the sentiment, not my own view.

      I would go on to say that we could describe color to a blind person as something that objects seem to have, and that some colors, like red and yellow, are more striking than others, like green or blue. We could describe what particular colors allow us to recognize (like yellow for ripe bananas). In other words, we could provide a fully functional description of what red means to us. We might struggle to describe all the micro-reactions we have to particular colors, reactions that add to our overall experience of that color, but in principle it could be provided to them.

      In other words, we could make them the equivalent of a Mary the color scientist, with a knowledge of everything that red provides us, even if they themselves lack the ability to utilize it. Although it should be acknowledged just how difficult this would be to accomplish.

  8. Sensory perception (our only kind) triggers the weak-C. Who’s to say that that same perception doesn’t trigger underlying reactions in our endocrine system, thereby triggering emotional reactions which augment the central-processing-unit’s circuits, giving us a sense of the strong-C?

    We experience and interpret the world, and when we feel a mysterious sensation we attribute this to activation of some shell outside our shell. But all of these sensations are still just data.

    That warm feeling we get when we see xmas red & green, squirts of serotonin, maybe a drop of dopamine or oxytocin, are all just data splurging into our system, looping back to our data banks triggering feedback experiences of comfort and wellbeing–which are also just patterns of specific circuit activation. “I love the holidays and all the colors. They make me feel connected and whole.” DNA just tricking our too-large-brains into thinking we belong to a group.

    Maybe that’s one reason for me to remain alive — to finally see an AGI take over the world.

  9. Errg. Can you (plural) even listen to yourselves? I just finished “The River Why” (1983) by David James Duncan. For the 3rd time. It’s fiction and also historical fact. He’s now 70. I’m 72.
    Consciousness? How about the ability to reflect on the past, worry about the future, and understand the present?
    Maybe so few (humans) are conscious (aware of why they do what they do) … it’s like asking a fish “What is water?” (See David Foster Wallace’s work.)

  10. I recently had a conversation with an art friend about the subjective experience of colors. We really can’t know for sure that we experience individual colors in the same way; however, the relationships between colors must still work the same for all of us. So if my perception of red is swapped for blue in my friend’s perception, then my perception of green must be swapped for orange. Otherwise the relationships of adjacent colors and complementary colors would vary from person to person, and the color wheel would not be a useful tool for artists.
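
    To make the constraint concrete, here’s a toy check, assuming an idealized six-color artist’s wheel with complements 180 degrees apart (an assumption purely for illustration):

        # Hue positions on a six-color wheel (degrees).
        wheel = {"red": 0, "orange": 60, "yellow": 120,
                 "green": 180, "blue": 240, "purple": 300}

        def color_at(degrees):
            return next(c for c, d in wheel.items() if d == degrees % 360)

        # An undetectable swap must preserve relations, i.e. act like a rotation.
        # If my red lands on my friend's blue, the rotation is +240 degrees:
        shift = wheel["blue"] - wheel["red"]

        def swapped(color):
            return color_at(wheel[color] + shift)

        print(swapped("green"))  # -> orange, exactly as the argument requires

    Rotating the whole wheel keeps every adjacency and complement pair intact, which is why such a swap would be invisible to any test built from color relationships.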

    1. That’s an excellent point. I actually think the relations thing goes all the way, from the difference in energy impinging on our retinal cones, to the color opponency of ganglion cells (which gets at the relationships between colors), to the resulting neural patterns that propagate to the brain and the innate and learned affective reactions that are triggered. Those relations in total are what our experience of a color is about.

      Which is to say, I think talking about whether we have different reds, greens, etc, is likely a category error, sort of like talking about whether my sweetness is your bitterness. Or whether the experienced beer drinker tastes the same thing as someone who never drinks it. We’re only tempted to think these are meaningful questions if we think there is a presentation in the brain distinct from the upstream causal prerequisites and the downstream causal effects.

      1. There is no color in the world because color is a feeling. The way any feeling presents—red as redness, for instance—must be fixed in the brain from the get-go, physically hard-wired, which means that red is felt as the same red by everyone, lacking a pathology of vision. It’s difficult to imagine that the brain invents what a feeling feels like the first time any stimulus is provided. Again, lacking a pathology of vision, no one upon being initially presented with the frequency of reflected light that we normally feel as red feels it as yellow or as the sound of a rolling bag of rocks.

  11. The word “consciousness” has indeed a variety of meanings.
    The two broad categories you present are welcome as a needed tool.
    But it is a bit surprising not to find self-consciousness in the post (the Block 1994 paper takes it into account: “possession of the concept of the self and the ability to use this concept in thinking about oneself”). And introspection and phenomenal consciousness need such a capability to think about oneself.
    Also, I feel that self-consciousness as object and as subject can find a place in an evolutionary perspective (https://philpapers.org/archive/MENPFA-4.pdf).
    Don’t you think that we should take self-consciousness into account when looking at the nature of human consciousness?

    1. I do, and my description of the two major categories shouldn’t be taken as excluding it. I actually think self consciousness is part of information consciousness. It has a functional role. In that sense, information consciousness may be a broader category than Block’s original access consciousness one. Block himself sees self-consciousness as distinct from phenomenal consciousness, arguing that we shouldn’t take an animal’s lack of self-consciousness as evidence for lack of phenomenal consciousness. Of course, animals that give us the impression of being conscious do so by showing signs of access consciousness.

      1. Positioning self-consciousness as part of information consciousness looks like a new perspective. Could you tell a bit more about that? (More precisely, I’d be interested in understanding how the performance of reflectivity (an aspect of self-consciousness) can be considered part of information consciousness.)
        Block has presented phenomenal consciousness and self-consciousness in distinct paragraphs, but I’m not sure that he totally separates them (if you have some information on that, I’d appreciate reading it).

        1. It’s actually not clear to me how self-consciousness could be something other than information consciousness. What is self-consciousness other than a perception of the self, a predictive model of the system itself at some level (albeit one abstracted for certain evolved roles, with providing an accurate architecture of the mind apparently not among them)? But your asking the question makes me wonder if maybe we’re using the term in different ways.

          On Block, in a section of his 1994 paper dedicated to dealing with what he sees as conflations, this is what I take to be his conception of them (at least as of that paper).

          SELF-CONSCIOUSNESS. By this term, I mean the possession of the concept of the self and the ability to use this concept in thinking about oneself. A number of higher primates show signs of recognizing that they see themselves in mirrors. They display interest in correspondences between their own actions and the movements of their mirror images. By contrast, monkeys treat their mirror images as strangers at first, slowly habituating. And the same for dogs. In one experimental paradigm, experimenters painted colored spots on the foreheads and ears of anesthetized primates, watching what happened. Chimps between ages 7 and 15 usually try to wipe the spot off (Povinelli, 1994; Gallup, 1982). Monkeys do not do this. Human babies don’t show similar behavior until the last half of their second year. Perhaps this is a test for self-consciousness. (Or perhaps it is only a test for understanding mirrors; but what is involved in understanding mirrors if not that it is oneself one is seeing?) But even if monkeys and dogs have no self-consciousness, no one should deny that they have P-conscious pains, or that there is something it is like for them to see their reflections in the mirror. P-conscious states often seem to have a “me-ishness” about them, the phenomenal content often represents the state as a state of me. But this fact does not at all suggest that we can reduce P-consciousness to self-consciousness, since such “me-ishness” is the same in states whose P-conscious content is different. For example, the experience as of red is the same as the experience as of green in self-orientation, but the two states are different in phenomenal feel. See White (1987) for an account of why self-consciousness should be firmly distinguished from P-consciousness, and why self-consciousness is more relevant to certain issues of value.

          https://web-archive.southampton.ac.uk/cogprints.org/231/1/199712004.html

          1. Self-consciousness is indeed introduced by Block as being “the possession of the concept of the self”. But the concept of self is not a clearly defined entity. It has no central idea (self as object/subject, core/autobiographical self, …). So I would be careful about what could be a “perception” of such an ill-defined entity, which cannot (I feel) be considered “information” to be perceived.
            Our human self is, as is self-consciousness, the result of a complex evolutionary process in which both have been inter-related. Don’t you think that trying to define one by the other may bring in a risk of circularity?
            (thanks for reproducing the Block 1994 text where he addresses self-consciousness and P-consciousness).

          2. I’m not sure I see the circularity danger. It does seem true that our models of self are recursive to at least some degree. Here we are talking about it. So we’re aware of our self awareness, and if you’re following, then you’re aware of your awareness of that self awareness, etc. But like any form of recursion, it’s going to be limited by resources, maybe in this case working memory.
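
            A throwaway sketch of what I mean by resource-limited recursion (the depth budget here is just a stand-in for something like working memory capacity):

                def self_model(level=0, budget=3):
                    # Each level is awareness *of* the level below, until the
                    # resource budget runs out.
                    if level >= budget:
                        return "..."
                    return f"aware of ({self_model(level + 1, budget)})"

                print(self_model())  # aware of (aware of (aware of (...)))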

            (No problem. Glad you found it helpful.)

    1. The problem is everyone agrees that things like deliberate control of action and self-report are on the access side of the access / phenomenal distinction. So our principal evidence for conscious states in any system other than ourselves would no longer be evidence for such states.

      Of course, we could just define “phenomenal” to just be a synonym for “consciousness”, but then we’re still left with the divide between the two categories of consciousness.

      1. I guess I’m not following your rationale for the division.

        Conventionally we want to divide consciousness along categories like perceptual qualia, memories, abstract thoughts, pain and pleasure (feelings), etc. – the sort of things that might be in a basic psychology textbook. How would you categorize those sorts of things into a functional/phenomenal grouping? In most cases, they seem to me like they would fall into both.

        Are you actually trying to disconnect the functional from the phenomenal? For example, I smash my finger. I feel pain (phenomenal). Somewhere in the brain is the model that says the finger is injured, better run some cold water on it (functional). How did the model come to that decision without the pain in the finger?

        1. I’m onboard with a fully functional account, including a functional version of qualia and phenomenal. (Although I’m now more convinced using those words is inviting confusion. It’s like using “ghost” or “spirit” in a non-supernatural sense. Sometimes the meaning is obvious from context, but often it gives the wrong idea.)

          There doesn’t seem to be any explanatory gap or hard problem associated with the functional account. But many people are convinced that account is missing something. (See the philosophical papers linked to in the post.) The question is what to call this putative missing thing.

          1. Really? The functional account explains it all? Can you sum up this explanation in a few sentences? It has always seemed tautological to me.

          2. I think the way to falsify a tautology is through its definition(s), in this case that the mind is only as the mind does. Demonstrate that there truly is something fundamental and irreducible about the mental, and it seems like functionalism becomes untenable.

          3. It’s a nothing-burger. You can always find something mind does for any aspect one purports mind to have, because the functions are simply additional descriptions of the same aspects of mind. It’s like saying the eyes serve the function of seeing and forgetting that seeing is implicit in the definition of eyes. It doesn’t explain eyes or how they work.

          4. I think the more generalized your description of what mind does, the better your functionalism “works”.

            If eyes are light-capturing devices, then my iPhone is an eye; but if the function of an eye is to capture light for biological organisms, then my iPhone isn’t an eye. It’s all in how generally you want to describe the function. In neither the more general nor the more specific definition of function, however, does the iPhone tell us much that is useful about how eyes in organisms work.

          5. Remember that functionalism is a philosophy about what a correct theory of mind should look like. It is not itself that theory. Judging it as though it were seems unproductive.

            The function of your iPhone camera is to capture light patterns for later use. The function of an eye is to begin extracting meaning about things in the world from the light patterns. Of course, a newer model iPhone uses software to recognize people’s faces for login. If an organism did that, we’d say it was “seeing”. In that case, I think it makes sense to refer to the camera plus that software as a type of eye. But it’s an eye that only seems to come into existence at selected times.

          6. This illustrates the genius of your solution. You can just pick and choose functions, defined at the right level of generality, to prove that mind is nothing more than its functions. But that’s why, as an explanation, it is of no value.

          7. I agree with your final assessment of functionalism, James. Functionalism is a “save face tactic” used to deflect from the simple fact that mind has evaded and will continue to evade explanation in a naive materialistic framework. So in that sense, I personally believe that functionalists are a disingenuous group.

            However, one does not have to default to some form of dualism either to account for the mysteries of mind, because a proposition which asserts that mind is a quantum system is more than capable of accommodating those explanatory gaps. But first, one has to reject the wave function hypothesis of quantum mechanics, as Rovelli has done, and focus on the concept of relational quantum mechanics.

            It should be glaringly obvious to anyone that mind cannot be fully accounted for with a conventional classical mechanics dynamic; there is something more, and the most logical explanation for that something more is not dualism but quantum. I don’t know about you, but the more I contemplate this notion, the more it becomes self-evident…

          8. Yeah. No matter how functionally equivalent a battery-powered electric car and an internal combustion one may be in their carness, it is the internals that make the difference. While they may perform the same car functions, how they do them is different, and consciousness is about the how, not just the what.

  12. Mike,

    You say that our inability to communicate, other than by ostension, what we mean by redness makes it hard to imagine how this feature of our mental world could be implemented even in biology, let alone otherwise. I do not understand why (see below).

    First, however, to answer your final question: I think philosophers did get that one right. Phenomenal consciousness neatly isolates the kernel of the problem that the argument is really all about: how to account for a 1st person point of view arising in a world initially lacking it. The jump appears to be a magically mysterious one, but I would suggest that the mystery is down to mixing of physicalist (3rd person) and mentalist (1st person) discourses. This can only be done if there is a reliable (nomological) mapping between the two, but as Davidson argued with Quine’s support, physicalist monism does not entail the existence of such a mapping.

    Thus the simplest solution is to note that evolution had no reason to ensure such a mapping exists and therefore the logical default assumption is that it does not exist — doubly so since assuming its existence leads to the perplexity over the status of 1st person ontological commitment to phenomenal qualities of subjective experience.

    In any case, why is the non-communicability of the specifics of the experience of redness (to take the canonical example) supposed to be such a mystery? What is it like to perceive redness? Surely (!), it is like *not* perceiving any other colour. This answer may seem tautological to the point of being facetious, but I don’t think it is.

    Consider: if we could perceive only the red colour in various intensities, we would not have any kind of a concept of colour. Specifically, we would not have a concept of redness (as distinct from light) at all. The concept only makes sense in relation to the whole structure of colour experiences. And we know from Land’s “retinex” experiments that this structure very much affects our actual perceptions of the world, via colour contrast mechanisms underlying the phenomenon of colour constancy. We do not experience a colour in isolation. We experience it as an instance embedded in a class of possible colour experiences with a built-in structure given to us by evolution.

    (It is curious that philosophers complaining about scientific reductivism are apparently quite happy to consider “redness” in isolation, instead of looking at the holistic structure of colour perception.)

    Redness is a “label” our visual perception puts on the experience we call “red colour” in order to differentiate it from all other possible experiences. Why does it look “red”? Because it is not green, blue, yellow… It has to look somehow if we are to experience it at all, and it must look some distinctive way or we would not experience it as a distinct colour. We call that distinctive way “red”.

    Of course, interpersonally, that specific, distinctive way is identifiable only by ostension, shaped, as it is, by the unique architectural details of a particular brain and by our unique experience — how we learned about colours, learned to name them and learned to associate them with non-colour concepts (e.g. red being cheerful or baleful, signifying vigour or danger, being associated with heat etc…) If by some miracle I could truly experience your perception of redness, I might say it is furry in G minor, or whatever. 🙂 (Have you ever come across that story of Lafferty’s in which two people exchange their perceptions of the world?) Why is that a problem? If you take the binary code of e.g. a sort program and just relocate it to a computer with a somewhat different hardware instruction set, the result would almost certainly be nonsense. Is that a mystery? (NB: this is an analogy; it does not require equating brains with hardware and mind with software.)
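
    To make the binary-relocation analogy concrete, here is a minimal sketch (illustrative only, using Python’s struct module): the same four bytes decode sensibly under the convention that produced them, and as gibberish under a different one.

    ```python
    import struct

    # Illustrative only: the same raw bytes "relocated" to a machine that
    # decodes them under a different convention come out as nonsense.
    raw = struct.pack("<f", 3.14)           # 3.14 encoded as a little-endian float
    as_float = struct.unpack("<f", raw)[0]  # the original "hardware" recovers ~3.14
    as_int = struct.unpack("<i", raw)[0]    # a different decoding yields 1078523331
    print(as_float, as_int)
    ```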

    So I reckon mystery only arises if one mixes physicalist and mentalist discourses in an attempt to equate that unique, subjective experience of redness with some specific scientific formulation independent of such unknowable contingent factors. And as Davidson argued, monist physicalism does not entail this being possible. Heck, even modern computers no longer feature a necessary connection between software and hardware states! Why should we expect such a connection between minds and brains and why should we claim a lack of such nomological connection to be an argument against physicalism?

    (P.S. I realise some similar points have already been made in other ways — this took a few days to construct to my satisfaction and I can’t face re-doing it in the light of ongoing exchanges. Still, hopefully it brings something new-ish to the debate. :-))

    1. Thanks Mike. Hopefully it was clear from the post that I was describing the common sentiment, and that my answer to how biology pulls it off is that it doesn’t. The real question is why we think it does. Which fits with what you describe as the default assumption: that it doesn’t exist. Although my inclination is more to say it doesn’t exist with the properties commonly ascribed to it. But if we stick to the strong version, then I definitely don’t think that exists. (I rarely buy into the strong versions of concepts.)

      You say phenomenal consciousness neatly isolates the kernel of the problem. But it seems like common descriptions of it include a lot of functionality. And Block himself admits this in his paper. I think that’s why so many people feel comfortable using that phrase in a weaker sense of just referring to how the first person experience seems. But of course that represents no great difficulty, no explanatory gap or hard problem. It’s only with the stronger version that we end up with the gap.

      On mapping physicalist and mentalist discourses, I both agree and disagree. I agree that no mapping can happen with the stronger sense of mental, where we take everything implied by our experience as real. That’s like trying to map the events in the Star Wars universe to our own. It can’t be done in any coherent manner. But we can of course map the appearances of the SW universe to our own with no problem, at least if we have knowledge of modern CG and other movie making techniques.

      I’m onboard with your discussion of color experience. Although I actually think a functional description of it is possible. I don’t think Mary gains any new knowledge. It’s just that the functional description won’t be an actual exercise of the ability of color discrimination. That and the experience of any particular color includes so many associational and affective reactions that describing all of it would take an enormous amount of description. It’s said a picture is worth a thousand words. But an actual experience of a visual scene is probably worth a lot more.

  13. I like ‘phenomenal’ as opposed to ‘mysterious’, which seems too vague. Then again, I’m just a philosopher. 🙂

    To be honest, the term ‘phenomenal’ doesn’t seem confusing until illusionists come along and make it confusing. I say it’s on them to be clearer.

    “Rey speculates that, if w-consciousness is the only form of consciousness that exists, some existing simulations of pain might actually be instances of pain.” I mean…really? So he’s saying being in a simulation of pain is no different from actually experiencing pain? That just seems perverse. I can’t believe he really believes that.

    “Which makes the qualifier “phenomenal” ambiguous. My impression is that most philosophers tend to use it in the strong sense and most scientists in the weak one, with exceptions on both sides.”

    I think you’re right. I don’t think it’s terribly confusing most of the time; you just have to make allowances for who’s talking. When scientists use the term, I don’t automatically assume they’re talking about the same thing philosophers mean by it—after all, as scientists, they aren’t dealing with the phenomenal realm. Usually it turns out they’re talking about observable correlations to experienced phenomena. It’s only when they mix up the philosophical version with their scientific version that things get confusing, and that seems to happen more in specific cases when they’re tackling the concept of consciousness head-on. (As opposed to, say, when they’re merely trying to talk about animal intelligence or something like that.)

    Chalmers’ term “awareness” seems problematic to me too, though I don’t think it would be once I understood he was using it in a special way. (Although I did try reading him once. Ugh.)

    When it comes down to it, language moves and wiggles, so we all just have to keep explaining ourselves. That said, taking a term that generally means one thing and forcing it to mean the opposite just smells fishy…and if they don’t bother explaining why, I tend to think they’re deliberately trying to muddy the water.

    1. I do think most illusionists are reasonably clear if read at length. For example, in his paper, Frankish elaborates on the notion of phenomenal consciousness he’s attacking. But it is true that he’s often not as clear as he could be. The problem is most people don’t read illusionists at length.

      On Rey and the pain comments, I probably should just quote the relevant section from his paper. He’s saying this after quoting someone who has developed a computer model to test pain treatments.

      One wonders what they’re thinking when they run their model on a computer: if their theory were true, wouldn’t it imply that the computer would be in whatever pain state they were modelling? Perhaps the model has to be incorporated into the rest of a full psychology: but precisely how and why? Surely vast parts of our psychology are irrelevant to an experience of pain. Notice that the issue can’t be evaded by claiming that a model is always different from the phenomena it models: the puzzling question is what would one have to add to an implementation of the model on a computer for it to be in genuine pain, and why they don’t consider it in advancing what they take to be an account of pain.

      …Of course, it’s likely there are further computational processes that might well need to be added. However, in discussing the issue over several decades in Europe and America, I’ve found that this is not the worry audiences usually raise. Overwhelmingly, people feel that something crucial has been left out that is decidedly not capturable computationally, but which they also feel at a loss to otherwise specify.[6] I’m, of course, not about to argue that something of this sort has been left out. I suspect nothing has been. I’m just struck by the robustness of the intuition that something has, and wonder how this reflects on our — certainly my own — ordinary concept(ion)s of persons and minds.

      Frankish, Keith. Illusionism (Journal of Consciousness Studies). Imprint Academic. Kindle Edition.

      So no, he doesn’t believe it, but he does view it as possible in principle. He fully acknowledges how powerfully counterintuitive that conclusion seems.

      Yeah, my issue with Chalmers’ proposed naming scheme is that intellectual honesty requires frequently reminding the reader how those words are being used. And he’s actually pretty bad about using the word “consciousness” in his proposed manner without that clarification. Once you understand what he’s doing, it puts what he’s saying in a different light. But it allows him, and many like him, to say the illusionists are denying “consciousness” or “pain”, even when they accept the functional versions of both.

      I totally agree with your final point. The language landscape is what it is. We all have to deal with it as best we can, and work to make our meaning as clear as possible. I’m actually constantly surprised by how often people disagree with this.

      1. I have to admit, I don’t read illusionists at length either. I guess I’m guilty of being dismissive before knowing what I’m dismissing, but I figure if I can’t understand what they’re saying well enough to get sucked in and I don’t have a class paper to write or some other reason to pull out my weed whacker, I’ll move on to something else. Life’s too short.

        Honestly, I’m not sure I followed that excerpt. My take on it wasn’t what yours was, but I might have misread something. It sounded to me like he’s pretty confident the computational model could experience pain if something were added to it—more computations—and is concerned about the ethical implications of testing simulated pain treatments. He also seems astounded that most people think computers can’t be made to feel pain.

        You’ve got a good point about having to constantly remind the reader of an unusual use of a term…and there’s also the danger that you, the writer/thinker, could forget the special way you’re using the term and get yourself into a tangle.

        1. Totally agreed on life being too short. There’s a lot of stuff I won’t spend time on, either because I’m not interested, or because I already investigated it and don’t see anything new. On the other hand, in areas I am interested in, I’ve found it’s usually worth at least reading some articles or papers on ideas I disagree with, even if they initially seem crazy. They may not convince me, but I usually at least learn where the proponents are coming from.

          I’d say you’re reading Rey right about the computational model. My point was based on the fact that he doubts the existing ones are complete. But he doesn’t think anything in addition to the computation is needed.

          Good point about the writer forgetting how they’re using a term. We often take those kinds of definitional shifts to be a deliberate form of deception, but frequently the writer doesn’t even realize they’re making them. Of course, sometimes it actually isn’t a shift, but they’re just not explaining well enough why their preferred definition still holds.

    2. I couldn’t agree with what you’ve said here more, Tina. I think you’d enjoy the following 2011 four-paragraph post by Eric Schwitzgebel. The title is “Obfuscatory Philosophy as Intellectual Authoritarianism and Cowardice”. (When reviewing his coming book earlier this year I emphasized to him that he should include this as well, though I doubt he will.) https://schwitzsplinters.blogspot.com/2011/10/obfuscatory-philosophy-as-intellectual.html?m=0

      The point is that philosophers (and I’d include soft scientists in general) are able to gain more popularity by saying things that have speculative rather than specific meanings. Here charisma is what wins the day. That’s how I consider philosophy and soft science in general to work — the end goal is to assert obscure things with questionable meanings in ways that thus make you seem clever. And how might someone who actually does have good ideas fail in philosophy, psychology, psychiatry, sociology, political science, and so on? That would be presenting your ideas in ways that others are able to grasp and thus attack. The speculative nature of these domains mandates that even good ideas can plausibly be attacked, though it’s far easier to do so when one grasps what’s being said. How many of us have academic heroes in these fields who speak to be understood? Given the associated dynamics I suspect not many of us. Thus the continued failure in associated fields.

      1. Thanks for sharing the post. I enjoyed it.

        I think I agree with him. I do think there are some reasons to be intentionally obscure (Plato and Nietzsche come to mind as examples), but those reasons should be made clear. (Plato actually did think the highest level of wisdom—attaining the idea of the Good—was a wordless profound insight.)

        Kant is another case altogether. I totally agree with what he said there. I think Kant just needed a good editor, because I doubt he meant to contradict himself as often as he did, often in the same paragraph!

        1. Happy you liked it Tina. Let me also give you my assessment of Frankish’s position. It’s that he’s an illusionist about consciousness when people include notions of it that he considers dubious, like “ineffable”, “intrinsic to matter”, “private”, and so on. He once challenged people to come up with an effective conception of consciousness that’s also “substantive”, which is to say it also describes how the brain creates it. He then went on to observe ways in which each participant failed. Eric Schwitzgebel presented the only consciousness definition that he wasn’t an illusionist about, and graciously so. This didn’t quite do the job however since Eric didn’t describe how the brain creates it. His innocent/wonderful conception of consciousness can be found here if you’re interested: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/DefiningConsciousness-160712.pdf This will be included in his coming “Weirdness of the World” book, and I’d hope for it to help right a woeful consciousness field of study in general.

          I think Frankish can be beaten at his own game however. I presume you’re familiar with John Searle’s Chinese room? Apparently Frankish believes (as many in academia do) that consciousness exists by means of information processing alone. I’m an illusionist about that bit of speculation however. Just as Searle shouldn’t create a conscious speaker of Chinese with look-up tables to convert certain written characters into other written characters, the right inscribed paper that’s properly converted into other inscribed paper shouldn’t create something that experiences what you do when your thumb gets whacked. Why? Because in a natural world, inscribed paper should only be “information” with respect to a machine that those characters inform. Here the status quo seems to have subverted a “hard problem” by imagining that there’s merely a “programming problem”. Thus beyond just processing information, the brain should also be animating some sort of consciousness physics. My own suspicion is that consciousness exists as some element of the electromagnetic fields associated with neuron firing.
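
          To show how bare that look-up-table picture is, here is a toy sketch (the rulebook entries are made up for illustration; this is not Searle’s or Frankish’s own example): the “room” produces fluent replies while nothing in it understands a word.

          ```python
          # A toy Chinese room (hypothetical rules, purely illustrative):
          # input characters are converted to output characters by rote
          # lookup, with no understanding anywhere in the process.
          RULEBOOK = {
              "你好吗？": "我很好。",  # "How are you?" -> "I'm fine."
              "谢谢。": "不客气。",    # "Thank you."  -> "You're welcome."
          }

          def room(characters: str) -> str:
              """Return the scripted reply; nothing here speaks Chinese."""
              return RULEBOOK.get(characters, "？")

          print(room("你好吗？"))  # 我很好。 (fluent output, zero comprehension)
          ```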

          1. I have heard of the Chinese room, but I have to admit, I find the existence of phenomenal consciousness so bleeping obvious that I used to wonder why someone thought it necessary to come up with the thought experiment. Now I know.

            “He once challenged people to come up with an effective conception of consciousness that’s also “substantive”, which is to say it also describes how the brain creates it.”

            Assuming the brain does create it! I would ask him, why must consciousness be derived from the substantive? (I’m assuming by ‘substantive’ he means the physical, concrete, tangible, quantifiable.) I’m not saying it’s not, but it seems to me he’s making a bold and not even remotely obvious assumption. It’s up to him to defend his position.

            You’ll have to excuse me, I lean more towards phenomenology (Husserl and a tiny bit of Heidegger, not the stuff associated with existentialism, however). I’m not sold on the idea that consciousness is nothing more than computations in the brain. I take the view that consciousness most certainly exists and probably has some causal relationship to the brain, but who the hell knows how those interact. (I do hope the computer analogy falls out of favor; it’s like we’re talking about brains in vats.) If someone were to put a gun to my head and force me to solve dualism by reducing one thing to another, I’m more inclined to prioritize the supposedly mysterious phenomenal experience that follows me wherever I go.

  14. Mike,

    Every now and then you write something that is at the core of the problem such as this comment:

    “…I’ll just note that the only thing that needs to be true for a physical explanation is that introspection be no more reliable than any other type of perception, that it can lead to incorrect judgments and conclusions in our mind about its own operations. It doesn’t require denying all mental experience. Only that some of our judgments about it are wrong, the result of trying to use a functional self monitoring mechanism outside of its evolved role.

    Of course, the details of why we reach these incorrect conclusions do eventually need to be worked out by science. But I don’t see any deep metaphysical obstacles to that happening…”

    The reason “why” we reach these incorrect conclusions is psychical, not a matter of physics, and yes, there is a deep metaphysical obstacle that prevents meaningful progress. That obstacle is our functional self-monitoring mechanism, which is geared for self-preservation, not realms beyond self-preservation. This is where I like to quote Robert Pirsig:

    “Columbus has become such a schoolbook stereotype it’s almost impossible to imagine him as a living human being anymore. But if you really try to hold back your present knowledge about the consequences of his trip and project yourself into his situation, then sometimes you can begin to see that our present moon exploration must be like a tea party compared to what he went through. Moon exploration doesn’t involve real root expansions of thought. We’ve no reason to doubt that existing forms of thought are adequate to handle it. It’s really just a branch extension of what Columbus did. A really new exploration, one that would look to us today the way the world looked to Columbus, would have to be in an entirely new direction.”

    “Like what?”

    “Like into realms beyond reason. I think present-day reason is an analogue of the flat earth of the medieval period. If you go too far beyond it you’re presumed to fall off, into insanity. And people are very much afraid of that. I think this fear of insanity is comparable to the fear people once had of falling off the edge of the world. Or the fear of heretics. There’s a very close analogue there.”

    “But what’s happening is that each year our old flat earth of conventional reason becomes less and less adequate to handle the experiences we have and this is creating widespread feelings of topsy-turviness. As a result we’re getting more and more people in irrational areas of thought…occultism, mysticism, drug changes and the like…because they feel the inadequacy of classical reason to handle what they know are real experiences.”

    ________________
    pp. 151-2/373 of ZMM — Realms beyond reason

    1. Lee,
      On the metaphysics, I guess we’ll have to wait and see. Certainly if physicalism is wrong, then there will be something metaphysical involved.

      Thanks for the Pirsig quote. Not sure if I’ve heard of him, although the name sounds familiar. Maybe you’ve cited him before? But the Wikipedia article on that book is interesting. From the synopsis, it sounds like he advocates for a balance between reason and a romantic approach focused on gestalts.

      I have no problem with that when trying to come up with possible solutions. But I think when evaluating those possible solutions, reason and empiricism are both needed. I do think when romantic gestalts are all we have, despite extensive investigation, relying on them becomes problematic. But if we’re just talking about inspiration for new ideas, I’m onboard.

  15. I mentioned in the post that I wasn’t able to find Georges Rey’s 1983 paper online. It turns out that it’s in the beginning of the book below, and the main body of Rey’s paper just fits within the Amazon preview (although for the paper’s appendix, citations, and footnotes, you’d have to rent or buy the book). It is a pretty strong form of eliminativism, but with nuances, and as I mentioned, his view did moderate later. Interesting reading, once you allow for the age.

    Incidentally, the second paper in this journal / book is a paper from Bernard Baars discussing what appears to be an early version of global workspace theory, only he hasn’t found the word “workspace” yet, so “global data base” gets used instead.

  16. Apparently Husserl and phenomenology are pretty far from my own focus. I do consider introspection to be of tremendous potential value in itself, though I demand that it work its way back to science for empirical verification in at least a conceivable capacity, if in practice not always directly. Actually I don’t know of anyone more radically so disposed than myself. I believe that science suffers horribly today without various generally accepted principles of metaphysics, epistemology, and axiology. Thus I’d like a new community of respected professionals to rise up to provide scientists with such founding principles so that science might progress in ways that it otherwise cannot. I think I’d call these professionals “meta scientists” rather than “philosophers” so that general philosophy might continue on in a traditional sense without ever achieving any broad agreements. For those so inclined, philosophy would effectively remain an art to potentially appreciate. Conversely, the purpose of this new society would be to reach various apparently effective agreements from which to better found science.

    (I should mention that I consider you under absolutely no social obligation whatsoever to continue engaging me here Tina. This is simply the kind of thing that I very much enjoy. I appreciate that you’ve given me an opening to generally make my case. Beyond you it is my hope that others will find my speculation interesting and perhaps even inquire.)

    On the presumption that consciousness is brain based, I have two observations. The first is that since various drugs known to affect the brain are also known to affect consciousness, this does seem likely. There’s always the potential that the brain is only intermediately involved in consciousness, and thus otherworldly dynamics of some kind might be giving consciousness its substance (as Descartes figured). I handle this possibility by means of the second observation, or my single principle of metaphysics. It reads, “To the extent that worldly causal dynamics fail, nothing exists for us to potentially figure out”. So we’d presume no such magic, not because this could ever be known, but rather because in the event that magic isn’t involved, science might thus figure some things out about associated worldly dynamics. (Magic inherently resides beyond the realm of science, since science is only appropriate for exploring worldly causal dynamics.)

    Whether spooky or not, I’m with you on never denying personal phenomenal existence itself. And while there’s a common belief that consciousness exists as computer information processing alone (which I argue is spooky), consider the following computer-based conception of consciousness. I’ve become quite smitten with it.

    Begin by conceptualizing the evolution of non-conscious central organism information processors in multicellular life. They accept information through sense organs, as well as process this algorithmically for instruction to output organs that effectively react. This is generally observed in how the central nervous system operates many forms of life, by means of neurons with synapse-based connections. Here life should essentially have become robotic. But just as our robots depend upon programming instructions to do what they do, these biological robots should have as well. Thus both should tend to fail under novel situations that they can’t always be programmed to effectively deal with. Algorithms alone should not create an experiencer with the potential to do things on the basis of phenomenal understandings.
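
    As a toy rendering of that picture (my own sketch; the stimulus-to-action wiring is invented for illustration), a purely algorithmic “biological robot” maps sensed inputs to outputs, and simply fails on novel situations it was never programmed for.

    ```python
    # Hypothetical reflex wiring for a non-conscious "biological robot":
    # sense -> fixed algorithm -> output, with no experiencer in the loop.
    REFLEXES = {"light": "approach", "heat": "withdraw", "touch": "recoil"}

    def robot_step(stimulus: str) -> str:
        # There is no programming for novelty, so the robot just fails.
        return REFLEXES.get(stimulus, "fail: no programmed response")

    print(robot_step("heat"))     # withdraw
    print(robot_step("vinegar"))  # fail: no programmed response
    ```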

    In an otherwise phenomenally valueless physical world however, let’s say that there are causal dynamics by which something can feel good/bad and thus be sentient in the form of certain types of electromagnetic fields. Also observe that the small charges associated with neuron firing create electromagnetic fields. Thus what I’m proposing here is that when life evolved to essentially become biological robots, there may have been a point where certain neuron firing created a functionless (or epiphenomenal) experiencer of existence as well. So the theory is that since these bio-bots would tend to fail under more open circumstances, by chance these otherwise irrelevant experiencers (in the form of certain neuron produced EM fields) were given the opportunity to affect organism function. Though failure should generally be expected, theoretically certain versions should have made choices well enough for their mutations to get passed on. This in itself should open up the possibility for the step by step evolution of advanced phenomenal existence, as in electromagnetic fields which constitute an experiencer with detailed vision, hunger, frustration, joy, and so on from which to think about what would be in its best interests. From here such electromagnetic decisions should incite the brain to act accordingly through associated muscle function. Ultimately you and I should emerge to think about this question, and each of us would phenomenally exist in the form of certain cranial EM fields of the right parameters.

    If you do understand my meaning here, I’d expect a sign of this to be a dismissive smile on your face right now. People propose weird crap like this for consciousness all the time. What you may find unusual in this case however is the potential for falsification. Not so for dualism, panpsychism, informationism, and popular proposals in general as far as I know. Apparently none have any potential to be empirically refuted, since none make claims specific enough to potentially be refuted. Conversely this proposal has reasonable potential to be checked, since brain produced electromagnetic fields have known parameters, and also the potential to be causally affected.

    The leading proponent of this position is Johnjoe McFadden, a professor at the University of Surrey in the UK. He began developing it in 1999, I believe. Some evidence does exist that it’s true. For example, the best neural correlate for consciousness we have right now is the synchronous firing associated with his theory, by which these fields are thought to get into the proper electromagnetic zone. What I propose however is that scientists propagate appropriate EM fields in the heads of conscious volunteers to see if their consciousness would be affected for oral report (since waves of one kind tend to affect other waves of that kind). If enough testing were done to determine it highly unlikely that consciousness exists in the form of certain neuron produced EM fields, then fine, the theory should be wrong. But if scientists could reliably produce certain cranial fields that affect someone’s vision, smell, warmth, or any other kind of phenomenal existence for oral report, and could perhaps even produce specific phenomena on demand, then it seems to me that McFadden’s theory would become generally accepted. I’d expect success to usher in humanity’s most profound discovery to date.

  17. Mike, you (and a few of your supporters) have once again asserted that “the word ‘consciousness’ has a variety of meanings”, and by meanings you clearly mean definitions, since that’s what this post is about. As I’ve commented previously, although I have searched extensively, I’m unable to find those many definitions and, when directly asked, you seem to have discovered the same, since you have so far failed to provide any definitions. I realize that others, like Chalmers, have claimed many definitions also, but they also fail to provide any of the supposed multiple definitions they are imagining.

    I’ve provided several definitions in my comments here—definitions from qualified specialists that define consciousness as feeling (starting with William James) as well as my own feeling-based definition which can be summarized as “a simulation in feelings of an organism centered in a world.” Your objection to all of those was that the word ‘feeling’ cannot be defined, which I refuted by pointing out that it’s perfectly well defined through ostensive definitions, e.g., defining by example, to which you responded that ostensive definitions were commonly used with language learners, as if that obliterated their validity, which it does not. (Note your own ostensive definition: “[Red] apparently can only be pointed at and agreed that yes, there is red.”)

    After claiming incorrectly that ‘feeling’ cannot be defined, you drop out of the conversation, never having supplied even one credible definition of consciousness. Yet, you repeatedly claim such definitions are abundant. Not only that, but, even though many recognized domain experts define consciousness as feeling, the word ‘feeling’ never appears in your own discussions of consciousness or your non-hierarchical “hierarchy” lists of items.

    From your writings I often suspect that you’re uncertain as to what a definition actually is, at least in this case what constitutes a definition of the phenomenon of consciousness. I think we can easily agree that a definition is a statement of the exact meaning of a term, facts-of-the-matter—can we not? For the word ‘consciousness,’ you appear to believe that all of these statements—about consciousness’ generation, about the contents of consciousness, about the supposed uses of consciousness, about information processing and many more—are definitions of consciousness. They are not. A definition of consciousness will clearly and precisely state the meaning of the word consciousness—will unambiguously tell us what the biological phenomenon IS.

    In this post you claim that there are two categories of consciousness definitions, functional and mysterious. So I am asking you to:

    1. Please provide two or more credible and precise definitions of “functional” consciousness and identify some supporters of that definition.

    2. Please provide two or more credible and precise definitions of “mysterious” consciousness and identify some supporters of that definition.

    I’m looking forward to your reply and, of course, to evaluating the numerous consciousness definitions you provide. And, if the definitions are not numerous, consciousness is not in the eye of the beholder.

    1. Stephen,
      On the two categories, I linked to six papers in the post about it, and a seventh in the thread. As usual, you’re either overlooking or ignoring them and claiming nothing has been provided. If you want to have a sustained conversation with me, this isn’t the way to do it.
