There is no phenomenality without access

How do we know whether any particular system is conscious?  In humans, we typically know because they can talk about their conscious experience.  Historically, if we can report on it, it’s conscious; if we can’t, it’s in the unconscious.  But this raises a difficulty for any entity that doesn’t have language, including non-human animals, infants, and brain-injured people.

One example in the brain-injured category is people with blindsight.  Damage to the visual cortex can lead to a condition known as cortical blindness.  However, a person with this condition may still be able to react reflexively to something in their visual field, or, when forced to guess whether something is in front of them, do so with a high degree of accuracy.  It’s called “blindsight” because the patient is consciously blind in the affected part of their visual field, but still appears able to unconsciously perceive things in that field (likely using subcortical connections).

Apparently some are now arguing that blindsight is actually conscious sight, just not conscious sight the person can introspect or report on accurately.  There is a paper in preprint (which I have to admit I’ve only glanced at) arguing that it is actually degraded conscious perception.  The argument seems to be that patients aren’t accurately reporting their experience, that they’re being too conservative in judging whether they perceive something.

This resonates with a movement in neuroscience to isolate the neural correlates of consciousness apart from the neural correlates of reporting using no-report protocols, in essence, to isolate reportability from reporting itself.  While this makes sense (the historical standard is can report, not does report), others argue that even this isn’t sufficient, that we must isolate conscious perception from post-perceptual cognition.  Some even argue that we should include processing not accessible for report.

Much of this seems to be taking us further from the historical standard: consciousness is what we can report on.  The type of consciousness being pursued appears to be Ned Block’s conception of a phenomenal consciousness that is separate and apart from access consciousness.  P-consciousness is generally held to be raw experience, while a-consciousness is cognitive access to that experience for decision making and report.

A proponent of an independent p-consciousness would argue that it exists in the sensory or reactive processing that takes place prior to cognitive access.  Someone who sees p-consciousness as just a-consciousness from the inside would argue that the pre-access processing is preconscious, generating content that has the potential to be conscious, but isn’t yet.

One argument commonly put forward for the view of an independent p-consciousness is its implications for animal consciousness.  But, aside from the fact that implications are irrelevant to whether the view is true, fish do have at least glimmers of cognition, as Michael Woodruff points out in his Aeon piece this morning arguing that fish are sentient creatures, indicating that at least some access is going on in them.  We don’t need p-consciousness to be separate to regard them as minimally conscious, and if we don’t need it for them, we certainly don’t need it for mammals or birds.

All of which brings us back to the question: what is necessary for consciousness?  Where are the boundaries between the unconscious and conscious?  Animals can’t report, but many appear to have at least some of the type of cognition humans can report on.

Personally, I think the idea of a p-consciousness separate and apart from a-consciousness is incoherent, a notion we’re only tempted to hold due to the vestiges of Cartesian dualism, an assumption that if we just have the sensory processing by itself, there will still be an audience.  In a non-dualistic view, only cognitive access provides that audience.

But maybe I’ve missed something?

36 thoughts on “There is no phenomenality without access”

  1. As to leaving anything out I cannot contribute, but it does seem to me that psychologists and physiologists, in the absence of better tools, studied people who were injured by accident or disease. So the loss of some ability could shine some light on how that ability manifests itself. This was a reasonable approach … when we didn’t have better tools.

    The problem with this is that such an injury may also result in an adaptation that provides some replacement of the functionality lost, but in a completely different manner than before. So I do not know what blindsight has to teach us, if anything.

    How “normal” people function needs to be studied in new and inventive ways, but these take time to develop, even to come up with. It was just recently shown that when we imagine things, we use the same brain locations that we do when observing the real thing. This makes sense; it jibes with what we are learning about how memories are stored. We need to explore further the line between conscious and subconscious brain activities, for I think there we will learn a great deal about how consciousness manifests itself. We might also find that the subconscious (while awake) is just our conscious functioning with a few switches turned off or on.


    1. We still learn a lot from people with injuries. But we probably learn the most from what they lose in the short term. Long-term adaptations blur that, although in adults adaptation isn’t unlimited. In the case of blindsight patients, while they do have capabilities, they definitely lose a lot. They don’t have any kind of detailed discrimination for object identification, among other things.


  2. “Personally, I think the idea of a p-consciousness separate and apart from a-consciousness is incoherent,”

    That’s my sense of it as well. It feels like p-con is just a label for the lower levels of processing, and a-con is a label for the top level.

    I saw headlines for articles about this blindsight thing, but didn’t read any of them. As a stab in the dark, maybe the brain is so cross-wired that low-level processing leaks to other systems that can react in crude physical ways. (Can a blindsighted person catch or deflect a ball tossed at their head?)

    I’m thinking how easily we walk around, say through a store, and don’t really think about how we’re weaving between obstacles. There seems some low-level stuff going on there. Maybe that’s what picks up leaked signals from the lowest level p-con of the visual system?


    1. I think low level processing is a good way to describe how advocates of an independent p-con see it. It’s a view that includes theoretical assumptions. But if we regard p-con as simply what consciousness is like from the inside, then the low level processing by itself isn’t sufficient.

      Blindsighted patients can succeed at some forced-choice guessing, and they can react reflexively to objects moving toward them. Some can even successfully navigate a hallway with obstacles, all while insisting they’re seeing nothing.

      But the capabilities are still limited. I don’t think they could catch a ball or do any task that required high level coordination. And from what I remember of the accounts, they wouldn’t be able to identify the objects they were reacting to, only whether they’re there or not. It’s all fairly consistent with upper brainstem functionality communicating with their limbic system, bypassing the cortical visual system, which is how I think it’s currently thought to work.


  3. I’m thinking what you’re missing is the consideration of different subsystems within the brain. You associate “your” consciousness with a specific subsystem, namely, the one that can report, otherwise known as the autobiographical self. But there are other subsystems.

    I propose that Consciousness is about certain kinds of events, specifically, representations. These representations can be simple or complex. P-Consciousness refers to a specific subset of such, mostly complex, representations, specifically resulting from sensory (or internal) inputs. Now there are multiple systems that can respond to these representations. One such subsystem, the global workspace, can re-represent the basic representations, alone or in combinations. And then there are (probably) multiple subsystems that can respond to the re-representation, one of which is the reporting system, but others include a memory system, an emotion generating system, etc.

    So the way I would translate what you, Mike, are saying into my terms is that it’s only conscious if it’s a representation created by the workspace system, which gets its content by accessing the sensory et al. systems, and is accessible by the reporting system as well as certain other systems, whether or not it is actually accessed by the reporting system. Which brings up the question: would it still be conscious if the reporting system were cut off from access (damage to Wernicke’s or Broca’s areas?), but the other systems were not? What parts (systems) are necessary and sufficient?

    *
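
    [To make the subsystem picture above concrete, here’s a minimal toy sketch in Python. All names and structure are hypothetical illustration, not any specific global workspace implementation: sensory systems produce representations, the workspace re-represents one, and multiple consumer subsystems respond to the broadcast.]

    ```python
    # Toy sketch: a workspace broadcasting a re-representation to multiple
    # consumer subsystems (reporting, memory, emotion). Hypothetical names.
    class Workspace:
        def __init__(self):
            self.consumers = []  # subsystems subscribed to the broadcast

        def broadcast(self, representation):
            # Make the winning representation available to every subsystem.
            for consumer in self.consumers:
                consumer(representation)

    ws = Workspace()
    ws.consumers.append(lambda r: print(f"reporting system: I see {r!r}"))
    ws.consumers.append(lambda r: print(f"memory system: storing {r!r}"))
    ws.consumers.append(lambda r: print(f"emotion system: reacting to {r!r}"))

    # A sensory subsystem wins access with its representation:
    ws.broadcast("sudden thumb pain")
    ```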


    1. Reading your description, my question would be, is any kind of representation, in and of itself, sufficient for consciousness? If so, what makes a neural representation more conscious than a statue in a park, or an image on my hard drive? If not, what has to be included to make that representation conscious?

      Your description of the global workspace doesn’t match my understanding. Although I’m learning that people have a wide array of understandings of GWT, even among people in cognitive studies.

      Anyway, I don’t see it as a representation created by the workspace system based on some prior sensory one. (That more resembles the HOROR variant of HOT). I see it as numerous specialty components in that system accessing the sensory representation at various levels. It’s really more about the effect that representation in the sensory cortices has throughout the thalamo-cortical system. “Fame in the brain” as Dennett says.

      Whenever we talk about content being reportable, we’re talking about a healthy complete system. If damage makes report impossible, would the resulting system still be conscious? It depends on exactly what is damaged. Lamme makes the point that in split-brain patients, the right brain (usually) has no language and so can’t report, yet we still regard it as conscious. But the right hemisphere is a complex system on its own, with its own thalamus, hippocampus, prefrontal cortex, parietal lobe, etc.

      The more components are functional, the easier it will be for us to see it as a conscious system. If too many are missing, it becomes increasingly harder. There’s no objective line between the two, no fact of the matter.


      1. Simple answer, no, representations are not sufficient for consciousness. But I would say that representation is a necessary part of a conscious event. What has to be included is an interpretation of the representation.

        As for the global workspace, I predict I’m right and you’re wrong, but time, and neuroscience, will tell. 🙂

        As for reportability, I’m pretty sure the split right brain can report, just not verbally.

        In any case, as you like to say, there’s no fact of the matter whether something is conscious. The way I like to put it is: all you can do is define your system and then ask what conscious-type things (involving representations and interpretations) can that system do. So you can define your system as the brain, or the right brain, or the visual cortex, or just the thalamus, etc. Whatever is useful.

        *


        1. Interpretation of the representation sounds interesting. If the system in question merely reacts to the representation reflexively, would that constitute an interpretation? Or does it take using the representation in some more sophisticated manner?

          Do we ever predict our own view is wrong and the other guy’s right? I’ll actually predict that we’re both wrong, but of course I think I’m less wrong. 😉

          Good point about non-verbal report, and a weakness in Lamme’s argument.

          Consciousness is definitely in the eye of the beholder. All we can do is explore the implications of a particular view of it. Since our intuitions about it aren’t consistent, those implications almost always have bullets in them.


          1. I’m glad to see you engaging with “representation”, as I think that is the core of consciousness. And it gets complicated. So when you ask if a system reacting reflexively to a representation constitutes a conscious-type interpretation, the answer depends on the nature of the “system” and the mechanism reacting. Thus, the answer is maybe.

            To be specific, I would say that if the representation was created for the purpose of carrying mutual information x (note: the representation will necessarily carry lots of different additional mutual information, such as y and z), and the responding/interpreting mechanism was created for the purpose of responding to x, as opposed to y or z, then that whole process (creating the representation and responding to the representation) is a conscious-type process/event. So a representation by itself is not Consciousness, but the standard knee-jerk reflex is a conscious-type event. I understand that you don’t see this as Consciousness if nothing else happens, which is fine. What I’m saying is that, whatever additional arrangements you require for Consciousness, this type of representation, with the above restrictions, will be at the core.

            Side note on qualia: I would say the qualia for a given experience relates to the mutual information x mentioned above. However, the word “qualia”, from Latin, is a question word meaning which type/what kind of thing/what category. The fact of the question assumes more than one possible answer, “this one” instead of “that one”. Thus, it would be hard to justify a reflex as having a qualia, as there is only one possibility. I simply think of the reflex as a degenerate case. The more normal, more interesting case comes about when you have a single system that can generate more than one representation. A semantic pointer, and I think a global workspace, would be such a system. Then it would make sense to ask which representation, and so which “x”, was being created and responded to.

            *
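
            [As a toy illustration of the mutual information idea above: here’s how the quantity works for a stimulus X and a representation vehicle R. The labels and counts are invented purely for illustration.]

            ```python
            # Toy computation of mutual information I(X;R) between a stimulus
            # and a representation vehicle. All data are invented.
            import math
            from collections import Counter

            pairs = [("tap", "fire"), ("tap", "fire"), ("tap", "quiet"),
                     ("no_tap", "quiet"), ("no_tap", "quiet"), ("no_tap", "fire")]

            n = len(pairs)
            p_xy = {k: v / n for k, v in Counter(pairs).items()}
            p_x = {k: v / n for k, v in Counter(x for x, _ in pairs).items()}
            p_r = {k: v / n for k, v in Counter(r for _, r in pairs).items()}

            # I(X;R) = sum over (x, r) of p(x,r) * log2(p(x,r) / (p(x) * p(r)))
            mi = sum(p * math.log2(p / (p_x[x] * p_r[r]))
                     for (x, r), p in p_xy.items())
            print(f"I(X;R) = {mi:.2f} bits")  # > 0: the vehicle carries info about X
            ```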


          2. I’ve always thought representations were crucial to consciousness. If I ever gave you the impression I didn’t, it wasn’t intended. But I definitely agree there’s more to it.

            If I understand the concept of mutual information correctly, then I agree. Although my way of putting it would be that the representation is a prediction framework. To the extent it’s isomorphic with what it represents (the same as mutual information?) it will provide accurate predictions. Sensory representations (perceptions) enable a system to react to things in the environment before physically touching or tasting them. Meaning they’re a consequence of distance senses (sight, hearing, smell).

            On knee jerk being a type of consciousness, that does feel pretty far from one to me. Where in the knee jerk is there a representation? Not that I think a representation in and of itself is sufficient. We have reflexive reactions generated by the upper brainstem which we only perceive as an impulse that we may be able to override, but have no introspective access to the details of.

            Of course, there are more than just sensory representations. There are also affects, which I take to be representations of lower level reflexive reactions.

            On qualia, if I’m understanding correctly what you’re saying, they relate to discriminations, and since the knee jerk doesn’t make any of those, then it has no qualia. I can see that. But the most common question for qualia is why they feel like something, and I think that inevitably requires both the sensory and affective representations bound together. Qualia seem like a combination of these bound representations.


          3. Mike, when you say “the representation is a prediction framework” you are definitely using a different sense of the word than I am.

            There are (I think) two standard meanings of “representation”. One is a reference to the process of representing, which usually involves a representation vehicle (like a word on a page) and an interpreting mechanism, like a person reading the word. The other standard meaning is just as the representation vehicle, leaving the “vehicle” part off. I don’t see how a prediction framework fits either of those. A prediction framework seems like it could be part of the interpretation mechanism, or it could be part of what creates a representation vehicle. Which is it? For any given event, you should be able to identify the vehicle and the interpreter.

            So using the knee jerk example, the impact to the correct part of the knee generates a neural pulse along a nerve that ends in the spinal column, where the neuron generates neurotransmitters. Those neurotransmitters are representation vehicles. They bear mutual information with regard to the physical impact on the knee. Those neurotransmitters cause another neuron to fire, which then causes a motor neuron to fire. [I could have the physiology wrong, but I think the reflex involves three neurons this way.] The motor neuron firing causes the muscle to flex.

            In each case, neurotransmitters are a representation vehicle, and the firing of a neuron in response is an interpretation. But when looking at a system you have to draw the boundaries that you care about, so in this case I would say the first neurotransmitter is the representation vehicle and the combination of the subsequent neurons and muscle action was the interpretation.

            Given the above, I’m wondering how you would describe sensory representation and “affect” representation. (I’m skeptical that the latter is a coherent concept.)

            *


          4. James, it definitely seems we mean different things by that word. Of course, the word “representation” is pretty protean, meaning all kinds of things in different contexts. (Consider “political representation.”) In the context of mental representation, I usually take it to mean mental imagery, neural image maps, models, schemata.

            Obviously in such a view, the neurotransmitters in the synapse between the last afferent neuron in the knee jerk reflex and the first efferent neuron aren’t a representation. I might consider them an indicator, or a sign, or a representation in some symbolic sense, but not a representation in the manner usually meant by mental representation. Importantly, the symbolic sense only gets its meaning from the causal history that led to it, and the causal effects it has. For me, that amounts to just information.

            On the question between vehicle and interpretation, that seems to some extent to be a distinction between data and processing. I can’t see that the nervous system bothers with that distinction. Data and processing are entangled from the initial sensory neuron all the way through the brain and back to the neuromuscular junction (or adrenal gland). We can attempt to make those distinctions, but we should remember that that’s something we’re applying after the fact, that we’re imposing on a system that doesn’t itself make that distinction (and so will at times violate it).

            The reason I say that mental representations are prediction frameworks is that the regions that form them have constant recurrent processing during perception, with the early sensory regions propagating up signals from the sense organs, and the higher level regions propagating down predictions. In that sense, the representation is a prediction fine-tuned by ongoing error correction (when we’re actually sensing the thing being represented). When you throw in the fact that this is affected by signals coming in from the motor and affective regions, that what we are doing and feeling is one of the things that affects what we perceive, it becomes even more evident that this is a process. (There’s a toy sketch of the loop at the end of this comment.)

            In terms of affects being representations, I’m not clear what the basis is for your skepticism, under either of our definitions of representation. Are you aware of your hunger, fear, pain, etc? And do you use your awareness of these things in deliberations? If so, then it seems reasonable to assume there are information structures somewhere holding that information. Joseph LeDoux uses the term “fear schemata” for fear in particular, and Frankish used “response schemata”, which I thought was catchy, as an adjunct to Graziano’s attention schema. But I take it all to refer to the same concept. Or am I missing something?
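
            [A minimal sketch of that prediction-and-error-correction loop. The numbers are invented; this is just the shape of the recurrence, not a model of any actual cortical circuit.]

            ```python
            # Toy recurrent loop: a higher level holds an estimate, predicts
            # the incoming signal, and corrects itself by the prediction error.
            import random

            random.seed(0)
            true_signal = 3.0  # what the sense organ keeps reporting
            estimate = 0.0     # the higher level's running representation
            gain = 0.2         # how strongly errors correct the estimate

            for _ in range(50):
                sample = true_signal + random.gauss(0, 0.5)  # noisy bottom-up signal
                error = sample - estimate                    # input vs top-down prediction
                estimate += gain * error                     # recurrent correction
            print(f"settled estimate: {estimate:.2f}")       # converges near 3.0
            ```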


          5. So yes, what I’m calling representation is what you’re calling data processing. And what you’re calling representation is just a certain kind of data processing. Specifically for sensory “models”, that data processing involves inputs from multiple sources, and outputs to multiple responses. Some of those outputs are feedback to the processes which generated the inputs in the first place, while other outputs might be to “affect” systems, and yet other outputs might be to a multi-purpose “global broadcast” system.

            So the mechanisms responsible for what you call representation are what I call unitrackers. They work largely as you describe. They take data (representation vehicles) from multiple sources and present data (a representation vehicle) to multiple processors.

            The reason I prefer my use of the term is that it provides a direct, physically measurable (in theory) reference and explanation for qualia. Specifically, qualia is a reference to the mutual information associated with the representation vehicle (data) generated by a unitracker (model processor). Note: the “experience” associated with this qualia involves, necessarily, the subsequent interpretation (further data processing) of said representation vehicle (data).

            As for affect, I’m going to assume this system: you see a tiger nearby and the unitracker for “predator” gets triggered. This signal gets to the amygdala, which sends a signal to the adrenal glands, which release adrenaline into the bloodstream. This adrenaline has global effects (increased blood pressure, heart rate, etc.). Some of these effects will include direct effects on neurons (spiking more often, or less often, or something). The affect of “fear” is constituted by this global response. There may be a unitracker/model for “fear”, but this unitracker is not the experience of the adrenaline. The adrenaline causes multiple changes to our body, and these changes are experienced via interoception. Our experience of fear is the collective interoceptive experiences. We can group this collection into a concept, “fear”, and have a unitracker for that concept, but the activation of that concept is not the activation of the fear affect.

            So the question is, how do you apply “representation” to this understanding of affect? I don’t see a “model” (information processing unit) for “fear”. I only see a model for a concept of “fear” which is a reference to a collection of interoceptive models, so that we can refer to the collection as a group: the feelings I experience under certain circumstances.

            *


          6. I actually see mental representations as a hierarchy of unitrackers (if they equate with Damasio’s convergence-divergence zones), albeit perhaps ultimately converging on one unitracker for recognition of a particular object or concept.

            The model of fear you describe is James-Lange theory, that the underlying subcortical processes that trigger the physiological aspects of a survival response are only experienced by higher level circuitry through the interoceptive system. But there are two problems with this classic view.

            First, the connections between the subcortical systems that start the whole thing and those higher level systems are abundant. (William James and Carl Lange in the 19th century couldn’t have known this.) There’s no good reason to suppose that the subcortical survival response isn’t communicated directly to the cortex.

            Second, Joseph LeDoux, who is an expert on the amygdala, describes experiments where all the physiological effects can be caused by subliminal imagery, but where the person is actually not conscious of it. (They may be conscious of the interoceptive effects, but they’re not conscious of actually being afraid.) That’s why he refers to the subcortical activity as survival responses.

            All of this makes sense if you think of the survival response as automatic reflexive (or habitual) reactions. But they are reactions that can be overridden, at least the motor response portion. (The heart rate, etc., reactions happen regardless.)

            So, the higher level regions have to decide whether to allow or inhibit the response. Which means they receive signals about the prospective response. To make their decisions, those signals have to feed into an overall model, or collection of models, schemata, what LeDoux calls the fear schemata, of which the lowest levels are below the level of consciousness, just rising up to us as the primal feeling.

            That’s not to say that the interoceptive loop isn’t a major part of the experience. It’s married with the signalling of the lower level reactions to produce the overall experience of the fear affect. But it also explains why just imagining a fearful situation can activate the entire affect stack in reverse, that is, thinking about something fearful, or that makes you angry, can make you feel the emotion, albeit usually with less intensity than the bottom-up version.


          7. Two things.

            First, you say “So, the higher level regions have to decide whether to allow or inhibit the response. Which means they receive signals about the prospective response.” Do you know of evidence that the “higher level regions” necessarily receive signals about a prospective response *before* “deciding” to allow or prohibit? Because it could easily be the case that a higher level region could “decide”, before any signals are received, that *all* responses will be prohibited except for a particular set.

            Second, are you saying a representation may not be an isolatable model (which could be a model at the top of a hierarchy of models) but instead an organization of models with no “model” at the top? (In which case I would say the concept of representation is stretched too thin.) Is that a schema?

            *


          8. I know the evidence for the connectivity is there (not sure how detailed the tracing of those connections is yet), but is there evidence for the specific sequence? Not that I’m aware of. If it exists, it would have to be from animal studies such as with monkeys. The scanning technology doesn’t exist to capture that kind of activity without invasive techniques, such as surgically implanted electrodes.

            I’m not sure I’m clear what you’re asking with the second question, but I would think each identifiable mental concept would have its own unique firing pattern, although that firing pattern would heavily overlap with a lot of other concepts. For example, the dog concept and the wolf concept would have a lot of overlap, at least visually, and one of a particular dog would overlap heavily with the generic dog ones.

            Grandma or Jennifer Aniston might converge on one neuron, but that certainly doesn’t mean everything that brain knows about grandma or Aniston is stored in that one neuron. If that one neuron died, I don’t think the whole concept would disappear since it’s distributed in the hierarchy. It just means the final convergence would shift to another nearby neuron and the person’s understanding of the concept would be minutely altered.
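
            [As a toy illustration of that overlap, here’s a cosine-similarity comparison of invented “firing patterns”. The features and numbers are made up purely for illustration.]

            ```python
            # Toy overlapping "firing patterns" for related concepts,
            # compared by cosine similarity. All values are invented.
            import math

            def norm(v):
                return math.sqrt(sum(x * x for x in v))

            def overlap(a, b):
                dot = sum(x * y for x, y in zip(a, b))
                return dot / (norm(a) * norm(b))

            #        fur, four legs, barks, howls, lives with humans
            dog  = [1.0, 1.0, 1.0, 0.2, 1.0]
            wolf = [1.0, 1.0, 0.2, 1.0, 0.0]
            fido = [1.0, 1.0, 1.0, 0.1, 1.0]  # a particular dog

            print(f"dog~wolf: {overlap(dog, wolf):.2f}")  # heavy but partial overlap
            print(f"dog~fido: {overlap(dog, fido):.2f}")  # nearly identical
            ```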


  4. So this reminds me of the trouble astrobiologists have defining life. If we find something on another planet that moves and eats and breeds (etc), then we know it’s alive because those are things life does here on Earth. But extraterrestrial life may not do any of the things Earth life does, in which case how do we know if it’s alive or not? Is that a fair analogy?


    1. It’s definitely in the same category of issues. The recent news of astronomers getting worked up about an object borderline too small to be a black hole but too big to be a neutron star should remind us all that nature isn’t impressed by our cute little categories. We might find a world full of sophisticated replicators with enough abundance of materials that individual homeostasis never evolved. The first example of extraterrestrial life (or something life-like) may show how limited our imagination is.


  5. I say let’s not race ahead and make linguistic/definition issues out of disputes that contain substantial empirical content. Luckily we don’t need to define phenomenal consciousness with words. We can point to examples instead.

    Stick your left hand in ice-water and your right hand in 40 Celsius water and let them acclimate. Now stick them into water of an unknown intermediate temperature. Guess the temperature, then check. Repeat.
    Practice until you become an expert. Now let’s say your latest verdict on an unknown bowl of water, using your left hand, was that it was 25 Celsius. But to double-check, you used your right hand, and said yes, it’s 25 Celsius. Now someone asks you: you mean the water gives you the same sensation in both hands? Of course not! The thing that’s still different, even after all your practice: that’s phenomenal consciousness.

    Once we understand how the brain implements these two “realms” of judgment, the objective temperature of the water and the subjective feeling in your hands, we will be in a much better position. At that point I suspect the linguistic-looking disputes will simply lose their attraction, at least among the scientifically literate, much as few virologists care to debate “whether viruses are alive”. But we’re not there yet, as far as I can see.


    1. We do know why the two hands feel different in the 25 C water: differing sensitization and habituation of the peripheral nerves in each hand from being in different temperatures before, resulting in different signals sent to the brain. So we end up with different imagery in the right and left hemispheres for the part of the somatosensory cortex dedicated to the hand. (There’s a toy sketch of this at the end of this comment.)

      But comparing the two, that is, being conscious of the difference, requires that the signals go beyond the respective somatosensory cortex in each hemisphere, so that they can cross the corpus callosum if nothing else. But more than likely it requires excitation of the frontoparietal network.

      The question is, are we conscious of each individual sensation before it is accessed by the wider cortex? We know the information can’t be used for comparison, medium to long term memory, decision making, report, or even to generate affective feelings, until it goes beyond that sensory cortex. If we have any awareness of that processing, it will be without all those things. What kind of awareness would it be?

      I’m not saying we’re anywhere near a full accounting, but we have enough to reach some basic conclusions.
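
      [Here’s the toy sketch of the differing-adaptation story, with invented numbers: each hand’s receptors adapt toward the water they’ve been sitting in, so the same 25 C water drives different signals from the two hands.]

      ```python
      # Toy model: the signal a hand sends depends on the difference between
      # the water and the temperature the hand has adapted to. Invented numbers.
      def signal(water_c, adapted_c):
          return water_c - adapted_c

      left_adapted, right_adapted = 5.0, 40.0  # ice water vs 40 C water
      water = 25.0
      print(signal(water, left_adapted))   # +20.0 -> reads as warm
      print(signal(water, right_adapted))  # -15.0 -> reads as cool
      ```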


      1. Comparison, decision making, and affective feelings are all part of what we mean by consciousness. So I’m fine with calling any low-level processing “unconscious” that doesn’t reach these. But I don’t see reporting as a requirement. There are a lot of things that can block reporting – starting with a lack of language! – that don’t block consciousness.


        1. The reportability criterion does assume a healthy complete system that is naturally capable of reporting. For other systems, we have to assess whether the processing would have been reportable from a report capable system. For example, we know regulation of heartbeat isn’t reportable in humans, but that dealing with novel situations in a flexible manner is, so it’s fair to assume that’s the way it is in animals.

          Of course, there’s an inevitable “If a tree falls in the forest” aspect to this. If processing that is reportable for us happens in a system not capable of reporting, that doesn’t have the information collection mechanisms involved in reporting, is it conscious?

          I’d say it’s unknowable, but I don’t even think there’s a fact of the matter involved. I think a tree falling alone in the forest does make a noise, and I think many animals are conscious, but I can see someone taking the exact same facts and reaching the opposite conclusions, essentially a philosophical disagreement rather than a scientific one.


  6. Maybe I am missing something or should know the answer anyway from following your blog, but did you ever definitively say whether you think blindsight is consciousness or not? In other words, where exactly do you fall on the main question?

    I think you are saying the distinction is irrelevant, but there certainly is some kind of difference between the sort of sensory processing we do while driving a car but thinking about something else, for example planning a vacation at the beach. Our focus is on the vacation but we take all of the actions required to drive the car with little attention to the details.

    On a side note, if V1 is required for visual images in blindsight, doesn’t that invalidate any idea that visual consciousness arises in the frontal areas of the brain? If the frontal area produced visual images, then it should be able to create some form of visual image even with a damaged V1, especially since blindsight provides evidence that some kind of visual information is being processed anyway.


    1. On whether blindsight is conscious, I wouldn’t classify it that way. It seems like when we get to the point of saying people are mistaken about their subjective experience, that is, their impressions of their impressions are wrong, what we’re really talking about is their consciousness being wrong about their mental processing.

      I wouldn’t say the distinction is irrelevant. It’s clear that a portion of the mind can contemplate separate and apart from what another portion is physically doing, such as thinking about a vacation while driving. When that happens, our introspective access is to the vacation contemplation, indicating that they’re more closely related than the habitual or reflexive activity involved in driving. But I’ve had people insist that we are conscious of the driving too, but that we just don’t remember it.

      Your question about V1 and the frontal lobes assumes both aren’t required. I personally doubt visual imagery happens in the frontal lobes, although I do think the frontal lobes utilize that imagery from the visual cortex in imaginative deliberations.

      When it comes to blindsight, most of the theories I’ve read see it happening subcortically. We have two major vision systems in the brain. One goes through the LGN in the thalamus to the visual cortex, the results of which we seem to have conscious access to, but the other goes to the superior colliculus in the upper brainstem. Blindsight is compatible with the superior colliculus reacting to visual stimuli, and communicating that reaction subcortically to the amygdala and vmPFC, leading patients to have a feeling about whether something is in front of them with no visual experience of it.


      1. I just wanted to provide an anecdote with regard to the thinking while driving paradigm.

        I was driving home at night and was stopped at a very large intersection downtown. I saw an interesting purple light in a room high in a building in front of me, and I was thinking about what kind of party might be going on in that room, and suddenly there were horns blaring and headlights pointed straight at me. But in hindsight I could remember what happened. I could tell a light turned green, and I started into the intersection, going straight. But what really happened was the left turn signals on my side and the opposite side had turned green. I was looking up, so the green light was in my peripheral vision, but the automatic driving in my head just thought “green light, go”.

        I guess my point is that the peripheral green light was accessible but not accessed until something brought attention to the driving, at which point the memory of the peripheral green light was still accessible. For myself, I would say the peripheral green light was conscious and accessed by the driving system while not accessed by the memory/reporting system, that is, until the attention of the latter system was brought to the driving process.

        *
        [I really just wanted to tell that story]


        1. I like that story. Thanks for sharing it! It highlights something I’ve wrestled with from time to time. That we can lay down an unconscious memory, and later consciously retrieve it. It’s very tempting to then conclude we were conscious of it in the moment. But I think your example shows that’s not necessarily the case.

          This is important when assessing some scientific data. There’s a tendency to assume if someone can consciously remember something, then they were conscious of it while the memory was being formed. Again, not necessarily true. It shows just how difficult it is to measure consciousness.


      2. “On whether blindsight is conscious, I wouldn’t classify it that way”.

        Does that mean you would not classify it as conscious? Or, that you don’t like the idea of trying to classify it?

        In some ways, blindsight might be a little like hearing or smell in the sense that we can perceive something, a direction or location, and even have a sense of closeness or intensity of it but we have no visual image to match with it. I’m guessing a lot of people with blindsight acquired it after learning to see and, therefore, never learned to use the sense in the same way we learn to use hearing or smell. To those people, they might say they are not seeing, especially if the V1 for the other eye is intact. The “seeing” would look quite different between the two eyes. Can people with V1 lesions from birth have blindsight or do their connections end up wiring up to some other part of the cortex to do the visual processing? What I’m getting at is that there is a certain amount of learning associated with vision and critical periods for making the connections and what is going on with it may be impacted by that.


        1. I wouldn’t classify it as conscious, at least not visually conscious. It’s not even clear the feeling they receive about something in their visual field is conscious, since it’s not something they seem to be aware they have. It only comes out in forced situations. All we can say is it can affect their actions.

          On your questions about someone being born with lesions in V1, I don’t know. Plasticity can do some amazing things, particularly early in development, but it’s not unlimited. Most of the connections from the LGN project to V1, so it’s not clear how much plasticity could make up for a heavily damaged V1. I’m sure it would depend on the exact damage. Minor damage could probably be worked around (we probably all have minor lesions our brain works around), but major damage would likely still result in functional deficits.

          And it’s rare for such damage to be bilateral. Meaning the person would be aware that there was something different between their left and right field of vision. They might grow up used to it, but it seems like it would still be noticeable.


  7. Mike,
    Let’s put rubber to road regarding the premise that 1) phenomenal consciousness exists as information processing alone, as well as that 2) it exists as access consciousness from the inside. From the informationist standpoint as I understand it, when a given stack of information laden paper is properly converted into another stack of information laden paper, then something will experience the phenomenal consciousness that you know of when your thumb gets whacked. I’m not sure what is supposed to experience this, but apparently something.

    Then from your other premise, associated “thumb pain” also exists as access consciousness from the inside. So the inside or outside of what? Here we have a stack of information laden paper which is fed into a machine that properly spits out another stack of information laden paper. Something is proposed to experience what you personally do when your thumb gets whacked, but what? Where might the inner and outer sides of this paper-to-paper processing dynamic, which is theorized to produce thumb pain, be?

    I’ve got a different account, though I shouldn’t be able to effectively show it to anyone who doesn’t consider there to be problems with the above scenario.


    1. Eric,
      I have to tell you I’m becoming pretty fatigued on the stack of paper discussion. Describing it in a simplistic manner such as taking one stack of paper and producing one other stack isn’t taking your own thought experiment very seriously. (Similar to how Searle and fans don’t really take his seriously.)

      If we’re going to do this, if we take the stack as the overall state of the system, then it would have to have hundreds of trillions of sheets (assuming one page per synapse), and the process would involve quadrillions of page updates (at least), along with whatever amount of time that took.

      If we do it that way, then part of the stack would represent the state of the brainstem, which would receive the equivalent of the nociceptive signals from the peripheral nervous system. Another portion would represent the thalamus, which would pass those signals on. Another portion would handle the state of the insular cortex, and yet another portion the anterior cingulate cortex, where the determination of pain happens. But to model our conscious experience of the pain, we’d have to model it with the signals from the insular and anterior cingulate cortices winning the attention competition against other signals and having effects throughout the system, as well as all the other system’s reactions to those effects.

      If we successfully did all that, we might have a system experiencing something like thumb pain, although doing it manually the way Searle describes would likely take eons, so we’d be talking about a thumb pain that might take longer than the remaining time the sun will shine. Having more than a momentary experience might bring us to the heat death of the universe.

      Draw what conclusions you will from that.
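
      [For a back-of-envelope check on those magnitudes, here’s the arithmetic. Every input is a rough assumption, not a measurement.]

      ```python
      # Rough arithmetic on the stack-of-paper scenario. Invented/rounded inputs.
      synapses = 1.0e14       # ~hundred trillion synapses, one sheet each
      rate_hz = 10            # rough average neuron firing rate
      sim_seconds = 1.0       # simulate one second of brain activity
      updates_per_sec = 1.0   # one manual page update per second (generous)

      # Crudely: every spike touches the firing neuron's synapses,
      # and every touch is one page update.
      page_updates = rate_hz * synapses * sim_seconds
      years = page_updates / updates_per_sec / (3600 * 24 * 365)
      print(f"{page_updates:.1e} updates ≈ {years:.1e} years")  # ~1e15 ≈ 3e7 years
      # At ~30 million years per simulated second, the sun's remaining
      # ~5 billion years buys only a couple of simulated minutes.
      ```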


      1. Mike,
        Thought experiments are, by definition, not real experiments. It could be that one issue here is that you haven’t fully taken this point into account. If in any thought experiment someone says that they have a “stack” of something, then they mean something of any potential magnitude whatsoever. Perhaps it extends a lightyear high, spans a square lightyear in width, and carries ultra-fine printed information. Doesn’t matter. And if such a stack were processed in moments into another such stack, then so be it. If it troubles you then forget about Searle’s mere pencil written notes. This is about understanding the nature of what informationists happen to propose, and regardless of any magnitude concerns. If you don’t like the paper medium for this then we could use computer chips or whatever else. The crucial point will not be the medium, but rather the informationism platform which holds that when one set of information is converted into another, something will experience qualia. This would be different, as far as I can tell, from the mechanical dynamics associated with all else confirmed by means of science. I consider it beyond causality.

        So my question remains, if phenomenal consciousness exists as access consciousness from the inside (and from whatever medium that you’d like to put my thought experiment under), where would the outside (or “access”) be, and where would the inside (or “phenomena”) be?

        I do have a solution that I consider causal, though it also demonstrates the informationism perspective to not be.


        1. Eric,
          From the inside simply means from the subjective perspective of the system itself. Outside means from the perspective of studying it objectively.

          So if I mash my thumb, the result is what feels like raw pain, that is, raw experience. From our perspective, it’s not a complex experience. So it’s easy to assume the neural correlates aren’t complex. But they are. I gave an overall summary above. Crucially, the final step is the information becoming widely accessible by specialty systems throughout the brain.

          Richard Brown described an incident when he was a kid playing with his sister. He noticed that she was walking with a limp. When he looked at her leg, he saw an injury there. He pointed it out to her, she looked down, and immediately started screaming and crying. I’ve had similar experiences where I injured some part of my body, but didn’t feel the pain until later because my attention was caught up in other matters.

          In these cases, the brain is receiving nociceptive signals from the injury, but our attention is elsewhere. We’re not cognitively accessing the feeling of the pain, at least until our attention has a chance to focus on it. Then suddenly we have the raw experience of pain. But we didn’t have that raw experience until we focused on it. Of course, if the pain is severe enough, its signal is strong enough to win the attention competition, and it dominates the cortex, accessed by systems throughout.

          Does that answer your question?


          1. Mike,
            I believe that I’ve also had experiences where I didn’t feel an injury until I actually saw it. Most of the time I seem to feel it first and then look, though sometimes not if I happen to be excited enough. For evolutionary purposes we obviously have adrenaline based influences and such. I’m not disputing the amazingly complex dynamics which go into creating something as evolved as the human. In the present case however we’re not discussing the function of an evolved creature, but rather the physics of qualia itself by means of a device that a human could potentially build. What essentially is it that nature does in order to create something that experiences qualia?

            The informationist proposal seems to imply that if a given information laden stack of paper were properly processed into another such stack, then something would experience what you do when your thumb gets whacked. By asking you where the inside and outside happen to be in such a paper to paper conversion, I was hoping to demonstrate to you that something’s missing. Yes the “inside” would be the part that actually experiences the thumb pain, but my point is that from this scenario, we seem to be missing such an element of the machine.

            For example, there is a “hard problem of gravity” in the sense that we’ll probably never understand why it is that mass attracts mass. But even here there is still a specific associated element by which gravity is proposed to exist, or the mass itself. I don’t know of any case where scientists believe that something exists, and yet without associated substrate. Thus I consider the generic information to information qualia proposal, nothing short of supernatural. While there’s nothing inherently wrong with such an explanation, and some day it might be experimentally validated, the issue I have is that people are currently being indoctrinated into this perspective under false pretenses. Thus I mean to help demonstrate to others that this proposal needs to formally be considered supernatural.

            I spoke of a natural solution to this quandary for those who’d like to go the other way. Here a substrate based mechanism must be theorized by which qualia is produced by the brain. Furthermore I don’t know what could provide sufficient neuron firing fidelity beyond the electromagnetic radiation which neurons are known to produce. So that’s my proposal for a non-supernatural account of qualia — it probably depends upon the proper em waves.

            This perspective also sets up the psychology based dual computers model of brain function that I developed long before it occurred to me that em fields might exist as qualia substrate. One implication of this model is that it provides a ready made blindsight solution. Why might certain blind people automatically react to things which could only have been “seen”, and yet remain blind in a conscious sense? Perhaps because the brain exists as a non-conscious computer which can potentially use information associated with light that enters the eye, even if the em fields associated with conscious sight, fail. One way to refute my theory would be to demonstrate that blindsight also occurs when the eye is not permitted to accept light.


          2. Eric,
            There actually hasn’t been a “hard problem of gravity” since Einstein. Mass warps spacetime, and spacetime controls where matter goes. You might want to read up on general relativity.

            On the rest, your reasoning seems to implicitly assume some form of dualism. You don’t see an explanation for the ghost in the machine in computational approaches, so you assume they’re unwittingly sneaking in a supernatural assumption.

            Your solution seems to posit an electromagnetic ghost, which you call a “second computer.” I’d call this “naturalistic dualism”, but I think that would confuse it with property dualists like Chalmers. Maybe a better name is em-dualism, since it depends on electromagnetic fields.

            It appears to be inconceivable to you that there is no ghost at all, just the machine and what it does, and that all our observations can be accounted for with that understanding; indeed, it’s the only model supported by the available evidence. We’re all natural born dualists, but it’s one of many intuitions science has shown we can’t trust.


          3. Mike,
            I wasn’t trying to be exhaustive about the dynamics of gravity above, but even after we factor in well verified relativity theory, there does remain a “hard problem” as I see it. Why does mass have such an effect upon time and space (or the converse)? We should never get all the way to the bottom regarding such speculation, and thus there should always be a “because it does” sort of hard problem answer for us to deal with. Of course this is also the case for qualia, though science remains endlessly behind in comparison. So it goes.

            My point is that we presume substrate based causal dynamics in the case of gravity, as well as all else that I know of in science — that is except for the informationism proposal. Thus I consider there to be a red flag here in need of exploration rather than simple acceptance. As I see it this goes back to our failure regarding metaphysics, epistemology, and axiology, as well as the associated softness of our mental and behavioral sciences in general.

            I’m not disputing computationalism. Actually I’m about as strong a computationalist as they come. There’s no question in my mind that the brain accepts input information, such as a thumb whack, as well as algorithmically processes it for output function by means of neuron firing. Thus this kind of machine should indeed “compute”. How else might my dual computers model of brain function be interpreted? What I do dispute however, is that there are worldly causal dynamics by which a generic set of information that’s used to create another, will result in something with qualia. So instead of “computationalism” I give this position a name that I consider far more appropriate, or “informationism”. I consider it appropriate since the kicker is the theorized dynamics of information manipulation alone.

            Electromagnetic radiation is of course not considered supernatural, nor is there any dispute that neurons produce such radiation when they fire. Does it also produce qualia? It seems to me that since generic information without associated instantiation has never been documented to do anything of this world, that such a source should be experimentally assessed. Or would you disagree?

            We won’t settle this here of course. We’re two opposing parties who naturally defend our conflicting interests. This is exactly what my psychology based model suggests we’d do. But beyond just the standard fun that you and I tend to have testing each other through such mental sport, you also help prepare me for effective discussion with those who aren’t invested in the informationism platform. I certainly appreciate your services! But what if you end up arming someone who ultimately helps bring that platform down? 😬

