How do we know whether any particular system is conscious? In humans, we typically know because most humans can talk about their conscious experience. Historically, if we can report on it, it’s conscious; if we can’t, it’s in the unconscious. But this raises a difficulty for any entity that doesn’t have language, including non-human animals, infants, and brain-injured people.
One example in the brain-injured category is people with blindsight. Damage to the visual cortex can lead to a condition known as cortical blindness. However, a person with this condition may still be able to react reflexively to something in their visual field, or, when forced to guess whether something is in front of them, do so with a high degree of accuracy. It’s called “blindsight” because the patient is consciously blind in the affected part of their visual field, yet still appears able to unconsciously perceive things in that field (likely using subcortical connections).
Apparently some are now arguing that blindsight is actually conscious sight, just not conscious sight the person can introspect or report on accurately. There is a paper in preprint (which I have to admit I’ve only glanced at) arguing that it is actually degraded conscious perception. The argument seems to be that patients aren’t accurately reporting their experience, that they’re being too conservative in judging whether they perceive something.
This resonates with a movement in neuroscience to isolate the neural correlates of consciousness from the neural correlates of reporting by using no-report protocols, in essence separating reportability from reporting itself. While this makes sense (the historical standard is being able to report, not actually reporting), there are others arguing that even this isn’t sufficient, that we must isolate conscious perception from post-perceptual cognition. Some even argue that we should include processing not accessible for report.
Much of this seems to be taking us further from the historical standard: consciousness is what we can report on. The type of consciousness being pursued appears to be Ned Block’s conception of a phenomenal consciousness that is separate and apart from access consciousness. P-consciousness is generally held to be raw experience, while a-consciousness is cognitive access to that experience for decision making and report.
A proponent of an independent p-consciousness would argue that it exists in the sensory or reactive processing that takes place prior to cognitive access. Someone who sees p-consciousness as just a-consciousness from the inside would argue that the pre-access processing is preconscious, generating content that has the potential to be conscious, but isn’t yet.
One argument commonly put forward for the view of an independent p-consciousness is its implications for animal consciousness. But aside from the fact that implications are irrelevant to what’s actually true, fish do have at least glimmers of cognition, as Michael Woodruff points out in his Aeon piece this morning arguing that they are sentient creatures, indicating that there is at least some access going on in them. We don’t need p-consciousness to be separate to regard them as minimally conscious, and if we don’t need it for them, we certainly don’t need it for mammals or birds.
All of which brings us back to the question: what is necessary for consciousness? Where are the boundaries between the unconscious and conscious? Animals can’t report, but many appear to have at least some of the type of cognition humans can report on.
Personally, I think the idea of a p-consciousness separate and apart from a-consciousness is incoherent, a notion we’re only tempted to hold due to the vestiges of Cartesian dualism: the assumption that if we just have the sensory processing by itself, there will still be an audience for it. In a non-dualistic view, only cognitive access provides that audience.
But maybe I’ve missed something?