I’ve discussed many times that the word “consciousness” has a variety of meanings. But most commonly, the various meanings can be grouped into two broad categories.
One refers to some combination of functionality, typically the information processing that happens in the brain enabling an organism to take in, assess, and use information about itself and its environment, all to further its goals. Understanding this functionality is complicated. We’re at least several decades away from a full accounting, possibly centuries. However, it’s widely recognized that these collections of functionality are amenable to scientific investigation, and eventual replication in technology.
But there is also a widespread sentiment that the first category leaves out something important. It seems to omit the presentation, the mental paint of conscious experience. When we perceive the color red, it aids in our discrimination of various objects, helping us to tell blood from water, ripe strawberries from unripe ones, or to notice stop signs and stop lights, and to categorize a perceived object as being in the category of red things. But that functional description seems to leave out the actual sensation of redness.
The sensation of redness seems like something primal, fundamental, and irreducible. It also appears impossible for me to describe my sensation of red to you, or for you to describe yours to me. It apparently can only be pointed at and agreed that yes, there is red. Which implies that the sensation is something private, inaccessible from any possible third person investigation. And it seems to be something we can’t be in error about. To believe you’re experiencing redness is to experience redness.
Reviewing these attributes, this second version seems very mysterious. It’s difficult to imagine how any physical system could produce it. Implementing this second sense of consciousness in technology seems even harder, since we can’t even imagine how biology pulls it off.
When illusionists talk about consciousness as an introspective illusion, it’s usually in reference to this second mysterious category. The question I’m pondering in this post is how to meaningfully label that category.
Georges Rey is often cited as a radical eliminativist for his 1983 paper calling into question the existence of consciousness. (Unfortunately, if that paper is online, I haven’t been able to find it.) But either Rey’s views are more nuanced or they’ve moderated over the years. He calls the first category “weak consciousness” or “w-consciousness”, and the second “strong consciousness” or “s-consciousness.” In a 2016 paper responding to Keith Frankish’s illusionism paper, Rey speculates that, if w-consciousness is the only form of consciousness that exists, some existing simulations of pain might actually be instances of pain.
In 1995, in his paper coining the “hard problem of consciousness”, David Chalmers, following the lead of Allen Newell in 1990, proposed using “awareness” to refer to the functional category and reserving the term “consciousness” for the mysterious one. Crucially, the hard problem applies to “consciousness”, but “awareness” is only subject to the “easy problems”. Apparently he received pushback about this in the commentaries on that paper, which he acknowledges in his response, and admits that the convention probably wouldn’t catch on.
The one that did catch on was Ned Block’s distinction, made in a 1994 paper, between access and phenomenal consciousness. Block characterizes “access consciousness”, or “a-consciousness” as functional, and “phenomenal consciousness”, or “p-consciousness”, as the mysterious version. This is the terminology most philosophers seem to have adopted.
But the “phenomenal” label feels problematic, because like “consciousness”, it means different things to different people. This actually appears to go back to Block’s initial description, which left certain aspects open, and the commentaries on his paper apparently responded to that range of possible meanings.
It seems like we can talk about both strong and weak phenomenality. I think of strong phenomenality as the full package of consciousness in the mysterious sense. Weak phenomenality is the functional appearance, and only the appearance, of the stronger version, not the implied reality. A strong illusionist generally insists that the word “phenomenal” is misleading for weak phenomenality, that we’re really only talking about the illusion of phenomenality. Strong phenomenal realists tend to agree. But a weak illusionist or reductionist realist is more likely to use “phenomenal” in the weak and functional sense.
Which makes the qualifier “phenomenal” ambiguous. My impression is that most philosophers tend to use it in the strong sense and most scientists in the weak one, with exceptions on both sides. It seems easy to have debates about phenomenal consciousness where the participants talk past each other with different definitions.
Michael Graziano, in a 2019 paper on reconciling various theories of consciousness into a “standard model”, proposes “i-consciousness” to refer to the functional information processing version, and “m-consciousness” to refer to the mysterious experiential one. At the time, I wondered why he introduced new terms rather than using the existing ones. But after becoming familiar with the definitional issues, I can see it. Graziano’s motivation is clarity, since he’s denying the existence of m-consciousness (except as a model within i-consciousness), and wants to avoid the implication that he’s denying anything else, like perception or imagination.
I like Graziano’s terms. I also like Rey’s “strong” and “weak” ones (which I sort of adopted for “phenomenal” above), but Graziano’s seem more descriptive. And they seem to avoid the confusion of Chalmers’ “awareness” vs “consciousness” distinction, or Block’s “phenomenal” category. (I do like Block’s “access” label though. It seems to capture something important about the dynamics.)
Although I’m a bit more inclined to just call Graziano’s categories “information consciousness” and “mystery consciousness”. Of course, these aren’t completely free of problems, since people argue about what “information” means. And I could see someone making an issue of “mystery” since the information processing aspects have plenty of remaining mysteries. Although like Chalmers’ “easy problems”, which everyone understands aren’t really easy, some interpretational charity is warranted.
What do you think? Do these labels improve anything? Or did the philosophers get it right with “phenomenal”? If so, what do you think about the strong vs weak phenomenality distinction?