When it comes to my philosophy of consciousness, I’ve noted many times that I’m a functionalist, someone who sees mental states, including conscious ones, as being more about what they do, their causal roles and relations, than what they are. Since functionalism focuses on functionality exclusively, it often gets lumped in with illusionism, which typically denies that phenomenal consciousness exists.
But I’ve long been uncomfortable with the “illusionism” label. Aside from the problematic connotations of “illusion” implying that consciousness is a mistake or something maladaptive, it’s historically seemed hasty to dismiss phenomenal consciousness, at least in the sense of apparent consciousness, of how consciousness seems to us.
However, I’ve recently had a couple of conversations, one with a dualist (or at least a non-physicalist) and another with an illusionist. Interestingly, the dualist was on board with the concept of illusionism, even if he didn’t agree with it, and thought I was using incorrect definitions. The illusionist said something similar, and pointed out that phrases like “phenomenal consciousness” and “qualia” have to be assessed in terms of their historical usage, not the literal definitions or etymology of the words. In other words, trying to use those words in a theory-neutral or “innocent” fashion ignores too much of the history behind them.
I thought this final point was interesting. As someone who strives to use words in their most commonly accepted manner, and to be clear when I’m not, I decided to investigate.
The history of “qualia” does turn out to be complicated. The singular “quale”, when first introduced by C.S. Peirce in 1866, may have been relatively theory-neutral. But the plural “qualia” introduced by C.I. Lewis in 1929 wasn’t, and the term has had different meanings since then.
Michael Tye, in the SEP article on qualia, identifies the simplest use of the term as being “phenomenal character”, as there being “something it is like” to undergo a particular experience. Interestingly enough, Tye doesn’t associate Thomas Nagel with this meaning, even though he uses the phrase Nagel coined. Instead he associates Nagel with qualia as intrinsic non-representational qualities. As we’ll see below, this may be a distinction without a difference.
The term “phenomenal consciousness” has been around for centuries, but according to Google’s Ngram viewer, its use spiked after Ned Block’s paper that made the distinction between phenomenal consciousness and access consciousness, indicating most of the contemporary usage refers to Block’s version. Block admits in that paper that he can’t define “phenomenal consciousness” in any non-circular manner, that he has to use synonyms. But he states that what makes a state phenomenally conscious is that there’s “something it is like” to be in that state, and he cites Nagel explicitly.
So we appear to have the conventional contemporary meaning of “qualia” and “phenomenal consciousness” both being based on Nagel’s “something it is like” standard, even though both those terms predate it. It seems like philosophers take the “something it is like” or “like something” phrase to be a theory-neutral or “innocent” way to reference consciousness.
But while “qualia” can be taken from its Latin roots to mean “what kind” (fitting my categorizing conclusion treatment a few posts back) and “phenomenal consciousness” as apparent consciousness, it’s not clear what “like something” can mean. It seems to express a similarity to an unspecified entity, which taken by itself is meaningless. It only seems able to function as a tag. The question is, a tag for what?
And that takes us to Nagel’s famous 1974 paper: “What Is It Like to Be a Bat?” Classic interactionist dualism is widely considered to have been taken out as a reputable intellectual position by Gilbert Ryle’s 1949 book The Concept of Mind. Nagel’s paper seems to have begun a revival of property dualism and similar outlooks, a more modest form of dualism for philosophers unhappy with physicalism.
Along those lines, I think the best thing for me to do is quote what I see as the key passage from Nagel’s paper.
> Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism. There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism.
>
> We may call this the subjective character of experience. It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence. It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior—for similar reasons.
>
> I do not deny that conscious mental states and events cause behavior, nor that they may be given functional characterizations. I deny only that this kind of thing exhausts their analysis.
Nagel goes on to conduct his famous discussion about how we can never know what it’s like to be a bat, a creature that perceives the world through echolocation.
This is far from a theory-neutral view of consciousness. Going through the paper, I detect a number of theoretical commitments.
1. Fundamental: The implication is that it is either like something to be a particular organism or it isn’t. There’s no mention of it possibly being partially like something.
2. Epiphenomenal: At least to some extent, Nagel’s conception seems epiphenomenal, a version of consciousness with no causal effects in the world.
3. Biocentric: At least in this paper, Nagel’s conception seems to assume consciousness only exists in living things. The implication is that machines can’t be conscious.
4. Intrinsic: Conscious states are “unanalyzable” in terms of functionality or intentionality. In other words, they’re not representational or relational.
5. Private: The bat discussion states that we can never know a bat’s experience, no matter how much we learn about its nervous system. So this isn’t a limitation of technology, but a fundamental one.
As a functionalist, I don’t think any of these are true. 1 doesn’t seem to hold up in light of brain injury or pathology cases, mind-altering substances, or evolution. 2 seems incompatible with making any assertions about what might or might not be conscious, which seems to make 3 moot.
4 could be considered true subjectively, that is, we’re unable to analyze these states from within our experience, but I see no reason to assume it holds objectively.
5 could be more plausibly seen as the situation today with the current state of technology, although that’s less true now than in 1974 and is constantly changing. It could also be seen as absolutely true in the far more limited sense that we can never have a bat’s experience. We can never be a bat. But that’s no different than saying my laptop can never be an iPhone. It might be able to have the iPhone’s state in a virtual machine, but it would always be a laptop with a (virtual) iPhone inside, never an iPhone itself.
It all seems like a set of theoretical commitments based only on intuition. By Nagel’s own admission, there can’t be any evidence for it, which also means there can’t be any evidence against it. It’s a metaphysical add-on we can choose to believe in or ignore, without it making any detectable difference in the world.
All of which is to say that the people I was talking with were right. The most common usages of “qualia” and “phenomenal consciousness” are based on Nagel’s “like something” concept. Call me an eliminativist or illusionist if you want, but I think this group of phrases, what Pete Mandik calls a “synonym circle”, refers to a version of consciousness that doesn’t exist.
Of course, I continue to think the functionality we label “consciousness” exists, the mechanisms and capabilities that can be scientifically studied, so there’s no change in ontological view here. But I’ll probably take Mandik’s advice and stop using these terms, at least without careful qualification. They just seem to invite confusion.
What do you think? Are there reasons I’m missing to resist the common usage of these words? Or am I misinterpreting Nagel’s conception of what “like something” means? Or missing something else?