One of the things about consciousness I’ve tried to call attention to on this blog is the ambiguity of its most common definitions, such as Thomas Nagel’s definition that it is “like something” to be a particular system. The problem is that when people try to get more specific, they come up with a wide variety of answers, and then end up talking past each other with those different definitions.
Externally, consciousness is being responsive to the environment, or it’s goal-directed behavior, or deliberation, or language. Internally it’s the results of self-reflection, or attention, or it’s all perception regardless of whether it’s currently being attended to or reflected upon. Which one sounds right to you has a big impact on which scientific theory of consciousness you might favor, and on your attitude toward how widespread consciousness might be in the animal kingdom, or beyond.
All of which has long led me to conclude that consciousness is in the eye of the beholder. If forced to come up with my own brief definition, I typically replace Nagel’s “like something”, which seems literally meaningless to me, with “like us”, a label we slap on systems with impulses we recognize as similar to ours, and with a similar ability to process information.
So it was with some interest that I read Jacy Reese Anthis’ paper: Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness. (Note this link is to the preprint, since the official Springer version is paywalled.)
Anthis’ goal is to step around the typical “intuition jousting” that goes on in these discussions and come up with a formal argument. His core argument, as I understand it, is that the most common definitions are imprecise, yet typical use of the word “consciousness” implies precision; therefore consciousness in that most common sense doesn’t exist.
It’s worth noting that Anthis makes a distinction between a couple of different notions:
He sees the first as undeniable. It’s the second that he’s claiming doesn’t exist. I actually see the first as meeting my “like us” definition above, or at least in the same neighborhood. But the second is definitely along the lines of Nagel’s definition.
He also spends some time on the word “exist”, providing a specific definition for it.
Existence: A property exists if and only if, given all relevant knowledge and power, we could categorize the vast majority of entities in terms of whether and to what extent, if any, they possess that property.
Overall, the point is that even if we examined a system as an omniscient observer, there would be no fact of the matter on whether consciousness as a property exists within that system. Therefore, this property doesn’t exist, and consciousness, in this sense, doesn’t exist.
As usual, whenever I discuss variants of eliminativism or illusionism, I have to admit I agree with the ontology, but not the language used to describe it. In other words, my difference with eliminativism is what Chalmers calls a “verbal dispute”.
It’s true that I don’t think certain versions of consciousness exist. But then anyone who thinks about consciousness will think that certain versions exist while others don’t. That’s what it means to disagree about the nature of something. The problem is that using the phrase “consciousness doesn’t exist” implies that none of them exist. I’ve yet to meet an eliminativist who actually thinks this, which is why I disagree with using that phrase.
I understand the idea of trying to challenge people’s intuitions, but in my experience, it almost always derails the discussion, turning it from what may or may not be the nature of consciousness into a debate about whether the eliminativist is claiming there is no such thing as pain, suffering, joy, etc. Again, I haven’t encountered anyone who actually thinks these things don’t exist, so using that language doesn’t seem productive.
It’s worth noting that Anthis, in the conclusions section of the paper, makes clear he’s not suggesting that we discontinue use of the word “consciousness”. Just that, as a goal in scientific investigation, we’re better off focusing on specific capabilities like sensory discrimination, reportability, affective evaluations, metacognition, etc., essentially Chalmers’ “easy problems”. Focusing on the “hard problem” is unlikely to be productive.
In this view, the concept of consciousness is like the concept of life in biology. Anthis points out that biologists don’t agonize over the distinction between life and non-life, they instead investigate replication, homeostasis, metabolism, and a host of related processes. While it’s an interesting question to ask whether something like a virus is alive, most biologists consider it a philosophical one, not a scientific one. They’re more interested in just studying how viruses work.
This is very similar to Anil Seth’s point that consciousness is more like life than it is like temperature. Temperature is a relatively simple emergent property measurable as a single number. Life is a complex one that defies simple characterizations. Consciousness seems to be in the same category.
So, in terms of predicting what is likely to be fruitful in scientific research, I completely agree with Anthis, although my stance is less categorical. I’m not a fan of telling scientists it’s a waste of time to study areas they’re interested in. Despite the direction the evidence has been trending for some time, people like Anthis and me could still conceivably turn out to be wrong. If we are, it’s likely to be discovered by someone exploring alternatives we find unpromising.
What do you think about consciousness semanticism? Or my language dispute with eliminativists / illusionists?