This week, while working through my podcast backlog, I came across an interview with Jacy Reese Anthis. We discussed Anthis’ paper on consciousness semanticism a few months ago. Like me, Anthis sees “consciousness” as an ambiguous term, one that has had a variety of meanings over the centuries, and continues to have a range of meanings for different people today.
Consequently, asking whether a particular animal or system is conscious, as though consciousness were a definite property that is either present or absent, is meaningless, at least under the most common ambiguous definitions of “consciousness”. Anthis’ takeaway from this is that consciousness, as commonly understood, doesn’t exist. He’s an eliminativist and strong illusionist.
My own takeaway is that while some versions of consciousness don’t exist, others do, particularly in terms of functional capabilities like perception, attention, learning, deliberation, or self-reflection. But then my inclinations are generally more reconstructionist and weak illusionist.
Anyway, the interview spurred me to reread his paper and check out his website, including a blog post covering his stance on “the big questions”. One thing I didn’t fully appreciate on first reading the paper is that he uses the semanticist approach for much more than consciousness.
From his blog post:
The best approach to virtually all problems in contemporary philosophy is to treat them as pseudo-problems by making their semantics precise then delineating the often straightforward solutions to each precisification.
This is similar to my own approach to dealing with philosophical questions, although I never thought to give the strategy a name like “semanticism”. I’m not sure it renders “virtually all” problems in philosophy “pseudo-problems”, but clarifying definitions does seem to make many problems much more tractable.
So when we ask whether fish are conscious, it helps if we specify whether by “conscious” we mean perception, attention, and reinforcement learning, or something more demanding such as episodic memory and self-reflection, or even something non-physical distinct from any functionality. Or in a discussion on free will, it seems to matter whether we mean a will free to use foresight to select which desires to inhibit or indulge, or a will that is, to some extent, free of the laws of physics.
Failing to make these kinds of clarifications often leads to what David Chalmers calls verbal disputes, where the participants are talking past each other with different definitions. In a straight verbal dispute, the participants agree on all the facts, but not on the meaning of key terms. In other cases, there may be actual fact-of-the-matter differences, but their nature is clouded by the different semantics.
In my experience these verbal disputes are so common that it makes sense, anytime a philosophical question comes up, to clarify our terms.
It’s a bad tactic, I think, to simply insist on a particular definition for a concept. It seems more productive to admit that there are multiple meanings at hand, label each one, and then address one or more of those meanings. Usually, as Anthis notes, the more precise meanings are easier to deal with than the initial ambiguous one. It ends up being a divide-and-conquer strategy.
Often this move is resisted. Sometimes the resistance comes from epistemic caution. Someone may want to stick with the initial ambiguous term because the more precise alternatives all involve theoretical commitments they feel are premature. They may not find any of the current precise options worth considering and want to keep the door open to additional alternatives. In other cases, there may simply be resistance to ceding definitional ground to any alternate versions.
In these cases, it seems crucial to avoid the bad tactic. Acknowledging each version with its own label may make that version’s partisans more likely to tolerate the alternatives, at least for purposes of discussion. And acknowledging that the listed options may not exhaust all the possibilities can address the worry that engaging with them requires a premature commitment.
Of course, sometimes this type of analysis is simply rejected out of hand, or ignored. The more grounded and defensible versions of a concept often provide rhetorical cover for the more dubious ones, making intense partisans of the dubious versions hostile to clarification. It’s difficult to have productive conversations with someone in this mindset.
A broader source of aversion may be that this kind of analysis often seems to lead to eliminativism toward the original concept. But eliminativism is always a judgment call. As Chalmers noted, when we discover that the scientific image of a concept differs from the manifest one, we always have options, which include eliminativism, but also reconstruction of the original concept.
Most of us are eliminativist toward terms like “ghost” or “fairy”. We’ve judged that those concepts no longer do any real work, except maybe in metaphors. On the other hand, our notion of what stars and planets are is radically different from the pre-modern one, yet we’ve held on to the concepts because the words “star” and “planet” continue to do work. We’ve reconstructed “star” and “planet” to mean something different.
Do terms like “consciousness”, “free will”, “religion”, or “morality” continue to do work? I think an argument can be made that they do, that if we dropped them we’d have to find alternate terminology for what they mean, at least in their more grounded forms. Of course, the more grounded forms usually aren’t the ones with deep metaphysical mysteries. And it is the versions associated with those mysteries that are often the most vulnerable to the eliminativist conclusion.
Still, it’s hard to see how we’re going wrong by clarifying things. It seems like the whole point of clarification is to give us a perch for a possible revision of our views.
But, as always, I may be missing something. Are there issues with the semanticist strategy that I’m not seeing? Or better strategies we should consider instead?