Michael Shermer has an article up at Scientific American asking if science will ever understand consciousness, free will, or God.
Shermer writes:

I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, relating to how the concepts are conceived in language.
On consciousness in particular, I did a post a few years ago which, on the face of it, seems to take the opposite position. However, in that post, I made clear that I wasn’t talking about the hard problem of consciousness, which is what Shermer addresses in his article. Just to recap, the “hard problem of consciousness” was a phrase originally coined by philosopher David Chalmers, although it expressed a sentiment that has troubled philosophers for centuries.
In Chalmers’ words:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does…The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.
Broadly speaking, I agree with Shermer on the hard problem, but with an important caveat. In my view, it isn’t so much that the hard problem is hopelessly unsolvable; it’s that no scientific explanation will be accepted by those who are troubled by it. In truth, while I don’t think the hard problem has necessarily been solved, I think there are many plausible solutions to it. The issue is that none of them are accepted by the people who talk about it. In other words, this seems to me more a sociological problem than a metaphysical one.
What are these plausible solutions? I’ve written about some of them: that experience is the brain constructing models of its environment and itself, that it is communication between the brain’s perceiving and reflexive centers and its movement-planning centers, or that it’s the brain modeling aspects of its own processing as a feedback mechanism.
Usually when I’ve put these forward, I’m told that I’m missing the point. One person told me I was talking about explanations of intelligence or cognition rather than consciousness. But when I ask for elaboration, I generally get a repeat of language similar to Chalmers’, or that of philosophers with similar views, such as Thomas Nagel or Frank Jackson.
The general sentiment seems to be that our phenomenological experience simply can’t come from processes in the brain. This is a notion that has long struck me as a conceit, that our minds just can’t be another physical system in the universe. It’s a privileging of the way we process information, an insistence that there must be something fundamentally special and different about it. (Many people broaden the privilege to include non-human animals, but the conceit remains the same.)
It’s also a rejection of the lessons from Copernicus and Darwin: that we are part of nature, not something fundamentally above or separate from it. Our old intuitions about Earth being the center of the universe, or about us being separate and apart from other animals, turned out not to be trustworthy. Likewise, our intuitions formed from introspection and self-reflection, a source of information shown to be unreliable in many psychology studies, should not necessarily be taken as data that need to be explained.
Indeed, Chalmers himself has recently admitted to the existence of a separate problem from the hard one, what he calls “the meta-problem of consciousness”. This is the question of why we think there is a hard problem. I think it’s a crucial question, and I give Chalmers a lot of credit for exploring it, particularly since in my mind, the existence of the meta-problem and its most straightforward answers make the answer to the hard problem seem obvious: it’s an illusion, a false problem.
It implies that neither the hard problem nor the version of consciousness it concerns, the one that remains once all the “easy” problems have been answered, actually exists. They are apparitions arising from a data model we build in our brains, an internal model of how our minds work. That model, though adaptive for many everyday situations, is wrong when it comes to providing accurate information about the architecture of the mind and consciousness.
Incidentally, this isn’t because of any defect in the model; it serves its purpose. But it doesn’t have access to the lower-level mechanisms, to the actual mechanics of the construction of experience. This lack of access creates an uncrossable gap between subjective experience and objective knowledge about the brain. But there’s no reason to think this gap is ontological rather than epistemic; it’s not about what is, but about what we can know, a limitation of the direct modeling a system can do on itself.
Once we’ve accounted for capabilities such as reflexive affects, perception (including a sense of self), attention, imagination, memory, emotional feeling, introspection, and perhaps a few others, essentially all the “easy” problems, we will have an accounting of consciousness. To be sure, it won’t feel like we have an accounting, but then we don’t require other scientific theories to validate our intuitions. (See quantum mechanics or general relativity.) We shouldn’t require it for theories of consciousness.
This means that asking whether other animals or machines are conscious, as though consciousness is a quality they either have or don’t have, is a somewhat meaningless question. It’s really a question of how similar their information processing and primal drives are to ours. In many ways, it’s a question of how human these other systems are, how much we should consider them subjects of moral worth.
Indeed, rewording the questions about animal and machine consciousness as questions about their humanness makes the answers somewhat obvious. A chimpanzee obviously has much more humanness than a mouse, which itself has more than a fish. And any organism with a brain currently has far more than any technological system, although that may change in time.
But none have the full package, because they’re not human. We make a fundamental mistake when we project the full range of our experience on these other systems, when the truth is that while some have substantial overlaps and similarities with how we process information, none do it with the same calibration of senses or the combination of resolution, depth, and primal drives that we have.
So getting back to the original question, I think we can have a scientific understanding of consciousness, but only of the version that actually exists, the one that refers to the suite and hierarchy of capabilities that exist in the human brain. We will never have an understanding of the version that is supposed to exist outside of that, the one where “consciousness” is essentially a code word for an immaterial soul, in the same way that we can’t have a scientific understanding of centaurs or unicorns: they don’t exist. The best we can do is study our perceptions of these things.
Unless of course, I’m missing something. Am I being too hasty in dismissing the hard-problem version of consciousness? If so, why? What about subjective experience implies anything non-physical?