An interesting paper by Matthias Michel on the underdetermined nature of theories of consciousness.
Consciousness scientists have not reached consensus on two of the most central questions in their field: first, on whether consciousness overflows reportability; second, on the physical basis of consciousness. I review the scientific literature of the 19th century to provide evidence that disagreement on these questions has been a feature of the scientific study of consciousness for a long time. Based on this historical review, I hypothesize that a unifying explanation of disagreement on these questions, up to this day, is that scientific theories of consciousness are underdetermined by the evidence, namely, that they can be preserved “come what may” in front of (seemingly) disconfirming evidence. Consciousness scientists may have to find a way of solving the persistent underdetermination of theories of consciousness to make further progress.
Michel surveys scientific thought on consciousness in the 19th century. Interestingly, many of the same debates we have today raged back then, with people arguing over definitions of consciousness, over whether to set aside metaphysical issues in favor of cognitively accessible ones, and over whether consciousness resides in the cortex, the thalamus, the midbrain, or somewhere else.
Apparently some scientists in the 19th century thought consciousness might reside in the spinal cord. Experiments on animals, such as surgically decapitating a frog while keeping its body alive, showed the body still capable of complex motor responses, such as attempting to rub acid off its thigh with its foot, inviting many to speculate that it retained a “feeling consciousness”. Judging from the history of spinal cord injuries, scientists back then probably had few, if any, living patients with severed spinal cords to serve as data points.
Another debate that goes back to that period is whether consciousness “overflows” reportability. In other words, are there aspects of consciousness that can’t be self-reported? We saw an example of this idea recently in Ned Block’s contention that phenomenal consciousness holds detailed images that can’t be accessed for self-report.
Michel’s main thesis is that many theories of consciousness are difficult to assess. Many persist even after seemingly being falsified. An example is IIT (Integrated Information Theory), after Scott Aaronson and others showed that it attributes consciousness to systems that give no indication of being conscious. Giulio Tononi simply bit the bullet and asserted that those systems are in fact conscious.
The issue, Michel points out, lies in standards of detection, such as self-report or appropriate behavior. If a theory of consciousness indicates that a particular system is conscious, but that system fails a detection test, a proponent of the theory can often simply discount that method of detection.
Thus Block discounts self-report as a valid detection standard when it fails to capture the contents he claims are in phenomenal consciousness, and Tononi discounts, well, apparently all methods of detection, when he asserts that a set of inactive logic gates is conscious.
That’s not to say that some detection methods shouldn’t be challenged. Some scientists assert that fish cannot be conscious because they lack a cortex. But no invertebrates have a forebrain, much less a cortex, yet many display complex behaviors indicating some level of consciousness. So ruling out consciousness based on the absence of a particular structure, just because that structure is implicated in mammalian consciousness, doesn’t seem justified.
It seems to me that, for healthy humans, self-report should be the gold standard of detection. Once we know that a particular activity (behavioral or neural) is associated with self-report in humans, we can then test for similar activity in injured humans or non-human animals. If someone can’t cite a chain of evidence back to self-report, we should be skeptical.
But the detection issues seem like a consequence of a deeper problem: the lack of consensus on what consciousness is. This isn’t so much a lack of understanding of what it is as trouble even agreeing on what we’re talking about, except in the vaguest terms.
This is why I sometimes wonder if science wouldn’t be better off focusing on specific capabilities, such as perception, memory, and metacognition, and leaving consciousness itself to the philosophers. But maybe it’s enough as long as scientific theories are clear about which type of consciousness, or which aspects of it, they’re addressing.