A preprint titled "Falsification and consciousness" came up a few times in my feeds. The paper argues that all the major scientific theories of consciousness are either already falsified or unfalsifiable. One neuroscientist, Ryota Kanai, calls it a mathematical proof of the hard problem. With a description like that, it was hard to resist taking a look.
This paper can be seen as building on an argument from a paper I shared last year. That "unfolding argument" attacked the Integrated Information Theory (IIT) of consciousness, focusing on IIT's requirement for feedback processing, by pointing out that the same input-output relation could always be obtained from an "unfolded" feedforward version of the neural network in question. The argument was that this requirement made IIT unfalsifiable and therefore unscientific. (Some IIT researchers later responded.)
The current paper gets into a mathematical formalism, but the gist seems to be that the authors distinguish between physical systems like a brain, our observations of those systems, and the predictions and inferences we make about what conscious experience is happening in those systems.
In the paper, an inference is what an experimenter infers about conscious experience based on behavior, such as a report from a test subject. A prediction, on the other hand, is what the theory predicts about conscious experience based on the result of a brain scan or similar observation. The authors note that for many theories, inferred and predicted experience are treated as independent variables, albeit ones that hopefully converge in experiments.
But, the authors point out, the output of such a system can always be produced by another physical system with different internal structures and processes, a substitute architecture. The substitute yields the same inferences, since the behavior is unchanged, but not the same predictions, since the internals are different, so inference and prediction can always be made to diverge. Therefore, they argue, all such theories are already falsified. They emphasize that this affects not just IIT, but other scientific theories such as Global Workspace Theories (GWT) and Higher Order Thought Theories (HOTT). They call this the substitution argument.
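As a rough sketch of how I read the structure of the argument (my own loose notation here, not the paper's formalism):

```latex
% Inferred experience, from behavioral outputs (e.g. reports) o:
e_{\mathrm{inf}} = I(o)
% Predicted experience, from physical observations (e.g. scans) s:
e_{\mathrm{pred}} = P(s)
% An experiment falsifies the theory when the two disagree:
e_{\mathrm{inf}} \neq e_{\mathrm{pred}}
% Substitution: a system with the same outputs o but a different
% internal state s' leaves I(o) fixed while P(s') can change, so if
% I and P are independent, a disagreement can always be engineered.
```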
The authors also take on theories that posit a strict dependence between inference and prediction, such as behaviorism, and access-only interpretations of GWT, HOTT, and others. These theories or interpretations are framed strictly in terms of behavioral dispositions or access dynamics. The issue, it seems, is the lack of any explicit accounting for phenomenal experience. Here, it's argued, the theories, while not already falsified, are unfalsifiable, since prediction and inference can never come apart.
If the authors are right, scientific theories of consciousness start to look hopeless. In their concluding remarks, they suggest that theories could try to avoid the strict independence that leads to falsification, and the strict dependence that leads to unfalsifiability, by aiming for something they call "lenient dependency", although it's unclear what this would look like. Or they could assume that physics is not causally closed, a move few scientists are likely to be enthusiastic about.
However, I think the paper's argument rests on an implicit assumption: that a theory claims to be the one and only identity explanation for consciousness. IIT, as advocated by Christof Koch, falls into this category. Koch argues that only systems with a high Φ (phi) are conscious, and that every system with a high Φ is conscious, even if it doesn't appear to be.
Conversely, Koch argues that although a von Neumann type computer (like the device you're using right now), or an unfolded network, can in principle reproduce all the behavior of a conscious system, its low Φ means it will not be conscious, that it will in fact be a behavioral zombie (a weaker version of a philosophical zombie). Under the paper's argument, such a system would falsify IIT. I'm sure Koch and other IITers would disagree, simply insisting on the zombie scenario.
I think the paper's argument has less force against theories that make no claim to universality, that aim only to explain consciousness in human or vertebrate brains. And most advocates of GWT, HOTT, and similar theories are functionalists, that is, people who see consciousness as functionality. In principle, functionality can always be reproduced with alternate mechanisms.
To a functionalist, these theories aren’t about finding the one and only way to produce consciousness, just how it is produced in the systems we can currently study. A functionalist will usually be open to the same functionality being produced by alternate means, such as a sufficiently fast Von Neumann machine.
Then there is the charge of unfalsifiability leveled at behaviorist and access-only theories. But this is a long-standing criticism of these theories, and it assumes that something other than a functional explanation is required for phenomenal experience. (A notion which itself seems unfalsifiable.)
All of which is to say, it's not clear that any of these theories are taking hits here they haven't already taken. Some will see the formalism as adding credibility to the criticisms, but the formalism only makes them more precise. You still have to buy the underlying assumptions for them to have force.
Unless of course I’m missing something?