(Warning: consciousness theory weeds.)
A new paper in the journal Cognitive Neuroscience, “Hard criteria for empirical theories of consciousness”, takes a shot at proposing criteria for assessing scientific theories of consciousness. The authors make clear at the beginning that they’re aiming their criteria at empirical theories, rather than metaphysical ones. So they make no attempt to provide criteria for straight metaphysical theories like panpsychism, property dualism, idealism, or similar notions.
The first set of criteria involves explaining what they call “paradigm cases of consciousness and the unconscious alternative”. This means addressing all the standard empirical data, such as the results of visual masking perception tests or binocular rivalry experiments. It’s not enough to just address the conscious part of these cases; a theory must address the unconscious portions as well. For example, when masking prevents a person from consciously perceiving an image, the theory must address what happens with the unperceived stimulus.
Most major theories do address these cases, but a notable exception is Penrose’s Orchestrated Objective Reduction theory. Orch-OR isn’t widely accepted in the scientific community, but the authors include it as an example.
(One point to note about this criterion. The authors assume these cases provide evidence for consciousness. But it’s worth noting that what is actually being measured is subject report. Even in no-report paradigms, it amounts to delayed report, or behavior or activity previously associated with report. Many don’t want to acknowledge it, but the scientific study of consciousness is ultimately the study of report.)
The second criterion is the unfolding argument, which I covered a while back. Causal structure theories such as Integrated Information Theory (IIT) or Recurrent Processing Theory equate consciousness with recurrent processing. But the same causal output can be produced by an “unfolded” version of the neural network in question, making this stipulation of the theories untestable.
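The core of the unfolding argument can be shown concretely. Here’s a minimal sketch (the toy network and all names are my own illustration, not from the paper): a small recurrent network and its “unfolded” feedforward counterpart, one layer per time step, compute the same input-output function even though only the first has recurrent causal structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4   # hidden units
T = 3   # time steps
W = rng.normal(size=(n, n))   # recurrent weights
U = rng.normal(size=(n, n))   # input weights
xs = rng.normal(size=(T, n))  # input sequence

# Recurrent network: the state feeds back through W at every step.
h = np.zeros(n)
for t in range(T):
    h = np.tanh(W @ h + U @ xs[t])
recurrent_out = h

# "Unfolded" feedforward network: a distinct layer per time step,
# each holding its own copy of the weights. There is no feedback
# anywhere, yet the input-output mapping is identical by construction.
layers = [(W.copy(), U.copy()) for _ in range(T)]
a = np.zeros(n)
for (W_t, U_t), x_t in zip(layers, xs):
    a = np.tanh(W_t @ a + U_t @ x_t)
feedforward_out = a

assert np.allclose(recurrent_out, feedforward_out)
```

Since both systems produce the same outputs for every input, no behavioral experiment can distinguish them, which is exactly the testability worry the argument raises for theories that tie consciousness to recurrence.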
The third criterion is the small and large network arguments. Many theories, if taken literally, can be implemented in trivially small networks of, say, a dozen neurons. Some theories, like IIT, explicitly accept this, but others such as Global Workspace Theory (GWT), Higher Order Thought Theories (HOTT), and Predictive Processing Theory (PPT), don’t. Accepting the small network situation makes a theory panpsychist in nature. The authors argue that theories subject to the small network argument that don’t accept panpsychism are effectively incomplete.
I’m not sure this is a fair criticism of GWT, HOTT, or PPT. These theories aren’t meant to be standalone, but to supplement neuroscience models. IIT, on the other hand, does claim to stand alone, being prepared to label a rock with high phi as conscious. But the others are meant to be interpreted as explaining why brain-like systems are conscious. Judging them without that context seems strawmannish to me.
A close corollary of the small network argument is the large network argument. A large network is composed of many small networks. If each of the small ones is conscious, it leaves the large network with a combination problem to explain. (Similar to the well-known issue for panpsychism.)
The last criterion is the “other systems argument”. What does the theory say about the consciousness of systems other than awake humans? The theory should be generalizable to other systems, or provide a strong argument for why only humans can be conscious. Some of the theories they tag as not adequately addressing this, such as Graziano’s Attention Schema Theory, don’t fit with what the proponents actually say; in his writing, Graziano explicitly speculates about AI systems having their own attention schemas.
I actually thought the unfolding argument was a special case of the other systems argument. I’m not sure why the authors kept them separate.
None of the theories emerge from these criteria unscathed, as this table from the paper shows:
The conclusion section makes this point (ToC=theory of consciousness):
Maybe the plethora of ToCs simply reflects the fact that we have too few experimental constraints. It is possible that with more data and a more detailed view of the subprocesses of consciousness, the mystery will evaporate, similarly to what happened with the discussion about the ‘nature’ of life. Nowadays biologists understand what life is, but there is no ‘theory of life’ (Machery, 2012). It is the entirety of subprocesses such as homeostasis, reproduction, etc., that differentiates life from non-life.
As someone who has done his share of reading in both the consciousness and neuroscience literature, this has been my suspicion for some time: we will never have just one theory of consciousness, as though consciousness were a single objective thing in the brain. Similar to life, we’ll eventually have an understanding of the complex array of processes and capabilities that make up our phenomenal experience, and that provide the intuition that other systems share it, but there won’t be just one theory for the whole thing.
That’s not to say many of these theories can’t provide insights into aspects of consciousness, what the authors call “subprocesses”, but we shouldn’t expect any one of them to be the whole story. By themselves, they will always predict either too few or too many systems as conscious.
Unless maybe I’ve overlooked something?