There’s an interesting new paper in Consciousness and Cognition on why causal theories such as IIT (integrated information theory) or RPT (recurrent processing theory) aren’t scientific:
How can we explain consciousness? This question has become a vibrant topic of neuroscience research in recent decades. A large body of empirical results has been accumulated, and many theories have been proposed. Certain theories suggest that consciousness should be explained in terms of brain functions, such as accessing information in a global workspace, applying higher order to lower order representations, or predictive coding. These functions could be realized by a variety of patterns of brain connectivity. Other theories, such as Information Integration Theory (IIT) and Recurrent Processing Theory (RPT), identify causal structure with consciousness. For example, according to these theories, feedforward systems are never conscious, and feedback systems always are. Here, using theorems from the theory of computation, we show that causal structure theories are either false or outside the realm of science.
To be clear, the main assertion of IIT and RPT is that it isn’t the functionality of these neural networks that makes them conscious, but their specific causal structures. According to these theories, something about those specific recurrent feedback structures generates subjective experience.
The main point the authors make is that any output that can be produced by a recurrent feedback network can also be produced by an “unfolded” feedforward network with ϕ (phi), the metric IIT uses to supposedly measure the amount of consciousness present, equal to zero. In addition, any output that can be produced by a feedforward network can also be produced by a “folded” feedback network organized to have arbitrarily high levels of ϕ. They discuss these assertions in detail in the paper’s appendices.
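To make the unfolding idea concrete, here’s a minimal sketch (my own toy illustration, not the paper’s formal construction): a small recurrent network run for a fixed number of time steps computes exactly the same output as a feedforward network built by stacking one copy of its weights per time step. The network size, the random weights, and the fixed horizon are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 4   # number of time steps to simulate
n = 5   # number of units

# A toy recurrent network: the state feeds back into itself each step.
W_rec = rng.normal(size=(n, n)) * 0.3   # recurrent (feedback) weights
W_in  = rng.normal(size=(n, n)) * 0.3   # input weights

def recurrent_net(x):
    """Run the recurrent network for T steps on a constant input x."""
    h = np.zeros(n)
    for _ in range(T):
        h = np.tanh(W_rec @ h + W_in @ x)   # feedback: h depends on the previous h
    return h

def unfolded_net(x):
    """The same computation 'unfolded' into T feedforward layers.
    Each layer gets its own copy of the weights; nothing feeds back."""
    layers = [(W_rec.copy(), W_in.copy()) for _ in range(T)]
    h = np.zeros(n)
    for W_r, W_i in layers:                 # activity only flows forward, layer to layer
        h = np.tanh(W_r @ h + W_i @ x)
    return h

x = rng.normal(size=n)
print(np.allclose(recurrent_net(x), unfolded_net(x)))   # True: identical output
```

The unfolded version has no loops in its causal structure, so on the paper’s account IIT would assign it a ϕ of zero, yet over those time steps its outputs are indistinguishable from the recurrent version’s. That is the crux of the argument.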
Of course, the proponents of IIT and RPT can always claim that even though the outputs (spike trains, behavior, etc.) are identical, the feedforward networks are not conscious. But the problem is that this leaves consciousness purely epiphenomenal, a condition with no causal influence. A lot of people do accept this notion of consciousness, but as the paper’s authors note, it leaves these theories completely outside of any scientific ability to validate or falsify them. It makes them unscientific.
Conscious-seeming behavior can, in principle, be produced by networks that these theories would say are not conscious. This brings back our old friend (nemesis?), the philosophical zombie (the behavioral variant), along with all the problems that come with that concept.
Why then do we see recurrent feedback networks in the brain? Efficiency. If you’ve ever done any computer programming, you’ll know that program loops can save a lot of memory and code. Recurrent feedback loops play a similar role. They enable a lot more processing to take place than could otherwise happen with the available substrate, although it’s always possible, at least in principle, to “unfold” the network into a larger one and accomplish the same result.
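To belabor the programming analogy, here’s a minimal sketch (a made-up example, not anything from the paper): a loop reuses the same few lines on every pass, while the “unrolled” equivalent has to spell out each pass explicitly, so the code grows with the number of iterations even though the result is identical.

```python
def sum_with_loop(values):
    # The loop body is written once and reused: constant code size,
    # no matter how many values we process.
    total = 0
    for v in values:
        total += v
    return total

def sum_unrolled_4(values):
    # The "unrolled" equivalent for exactly four values: same result,
    # but the body is duplicated once per element, and a new function
    # would be needed for every other input length.
    total = 0
    total += values[0]
    total += values[1]
    total += values[2]
    total += values[3]
    return total

data = [3, 1, 4, 1]
assert sum_with_loop(data) == sum_unrolled_4(data)   # identical results
```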
But just as in computer programming, this comes at a cost in complexity and performance, which is probably why the neural networks in the cerebellum are predominantly feedforward. For what that part of the brain needs to do, speed and reliability are the priorities. But for much of what happens in the cortex, the additional computational capacity is well worth the trade-off.
But this does set up a conundrum. It means that consciousness is more likely to be associated with the recurrent feedback regions in the brain, not because recurrence is equivalent to consciousness, but because cognition requires a lot of processing in a tight space. That means ϕ could still end up being a usable measure of whether a particular brain is currently conscious, but not necessarily a way of telling whether other systems are conscious, a nuanced distinction that I fear the proponents of IIT will ignore.
This paper gets at a nagging issue I’ve long had with IIT. It takes an aspect of how neural systems are organized and posits that that aspect, integration, in and of itself, somehow produces subjective experience. How exactly? Essentially the answer seems to be magic. It’s more a theory aimed at explaining the ghost in the machine than the functionality. And as I’ve indicated before, there’s no evidence, at least not yet, for any ghost, whether generated or channeled by the brain. There’s just the brain and what it does.
Unless of course I’m missing something?
The paper does make clear that functionalist theories of consciousness, such as GWT (global workspace theory), HOTT (higher order thought theory), or PPT (predictive processing theory), are unaffected by the unfolding argument.