Victor Lamme’s recurrent processing theory (RPT) remains on the short list of theories considered plausible by the consciousness science community. It’s something of a dark horse candidate, without the level of support enjoyed by global workspace theory (GWT) or integrated information theory (IIT), though it gets more support among consciousness researchers than among general enthusiasts. The Michael Cohen study reminded me that I hadn’t really made an effort to understand RPT. I decided to rectify that this week.
Lamme put out an opinion paper in 2006 that laid out the basics, and a more detailed paper in 2010 (warning: paywall). But the basic idea is quickly summarized in the SEP article on the neuroscience of consciousness.
RPT posits that processing in sensory regions of the brain is sufficient for conscious experience. This is so, according to Lamme, even when that processing isn’t accessible for introspection or report.
Lamme points out that requiring report as evidence can be problematic. He cites the example of split brain patients. These are patients who’ve had their cerebral hemispheres separated to control severe epileptic seizures. After the procedure, they’re usually able to function normally. However, careful experiments can show that the hemispheres no longer communicate with each other, and that the left hemisphere isn’t aware of sensory input that goes to the right hemisphere, and vice versa.
Usually only the left hemisphere has language and can verbally report its experience. But Lamme asks, do we regard the right hemisphere as conscious? Most people do. (Although some scientists, such as Joseph LeDoux, do question whether the right hemisphere is actually conscious due to its lack of language.)
If we do regard the right hemisphere as having conscious experience, then Lamme argues, we should be open to the possibility that other parts of the brain may be as well. In particular, we should be open to it existing in any region where recurrent processing happens.
Communication in neural networks can be feedforward, or it can include feedback. Feedforward processing involves signals coming into the input layer and progressing one way up through the processing hierarchy, toward the higher order regions. Feedback processing runs in the other direction, with higher regions responding with signals back down to the lower regions. This can lead to a resonance where feedforward signals cause feedback signals, which cause new feedforward signals, and so on: a loop, or recurrent pattern of signalling.
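As a toy illustration (my own sketch, not anything from Lamme’s papers), the contrast between a one-way feedforward sweep and a recurrent loop can be shown in a few lines of Python, where a “higher” layer feeds a signal back down to a “lower” one until activity settles:

```python
# Toy contrast between feedforward and recurrent signalling.
# All numbers and update rules are illustrative, not a brain model.

def feedforward(stimulus, weights):
    """One-way sweep: each layer transforms the signal and passes it up."""
    signal = stimulus
    for w in weights:
        signal = max(0.0, w * signal)  # simple rectified activation
    return signal

def recurrent(stimulus, up_w, down_w, steps=10):
    """Lower and higher layers exchange signals in a loop until they settle."""
    lower, higher = stimulus, 0.0
    for _ in range(steps):
        higher = max(0.0, up_w * lower)               # feedforward: lower -> higher
        lower = max(0.0, stimulus + down_w * higher)  # feedback: higher -> lower
    return lower, higher

print(feedforward(1.0, [0.5, 0.8]))  # single sweep, then the signal is gone
print(recurrent(1.0, 0.5, 0.4))      # loop settles into a sustained pattern
```

The point of the toy is just that in the recurrent case the stimulus keeps reverberating between layers rather than passing through once, which is the kind of sustained activity RPT cares about.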
Lamme identifies four stages of sensory processing that can lead to the ability to report.
- The initial stimulus comes in and leads to superficial feedforward processing in the sensory region. There’s no guarantee the signal gets beyond this stage. Unattended and brief stimuli, for example, wouldn’t.
- The feedforward signal makes it beyond the sensory region, sweeping throughout the cortex, reaching even the frontal regions. This processing is not conscious, but it can lead to unconscious priming.
- Superficial or local recurrent processing in the sensory regions. Higher order parts of these regions respond with feedback signalling and a recurrent process is established.
- Widespread recurrent processing throughout the cortex in relation to the stimulus. This leads to binding of related content and an overall focusing of cortical processes on the stimulus. This is equivalent to entering the workspace in GWT.
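The four stages above can be read as a gating pipeline, where signal strength and attention determine how far a stimulus progresses. Here’s a schematic of that reading (my own, with made-up thresholds; the stage numbers follow the list above):

```python
# Schematic of Lamme's four stages as a gated pipeline.
# Thresholds and the role of attention here are illustrative assumptions.

def classify_stage(strength, attended):
    """Return the furthest stage a stimulus of a given strength reaches."""
    if strength < 0.2:
        return 1  # superficial feedforward processing in the sensory region
    if strength < 0.4:
        return 2  # feedforward sweep through cortex (unconscious priming)
    if not attended:
        return 3  # local recurrent processing in the sensory regions
    return 4      # widespread recurrent processing (entering the workspace)

print(classify_stage(0.1, attended=False))  # brief, weak stimulus: stage 1
print(classify_stage(0.5, attended=False))  # strong but unattended: stage 3
print(classify_stage(0.5, attended=True))   # strong and attended: stage 4
```

The interesting case for RPT is the third branch: a strong but unattended stimulus that establishes local recurrence without ever reaching the workspace.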
Lamme accepts that stage 4 is a state of consciousness. But what, he asks, makes it conscious? He considers that it must be either the location of the processing or the type of processing. As for location, he points out that the initial feedforward sweep in stage 2, which reaches widely throughout the brain, doesn’t produce conscious experience.
Therefore, it must be the type of processing, the recurrent processing that exists in stages 3 and 4. But then, why relegate consciousness only to stage 4? Stage 3 has the same type of processing as stage 4, just in a smaller scope. If recurrent processing is the necessary and sufficient condition for conscious experience, then that condition can exist in the sensory regions alone.
But what about recurrent processing, in and of itself, makes it conscious? Lamme’s answer is that synaptic plasticity is greatly enhanced in recurrent processing. In other words, we’re much more likely to remember something, to be changed by the sensory input, if it reaches a recurrent processing stage.
Lamme also argues from an IIT perspective, pointing out that IIT’s Φ (phi), the calculated quotient of consciousness, would be higher in a recurrent region than in one only doing feedforward processing. (IIT does see feedback as crucial, but I think this paper was written before later versions of IIT used the Φmax postulate to rule out talk of pieces of the system being conscious.)
Lamme points out that if recurrent processing leads to conscious experience, then that puts consciousness on strong ontological ground, and makes it easy to detect. Just look for recurrent processing. Indeed, a big part of Lamme’s argument is that we should stop letting introspection and report define our notions of consciousness and should adopt a neuroscience centered view, one that lets the neuroscience speak for itself rather than cramming it into preconceived psychological notions.
This is an interesting theory, and as usual, when explored in detail, it turns out to be more plausible than it initially sounded. But, it seems to me, it hinges on how lenient we’re prepared to be in defining consciousness. Lamme argues for a version of experience that we can’t introspect or know about, except through careful experiment. For a lot of people, this is simply discussion about the unconscious, or at most, the preconscious.
Lamme’s point is that we can remember this local recurrent processing, albeit briefly, therefore it was conscious. But this defines consciousness as simply the ability to remember something. Is that sufficient? This is a philosophical question rather than an empirical one.
In considering it, I think we should also bear in mind what’s absent. There’s no affective reaction. In other words, it doesn’t feel like anything to have this type of processing. That requires bringing in other regions of the brain which aren’t likely to be elicited until stage 4: the global workspace. (GWT does allow that it could be elicited through peripheral unconscious propagation, but it’s less likely and takes longer.)
It’s also arguable that considering the sensory regions alone outside of their role in the overall framework is artificial. Often the function of consciousness is described as enabling learning or planning. Ryota Kanai, in a blog post discussing his information generation theory of consciousness (which I highlighted a few weeks ago), argues that the function of consciousness is essentially imagination.
These functional descriptions, which often fit our intuitive grasp of what consciousness is about, require participation from the full cortex, in other words, Lamme’s stage 4. In this sense, it’s not the locations that matter, but what functionality those locations provide, something I think Lamme overlooks in his analysis.
Finally, similar to IIT’s Φ issue, I think tying consciousness only to recurrent processing risks labeling a lot of systems conscious that no one regards as conscious. For instance, it might require us to see an artificial recurrent neural network as having conscious experience.
But this theory highlights the point I made in the post on the Michael Cohen study, that there is no one finish line for consciousness. We might be able to talk about a finish line for short term iconic memory (which is largely what RPT is about), another for working memory, one for affective reactions and availability for longer term memory, and perhaps yet another for availability for report. Stage 4 may quickly enable all of these, but it seems possible for a signal to propagate along the edges and get to some of them. Whether it becomes conscious seems like something we can only determine retrospectively.
Unless of course I’m missing something? What do you think of RPT? Or of Lamme’s points about the problems of relying on introspection and self report? Should we just let the neuroscience speak for itself?