After the global workspace theory (GWT) post, someone asked me if I’m now down on higher order theories (HOT). It’s fair to say I’m less enthusiastic about them than I used to be. They still might describe important components of consciousness, but the stronger assertion that they provide the primary explanation now seems dubious.
A quick reminder. GWT posits that conscious content is information that has made it into a global workspace, that is, content that has won the competition and, for a short time, is globally shared throughout the brain, either exclusively or with only a few other coherently compatible concepts. It becomes conscious by all the various processes collectively having access to it and each adapting to it within their narrow scope.
HOT, on the other hand, posits that conscious content is information for which a higher order thought of some type has been formed. In most HOTs, the higher order processing is thought to happen primarily in the prefrontal cortex.
As the paper I highlighted a while back covered, there are actually numerous theories out there under the higher order banner. But it seems like they fall into two broad camps.
In the first are versions which say that a perception is not conscious unless there is a higher order representation of that perception. Crucially in this camp, the entire conscious perception is in the higher order version. If, due to some injury or pathology, the higher order representation were to differ from the lower order one, most advocates of these theories say that it's the higher order one we'd be conscious of, even if the lower order one were missing entirely.
Even prior to reading up on GWT, I had a couple of issues with this version of HOT. My first concern is that it seems computationally expensive and redundant. Why would the nervous system evolve to form the same imagery twice? We know neural processing is metabolically expensive. It seems unlikely evolution would have settled on such an arrangement, at least unless there was substantial value to it, which hasn't been demonstrated yet.
It also raises an interesting question. If we can be conscious of a higher order representation without the lower order one, why then, from an explanatory strategy point of view, do we need the lower order one? In other words, why do we need the two tier system if one (the higher tier) is sufficient? Why not just have one sufficient tier, the lower order one?
The HOTs I found more plausible were in the second camp, and are often referred to as dispositional or dual content theories. In these theories, the higher order thought or representation doesn’t completely replace the lower order one. It just adds additional elements. This has the benefit of making the redundancy issue disappear. In this version, most of the conscious perception comes from the lower order representations, with the higher order ones adding feelings or judgments. This content becomes conscious by its availability to the higher order processing regions.
But this then raises another question. What about the higher order region makes it conscious? By making the region itself, the location, the crucial factor, we find ourselves skirting Cartesian materialism, a physicalized dualism: the idea that consciousness happens in a relatively small portion of the brain. (Other versions of this type of thinking locate consciousness in various locations such as the brainstem, thalamus, or hippocampus.)
The issue here is that we still face the same problem we had when considering the whole brain. What about the processing of that particular region makes it a conscious audience? Only, since now we’re dealing with a subset of the brain, the challenge is tougher, because it has to be solved with less substrate. (A lot less with many of the other versions. At least the prefrontal cortex in humans is relatively vast.)
We can get around this issue by positing that the higher order regions make their results available back into the global workspace, that is, by making the entire brain the audience. It's not the higher order region itself which is conscious. Its contents become conscious by being made accessible to the vast collection of unconscious processes throughout the brain, each of which acts on it in its own manner, collectively making that content conscious.
But now we’re back to consciousness involving the workspace and its audience processes. HOT has dissolved into simply being part of the overall GWT framework. In other words, we don’t need it, at least not as a theory, in and of itself, that explains consciousness.
None of this is to say higher order processing isn't a major part of human consciousness. Michael Graziano's attention schema theory, for instance, might well still have a role to play in providing top-down control of attention, and in providing our intuitive sense of how attention works. Other higher order processes provide metacognition, imagination, and what Baars calls "feelings of knowing," among many other things.
They’re just not the sole domain of consciousness. If many of them were knocked out, the resulting system would still be able to have experiences, experiences that could lay down new memories. It’s just that the experience would be simpler, less rich.
Finally, it’s worth noting that David Rosenthal, the original author of HOT, makes this point in response to Michael Graziano’s attempted synthesis of HOT, GWT, and his own attention schema theory (AST):
Graziano and colleagues see this synthesis as supporting their claim that AST, GWT, and HO theory “should not be viewed as rivals, but as partial perspectives on a deeper mechanism.” But the HO theory that figures in this synthesis only nominally resembles contemporary HO theories of consciousness. Those theories rely not on an internal model of information processing, but on our awareness of psychological states that we naturally classify as conscious. HO theories rely on what I have called (2005) the transitivity principle, which holds that a psychological state is conscious only if one is in some suitable way aware of that state.
This implies that consciousness is introspection. Admittedly, there is precedent going back to John Locke for defining consciousness as introspection. (Locke's specific definition was "the perception of what passes in a man's own mind".) Doing so dramatically reduces the number of species that we consider to be conscious, perhaps down to just humans, non-infant humans, to be precise. I toyed with this definition a few years ago, before deciding that it doesn't fit most people's intuitions. (And when it comes to definitions of consciousness, our intuitions are really all we have.)
It ignores the fact that we are often not introspecting while we're conscious. And much of what we introspect goes on in animals (to varying degrees, depending on the species), or in human babies for that matter, even if they themselves can't introspect it. It also ignores the fact that if a human, through brain injury or pathology, loses the ability to introspect, but still shows an awareness of their world, we're going to regard them as conscious.
So HOT doesn't hold the appeal for me it did throughout much of 2019, although new empirical results could always change that in the future.
What do you think? Am I missing benefits of HOT? Or issues with GWT?