Pitts describes the intention of this competition as “to kill one or both theories,” but adds that while he is unsure that either will be definitively disproved, both theories have a good chance of being critically challenged. It’s expected to take three years for the experiments to be conducted and the data to be analyzed before a verdict is reached.
Three years. And of course there remains no guarantee the results will be decisive. Sigh.
I don’t particularly need the results to tell me which theory is more plausible. GWT is probably not the final theory, but it currently feels more grounded than IIT.
Still, it would be nice to get insights on the back-versus-front-of-the-brain question. I’m expecting the answer to be that the whole brain is usually involved, but that consciousness can get by in reduced form with only the posterior regions. (Assuming the necessary subcortical regions remain functional.) Having only those regions might yield a sensory consciousness with no emotional feeling, a type of extreme akinetic mutism.
I suppose it’s conceivable consciousness could also get by with just the frontal regions, but it seems like it would be without any mental imagery (except for olfactory imagery), just blind feelings, which seems pretty desolate. On the other hand, it might amount to what the forebrain of fish and amphibians has, since their non-smell senses go only to their midbrain.
Oh well. I’m sure we’ll have plenty of other studies and papers to entertain us in the meantime!
After the global workspace theory (GWT) post, someone asked me if I’m now down on higher order theories (HOT). It’s fair to say I’m less enthusiastic about them than I used to be. They still might describe important components of consciousness, but the stronger assertion that they provide the primary explanation now seems dubious.
A quick reminder. GWT posits that conscious content is information that has made it into a global workspace, that is, content that has won the competition and, for a short time, is globally shared throughout the brain, either exclusively or with only a few other coherently compatible concepts. It becomes conscious by all the various processes collectively having access to it and each adapting to it within their narrow scope.
HOT, on the other hand, posits that conscious content is information for which a higher order thought of some type has been formed. In most HOTs, the higher order processing is thought to happen primarily in the prefrontal cortex.
As the paper I highlighted a while back covered, there are actually numerous theories out there under the higher order banner. But it seems like they fall into two broad camps.
In the first are versions which say that a perception is not conscious unless there is a higher order representation of that perception. Crucially, in this camp the entire conscious perception is in the higher order version. If, due to some injury or pathology, the higher order representation were different from the lower order one, most advocates of these theories say that it’s the higher order one we’d be conscious of, even if the lower order one were missing entirely.
Even prior to reading up on GWT, I had a couple of issues with this version of HOT. My first concern is that it seems computationally expensive and redundant. Why would the nervous system evolve to form the same imagery twice? We know neural processing is metabolically expensive. It seems unlikely evolution would have settled on such an arrangement, at least unless there was substantial value to it, which hasn’t been demonstrated yet.
It also raises an interesting question. If we can be conscious of a higher order representation without the lower order one, why then, from an explanatory strategy point of view, do we need the lower order one? In other words, why do we need the two tier system if one (the higher tier) is sufficient? Why not just have one sufficient tier, the lower order one?
The HOTs I found more plausible were in the second camp, and are often referred to as dispositional or dual content theories. In these theories, the higher order thought or representation doesn’t completely replace the lower order one. It just adds additional elements. This has the benefit of making the redundancy issue disappear. In this version, most of the conscious perception comes from the lower order representations, with the higher order ones adding feelings or judgments. This content becomes conscious by its availability to the higher order processing regions.
But this then raises another question. What about the higher order region makes it conscious? By making the region itself, the location, the crucial factor, we find ourselves flirting with Cartesian materialism, a kind of physical dualism, the idea that consciousness happens in a relatively small portion of the brain. (Other versions of this type of thinking locate consciousness in various locations such as the brainstem, thalamus, or hippocampus.)
The issue here is that we still face the same problem we had when considering the whole brain. What about the processing of that particular region makes it a conscious audience? Only, since now we’re dealing with a subset of the brain, the challenge is tougher, because it has to be solved with less substrate. (A lot less with many of the other versions. At least the prefrontal cortex in humans is relatively vast.)
We can get around this issue by positing that the higher order regions make their results available back into the global workspace, that is, by making the entire brain the audience. It’s not the higher order region itself which is conscious. Its contents become conscious by being made accessible to the vast collection of unconscious processes throughout the brain, each of which acts on it in its own manner, collectively making that content conscious.
But now we’re back to consciousness involving the workspace and its audience processes. HOT has dissolved into simply being part of the overall GWT framework. In other words, we don’t need it, at least not as a theory, in and of itself, that explains consciousness.
None of this is to say higher order processing isn’t a major part of human consciousness. Michael Graziano’s attention schema theory, for instance, might well still have a role to play in providing top down control of attention, and providing our intuitive sense of how it works. The other higher order processes provide metacognition, imagination, and what Baars calls “feelings of knowing,” among many other things.
They’re just not the sole domain of consciousness. If many of them were knocked out, the resulting system would still be able to have experiences, experiences that could lay down new memories. It’s just that the experience would be simpler, less rich.
Graziano and colleagues see this synthesis as supporting their claim that AST, GWT, and HO theory “should not be viewed as rivals, but as partial perspectives on a deeper mechanism.” But the HO theory that figures in this synthesis only nominally resembles contemporary HO theories of consciousness. Those theories rely not on an internal model of information processing, but on our awareness of psychological states that we naturally classify as conscious. HO theories rely on what I have called (2005) the transitivity principle, which holds that a psychological state is conscious only if one is in some suitable way aware of that state.
This implies that consciousness is introspection. Admittedly, there is precedent going back to John Locke for defining consciousness as introspection. (Locke’s specific definition was “the perception of what passes in a man’s own mind”.) Doing so dramatically reduces the number of species that we consider to be conscious, perhaps down to just humans, non-infant humans to be precise. I toyed with this definition a few years ago, before deciding that it doesn’t fit most people’s intuitions. (And when it comes to definitions of consciousness, our intuitions are really all we have.)
It ignores the fact that we are often not introspecting while we’re conscious. And much of what we introspect goes on in animals (in varying degrees depending on species), or human babies for that matter, even if they themselves can’t introspect it. It also ignores the fact that if a human, through brain injury or pathology, loses the ability to introspect, but still shows an awareness of their world, we’re going to regard them as conscious.
So HOT doesn’t hold the appeal for me it did throughout much of 2019. Although new empirical results could always change that in the future.
What do you think? Am I missing benefits of HOT? Or issues with GWT?
Lately I’ve been reading up on global workspace theory (GWT). In a survey published last year, among general consciousness enthusiasts, integrated information theory (IIT) was the most popular theory, followed closely by GWT. However, among active consciousness researchers, GWT was seen as the most promising by far (although no theory garnered a majority). Since seeing those results, I’ve been curious about why.
One reason might be that GWT has been around a long time, having first been proposed by Bernard Baars in 1988, with periodic updates all recently republished in his new book. It’s received a lot of development and has spawned numerous variants. Daniel Dennett’s multiple drafts model is one. But perhaps the one with the most current support is Stanislas Dehaene’s global neuronal workspace, which I read and wrote about earlier this year.
All of the variants posit that for an item to make it into consciousness, it has to enter a global workspace in the brain. This is most commonly described using a theater metaphor.
Imagine a play in progress in a theater. A light shines down on the stage on the currently most relevant actor or events, the light of consciousness. The backstage personnel enabling the play, along with the director and other controlling personnel, are not in the light. They’re in the unconscious dark. The audience, likewise, is in the dark. That is, the audience members are unconscious information processing modules.
This last point is crucial, because this is not the infamous Cartesian theater, with an audience of one conscious homunculus, a little person observing events. Such a notion merely defers the explanation. If the homunculus provides consciousness, then does it too have its own homunculus? And that one yet its own, in an infinite regress? By stipulating that the audience is not conscious, we avoid this trap.
That said, one issue I have with this metaphor is the passivity of the audience. Consider instead a large meeting room with a lot of rowdy people. There is someone chairing the meeting, but their control is tenuous, with lots of people attempting to talk. Every so often, someone manages to gain the floor and make a speech, conveying their message throughout the room. At least until the next person, or coalition of people, either adds to their message, or shouts them down and takes over the floor.
Most of the talking in the room takes place in low level side conversations. But the general room “consciousness”, that is, the common things everyone is aware of, consists only of what’s conveyed in the speeches, even though all the side conversations are constantly changing the tenor and state of people’s opinions throughout the room, and could affect future speeches.
I think this alternate metaphor makes it clearer what it means to enter the workspace. In all of the theories, the workspace is not a particular location in the brain. To “enter” it is to be broadcast throughout the brain, or at least the cortical-thalamic system.
How does a piece of information, or a coalition of information, accomplish this? There is a competition. Various modules in the brain attempt to propagate their signals. In many cases, actually in most cases, they are able to connect up to one or a few other modules and accomplish a task (the side conversations). If they do, the processing involved is unconscious.
But in some cases, the signal from a particular module resonates with information from other modules, and a coalition is formed, which results in the information dominating one of the major integration hubs in the brain and brings the competition to the next level.
At some point, a signal succeeds in dominating the frontoparietal network, all competing signals are massively inhibited, and the winning signal is broadcast throughout the cortical-thalamic system, with binding recurrent connections forming circuits between the originating and receiving regions. The signal achieves what Daniel Dennett calls “fame in the brain”. It is made available to all the unconscious specialty modules.
Many of these modules will respond with their own information, which again might be used by one or more other modules unconsciously. Or the new information might excite enough other modules to win the competition and be the next broadcast throughout the workspace. The stream of consciousness is the series of images, concepts, feelings, or impulses that win the competition.
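For what it’s worth, this competition-and-broadcast cycle can be caricatured in a few lines of Python. This is purely my own toy sketch, not code from any GWT model; the module names and the random “activation strength” are invented stand-ins for salience:

```python
import random

class Module:
    """An unconscious specialist process: one member of the 'audience'."""
    def __init__(self, name):
        self.name = name
        self.received = []  # broadcasts this module has had access to

    def propose(self):
        # Each module bids for the workspace; random noise stands in
        # for the activation strength of its signal.
        return (f"signal from {self.name}", random.random())

    def receive(self, content):
        # The module adapts to the broadcast within its own narrow scope.
        self.received.append(content)

def workspace_cycle(modules):
    """One ignition: the strongest bid wins and is broadcast to everyone."""
    bids = [m.propose() for m in modules]
    winner, _strength = max(bids, key=lambda b: b[1])
    for m in modules:   # global broadcast: every module gets access
        m.receive(winner)
    return winner       # this moment's conscious content

modules = [Module(n) for n in ("vision", "memory", "affect", "planning")]
# The stream of consciousness: the sequence of competition winners.
stream = [workspace_cycle(modules) for _ in range(3)]
```

The key structural point the sketch captures is that “entering the workspace” is nothing but winning the competition and being made available to every module; there is no separate conscious component anywhere in the loop.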
One question that has long concerned me about GWT: why does simply being in the workspace cause something to be conscious? I think the answer is it’s the audience that collectively makes it so.
Consider Dennett’s “fame in the brain” metaphor. If you were to meet a famous person, would you find anything about the person, in and of themselves, that indicated fame? They might be attractive, an athlete, funny, or extraordinary in some other fashion, but in all cases you could meet non-famous people with those same traits. What then gives them the quality of fame? The fact that large numbers of other people know who they are. Fame isn’t something they exude. It’s a quality granted to them by large numbers of people, one that often gives the famous person causal influence in society.
Similarly, there’s nothing about a piece of information in the brain, in and of itself, that makes it either conscious or unconscious. It becomes a piece of conscious content when it is accessible by several systems throughout the brain, memory systems that might flag it for long term retention, affect systems that might provide valenced reactions, action systems that might use it in planning, or introspective and language systems that might use it for self report. All of these systems end up giving the information far more causal influence than it would have had if it remained isolated and unconscious.
Admittedly, this is a description of access consciousness. Someone might ask how this implies phenomenal consciousness. GWT proponents tend to dismiss the philosophical idea that phenomenal consciousness is something separate and apart from access. I agree with them. To me, phenomenal consciousness is what access consciousness is like from the inside.
But I realize many people don’t see it that way. I suspect many might accept GWT but feel the need to supplement it with additional philosophy to address the phenomenal issue. Peter Carruthers, in his latest book, attempts to philosophically demonstrate how GWT explains phenomenal experience, but since he’s a “qualia irrealist”, I’m not sure many people seeking that kind of explanation will find his account persuasive.
There are a lot of nuanced differences between the various global workspace theories. For example, Baars most often speaks of the workspace as being the entire cortical-thalamic core. Dehaene tends to emphasize the role of the prefrontal cortex, although he admits that parietal, temporal, and other regions in the frontoparietal network are major players.
Baars emphasizes that processing in any one region of the cortical-thalamic core can be conscious or unconscious. Any region can potentially win the competition and get its contents into the workspace.
Dehaene is more reserved, noting that some regions, particularly executive ones, have more connectivity than others, and that very early sensory regions don’t necessarily seem capable of generating workspace content, except indirectly through later sensory layers.
Both agree that subcortical regions generally can’t contribute directly to the workspace, although Baars sees the hippocampus as a possible exception.
Both Dehaene and Baars think it’s likely that many other animal species have global workspaces and are therefore conscious. Baars seems confident that any animal with a cortex or a pallium has a workspace, which I think would include all vertebrates. Dehaene is again a bit more cautious, but he sees all mammals as likely having a workspace, and possibly birds. Peter Carruthers, who converted from his own particular higher order theory to GWT, doesn’t think there’s a fact of the matter on animal consciousness.
A common criticism of GWTs is that they are theories of cognition rather than consciousness. Since to me, any scientific theory of consciousness is going to be a cognitive one, I don’t see that as a drawback. And I realized while reading about them that they also function as theories of general intelligence, the holy grail of AI research. Which fits since GWT actually has origins in AI research.
GWTs also seem able to account for situations where large parts of the cortex are injured or destroyed. Unlike higher order theories (HOT), most of which seem dependent on the prefrontal cortex, GWTs imply that if large parts of the frontal regions were lost, the workspace would be dramatically reduced but not eliminated. Capabilities would be lost, but consciousness would still exist in a reduced form.
I also now understand why the overview paper earlier this year on HOT classified GWTs as first order theories, since first order representations can win the workspace competition as well as higher order or executive ones. This allows GWTs to avoid many of the computational redundancies implicit in HOT, redundancies that might seem unlikely from an evolutionary perspective.
And I’ve recently realized that GWT resonates with my own intuition from reading cognitive neuroscience, which I described in a post a while back, that subjective experience is communication between the sensory, affective, and planning regions of the brain. The broadcasting workspace seems like the medium of that communication.
GWTs are scientific theories, so they’ll either stand or fall on empirical research. I was impressed with the wealth of empirical data discussed in Dehaene’s and Baars’ books. Only time will tell, but I now understand why so many consciousness experts are in this camp.
What do you think? Does this theory sound promising? Or do you see problems with it? What stands out to you as either its strengths or weaknesses?
Back in 2013, in his book, Consciousness and the Social Brain, Michael Graziano pointed out that it’s pretty common for theories of consciousness to explain things up to a certain point, then have a magic step. For example, integrated information theory posits that structural integration is consciousness, the various recurrent theories posit that the recurrence itself is consciousness, and quantum theories often assert that consciousness is in the wave function collapse. Why are these things in particular conscious? It’s usually left unsaid, something that’s supposed to simply be accepted.
Christof Koch, in his book, Consciousness: Confessions of a Romantic Reductionist, relates that once when presenting a theory about layer 5 neurons in the visual cortex firing rhythmically possibly being related to consciousness, he was asked by the neurologist Volker Henn how his theory was really any different from Descartes’ locating the soul in the pineal gland. Koch’s language and concepts were more modern, Henn argued, but exactly how consciousness arose from that activity was still just as mysterious as how it was supposed to have arisen from the pineal gland.
Koch said he responded to Henn with a promissory note, an IOU, that eventually science would get to the full causal explanation. However, Koch goes on to describe that he eventually concluded it was hopeless, that subjectivity was too radically different to actually emerge from physical systems. It led him to panpsychism and integrated information theory (IIT). (Although in his more recent book, he seems to have backed off of panpsychism, now seeing IIT as an alternative to, rather than elaboration of, panpsychism.)
Koch’s conclusion was in many ways similar to David Chalmers’ conclusion, that consciousness is irreducible and fundamental, making property dualism inevitable, and leading Chalmers to coin the famous “hard problem” of consciousness. These conclusions also caused Chalmers to flirt with panpsychism.
Graziano, in acknowledging the magic step that exists in most consciousness theories, argued that such theories were incomplete. A successful theory, he argued, needed to avoid such a step. But is this possible? Arguably every theory of consciousness has these promissory notes, these IOUs. The question might be how small can we make them.
Graziano’s approach was to ask, what exactly are we trying to explain? How do we know that’s what needs to be explained? We can say “consciousness”, but what does that mean? How do we know we’re conscious? Someone could reply that the only way we could even ask that question is as a conscious entity, but that’s begging the question. What exactly are we talking about here?
It’s commonly understood that our senses can be fooled. We’ve all seen the visual illusions that, as hard as we try, we can’t see through. Our lower level visual circuitry simply won’t allow it. And the possibility that we might be a brain in a vat somewhere, or be living in a simulation, is often taken seriously by a lot of people.
What people have a much harder time accepting is the idea that our inner senses might have the same limitations. Our sense of what happens in our own mind feels direct and privileged in a manner that our outer senses don’t. In many ways, what these inner senses are telling us seems like the most primal thing we can ever know. But if these senses aren’t accurate, then, much like the visual illusions, these are not things we can see through, no matter how hard we try.
In his new book, Rethinking Consciousness: A Scientific Theory of Subjective Experience, Graziano discusses an interesting example. Lord Horatio Nelson, the great British admiral, lost an arm in combat. Like many amputees, he suffered from phantom limb syndrome, painful sensations from the nonexistent limb. He famously claimed that he had proved the existence of an afterlife, since if his arm could have a ghost, then so could the rest of him.
Phantom limb syndrome appears to arise from a contradiction between the brain’s body schema, its model of the body, and its actual body. Strangely enough, as V. S. Ramachandran discussed in his book, The Tell-Tale Brain, the reverse can also happen after a stroke or other brain injury. A patient’s body schema can become damaged so that it no longer includes a limb that’s physically still there. They no longer feel the limb is really theirs anymore. For some, the feeling is so strong that they seek to have the limb amputated.
Importantly, in both cases, the person is unable to see past the issue. The body schema is simply too powerful, too primal, and operates at a pre-conscious level. It can be doubted intellectually, but not intuitively, not at a primal level.
If the body schema exerts that kind of power, imagine what power a schema that tells us about our own mental life must exert.
So for Graziano, the question isn’t how to explain what our intuitive understanding of consciousness tells us about. Instead, what needs to be explained is why we have that intuitive understanding. In many ways, Graziano described what Chalmers would later call the “meta-problem of consciousness”, not the hard problem, but the problem of why we think there is a hard problem. (If Graziano had Chalmers’ talent for naming philosophical concepts, we might have started talking about the meta-problem in 2013.)
Of course, Graziano’s answer is that we have a model of the messy and emergent process of attention, a schema, a higher order representation of it at the highest global workspace level, which we use to control it in top down fashion. But while the model is effective in providing that feedback and control, it doesn’t provide accurate information for actually understanding the mind. Indeed, its simplified model of attention, portraying it as an ethereal fluid or energy that can be concentrated in or around the head, but not necessarily of it, is actively misleading. There’s a reason why we are all intuitive dualists.
At this point we reach a crucial juncture, a fork in the road. You will either conclude that Graziano’s contention (and similar ones from other cognitive scientists) is an attempt to pull a fast one, a cheat, a dodge from confronting the real problem, or that it’s plausible. If you can’t accept it, then consciousness likely remains an intractable mystery for you, and concepts like IIT, panpsychism, quantum consciousness, and a host of other exotic solutions may appear necessary.
But if you can accept that introspection is unreliable, then a host of grounded neuroscience theories, such as global workspace and higher order thought, including the attention schema, become plausible. Consciousness looks scientifically tractable, in a manner that could someday result in conscious machines, and maybe even mind uploading.
I long ago took the fork that accepts the limits of introspection, and the views I’ve expressed on this blog reflect it. But I’ve been reminded in recent conversations that this is a fork many of you haven’t taken. It leads to very different underlying assumptions, something we should be cognizant of in our discussions.
So which fork have you taken? And why do you think it’s the correct choice? Or do you think there even is a real choice here?
Drawing upon empirical research into consciousness, we propose a hypothesis that a function of consciousness is to internally generate counterfactual representations detached from the current sensory events.
Interactions with generated representations allow an agent to perform a variety of non-reflexive behaviors associated with consciousness, such as intention, imagination, planning, short-term memory, attention, curiosity, and creativity.
Applying the predictive coding framework, we propose that information generation is performed by top-down predictions in the brain.
The hypothesis suggests that consciousness emerged in evolution when organisms gained the ability to perform internal simulations using generative models.
The theory described broadly matches an idea I’ve pondered several times, that consciousness is about planning, that is, about performing simulations of action-sensory scenarios, enabling non-reflexive behavior. Although the authors situate their ideas within the frameworks put forth by global workspace and predictive coding theories.
But their theory is more specifically about the internal generation of content, content that might be a prediction about current sensory input, or that might be counterfactual, that is, content not currently being sensed. In many ways this is similar to the predictive coding framework, but it’s not identical.
In the brain, sensory information flows into early sensory processing regions and proceeds up through neural processing layers into higher order regions. But this encoding stage is only a small fraction of the processing happening in these regions. Most of it is feedback: top down decoding from the higher order areas back down to the early sensory regions.
In predictive coding theory, this feedback propagation is the prediction about what is being sensed, and the feedforward portion is actually error correction to the prediction. The idea is that, early in life, most of what we get is error correction, until the models get built up, and gradually, as we mature, the predictions become more dominant.
Importantly, when we’re imagining something, that imagined counterfactual content is all feedback propagation, since what’s being imagined generally has little to no relation to the sensory data coming in. Imagination is less vivid than perceiving current sensations because the error correction doesn’t reinforce the imagery. (Interestingly, the authors argue that imagery in dreams is more vivid because there’s no sensory error correction to dilute it.)
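The split between top-down prediction and bottom-up error correction can be made concrete with a one-variable toy of my own devising (not the authors’ model, and the numbers are arbitrary): perception blends the generative prediction with an error signal from the input, while imagination runs the generative side alone.

```python
def perceive(prediction, sensory_input, error_weight=0.5):
    """Percept = top-down prediction, corrected by bottom-up error."""
    error = sensory_input - prediction          # feedforward error signal
    return prediction + error_weight * error    # error-corrected estimate

def imagine(prediction):
    """Imagination: pure top-down generation, with no error correction."""
    return prediction

percept = perceive(prediction=0.9, sensory_input=1.0)  # pulled toward the input
imagery = imagine(prediction=0.9)                      # anchored only by the model
```

On this toy, the vividness asymmetry is just the presence or absence of the error term: the percept is continually pulled toward the incoming data, while the imagery has nothing reinforcing it.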
The information generation theory is that this prediction feedback is what gives rise to conscious experience. This theory could be seen as similar to recurrent processing theories, although the authors seem to deliberately distance themselves from such thinking by making their point with a non-recurrent example, specifically splitting the encoding and decoding stages into two feedforward only networks.
The authors note that there is a strong and weak version to their hypothesis. The weaker version is that this kind of processing is a necessary component of consciousness, and is therefore an indicator of it. The stronger version is that this kind of information generation is consciousness. They argue that further research is necessary to test both versions.
The hypothesis does fit with several other theories (each having their own strong and weak claim). The authors even try to fit it with integrated information theory, although they admit the details are problematic.
This is an interesting paper and theory. My initial reaction is that their weaker hypothesis seems far more plausible, although in that stance it could be seen as an elaboration of other theories, albeit one that identifies important causal factors. The stronger hypothesis, I think, would require substantially more justification as to why that kind of processing, in and of itself, is conscious.
That’s my initial view. It could change on further reflection. What do you think?
When perception differs from the physical stimulus, as it does for visual illusions and binocular rivalry, the opportunity arises to localize where perception emerges in the visual processing hierarchy. Representations prior to that stage differ from the eventual conscious percept even though they provide input to it. Here, we investigate where and how a remarkable misperception of position emerges in the brain. This “double-drift” illusion causes a dramatic mismatch between retinal and perceived location, producing a perceived motion path that can differ from its physical path by 45° or more. The deviations in the perceived trajectory can accumulate over at least a second, whereas other motion-induced position shifts accumulate over 80–100 ms before saturating. Using fMRI and multivariate pattern analysis, we find that the illusory path does not share activity patterns with a matched physical path in any early visual areas. In contrast, a whole-brain searchlight analysis reveals a shared representation in anterior regions of the brain. These higher-order areas would have the longer time constants required to accumulate the small moment-to-moment position offsets that presumably originate in early visual cortical areas and then transform these sensory inputs into a final conscious percept. The dissociation between perception and the activity in early sensory cortex suggests that consciously perceived position does not emerge in what is traditionally regarded as the visual system but instead emerges at a higher level.
Subjects were shown a visual stimulus that led to a perceptual illusion. But the illusion correlated more with activity in the frontal lobes than in the visual cortex in the back of the brain. Activity in the visual cortex correlated with the actual visual stimuli rather than the illusion.
And, from the Discussion section of the paper:
Interestingly, the significant cross-classification clusters found in our searchlight analyses were primarily in anterior parts of the brain, such as the lateral prefrontal cortex (LPFC), dACC (the cingulo-opercular control network), and medial prefrontal cortex (MPFC), that are known to be involved in executive control [18, 19, 20, 21, 22] and working-memory-related processing [23, 24, 25, 26]
This seems pretty much in line with predictions from both global workspace theory (GWT) and higher order theories (HOT) of consciousness. And it seems like a strike against integrated information theory (IIT) and local first order theories. At least if the results hold up. I imagine the proponents of posterior consciousness theories will be combing through the methodology to see if there are any cracks.
This is actually a stronger result than I would have expected. I was open to the possibility that conscious visual perception happened in the posterior regions, but that the full package, including associated affects, adding the felt quality of the experience, didn’t get integrated until the information reached the frontal lobes. But it’s looking like the frontal lobes might have it all as far as conscious perception.
It’ll be interesting to see if the Templeton-funded competition comes to the same conclusion, and if not, what differences in the data lead to the discrepancy. But right now PFC-centered theories seem to have a head start.
One of the ongoing debates in neuroscience is on the nature of emotions, where they originate, where they are felt, and how innate versus learned they are.
One view, championed by the late Jaak Panksepp and his followers, sees emotions as innate, primal, and subcortical. Its adherents allow that the more complex social emotions, such as shame, involve social learning, but see states such as fear, anger, and grief as innate and physiological. They generally do not distinguish between the feeling of the emotion and the underlying reflexive circuitry.
The opposing view, championed by constructionists such as Lisa Feldman Barrett and Joseph Ledoux, sees emotional feelings as constructed interpretations built on top of lower level survival circuitry. In many ways, the constructionist view resembles classical James-Lange theory, which held that a stimulus causes physiological changes, which we then interoceptively feel and interpret, with that interpretation being the felt emotion. But modern constructivists are not strict adherents of James-Lange. The brain has extensive connections between the regions that initiate the physiological changes and the ones where the feeling of those changes occurs. The interoceptive resonance is undoubtedly an important input to the experience, but it’s only part of it.
In the past, when discussing this debate between basic emotions and constructed ones, I’ve noted that much of it, perhaps all of it, comes down to definition disputes. Ledoux himself seemed to acknowledge this in a podcast interview, where he noted that he and his friend, Antonio Damasio, who is more in the basic emotion camp, agree on all the scientific facts. They just disagree on how to interpret them.
In other words, it may be that there isn’t a fact of the matter answer to this debate. When faced with these scenarios, I think there’s value in laying out the different positions and their relations. Yes, we’re talking layers and hierarchy again. These frameworks are a simplification, perhaps an oversimplification, but they help me keep things straight.
As I noted when discussing the layers of consciousness, this is not any kind of new theory. I fully confess to not having the expertise for that. It’s really just a way of relating the major views.
The hierarchy of emotional feeling:
1. Survival circuits: A stimulus comes in and triggers reflexive survival circuits. This causes physiological changes: heart rate, blood pressure, breathing rate, arousal levels, etc. If not inhibited by higher level circuitry, it may lead to automatic action.
2. Communication from the survival circuitry to higher level circuitry: The signals rise from subcortical regions in the brainstem and limbic system, including the amygdala, to cortical regions. It’s important to note that this communication is two way. The higher level circuitry, particularly the prefrontal cortex, has the ability to inhibit the motor action aspects of the survival circuits.
3. Interoception loop: The effects of the physiological changes are interoceptively sensed, adding to and reinforcing the signal in 2. (Note: “interoception” refers to sensing the internal state of the body.)
4. Construction of the representation: A mental representation of 2 and 3, along with what they might mean in a broader autobiographical context, is built. Ledoux calls this a “schema”, positing that we have a fear schema, an anger schema, etc. Whatever we call it, it’s a model, or galaxy of models, related to the signal from the reflexive survival circuits.
5. Utilization of the representation: The representation from 4 is available for cognitive access. In humans, this typically involves using it in action-sensory scenario simulations, although it may be used for much simpler processing: a single prediction of cause and effect. The result is that some survival circuit actions are inhibited and some allowed to continue. Note that sometimes all of them are inhibited. (The animal freezes.)
6. Introspective access to the representation: For a species that has introspection (humans and possibly some great apes and other relatively intelligent species), this allows knowledge that the feeling is being experienced.
Note that this list is in the typical “feed forward” order, where a feeling is externally stimulated. It’s possible for someone to initially have 4 and 5 in the absence of 1-3, which can cause a “feed back” signal down to 1, and then up again through the layers. In other words, thinking about something that makes you angry can set up the loop that makes you feel angry.
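As a rough illustration, the feed-forward pass through layers 1-5 can be sketched as a toy pipeline. To be clear, all the function names, values, and thresholds here are hypothetical, purely for illustration; nothing below comes from any actual neuroscience model.

```python
# Toy sketch of the layered flow described above. All names, numbers, and
# thresholds are hypothetical and purely illustrative.

def survival_circuits(stimulus):
    # Layer 1: reflexive response with physiological changes
    threat = stimulus == "threat"
    return {"arousal": 0.9 if threat else 0.1,
            "impulse": "flee" if threat else "approach"}

def interoception(signal):
    # Layer 3: the physiological changes are sensed, reinforcing the signal
    return {"felt_arousal": signal["arousal"]}

def build_schema(signal, body):
    # Layer 4: construct a representation (Ledoux's "schema") of 2 and 3
    label = "fear" if body["felt_arousal"] > 0.5 else "calm"
    return {"label": label, "impulse": signal["impulse"]}

def utilize(schema):
    # Layer 5: use the representation; inhibit or allow survival actions
    inhibit = schema["label"] == "fear"  # here all actions inhibited: freezing
    return {"schema": schema,
            "action": None if inhibit else schema["impulse"]}

def feel(stimulus):
    signal = survival_circuits(stimulus)   # layer 1 (rising via layer 2)
    body = interoception(signal)           # layer 3
    schema = build_schema(signal, body)    # layer 4
    return utilize(schema)                 # layer 5; layer 6 would introspect this
```

The “feed back” case would correspond to constructing a schema directly (layers 4 and 5 first) and feeding its output back into the survival circuits, setting up the loop.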
So, where in this is “the emotion”? A basic emotion advocate will see it happening in 1, although whether it is a conscious feeling at this stage depends on which one you ask. Damasio seems to see this stage as pre-conscious. He explicitly defines “emotion” as the survival sequence, the “action program.”
Constructivists like Barrett and Ledoux seem to only see the conscious emotional feeling as existing in layer 6. But this seems to require consciousness in the full autonoetic (meta-aware) fashion that only humans and perhaps a few other species possess. In other words, in their view, only humans and maybe a few other species have emotions.
My own view is that the conscious feeling of the emotion happens in 5, whether or not it’s being introspectively accessed. This substantially widens the number of species who can be regarded as having emotional feelings, inclusive of all mammals and birds, although the complexity of the feeling varies tremendously depending on the intelligence of the species. A mouse’s emotional feelings are far simpler than a chimpanzee’s.
To me, it feels like the Panksepp camp’s attribution of consciousness to layer 1 is stretching the concept of “consciousness” too far. On the other hand, Barrett’s and Ledoux’s requirement for full autonoetic consciousness goes too far in the other restrictive direction. And they seem reluctant to admit that layer 2 does provide a link between the higher level representations / schemas / concepts and the lower level impulses.
My view is that consciousness is composed of cognition that, in humans, is within the scope of introspection. Much of that same cognition also exists in other species, with varying levels of sophistication, even if they themselves can’t introspect it. That means that a dog can be angry, although anger doesn’t have the same scope of meaning for them as it does for us.
But the thing to understand is that these are philosophical conclusions, not scientific ones. As far as I know, what I laid out in the layers represents the current scientific consensus of both camps. Consciousness is in the eye of the beholder, and so, it appears, are conscious feelings.
Unless of course I’m missing something? What do you think about the layers? Where in them do you see the emotion, and if separate, the conscious feeling? And what makes you see it that way?
I went to the NYU Consciousness site this morning hoping to see if the recent debate on the relationship of prefrontal activity to consciousness had been posted yet. It hasn’t, and based on what I can see, it might be a while.
Just a reminder: split-brain patients are people who, in order to control severe epileptic seizures, had their corpus callosum, the bundle of connections between the two cerebral hemispheres of their brain, severed. The procedure left them remarkably functional in day to day life, but careful tests show that the two sides of their brain have limited if any communication with each other, although recent experiments by one of the debaters complicate that understanding.
This debate includes David Chalmers as the MC, Elizabeth Schechter, who argues that split-brain patients do have two minds, Yair Pinto, who argues that they don’t, and Joseph Ledoux, who argues that the issue is complicated and depends on the specific patients and which definitions we’re using.
Just so you know what you’re getting into, the video is two hours long, although the initial statements from the participants are done within the first hour.
I’m not sure if I’d heard of Schechter before. I found her views interesting and might have to explore them at some point. Unfortunately, her book appears to be pricey. I noted above that she argues for two minds, but her view turns out to be nuanced and not really too different from Ledoux’s. She sees there being two minds, but one person.
My own conclusion on this is that I don’t think it’s productive to talk about there being two separate minds. All of the tests show that communication between the hemispheres is limited to some degree or another, but I think it’s more accurate to talk about fragments of a mind whose communications are disrupted.
Pinto makes much of the fact that the two hemispheres don’t seem to notice or be bothered by the separation. But I think that is only an issue if we regard each hemisphere as its own separate self. However, the hemispheres didn’t evolve that way. They evolved to be a portion of a self, and it’s clear that’s what they expect to be.
V.S. Ramachandran, in his book The Tell-Tale Brain, relates the power of the brain’s expectations about its body plan. When that expectation becomes disrupted due to some brain injury, in a condition known as apotemnophilia, people can no longer feel that a limb is theirs, with an intensity that can lead them to seek to have it amputated.
If each hemisphere has a plan for its side of the body, then it wouldn’t expect to receive signals from the other side, much less be able to control it. Indeed, if it suddenly found that it could, it might result in something like the apotemnophilia condition.
So we end up with fragments of a mind whose communications have become limited. Over time, split-brain patients either recruit remaining subcortical pathways or learn to use subtle external behavioral cues to make up for the limitation. This seems to bridge the old Sperry / Gazzaniga / Ledoux results and the newer ones from Pinto.
It’s also a stark reminder of just how married mental processes are to their embodiment.
In the ongoing debate in neuroscience between those who see consciousness residing in the back part of the brain, among the sensory processing regions, and those who see it in the front, in the cognitive action planning regions, there are issues confounding the evidence. Most experiments testing for conscious perception depend on self report from the test subjects. But this causes a problem, since the frontal lobes are necessary for any method of self report (speech production, pressing a button, etc.). So when the frontal lobes light up in brain scans in correlation with conscious perception, the possibility exists that they light up only due to the self report requirement.
So, an experimental protocol was developed: the no-report paradigm. One group of subjects is given a stimulus and asked to report whether they consciously perceive it while their brains are being scanned. Another group is given the same stimulus that led the first group to report conscious awareness, but is not required to self report, also while being scanned. The scans of the two groups are compared to see if the frontal lobes still light up in the second group. Generally, although there is variation, they do, implicating frontal regions in conscious perception.
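The logic of the comparison can be sketched with a toy analysis. Everything here is synthetic and assumed for illustration: the “activation” values stand in for some summary frontal-ROI measurement per subject, and the group means are made up, not real fMRI results.

```python
import random

# Illustrative sketch of the comparison at the heart of the no-report
# paradigm. The "activation" values are synthetic stand-ins for frontal
# ROI measurements; the group means are assumptions, not real data.
random.seed(0)

def simulate_group(mean, n=20, sd=0.1):
    # hypothetical per-subject mean frontal activation
    return [random.gauss(mean, sd) for _ in range(n)]

report_group = simulate_group(0.8)       # stimulus + self report required
no_report_group = simulate_group(0.75)   # same stimulus, no report required

def mean(xs):
    return sum(xs) / len(xs)

def perm_test(a, b, n_perm=5000):
    """Permutation test on the absolute difference of group means."""
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(mean(pooled[:len(a)]) - mean(pooled[len(a):])) >= observed:
            hits += 1
    return hits / n_perm

p = perm_test(report_group, no_report_group)
# A large p (no detectable difference between groups) is roughly what
# frontal theories predict: the frontal lobes light up whether or not
# a report is required.
```

Real studies of course compare full activation maps rather than a single number per subject, but the inferential structure is the same: does frontal activity survive the removal of the report requirement?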
However, Ned Block, a philosopher who thinks it likely that phenomenal consciousness “overflows” the self report of access consciousness, sees an issue, which he describes in a paper in the journal Trends in Cognitive Sciences. (Warning: paywall) Block points out that potential confounds remain: we can’t rule out that the test subject is thinking about reporting their perception, or cognitively processing it in some manner, causing the frontal lobes to light up for reasons other than the conscious perception itself.
Block points out that the real fundamental distinction here is between those who see cognition (in the frontal lobes) as necessary for consciousness versus those who see perceptual processing (in the back regions) as sufficient. Global workspace and higher order thought theories are cognitive accounts, while integrated information and local recurrent loop theories are more sensory oriented.
Block argues that the no-report paradigm needs to be replaced with a no-cognition paradigm, or to avoid begging the question against cognitive accounts, a “no-post-perceptual cognition” paradigm. But how can cognition be eliminated from subjects who have perceptions? Short of selectively anesthetizing the frontal lobes (which would be invasive, risky, and unlikely to get past IRBs), is this even possible?
Block focuses on a study of binocular rivalry, by Jan Brascamp and colleagues, as a possible solution. Binocular rivalry is the phenomenon in which, when the two eyes are shown very different images, conscious visual perception alternates between them rather than blending them together. (Blending can happen, but only if the images are similar.) The goal of Brascamp’s study is to determine whether the selection between the rival images happens in the back or front of the brain.
To do this, the study constructs rival images of random dots such that, although they are different enough to lead to binocular rivalry (the dots in one image move left vs right in the other), they are similar enough that the subject’s attention isn’t drawn to the switching between the images, so they can’t report it.
For subjects who aren’t required to report what they’re seeing, brain scans show variations correlated with the image switching in the back of the brain, but not in the front. In other words, the study shows that the selection of which image to momentarily “win” in the binocular rivalry happens in the back of the brain.
Block sees the methodology here as an example of the “no-post-perceptual cognition” paradigm, and the specific results as indicating that the frontal lobes aren’t necessarily involved in conscious perception of the images. He focuses on the fact that subjects could, if queried, identify whether the dots were moving left or right, indicating that they were conscious of the specific image at the moment.
I think there are problems with this interpretation. By Block’s own description, the subjects didn’t notice and couldn’t self report the oscillations between the rival images, so we shouldn’t expect to see correlated changes in the frontal lobes for those changes. The subjects may have become conscious of some details in the images when asked to report, but when they weren’t asked to report, it seems more likely they were only conscious of an overall “gist” of what was there, a gist that worked for both images, and so didn’t need to oscillate with them.
The Brascamp et al. study is hard-core functional neuroscience, aimed at narrowing down the location of a specific function in the brain. They succeed at establishing that the selection happens in the back of the brain. But I don’t think a “frontalist” (as Block labels them) should be concerned about this. A pre-conscious selection happening in the back of the brain doesn’t really seem to challenge their view.
And Brascamp et al. actually seem to come to a different conclusion than Block. From the final paragraph in their discussion section:
A parsimonious conceptualization of these results frames awareness of sensory input as intimately related to the planning of motor actions, regardless of whether those actions are, in fact, executed. In this view a perceptual change of which the observer is aware might be one that alters candidate motor plans or sensorimotor contingencies. This view also marries the present evidence against a driving role of fronto-parietal regions in perceptual switches to the notion that these regions do play a central role in visual awareness: when viewing a conflicting or ambiguous stimulus, a switch in perception may arise within the visual system, but noticing the change may rely on brain regions dedicated to behavioral responses.
So, while the study succeeded in its aims, I can’t see that the results mean what Block takes them to mean, or that the methodology accomplishes the no-post-perceptual cognition paradigm he’s looking for. That doesn’t necessarily mean that sensory consciousness isn’t a back of the brain phenomenon. It just means getting evidence for it is very tricky.
This front vs back debate is a major issue in the neuroscience of consciousness, one I’m hoping the Templeton contest succeeds in shedding some light on. Myself, I suspect the frontalists are right, but I wouldn’t be surprised if it’s a mix, with maybe sensory consciousness in the back, but emotional and introspective consciousness in the front, with our overall experience being a conjunction of all of them.
What do you think? Is consciousness a cognitive phenomenon? Or is perceptual awareness independent of cognition? Or in a system where the components evolved to work closely together, is this even a well posed question?
I think most of you know I’m not a fan of integrated information theory (IIT). However, it is a theory proposed by scientists, and I’ve always had a mildly guilty conscience over not having engaged with it other than through articles and papers. Some years ago, I tried to read Giulio Tononi’s book, PHI: A Voyage from the Brain to the Soul, but was repelled by its parable format and low information density, and never finished it. So when Christof Koch’s new book, The Feeling of Life Itself, was announced as an exploration of IIT, I decided I needed to read it.
Koch starts off by defining consciousness as experience, “the feeling of life itself.” He muses that the challenge of defining it this way is that it’s only meaningful to other conscious entities.
He then discusses the properties of experience, properties that eventually end up being axioms of the theory.
Experience exists for itself, without need for anything external, such as an observer.
It is structured, that is, it is composed of many internal phenomenal distinctions.
It’s informative, distinct in the way it is, contains a great deal of detail, and is bound together in certain ways.
It’s integrated, irreducible to its independent components.
It’s definite in content and spatiotemporal grain, and is unmistakable.
These then map to postulates of the theory.
Intrinsic Existence: the set of physical elements must specify a set of “differences that make a difference” to the set itself.
Composition: since any experience is structured, this structure must be reflected in the mechanisms that compose the system specifying the experience.
Information: a mechanism contributes to experience only if it specifies “differences that make a difference” within the system itself. A system in its current state generates information to the extent that it specifies the state of a system that could be its possible cause in the past and its effect in the future.
Integration: the cause-effect structure specified by the system must be unified and irreducible, that is, the system can’t be reduced to independent non-interacting components without losing something essential.
Exclusion: only the set of elements that is maximally irreducible exists for itself, rather than any of its supersets or subsets.
All of this feeds into the “central identity of IIT”, which I’ll quote directly from the book.
The central identity of IIT, a metaphysical statement, makes a strong ontological claim. Not that Φmax merely correlates with experience. Nor the stronger claim that a maximally irreducible cause-effect structure is a necessary and sufficient condition for any one experience. Rather, IIT asserts that any experience is identical to the irreducible, causal interaction of the interdependent physical mechanisms that make up the Whole. It is an identity relationship—every facet of any experience maps completely onto the associated maximally irreducible cause-effect structure with nothing left over on either side.
Koch, Christof. The Feeling of Life Itself. The MIT Press. Kindle Edition.
All of this factors into the calculation of Φ (pronounced “phi”), a value which indicates the extent to which a system meets all the postulates. However, as noted in the postulates, there can be Φ values for subsets and supersets of the system. What we’re interested in is Φmax, the combination of elements that produce the maximum amount of Φ. According to the Exclusion postulate, only this particular combination is conscious.
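To make the irreducibility intuition concrete, here is a toy sketch. It is emphatically not the actual IIT algorithm (which involves cause-effect repertoires and a search over all partitions); it just measures how much information a tiny two-node system’s state carries about its next state, and how much of that is lost when the system is cut into independent nodes. All names and the measure itself are my own illustrative simplifications.

```python
from itertools import product
from collections import Counter
from math import log2

# Toy irreducibility measure in the spirit of IIT's Phi, NOT the real thing.
# Intuition: how much predictive information about the next state is lost
# when the system is cut into independent parts?

def entropy(counts, n):
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_info(pairs):
    # I(A;B) = H(A) + H(B) - H(A,B), samples weighted uniformly
    n = len(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    pab = Counter(pairs)
    return entropy(pa, n) + entropy(pb, n) - entropy(pab, n)

def toy_phi(update):
    # All 2-bit states under a uniform prior
    states = list(product([0, 1], repeat=2))
    # Information the whole system's state carries about its next state
    whole = mutual_info([(s, update(s)) for s in states])
    # Cut: each node considered on its own
    parts = sum(mutual_info([(s[i], update(s)[i]) for s in states])
                for i in range(2))
    return whole - parts

swap = lambda s: (s[1], s[0])  # each node's next state depends on the other node
copy = lambda s: (s[0], s[1])  # each node depends only on itself

# toy_phi(swap) -> 2.0 bits: fully irreducible, the cut destroys all
# predictive information. toy_phi(copy) -> 0.0: reducible to its parts.
```

Φmax and the Exclusion postulate would then correspond to repeating a computation like this over every candidate subset of elements and keeping only the maximum, which hints at why computing real Φ for anything brain-sized is intractable.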
The Exclusion postulate allows IIT to avoid talking about multiple consciousnesses within one brain, or of group consciousnesses. Although it doesn’t rule out scenarios where splitting or combining systems results in new consciousnesses, such as what happens with split-brain patients, or what might happen if two people’s brains were somehow integrated together.
Not all of the brain is necessarily included in its Φmax, but a particular subset. Koch thinks this is a region he calls the posterior cortical hot zone, including regions in the parietal, temporal, and occipital lobes. In essence, it’s the overall sensory cortex, the sensorium, as opposed to the action cortex, or motorium at the front of the brain, which is why that Templeton contest between IIT and global workspace theories (GWT) is focused on whether consciousness is more associated with the back or front of the brain.
Koch discusses the evolution of consciousness. He sees it going back to the reptiles, when the sensory cortex first started to develop. (Somewhere around the rise of reptiles, or mammals and birds, seems to be where most biologists see consciousness arising, excluding fish, amphibians, and most invertebrates, although as always, a lot depends on the definition of consciousness being considered.)
Koch, in his earlier book Consciousness: Confessions of a Romantic Reductionist, evinced a comfort level with panpsychism. In the discussion of IIT in that book, he implied that IIT and panpsychism were compatible. But in this book, I got the feeling that he now views IIT more as an alternative to panpsychism, one which resolves some of panpsychism’s issues, such as the combination problem.
As noted above, I’m not a fan of IIT, and I can’t say that this book helped much. All the axioms and postulates make it feel more like philosophy than science. It continues to feel very abstract and disconnected from actual neuroscience. Some of the axioms, such as structure and information, seem vague and redundant to me. (The book adds examples, but I didn’t find them to help much.) And others, such as the exclusion principle, seem arbitrary, included to save appearances.
The intrinsic existence axiom seems to imply metacognitive self awareness, but the theory simply assumes that it emerges somehow from integration, ignoring the actual neuroscience of the regions in the brain associated with introspection. The postulate also ends up attributing self awareness to all animals going back to reptiles, despite the lack of any empirical support.
IIT also posits that the feeling of all this emerges from the integration, again ignoring all the neuroscience on affects and survival circuits. Bringing in all that neuroscience inescapably leads us to the front of the brain, which Koch rules out as having a role in consciousness.
And Scott Aaronson’s classic takedown of the theory remains in my mind. Koch mentions Aaronson’s criticism, but like Tononi, doubles down and accepts that the arbitrary systems with trivially high Φ that Aaronson envisages are in fact conscious. If the theory’s designations of consciousness aren’t going to match up with our ability to detect it, how scientific is it really?
But I think my biggest issue with IIT is that it inherently attempts to explain the ghost in the machine, particularly how it’s generated. Most of the other theories I find plausible simply dismiss the idea of the ghost, I think rightly so. There’s no evidence for a ghost, whether spiritual, electromagnetic, or any other variety. The evidence we have is of the brain and how it functions.
I’ll be happy to go back to IIT if it manages to rack up empirical support. Until then, it seems like a dead end.
To be clear, I do think integration is crucial, just not in the specific way IIT envisages it. There are many integration regions in the brain, regions which are themselves integrated with each other. But Antonio Damasio’s convergence-divergence zones and convergence-divergence regions seem to model this in a much more grounded manner than IIT.
What do you think? Am I too skeptical of IIT? Are there virtues of the theory that I’m missing?