Postdictive perception and the consciousness finish line

(Warning: neuroscience weeds)

Recently I noted that one of the current debates in cognitive science is between those who see phenomenal and access consciousness as separate things, and those who see them as different aspects of the same thing.  Closely related, perhaps actually identical, is the debate between local and global theories of consciousness.

Diagram of the brain showing sensory and motor cortices
Image credit: via Wikipedia: Blausen.com staff (2014). “Medical gallery of Blausen Medical 2014”. WikiJournal of Medicine 1 (2). DOI:10.15347/wjm/2014.010. ISSN 2002-4436.

Local theories tend to see processing in sensory cortices (visual, auditory, etc.) as sufficient for conscious perception of whatever sensory impression they’re processing.  One example is micro-consciousness theory, which holds that processing equals perception.  So formation of a neural image map in sensory cortices equals consciousness of that image.

(A lot of people seem to hold this processing equals perception idea intuitively.  It fits with ideas from people like Jaak Panksepp or Bjorn Merker.)

Another more sophisticated version is Victor Lamme’s local recurrent processing theory, which adds the requirement of recurrent processing in the sensory cortex, processing that involves feedforward as well as feedback signalling in loops.  I discussed local recurrent theory a while back.

Global cognitive theories require that content have large scale effects throughout the brain to become conscious.  Similar to Lamme’s theory, they often see recurrent processing as a prerequisite, but require that it span regions throughout the thalamo-cortical system, either to have wide scale causal effects (global workspace theories) or to reach certain regions (higher order thought theories).

I mentioned that the local vs global debate may be identical to the phenomenal vs access one.  This is primarily because local theories only make sense if phenomenal consciousness is happening in the local sensory cortices independent of access consciousness.  That processing is inherently pre-access, before the content is available for reasoning, action, and report.

The latest shot in this debate is a paper by Matthias Michel and Adrien Doerig, which will be appearing in Mind & Language: A new empirical challenge for local theories of consciousness.

Michel and Doerig look at a type of perception they call “long-lasting postdiction”.  An example of postdiction is when subjects are shown a red disk followed in rapid succession by a green disk 20 ms (milliseconds) later, resulting in perceptual fusion, where the subject perceives a yellow disk.  This is an example of short-lasting postdiction.  In order for the fusion to occur, the red image needs to be processed, and then the green one, and then the two fused.

Short-lasting postdiction could represent a problem for micro-consciousness theory, since it results in formation of image maps for each image in rapid succession but not in a way that leads to each being consciously perceived.  (It’s not clear to me this is necessarily true.  I can see maybe the two types of processing simply bleeding into each other.  And see below for one possible response from the micro-consciousness camp.)

Short-lasting postdictions are less of a problem for local recurrent theory, because it takes more time for the recurrent processing to spin up.  (The paper has a quick but fascinating discussion of the time it takes for information to get from the retina to the cortex and then to various regions, along with citations I may have to check out.)

It’s the long-lasting postdictions that are a problem for local recurrence.  The paper discusses images of pairs of vernier lines that are shown to test subjects.  The images are in various positions, changing every 50 ms or so across a period of 170-450 ms, resulting in a perception of the lines moving.

There are variations where one of the vernier lines is skewed slightly, but only on the first image, resulting in the subject perceiving the skew for the entire moving sequence, even though the verniers in the later images are aligned.  Another variation has two images, one early in the sequence and one toward the end, skewed in opposite directions, resulting in the two skewed images being averaged together and the averaged shape being perceived throughout the sequence.

The main takeaway is that the conscious perception of the sequence appears to be formed after the sequence.  Given the relatively lengthy sequence time of up to 450 ms, this is thought to exceed the time it takes for local recurrent processing to happen, and bleeds over into the ignition of global processing, representing a challenge for local recurrent theory.
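To make that timing point concrete, here’s a toy sketch in Python.  The frame values, the 50 ms spacing implied by the comments, and the simple averaging rule are my own illustration, not the paper’s model.  A frame-by-frame perceiver has a separate percept for each image; a postdictive perceiver can only commit to its single percept after the last frame has arrived, which is exactly the property that strains purely local accounts.

```python
# Toy illustration of postdictive perception.  Assumptions (mine, not the
# paper's): ~50 ms per frame, skew coded as +1/0/-1, percept = average skew.

def online_percepts(frames):
    """A 'local' perceiver: each frame is perceived as soon as it's processed."""
    return list(frames)  # one percept per frame

def postdictive_percept(frames):
    """A postdictive perceiver: waits for the whole sequence, then reports a
    single percept formed from all the frames (here, the average skew)."""
    return sum(frames) / len(frames)

# A 9-frame vernier sequence (~450 ms total): first frame skewed +1, a late
# frame skewed -1, the rest aligned (0).
sequence = [+1, 0, 0, 0, 0, 0, 0, -1, 0]

print(online_percepts(sequence))      # frame-by-frame: the skews are separate events
print(postdictive_percept(sequence))  # single percept: the opposite skews wash out
```

The only point of the toy is the timing: the reported percept can’t be computed until the final frame is in, however the averaging actually works psychophysically.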

The authors note that local theorists have two possible outs.  One is to say that, for some reason, the relevant local processing actually doesn’t happen in these sequences.  This would require identifying some other mechanism that unconsciously holds the intermediate images.  The other is to say that the images are phenomenally experienced, but then subsequently replaced in the access stage.  But this would result in phenomenal experience that has no chance of ever being reportable.  (What type of consciousness are we now talking about?)  And it would make local theories extremely difficult to test.

Interestingly, the authors note that the longer time scales may need to be reconciled with the ones in cognitive theories, such as the global neuronal workspace, which identifies 300 ms as a crucial point in conscious perception.  In other words, while this is an issue for local theories, it could be one even for global theories.

All of this reminds me of the phi illusion Daniel Dennett discussed in his book Consciousness Explained.  He describes this illusion in the build-up to discussing his own multiple drafts model, a variation of global workspace theory.  Dennett’s twist is that there’s no absolute moment when a perception becomes conscious.  His interpretation of the phi illusion, which seems like another case of postdiction, is that there is no consciousness finish line.  We only recognize that a perception is conscious retroactively, when it achieves “fame in the brain” and influences memory and report.

Anyway, I personally think the main flaw with local theories is that the processing in question is too isolated, too much a fragment of what we think of as an experience, which usually includes at least the sensory perception and a corresponding affective feeling.  The affect part requires that regions far from the sensory cortices become stimulated.  Even if the localists can find an answer to the postdiction issue, I think this broader one will remain.

Unless of course I’m missing something.

Is there a conscious perception finish line?

Global workspace theory (GWT) is the proposition that consciousness is composed of contents broadcast throughout the brain.  Various specialty processes compete for the limited capacity of the broadcasting mechanisms, to have their content broadcast to all the other specialty processes.

Global neuronal workspace (GNW) is a variant of that theory, popularly promoted by Stanislas Dehaene, which I’ve covered before.  GNW is more specific than generic GWT on the physical mechanisms involved.  It relies on empirical work done over the years demonstrating that conscious reportability involves wide scale activation of the cortex.

One of the observed stages is a massive surge about 300 milliseconds after a stimulus, called the P3b wave.  Previous work seemed to establish that the P3b wave is a neural correlate of consciousness.  Dehaene theorized that it represents the stage where one of the signals achieves a threshold and wins domination, with all the other signals being inhibited.  The P3b itself is a large positive deflection in the EEG (the “P” stands for positive), which on this account reflects the ignition of the winning coalition along with the massive inhibition of its competitors.

The P3b has been replicated extensively and been seen as a pretty established phenomenon associated with attention and consciousness.  But this is science, and any result is always provisional.  Michael Cohen and colleagues have put out a preprint of a study that may demonstrate that the P3b wave is not associated with conscious perception, but with post perceptual processing.

The study tests the perception of subjects, showing various images while measuring their brain waves via EEG.  Using a no-report protocol, in half of the tests the subjects were asked to report on whether they saw something, but in the other half they were not asked to report.  Crucially, the P3b wave only manifested in the report cases, never in the non-report ones, even when the non-report images were exactly the same as the ones that generated affirmative reports.

Image showing P3 wave presence for report and absence for non-report tests
Image from the study: https://www.biorxiv.org/content/10.1101/2020.01.15.908400v1.full
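As a rough sketch of what that report vs non-report comparison looks like in practice, here’s some toy Python.  The epoch counts, timing window, electrode-free simplification, and simulated data are purely illustrative assumptions on my part, not the study’s actual pipeline: average stimulus-locked EEG epochs for each condition and compare the window around 300–500 ms where a P3b would show up.

```python
import numpy as np

# Hypothetical stimulus-locked EEG epochs: (n_trials, n_samples) at 1000 Hz,
# from -200 ms to +800 ms around stimulus onset.  A real pipeline would also
# filter, reject artifacts, and work per channel.
fs = 1000
times = np.arange(-0.2, 0.8, 1 / fs)            # seconds relative to stimulus

rng = np.random.default_rng(0)
report_epochs = rng.normal(0, 1, (100, times.size))
noreport_epochs = rng.normal(0, 1, (100, times.size))

# Fake a late positive deflection in the report condition only, ~300-500 ms,
# standing in for the P3b.
p3_window = (times > 0.3) & (times < 0.5)
report_epochs[:, p3_window] += 3.0

def erp(epochs, times, baseline=(-0.2, 0.0)):
    """Average across trials after subtracting the pre-stimulus baseline."""
    base_mask = (times >= baseline[0]) & (times < baseline[1])
    base = epochs[:, base_mask].mean(axis=1, keepdims=True)
    return (epochs - base).mean(axis=0)

report_erp = erp(report_epochs, times)
noreport_erp = erp(noreport_epochs, times)

# Mean amplitude in the P3 window: large for report, near zero for no-report.
print("report P3-window mean:   ", report_erp[p3_window].mean())
print("no-report P3-window mean:", noreport_erp[p3_window].mean())
```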

To control for the possibility that the subjects weren’t actually conscious of the image in the non-report cases, the subjects were given a memory test after a batch of non-report events, checking to see what they remembered perceiving.  Their memories of the perception correlated with the results in the report versions.

So, the P3b wave, a major pillar of GNW, may be knocked down.  The study authors are careful to make clear that this does not invalidate GWT or other cognitive theories of consciousness.  They didn’t test for all the other ways the information may have propagated throughout the cortex.  Strictly speaking, it doesn’t even invalidate GNW itself, but it does seem to knock out a major piece of evidence for it.

However, this is a more interesting discussion if we ask, what would it mean if all cortical communication beyond the sensory regions were ruled out, that the ability to acquire a memory of a sight only required the local sensory cortices?  It might seem like a validation of views like Victor Lamme’s local recurrent processing theory, which holds that local processing in the sensory cortices is sufficient for conscious perception.

But would it be?  Dehaene, when discussing his theory, is clear that it’s a theory of conscious access.  For him, something isn’t conscious until it becomes accessible by the rest of the brain.  Content in sensory cortices may form, but it isn’t conscious until it’s accessible.  Dehaene refers to this content as preconscious.  It isn’t yet conscious, but it has the potential to become so.

In that view, the content of what the subjects perceived in the non-report tests may have been preconscious, unless and until their memories were probed, at which point it became conscious.

This may be another case where the concept of consciousness is causing people to argue about nothing.  If we describe the situation without reference to it, the facts seem clear.

Sensory representations form in the local sensory cortex.  A temporary memory of that representation may persist in that region, so if probed soon enough afterward, a report about the representation can be extracted from it.  But until there is a need for a report or other usage, it is not available to the rest of the system, and none of the activity, including the P3b, normally associated with that kind of access is evident.
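Here’s a minimal sketch of that description in Python.  The retention window and the probe mechanics are invented for illustration; none of the theories specify them this way.  A representation persists in a local buffer, and only a probe arriving in time pulls it out for report or other use.

```python
# Toy sketch of "preconscious" sensory content: a representation persists
# briefly in a local buffer, and only becomes globally available (reportable)
# if a probe arrives before it fades.  Retention window is an assumption.
class SensoryBuffer:
    def __init__(self, retention_ms=800):
        self.retention_ms = retention_ms
        self.content = None
        self.timestamp = None

    def encode(self, content, t_ms):
        self.content, self.timestamp = content, t_ms   # local representation forms

    def probe(self, t_ms):
        """A report request or other use of the content."""
        if self.content is None or t_ms - self.timestamp > self.retention_ms:
            return None                                # faded: nothing to report
        return {"broadcast": self.content, "accessed_at": t_ms}  # now globally available

buffer = SensoryBuffer()
buffer.encode("grating, left visual field", t_ms=0)
print(buffer.probe(t_ms=500))    # probed in time: content is accessed and reportable
print(buffer.probe(t_ms=2000))   # probed too late: nothing comes out
```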

This reminds me of Daniel Dennett’s multiple drafts theory (MDT) of consciousness.  MDT is a variant of GWT, but minus the idea that there is any one event where content becomes conscious.  It’s only when the system is probed in certain ways that one of the streams, one of the drafts, becomes selected, generally one of the ones that has managed to leave its effects throughout the brain, that has achieved “fame in the brain.”

In other words, Dennett denies that there is any one finish line where content that was previously unconscious becomes conscious.  In his view, the search for that line is meaningless.  In that sense, the P3b wave may be a measure of availability, but calling it a measure of consciousness is probably not accurate.  And it’s not accurate to say that Lamme’s local recurrent processing is conscious, although it’s also not accurate to relegate it completely to the unconscious.  What we can say is that it’s at a particular point in the stream where it may become relevant for behavior, including report.

Maybe this view is too ecumenical and I’m papering over important differences.  But it seems like giving up the idea of one finish line for consciousness turns a lot of theories that look incompatible into models of different aspects of the same overall system.

None of this is to say that GWT or any of its variants might not be invalidated at some point.  These are scientific theories and are always subject to falsification on new data.  But if or when that happens, we should be clear about exactly what is being invalidated.

Unless of course I’m missing something?

The battle between integration and workspace will take a while

Well, I find this a bit disappointing.  I was hoping that the results of the contest between global workspace theory (GWT) and integrated information theory (IIT) would be announced sometime this year.  Apparently, I’m going to have to wait a while:

Pitts describes the intention of this competition as “to kill one or both theories,” but adds that while he is unsure that either will be definitively disproved, both theories have a good chance of being  critically challenged. It’s expected to take three years for the experiments to be conducted and the data to be analyzed before a verdict is reached.

Three years.  And of course there remains no guarantee the results will be decisive.  Sigh.

I don’t particularly need the results to tell me which theory is more plausible.  GWT is probably not the final theory, but it currently feels more grounded than IIT.

Still, it would be nice to get insights on the back vs front of the brain matter.  I’m expecting the answer to be that the whole brain is usually involved, but that consciousness can get by in reduced form with only the posterior regions.  (Assuming the necessary subcortical regions remain functional.)  Having only those regions might be a sensory consciousness with no emotional feeling, a type of extreme akinetic mutism.

I suppose it’s conceivable consciousness could also get by with just the frontal regions, but it seems like it would be without any mental imagery (except perhaps olfactory imagery), just blind feelings, which seems pretty desolate.  On the other hand, it might amount to what the forebrain of fish and amphibians has, since their non-smell senses only go directly to their midbrain.

Oh well.  I’m sure we’ll have plenty of other studies and papers to entertain us in the meantime!

Global workspace theory: consciousness as brain wide information sharing

Lately I’ve been reading up on global workspace theory (GWT).  In a survey published last year, among general consciousness enthusiasts, integrated information theory (IIT) was the most popular theory, followed closely by GWT.  However, among active consciousness researchers, GWT was seen as the most promising by far (although no theory garnered a majority).  Since seeing those results, I’ve been curious about why.

One reason might be that GWT has been around a long time, having first been proposed by Bernard Baars in 1988, with periodic updates all recently republished in his new book.  It’s received a lot of development and has spawned numerous variants.  Daniel Dennett’s multiple drafts model is one.  But perhaps the one with the most current support is Stanislas Dehaene’s global neuronal workspace, which I read and wrote about earlier this year.

All of the variants posit that for an item to make it into consciousness, it has to enter a global workspace in the brain.  This is most commonly described using a theater metaphor.

Imagine a play in progress in a theater.  A light shines down on the stage on the currently most relevant actor or event, the light of consciousness.  The backstage personnel enabling the play, along with the director and other controlling personnel, are not in the light.  They’re in the unconscious dark.  The audience, likewise, is in the dark.  That is, the audience members are unconscious information processing modules.

This last point is crucial, because this is not the infamous Cartesian theater, with an audience of one conscious homunculus, a little person observing events.  Such a notion merely defers the explanation.  If the homunculus provides consciousness, then does it too have its own homunculus?  And that one yet its own?  With infinite regression?  By stipulating that the audience is not conscious, we avoid this circular trap.

That said, one issue I have with this metaphor is the passivity of the audience.  Consider instead a large meeting room with a lot of rowdy people.  There is someone chairing the meeting, but their control is tenuous, with lots of people attempting to talk.  Every so often, someone manages to gain the floor and make a speech, conveying their message throughout the room.  At least until the next person, or coalition of people, either adds to their message, or shouts them down and takes over the floor.

Most of the talking in the room is taking place in low level side conversations.  But the general room “consciousness”, that is, the common content everyone is aware of, is only what’s conveyed in the speeches, even though all the side conversations are constantly changing the tenor and state of people’s opinions throughout the room, and could affect future speeches.

I think this alternate metaphor makes it more clear what it means to enter the workspace.  In all of the theories, the workspace is not a particular location in the brain.  To “enter” it is to be broadcast throughout the brain, or at least the cortical-thalamic system.

Diagram showing the regions of the brain
Lobes of the brain
Image credit: BruceBlaus via Wikipedia

How does a piece of information, or a coalition of information, accomplish this?  There is a competition.  Various modules in the brain attempt to propagate their signals.  In many cases, actually in most cases, they are able to connect up to one or a few other modules and accomplish a task (the side conversations).  If they do, the processing involved is unconscious.

But in some cases, the signal from a particular module resonates with information from other modules, and a coalition is formed, which results in the information dominating one of the major integration hubs in the brain and brings the competition to the next level.

At some point, a signal succeeds in dominating the frontoparietal network, all competing signals are massively inhibited, and the winning signal is broadcast throughout the cortical-thalamic system, with binding recurrent connections forming circuits between the originating and receiving regions.  The signal achieves what Daniel Dennett calls “fame in the brain”.  It is made available to all the unconscious specialty modules.

Many of these modules will respond with their own information, which again might be used by one or more other modules unconsciously.  Or the new information might excite enough other modules to win the competition and be the next broadcast throughout the workspace.  The stream of consciousness is the series of images, concepts, feelings, or impulses that win the competition.
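For what it’s worth, that competition-and-broadcast dynamic can be sketched in a few lines of toy Python.  The module names, salience scores, and winner-take-all rule are all my own simplifications; no version of the theory is this tidy.

```python
# Minimal global-workspace toy: specialty modules post candidate contents with
# a salience score; the most salient proposal above threshold "ignites", is
# broadcast to every module, and the competitors lose out for that cycle.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    received: list = field(default_factory=list)    # broadcasts this module has seen

    def propose(self, content, salience):
        return {"source": self.name, "content": content, "salience": salience}

    def receive(self, broadcast):
        self.received.append(broadcast)              # uptake of the winning content

def workspace_cycle(modules, proposals, threshold=0.5):
    """One competition cycle: pick the winner (if any) and broadcast it globally."""
    winner = max(proposals, key=lambda p: p["salience"], default=None)
    if winner is None or winner["salience"] < threshold:
        return None                                  # nothing ignites; all stays unconscious
    for m in modules:
        m.receive(winner)                            # "fame in the brain"
    return winner

modules = [Module("vision"), Module("audition"), Module("memory"), Module("planning")]
proposals = [
    modules[0].propose("red disk at left", 0.9),
    modules[1].propose("faint hum", 0.3),
    modules[2].propose("similar disk seen yesterday", 0.4),
]
print(workspace_cycle(modules, proposals))           # the visual content wins and is broadcast
```

The point of the toy is only that “entering the workspace” is nothing over and above being the content every module receives on a given cycle.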

One question that has long concerned me about GWT: why does simply being in the workspace cause something to be conscious?  I think the answer is it’s the audience that collectively makes it so.

Consider Dennett’s “fame in the brain” metaphor.  If you were to meet a famous person, would you find anything about the person, in and of themselves, that indicated fame?  They might be attractive, an athlete, funny, or extraordinary in some other fashion, but in all cases you could meet non-famous people with those same traits.  What then gives them the quality of fame?  The fact that large numbers of other people know who they are.  Fame isn’t something they exude.  It’s a quality they are granted by large numbers of people, one which often gives the famous person causal influence in society.

Similarly, there’s nothing about a piece of information in the brain, in and of itself, that makes it either conscious or unconscious.  It becomes a piece of conscious content when it is accessible by several systems throughout the brain, memory systems that might flag it for long term retention, affect systems that might provide valenced reactions, action systems that might use it in planning, or introspective and language systems that might use it for self report.  All of these systems end up giving the information far more causal influence than it would have had if it remained isolated and unconscious.

Admittedly, this is a description of access consciousness.  Someone might ask how this implies phenomenal consciousness.  GWT proponents tend to dismiss the philosophical idea that phenomenal consciousness is something separate and apart from access.  I agree with them.  To me, phenomenal consciousness is what access consciousness is like from the inside.

But I realize many people don’t see it that way.  I suspect many might accept GWT but feel the need to supplement it with additional philosophy to address the phenomenal issue.  Peter Carruthers, in his latest book, attempts to philosophically demonstrate how GWT explains phenomenal experience, but since he’s a “qualia irrealist”, I’m not sure many people seeking that kind of explanation will find his account persuasive.

There are a lot of nuanced differences between the various global workspace theories.  For example, Baars most often speaks of the workspace as being the entire cortical-thalamic core.  Dehaene tends to emphasize the role of the prefrontal cortex, although he admits that parietal, temporal, and other regions in the frontoparietal network are major players.

Subcortical structures of the brain
Image credit: OpenStax College via Wikipedia

Baars emphasizes that processing in any one region of the cortical-thalamic core can be conscious or unconscious.  Any region can potentially win the competition and get its contents into the workspace.

Dehaene is more reserved, noting that some regions, particularly executive ones, have more connectivity than others, and that very early sensory regions don’t necessarily seem capable of generating workspace content, except indirectly through later sensory layers.

Both agree that subcortical regions generally can’t contribute directly to the workspace.  Although Baars sees the hippocampus as a possible exception.

Both Dehaene and Baars think it’s likely that many other animal species have global workspaces and are therefore conscious.  Baars seems confident that any animal with a cortex or a pallium has a workspace, which I think would include all vertebrates.  Dehaene is again a bit more cautious, but he sees all mammals as likely having  a workspace, and possibly birds.  Peter Carruthers, who converted from his own particular higher order theory to GWT, doesn’t think there’s a fact of the matter on animal consciousness.

A common criticism of GWTs is that they are theories of cognition rather than consciousness.  Since to me, any scientific theory of consciousness is going to be a cognitive one, I don’t see that as a drawback.  And I realized while reading about them that they also function as theories of general intelligence, the holy grail of AI research.  Which fits since GWT actually has origins in AI research.

GWTs also seem able to account for situations where large parts of the cortex are injured or destroyed.  Unlike higher order theories (HOT), most of which seem dependent on the prefrontal cortex, if large parts of the frontal regions were lost, the workspace would be dramatically reduced but not eliminated.  Capabilities would be lost, but consciousness would still exist in a reduced form.

I also now understand why the overview paper earlier this year on HOT classified GWTs as first order theories, since first order representations can win the workspace competition as well as higher order or executive ones.  This allows GWTs to avoid many of the computational redundancies implicit in HOT, redundancies that might seem unlikely from an evolutionary perspective.

And I’ve recently realized that GWT resonates with my own intuition from reading cognitive neuroscience, which I described in a post a while back, that subjective experience is communication between the sensory, affective, and planning regions of the brain.  The broadcasting workspace seems like the medium of that communication.

GWTs are scientific theories, so they’ll either succeed or fall on empirical research.  I was impressed with the wealth of empirical data discussed in Dehaene’s and Baars’ books.  Only time will tell, but I now understand why so many consciousness experts are in this camp.

What do you think?  Does this theory sound promising?  Or do you see problems with it?  What stands out to you as either its strengths or weaknesses?

A competition between integration and workspace

Back in March, I did a post on a proposed Templeton Foundation project to test major scientific theories of consciousness.  The idea was to start with a head-to-head competition between integrated information theory (IIT) and global workspace theory (GWT).  Apparently that project got funded and, according to a Science Magazine article, there are now active plans to move forward with it.

The first two contenders are the global workspace theory (GWT), championed by Stanislas Dehaene of the Collège de France in Paris, and the integrated information theory (IIT), proposed by Giulio Tononi of the University of Wisconsin in Madison. The GWT says the brain’s prefrontal cortex, which controls higher order cognitive processes like decision-making, acts as a central computer that collects and prioritizes information from sensory input. It then broadcasts the information to other parts of the brain that carry out tasks. Dehaene thinks this selection process is what we perceive as consciousness. By contrast, the IIT proposes that consciousness arises from the interconnectedness of brain networks. The more neurons interact with one another, the more a being feels conscious—even without sensory input. IIT proponents suspect this process occurs in the back of the brain, where neurons connect in a gridlike structure.

To test the schemes, six labs will run experiments with a total of more than 500 participants, costing the foundation $5 million. The labs, in the United States, Germany, the United Kingdom, and China, will use three techniques to record brain activity as volunteers perform consciousness-related tasks: functional magnetic resonance imaging, electroencephalography, and electrocorticography (a form of EEG done during brain surgery, in which electrodes are placed directly on the brain). In one experiment, researchers will measure the brain’s response when a person becomes aware of an image. The GWT predicts the front of the brain will suddenly become active, whereas the IIT says the back of the brain will be consistently active.

Tononi and Dehaene have agreed to parameters for the experiments and have registered their predictions. To avoid conflicts of interest, the scientists will neither collect nor interpret the data. If the results appear to disprove one theory, each has agreed to admit he was wrong—at least to some extent.

The whole thing has a bit of a publicity stunt feel to it.  As I noted back in March, both of these theories make differing philosophical assumptions about what consciousness fundamentally is, and the authors of both theories used empirical data, as it existed at the time, when formulating their theories.  So I’m not expecting the results to be overwhelmingly conclusive.  (Although it’d be good to be proven wrong on this.)

Lobes of the brain
Image credit: BruceBlaus via Wikipedia

What might be interesting is the front of the brain vs back of the brain thing.  I’ve noted this debate before.  Some scientists, notably people like Tononi and Christof Koch, see consciousness as concentrated in the back part of the brain, in the sensory processing regions including the temporal and parietal lobes.  Others, such as Dehaene, Joseph LeDoux, and Hakwan Lau, think we don’t become conscious of something until it reaches the prefrontal cortex.

This also has relevance to the distinction between first order and higher order theories, that is, between theories that hold that the representations and processing in sensory regions are conscious ones, versus theories that hold that further “higher order” processing in the prefrontal cortex is necessary for us to be conscious of them.

Part of the difficulty is that scientists depend on subject self report to know when those subjects are conscious of something.  However, self report requires the frontal lobes.  There are protocols to minimize the confounding role of self report, such as comparing brain scans of people who see something and report being conscious of it with people who see the same thing but without the requirement to report it.  But a first order advocate can always insist that any remaining frontal activations are superfluous, that all that’s needed for actual consciousness is the posterior activity.

My own money is on the frontal regions being important, perhaps crucial.  But this is complicated.  It’s possible for sensory information from the back part of the brain to trigger sub-cortical activity, such as habitual or reflexive action, without frontal lobe involvement.  It’s even possible to remember what happened during that behavior and consciously retrieve it later, giving us the impression we were conscious of the event during the event, even if we weren’t.

But if we insist that consciousness must include emotional feelings, then I think the frontal lobes become unavoidable.  The survival circuit activations that drive these feelings happen in subcortical regions in the front part of the brain, which have excitatory connections to the prefrontal cortex.  Of course, you could insist that the felt emotions lie in those subcortical circuits rather than the cortex, but severing the connections between those circuits and the prefrontal cortex (like what reportedly used to happen with lobotomies) typically results in deadened emotions.

And all of this is aside from the fact that the introspection machinery is in the very front part of the prefrontal cortex (the frontal poles).  Are we conscious of it if we can’t introspect it?

As I said, complicated.  A lot of this will depend on the assumptions and definitions the experimenters are using.

Still, I’m curious about exactly what they plan to do to test the back versus front paradigms.  If they do figure out a way to conclusively isolate conscious perception to one or the other, it might answer a lot of questions.  And if they do plan to eventually move on to testing theories like local recurrent processing or higher order thought theories, this work might provide a head start.

What do you think?  Am I being too pessimistic on whether these experiments will validate or falsify IIT or GWT?  Or are these theories all hopelessly underdetermined, and we’ll still be arguing over them months after the experimental results are published?

Dehaene’s global neuronal workspace theory

I just finished reading Stanislas Dehaene’s Consciousness and the Brain.  Dehaene is a French psychologist and cognitive neuroscientist who is bullish on the idea of consciousness being something that can be scientifically investigated.  It’s an interesting book, one that I recommend for anyone interested in the science of consciousness.

Dehaene accomplishes his scientific investigation by focusing on what he calls “conscious access”, which is roughly equivalent to Ned Block’s access consciousness and David Chalmers’ easy problems, that is, the ability of our minds to hold content available for reasoning, decision making, and verbal report.  He holds this distinct from the sense of self and metacognition, which he sees being built on top of it.

And as I mentioned in the last post, he is not concerned with phenomenal awareness, typically characterized as raw experience.  He actually barely mentions it in the book, being largely dismissive of it.  He characterizes the hard problem as ill defined, and the idea of qualia, pure mental experiences detached from any information processing role, as an idea that in time will go the way of vitalism.  (Those of you who know me may wonder if I’m projecting my own views here, but no, my views in this area just happen to match his pretty closely, right down to using similar language.)

After discussing the empirical tests able to identify what kinds of stimulus lead to conscious perception, as opposed to unconscious ones, he identifies four signatures of conscious access:

  1. The amplification of an early sensory signal, leading to an “ignition” of circuits in the parietal and prefrontal regions
  2. The appearance of a P3 wave in the electroencephalogram, a slow massive wave throughout the parietal and prefrontal regions about 300 milliseconds after the stimulus
  3. A late and sudden burst of high frequency oscillations, gamma band power
  4. A synchronization of information exchanges across distant brain regions with oscillations in sync

All of this is referred to as “the conscious avalanche”, the widespread activation of neural activity in a network including the prefrontal and parietal regions whenever a perception makes it into consciousness.
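Signatures two through four are, at least in principle, quantities you can compute from recordings.  Here’s a hedged sketch in Python (simulated signals, arbitrary band edges and windows, nothing tied to Dehaene’s actual analysis pipeline) of estimating late gamma-band power and inter-regional phase synchrony:

```python
import numpy as np

fs = 1000                                            # Hz, simulated sampling rate
t = np.arange(0, 0.8, 1 / fs)                        # 0-800 ms after stimulus

rng = np.random.default_rng(1)
# Two simulated "regions" that share a 40 Hz (gamma-band) component after ~300 ms.
late = t > 0.3
shared_gamma = np.where(late, np.sin(2 * np.pi * 40 * t), 0.0)
region_a = shared_gamma + 0.5 * rng.normal(size=t.size)
region_b = shared_gamma + 0.5 * rng.normal(size=t.size)

def bandpower(x, fs, lo, hi):
    """Power in the [lo, hi] Hz band via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

def phase_locking(x, y, fs, lo, hi):
    """Crude phase-locking value between two signals in a band, using an
    FFT band-pass and a one-sided spectrum to get the analytic phase."""
    def band_phase(sig):
        spec = np.fft.fft(sig)
        freqs = np.fft.fftfreq(sig.size, 1 / fs)
        spec[(np.abs(freqs) < lo) | (np.abs(freqs) > hi)] = 0   # band-pass
        analytic = np.fft.ifft(2 * spec * (freqs >= 0))          # analytic signal
        return np.angle(analytic)
    dphi = band_phase(x) - band_phase(y)
    return np.abs(np.exp(1j * dphi).mean())

# Early vs late gamma power, and gamma-band synchrony between the two regions.
print("early gamma power:", bandpower(region_a[~late], fs, 30, 80))
print("late gamma power: ", bandpower(region_a[late], fs, 30, 80))
print("gamma-band PLV:   ", phase_locking(region_a, region_b, fs, 30, 80))
```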

Which leads to Dehaene’s theory of consciousness, the global neuronal workspace, a variation of Bernard Baars’ global workspace theory.  The main idea is summed up by Dehaene as:

When we say that we are aware of a certain piece of information, what we mean is just this: the information has entered into a specific storage area that makes it available to the rest of the brain. Among the millions of mental representations that constantly crisscross our brains in an unconscious manner, one is selected because of its relevance to our present goals. Consciousness makes it globally available to all our high-level decision systems. We possess a mental router, an evolved architecture for extracting relevant information and dispatching it.

…According to this theory, consciousness is just brain-wide information sharing. Whatever we become conscious of, we can hold it in our mind long after the corresponding stimulation has disappeared from the outside world. That’s because our brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of a conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are. The flexible dissemination of information, I argue, is a characteristic property of the conscious state.

Dehaene, Stanislas. Consciousness and the Brain (p. 163-164). Penguin Publishing Group. Kindle Edition.

Credit: OpenStax College via Wikipedia

The workspace is held to be in a network of regions including the prefrontal cortex, the anterior cingulate cortex, and parietal regions.  I’ve discussed before that the middle parietal regions are the central integration point for sensory association processing in the cortex, and the prefrontal cortex is the central integration point for motor planning, so it makes sense that these regions would be highly interconnected and interactive, with ongoing recurrent loops of communication between the sensorium and the motorium.

I’ve also written about the central importance of the prefrontal cortex in imagination, the simulation and evaluation of action scenarios.  This fits with Dehaene’s view of the central importance of this region for the workspace.  Indeed, Dehaene implies that conscious access is centered on the prefrontal cortex, which fits with its role as the executive center of the brain.

So this theory appears to have a lot going for it.  And viewed from a purely instrumental perspective, it seems predictive of a lot of observations.  Unlike Tononi’s Integrated Information Theory, it doesn’t aspire to recognize consciousness in systems outside of the brain, just in the brain itself, particularly in human or primate brains.  (Although Dehaene does think it could provide insights into possible architectures for consciousness in AI systems.)

And yet, like most grounded scientific theories of consciousness, I tend to think it captures aspects of the reality, but not the whole reality itself.  Even if we restrict ourselves to conscious access, the descriptions of the workspace feel a bit too simple to me.  It’s described like it’s one big thing, like a type of giant data bus.

This isn’t to say I think conscious access doesn’t involve wholesale activation of the regions that Dehaene discusses, but I’m not sure I buy the implicit description of it as one unified whole.  Based on all the reading I’ve done, it strikes me more as a complex web of disparate subsystems communicating with each other, with cross talk between the streams creating an emergent thing that may resemble the global workspace, but more messy, noisy, and less coherent than the theory implies.

But maybe I’m just quibbling here.  Dehaene might argue that it’s the final result that matters, and he’d be right.  And the global neuronal workspace seems general enough to be compatible with a lot of observations, as well as other theories such as HOT (Higher Order Theory).  I suspect before it’s over we’ll need a collection of theories to account for all observations.  But only time and more research will tell.