Is there a conscious perception finish line?

Global workspace theory (GWT) is the proposition that consciousness is composed of contents broadcast throughout the brain.  Various specialty processes compete for the limited capacity of the broadcasting mechanisms, to have their content broadcast to all the other specialty processes.
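To make the competition-and-broadcast idea concrete, here is a toy sketch (my own illustration, not a model from the GWT literature; the process names, contents, and salience scores are all made up): specialty processes submit content with a salience score, the limited-capacity workspace takes the winner, and the winning content is broadcast to all the other processes.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    received: list  # broadcasts this process has seen so far

def workspace_cycle(submissions, processes):
    """One competition cycle: the highest-salience submission wins the
    limited-capacity workspace, and its content is broadcast to every
    process except the one that submitted it."""
    winner, content, _ = max(submissions, key=lambda s: s[2])
    for p in processes:
        if p.name != winner:
            p.received.append(content)
    return content

# Hypothetical specialty processes and submissions (name, content, salience).
procs = [Process(n, []) for n in ("vision", "audition", "memory", "motor")]
subs = [("vision", "red circle", 0.9), ("audition", "faint beep", 0.4)]

broadcast = workspace_cycle(subs, procs)
assert broadcast == "red circle"  # the most salient content wins the slot
assert all("red circle" in p.received for p in procs if p.name != "vision")
```

The single workspace "slot" here stands in for the theory's limited broadcast capacity; in the real proposal the competition and broadcast are of course neural, not a list of tuples.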

Global neuronal workspace (GNW) is a variant of that theory, popularly promoted by Stanislas Dehaene, which I’ve covered before.  GNW is more specific than generic GWT on the physical mechanisms involved.  It relies on empirical work done over the years demonstrating that conscious reportability involves wide-scale activation of the cortex.

One of the observed stages is a massive surge about 300 milliseconds after a stimulus, called the P3b wave.  Previous work seemed to establish that the P3b wave is a neural correlate of consciousness.  Dehaene theorized that it represents the stage where one of the signals achieves a threshold and wins domination, with all the other signals being inhibited.  Indeed, a distinguishing mark of the P3b is its massive amplitude, much of which Dehaene attributes to widespread inhibitory action suppressing the competing signals.

The P3b has been replicated extensively and is seen as a well-established phenomenon associated with attention and consciousness.  But this is science, and any result is always provisional.  Michael Cohen and colleagues have put out a preprint of a study that may demonstrate that the P3b wave is not associated with conscious perception, but with post-perceptual processing.

The study tested the perception of subjects, showing various images while measuring their brain waves via EEG.  It used a no-report protocol: in half of the trials, the subjects were asked to report whether they saw something; in the other half, they were not asked to report.  Crucially, the P3b wave only manifested in the report trials, never in the no-report ones, even when the no-report images were exactly the same as the ones that generated affirmative reports.
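For a sense of the kind of measurement involved, here is a hypothetical sketch, not the study’s actual analysis pipeline, with entirely made-up numbers: EEG epochs are averaged within each condition to form an event-related potential (ERP), and the mean amplitude in a late window, where the P3b would appear, is compared across the report and no-report conditions.

```python
def erp(epochs):
    """Average a list of equal-length EEG epochs sample by sample to get
    an event-related potential."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def mean_window(signal, start, end):
    """Mean amplitude over a sample window (a stand-in for a time window,
    e.g. ~300 ms onward for the P3b)."""
    window = signal[start:end]
    return sum(window) / len(window)

# Toy epochs, 10 samples each: the 'report' trials carry a late deflection
# that the 'no-report' trials lack (illustrative values only).
report_epochs = [[0, 0, 0, 0, 0, 3, 4, 4, 3, 1],
                 [0, 1, 0, 0, 1, 3, 5, 4, 2, 1]]
noreport_epochs = [[0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
                   [0, 1, 0, 0, 0, 1, 0, 0, 1, 0]]

late_report = mean_window(erp(report_epochs), 5, 10)
late_noreport = mean_window(erp(noreport_epochs), 5, 10)
assert late_report > late_noreport  # late component only under report
```

The study’s result, in these terms, is that the late-window difference tracks whether a report was requested, not whether the image was consciously perceived.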

To control for the possibility that the subjects weren’t actually conscious of the image in the no-report cases, the subjects were given a memory test after a batch of no-report events, checking to see what they remembered perceiving.  Their memories of the perception correlated with the results in the report versions.

So, the P3b wave, a major pillar of GNW, may be knocked down.  The study authors are careful to make clear that this does not invalidate GWT or other cognitive theories of consciousness.  They didn’t test for all the other ways the information may have propagated throughout the cortex.  Strictly speaking, it doesn’t even invalidate GNW itself, but it does seem to knock out a major piece of evidence for it.

However, this becomes a more interesting discussion if we ask: what would it mean if all cortical communication beyond the sensory regions were ruled out, if the ability to acquire a memory of a sight required only the local sensory cortices?  It might seem like a validation of views like Victor Lamme’s local recurrent processing theory, which holds that local processing in the sensory cortices is sufficient for conscious perception.

But would it be?  Dehaene, when discussing his theory, is clear that it’s a theory of conscious access.  For him, something isn’t conscious until it becomes accessible by the rest of the brain.  Content in sensory cortices may form, but it isn’t conscious until it’s accessible.  Dehaene refers to this content as preconscious.  It isn’t yet conscious, but it has the potential to become so.

In that view, the content of what the subjects perceived in the non-report tests may have been preconscious, unless and until their memories were probed, at which point it became conscious.

This may be another case where the concept of consciousness is causing people to argue about nothing.  If we describe the situation without reference to it, the facts seem clear.

Sensory representations form in the local sensory cortex.  A temporary memory of that representation may persist in that region, so if probed soon enough afterward, a report about the representation can be extracted from it.  But until there is a need for a report or other usage, it is not available to the rest of the system, and none of the activity, including the P3b, normally associated with that kind of access is evident.

This reminds me of Daniel Dennett’s multiple drafts theory (MDT) of consciousness.  MDT is a variant of GWT, but minus the idea that there is any one event where content becomes conscious.  It’s only when the system is probed in certain ways that one of the streams, one of the drafts, becomes selected, generally one that has managed to leave its effects throughout the brain, that has achieved “fame in the brain.”

In other words, Dennett denies that there is any one finish line where content that was previously unconscious becomes conscious.  In his view, the search for that line is meaningless.  In that sense, the P3b wave may be a measure of availability, but calling it a measure of consciousness is probably not accurate.  And it’s not accurate to say that Lamme’s local recurrent processing is conscious, although it’s also not accurate to relegate it completely to the unconscious.  What we can say is that it’s at a particular point in the stream where it may become relevant for behavior, including report.

Maybe this view is too ecumenical and I’m papering over important differences.  But it seems like giving up the idea of one finish line for consciousness turns a lot of theories that look incompatible into models of different aspects of the same overall system.

None of this is to say that GWT or any of its variants might not be invalidated at some point.  These are scientific theories and are always subject to falsification on new data.  But if or when that happens, we should be clear about exactly what is being invalidated.

Unless of course I’m missing something?

Posted in Mind and AI | 36 Comments

For animal consciousness, is there a fact of the matter?

Peter Carruthers has been blogging this week on the thesis of his new book, Human and Animal Minds: The Consciousness Question Laid to Rest.  I mentioned Carruthers’ book in my post on global workspace theory (GWT), but didn’t get into the details.  While I had been considering taking a fresh look at GWT, his book was the final spur that kicked me into action.

Carruthers used to be an advocate for higher order theories (HOT) of consciousness.  He formulated the dual content version that I thought was more plausible.  As an advocate for HOT, he seemed skeptical of animal consciousness.  But in recent years, he’s abandoned HOT in favor of GWT: the idea that conscious content is the result of processes that have won the competition to have their results globally broadcast to systems throughout the brain.

Most GWT proponents will admit that it’s a theory of access consciousness, that it doesn’t directly address phenomenal consciousness, which usually isn’t seen as a problem because most people in this camp see them as the same thing, with phenomenal consciousness being access consciousness from the inside.   In other words, the idea that phenomenal consciousness is something separate and apart from access consciousness is rejected.

Carruthers isn’t completely outside of this view, but his is a bit more nuanced.  He sees phenomenal consciousness as a subset of access consciousness, the portion of it that includes nonconceptual content, content that is irreducible, such as the color yellow.  (Of course, objectively the content of yellow is reducible to patterns of neural spikes originating from M and L cones in the retina, but only the irreducible sensation of yellow makes it into the workspace.)  This is in contrast to conceptual content, such as the perception of a dog, which is reducible to more primitive experience.

So Carruthers sees phenomenal experience as globally broadcast nonconceptual content, in humans.  Why the stipulation at the end?  He points out that phenomenal experience is inherently a first person account, in that discussing it is basically an invitation for each of us to access our own internal experience.

Asking whether another system has that same internal experience is asking how much like us they are.  Other species may have processes that resemble our global broadcasting mechanisms, to greater or lesser extents, and their collection of competing and receiving processes may resemble our own, again to greater or lesser extent.  In both cases, the farther we move away from humans in taxonomy, the less like us they are.

Which means that no other species will have the exact same components of our experience.  Whether what they have amounts to phenomenal experience, our first person experience, depends on which aspects of that experience we judge to be essential.  In other words, there isn’t a fact of the matter.

I pointed out to Carruthers that this also applies to many humans, notably brain injured patients, whose global broadcasting mechanism or collection of competing and receiving processes no longer match that of a common healthy human.  Carruthers, to his credit, bites this bullet and acknowledges that there isn’t a fact of the matter when it comes to whether human infants or brain injured patients are phenomenally conscious.

Carruthers’ overall point is that it doesn’t matter, because nothing magical happens at any stage.  Nothing changes.  There are just capabilities that are either present or absent.  In his view, the focus on consciousness is a mistake.  Broadly speaking, I think he’s right.  There is no fact of the matter.  Consciousness is in the eye of the beholder.

But it’s worth discussing why that’s so, arguably the reason why anything ever fails to be a fact of the matter: ambiguity.  In this case, ambiguity about what we mean by terms like “phenomenal experience”, for it to be “like something”.  “Like” to what degree?  And what “thing”?

As soon as we try to nail down a specific definition, we run into trouble, because no specific definition is widely accepted.  The vague terminology masks wide differences.  It refers to a vast, hazy, and inconsistent collection of capabilities we’ve all agreed to put under one label, but without agreeing on the specifics.  It’s like lawmakers who can’t agree on precisely what a law should say, so instead write something vague that can be agreed on, and leave it to the courts to hash out later.

And yet, I think we can still recognize that the ways various species process information will be similar to the way we do, to varying degrees.  As Carruthers notes, there is no magical line, no point where we can clearly say consciousness begins.  But great apes are a lot closer to us than dogs, which are closer than mice, which in turn are closer than frogs, fish, etc., all of which are much closer than plants, rocks, storm systems, or electrons.

And there’s something to be said for focusing on systems that do have irreducible sensory and affective content that are globally integrated into their processing.  This matches the definition many biologists use for primary consciousness.  Primary consciousness appears to be widespread among mammals and birds, and possibly among all vertebrates and arthropods.

But primary consciousness omits aspects of our experience many will insist are essential, such as metacognitive self awareness or imaginative deliberation, capabilities that dramatically expand our appreciation of the contents of primary consciousness.  Such a view dramatically reduces the number of species that are conscious, perhaps only to humans and maybe some great apes.  Which view is right?  To Carruthers’ point, there is no fact of the matter.

Incidentally, even primary consciousness gets into definitional difficulties.  For example, fish and amphibians can be demonstrated to have both sensory and affective content, but the architecture of their brains makes it unclear just how integrated the affective contents are with much of the sensory content.  Does this then still count as primary consciousness?  I personally think the answer is yes since there is at least some integration, but can easily see why many might conclude otherwise.

What do you think?  Is Carruthers being too stingy in his conclusion?  Is there a way we can establish a fact of the matter we can all agree on?  Or is the best we can do to recognize the partial and varying commonalities we have with other species?

Posted in Mind and AI | 70 Comments

Peter Carruthers on the problems of consciousness

Peter Carruthers is posting this week at The Brains Blog on his new book, Human and Animal Minds, which I mentioned in my post on global workspace theory.  His first post focuses on two issues: latent dualism and terminological confusion.

I think he’s right on both counts.  On the latent dualism issue, I’m reminded of something Elkhonon Goldberg said in his book on the frontal lobes: The New Executive Brain:

Why, then, have neuroscientists, and certainly the general public, been so committed to the concept of consciousness and to the axiomatic assumption of its centrality in the workings of the mind? My answer to this question is shockingly embarrassing in its implications: because old gods die hard. Instead of representing a leap forward, the quest for the mechanisms of consciousness represents a leap backward. The dualism of body and soul has been rejected in name but not in substance. We no longer talk about soul; we now call it consciousness, just as in some circles people no longer talk about creation, they talk about “intelligent design.” We may feel embarrassed by certain old, tired explanatory constructs, and feel intellectually obligated to discard them, but they are often too ingrained for us to truly purge them from our own mental makeup. We give them different names and sneak them right in through the back door. Like many recent converts, we continue to honor the old gods in secret—the god of soul in the guise of consciousness.

Goldberg, Elkhonon. The New Executive Brain: Frontal Lobes in a Complex World (pp. 35-36). Oxford University Press. Kindle Edition.

Carruthers doesn’t seem to go quite that far, but he does note that tacit dualism makes people take thought experiments like zombies and Mary’s room far more seriously than they should.  Amen.

(I should note that Goldberg, despite his skepticism, summarily describes a neural theory of consciousness that is essentially the global workspace theory.)

On the terminological issue, Carruthers makes distinctions between different meanings of the word “conscious”, distinguishing between:

  1. wakefulness
  2. perception of things in the environment
  3. access consciousness
  4. phenomenal consciousness.

For him, it’s not controversial that animals have 1-3, but 4 is more questionable.  In a comment, I challenged him that the notion that access and phenomenal consciousness are something other than different perspectives on the same thing is itself latent dualism, and that we should expect phenomenal consciousness to be present to the extent access consciousness is.

He responded that he’ll address this in later posts this week.  Having read large sections of his book, I’m pretty familiar with what his answer will be.  (And he alludes to it in his response.)  But I’ll hold off commenting until he does address it.

What do you think of his points?  Or of Goldberg’s?

Posted in Zeitgeist | 13 Comments

The battle between integration and workspace will take a while

Well, I find this a bit disappointing.  I was hoping that the results of the contest between global workspace theory (GWT) and integrated information theory (IIT) would be announced sometime this year.  Apparently, I’m going to have to wait a while:

Pitts describes the intention of this competition as “to kill one or both theories,” but adds that while he is unsure that either will be definitively disproved, both theories have a good chance of being  critically challenged. It’s expected to take three years for the experiments to be conducted and the data to be analyzed before a verdict is reached.

Three years.  And of course there remains no guarantee the results will be decisive.  Sigh.

I don’t particularly need the results to tell me which theory is more plausible.  GWT is probably not the final theory, but it currently feels more grounded than IIT.

Still, it would be nice to get insights on the back versus front of the brain question.  I’m expecting the answer to be that the whole brain is usually involved, but that consciousness can get by in reduced form with only the posterior regions.  (Assuming the necessary subcortical regions remain functional.)  Having only those regions might be a sensory consciousness with no emotional feeling, a type of extreme akinetic mutism.

I suppose it’s conceivable consciousness could also get by with just the frontal regions, but it seems like it would be without any mental imagery (except perhaps olfactory), just blind feelings, which seems pretty desolate.  On the other hand, it might amount to what the forebrains of fish and amphibians have, since their non-smell senses go only to their midbrain.

Oh well.  I’m sure we’ll have plenty of other studies and papers to entertain us in the meantime!

Posted in Zeitgeist | 30 Comments

Link on Daniel Dennett video updated

The video embed on the previous post, which had gone dead, is now updated.  Hopefully this one will endure.

Posted in Zeitgeist

Daniel Dennett on consciousness and the hard question

This interview is pretty much classic Daniel Dennett.  He starts off pointing out that introspection is unreliable, that our beliefs about our inner experience are what need to be explained, not necessarily the reality those beliefs purport to describe.  He doesn’t name the meta-problem, but it’s clear that it, and related concepts, are what he’s talking about.

What’s worth noting here is the discussion on the hard question: “and then what happens?”  At its root, this question gets at the fact that phenomenal experience can’t be considered in isolation, but has to be assessed in terms of its downstream effects, how it fits into the overall survival framework.  He uses this to dissect the nature of something like pain, and of seeing a blue sky.

(This video is about 41 minutes.)

Dennett did a paper on the hard question, which I’ve been meaning to read, although I already buy its main premise.

The so-called hard problem of consciousness is a chimera, a distraction from the hard question of consciousness, which is once some content reaches consciousness, ‘then what happens?’. This question is seldom properly asked, for reasons good and bad, but when asked it opens up avenues of research that promise to dissolve the hard problem and secure a scientifically sound theory of how the human brain produces the (sometimes illusory) convictions that mislead us.

Unfortunately, despite the obvious play on Chalmers’ hard problem, I doubt this concept will spread the way the hard problem did.  The hard problem seems to affirm our own importance; the hard question sheds a clarifying light that often does the opposite.  Still, for anyone trying to understand this stuff, it’s an important question.

People always seem to have strong opinions about Dennett, probably related to his role in the New Atheism movement, but I’ve generally found his views on consciousness to be far more informed than most in the philosophy of mind.  But then I often agree with him, so I would.

What do you think of the hard question?  Or about the other topics discussed in the video?

Posted in Zeitgeist | 40 Comments

A response to the unfolding argument: a defense of Integrated Information Theory

Back in May,  I shared a paper that made a blistering attack on the integrated information theory (IIT) of consciousness.  A major point of IIT is that a specific causal structure is necessary to generate phenomenal experience, namely a feedback or recurrent neural network, that is, a neural network with structural loops.  To be clear, IIT asserts that this causal structure must be physical.  Implementing it in a software neural network wouldn’t be sufficient.

However, the unfolding paper mathematically demonstrated that any output that can be produced from a recurrent network can also be produced from an equivalent “unfolded” feed forward network.
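The idea can be illustrated with a toy network (my own sketch with made-up weights, not the paper’s construction): a one-unit recurrent network run over a fixed-length input sequence, and its “unfolded” equivalent, in which each time step becomes a distinct feed-forward layer holding a copy of the recurrent weights, so the computational graph has no loops.

```python
def recurrent(xs, w_rec=0.5, w_in=1.0):
    """One-unit recurrent network: the hidden state loops back each step."""
    h = 0.0
    for x in xs:
        h = max(0.0, w_rec * h + w_in * x)  # ReLU update
    return h

def unfolded(xs, w_rec=0.5, w_in=1.0):
    """Feed-forward 'unfolding': one distinct layer per time step, each
    holding its own copy of the recurrent weights -- no loops remain."""
    layers = [(w_rec, w_in) for _ in xs]  # layer t = a copy of the weights
    h = 0.0
    for (wr, wi), x in zip(layers, xs):
        h = max(0.0, wr * h + wi * x)
    return h

seq = [0.2, -1.0, 0.7, 0.3]
assert recurrent(seq) == unfolded(seq)  # identical input-output behavior
```

The two functions compute the same outputs for any sequence of this length, yet only the first has the recurrent causal structure IIT deems necessary for experience, which is exactly the tension the paper exploits.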

If so, there might be no observable differences in behavior between a system IIT predicts is conscious and a computationally identical but unfolded one that IIT predicts is not conscious.  In other words, IIT would be labeling the feed forward system a philosophical zombie (a behavioral one).  Zombies are untestable, making this aspect of IIT untestable, leading the authors to label IIT as unscientific.

Now, a group of IIT researchers have produced a response paper.  One of the response authors shared what appears to be a draft version.  (Which means this link might go dead at some point.  I’ll try to remember to update it when the actual paper gets published.)

The response authors start by labeling the stance of the unfolding paper as one of methodological behaviorism, one they say advocates studying the external behavior of subjects and basing theories entirely on that.  Their stance, on the other hand, depends on “the link – known to each of us by first-hand experience – between conscious experience and behavioral reports,” and builds theories based on conscious experience.

There is some discussion about assessing the consciousness of a human compared to a black box system of some kind.  They think it’s fair game to take what is known about the human’s internal structure to privilege it in this assessment.  This seems like an implicit argument against the Turing test.

The response authors go on to assert that the analysis of the unfolding paper was done within the behaviorist and functionalist mindset.  They take the stated conclusions from that paper and amend language to them, adding in effect: “except for consciousness.”

For example:

“(P3): Two systems that have identical input-output functions cannot be distinguished by any experiment that relies on a physical measurement (other than a measurement of brain activity itself or of other internal workings of the system).”

becomes:

P3’ “Two systems that have identical input-output functions (except where conscious experience is concerned) cannot be distinguished by any experiment that purely relies on input-output measurements. We can distinguish these two systems, however, by understanding the internal working of the respective systems, through the systematic investigation of the links between reports about consciousness and physical stimulations/measurements of the internal mechanisms.”

It’s worth noting that their proposed methods still crucially depend on reports, which is just a form of output measurement.  In other words, they’re still doing what the unfolding authors say is the only way to proceed.  They’re just implicitly asserting that the specific internal structure details will make a difference in those measurements.

But if they turn out to be wrong about that, and the output of some system is the same irrespective of specific internal structure, then under IIT, they’d be forced to regard that system as a zombie, which again brings us to the point that the unfolding authors made.  We still only have the output of the system as evidence.  We can only use internal structures as an indicator if they happen to match that of another system whose behavior has already convinced us of its consciousness.

Interestingly, the response authors could have attacked this at a computational level, asserting that another physical causal structure that is identical in terms of inputs and outputs might not be identical in terms of efficiency or performance.  It might be that IIT is wrong in principle but right in terms of effectiveness.  But that would have meant engaging the argument on functional grounds.

Finally, the response authors make the case that just because IIT makes some untestable predictions is no reason to label it unscientific.  Many scientific theories make such predictions.  The question is whether they make other predictions that actually are testable.  Theories, they point out, should be judged on their testable predictions.

I actually felt like this was the strongest part of their argument.  In retrospect, I think the unfolding authors overstated the case against IIT.  The predictions they discussed are untestable, but not all of IIT’s predictions are.

On the other hand, the response to the specific issues raised didn’t strike me as successful.  And theories should also be judged on whether they’re the simplest explanation for the evidence.

Admittedly, this comes from the perspective of a functionalist who finds little theoretical merit in IIT.  But maybe I’m missing something?

Posted in Zeitgeist | 11 Comments

The issues with higher order theories of consciousness

After the global workspace theory (GWT) post, someone asked me if I’m now down on higher order theories (HOT).  It’s fair to say I’m less enthusiastic about them than I used to be.  They still might describe important components of consciousness, but the stronger assertion that they provide the primary explanation now seems dubious.

A quick reminder.  GWT posits that conscious content is information that has made it into a global workspace, that is, content that has won the competition and, for a short time, is globally shared throughout the brain, either exclusively or with only a few other coherently compatible concepts.  It becomes conscious by all the various processes collectively having access to it and each adapting to it within their narrow scope.

HOT on the other hand, posits that conscious content is information for which a higher order thought of some type has been formed for it.  In most HOTs, the higher order processing is thought to happen primarily in the prefrontal cortex.

As the paper I highlighted a while back covered, there are actually numerous theories out there under the higher order banner.  But it seems like they fall into two broad camps.

In the first are versions which say that a perception is not conscious unless there is a higher order representation of that perception.  Crucially in this camp, the entire conscious perception is in the higher order version.  If, due to some injury or pathology, the higher order representation were to be different from the lower order one, most advocates of these theories say that it’s the higher order one we’d be conscious of, even if the lower order one was missing entirely.

Even prior to reading up on GWT, I had a couple of issues with this version of HOT.  My first concern is that it seems computationally expensive and redundant.  Why would the nervous system evolve to form the same imagery twice?  We know neural processing is metabolically expensive.  It seems unlikely evolution would have settled on such an arrangement, at least unless there was substantial value to it, which hasn’t been demonstrated yet.

It also raises an interesting question.  If we can be conscious of a higher order representation without the lower order one, why then, from an explanatory strategy point of view, do we need the lower order one?  In other words, why do we need the two tier system if one (the higher tier) is sufficient?  Why not just have one sufficient tier, the lower order one?

The HOTs I found more plausible were in the second camp, and are often referred to as dispositional or dual content theories.  In these theories, the higher order thought or representation doesn’t completely replace the lower order one.  It just adds additional elements.  This has the benefit of making the redundancy issue disappear.  In this version, most of the conscious perception comes from the lower order representations, with the higher order ones adding feelings or judgments.  This content becomes conscious by its availability to the higher order processing regions.

But this then raises another question.  What about the higher order region makes it conscious?  By making the region itself, the location, the crucial factor, we find ourselves flirting with Cartesian materialism, physical dualism, the idea that consciousness happens in a relatively small portion of the brain.  (Other versions of this type of thinking locate consciousness in various locations such as the brainstem, thalamus, or hippocampus.)

The issue here is that we still face the same problem we had when considering the whole brain.  What about the processing of that particular region makes it a conscious audience?  Only, since now we’re dealing with a subset of the brain, the challenge is tougher, because it has to be solved with less substrate.  (A lot less with many of the other versions.  At least the prefrontal cortex in humans is relatively vast.)

We can get around this issue by positing that the higher order regions make their results available back into the global workspace, that is, by making the entire brain the audience.  It’s not the higher order region itself which is conscious.  Its contents become conscious by being made accessible to the vast collection of unconscious processes throughout the brain, each of which act on it in its own manner, collectively making that content conscious.

But now we’re back to consciousness involving the workspace and its audience processes.  HOT has dissolved into simply being part of the overall GWT framework.  In other words, we don’t need it, at least not as a theory, in and of itself, that explains consciousness.

None of this is to say higher order processing isn’t a major part of human consciousness.  Michael Graziano’s attention schema theory, for instance, might well still have a role to play in providing top down control of attention, and providing our intuitive sense of how it works.  The other higher order processes provide metacognition, imagination, and what Baars calls “feelings of knowing,” among many other things.

They’re just not the sole domain of consciousness.  If many of them were knocked out, the resulting system would still be able to have experiences, experiences that could lay down new memories.  It’s just that the experience would be simpler, less rich.

Finally, it’s worth noting that David Rosenthal, the original author of HOT, makes this point in response to Michael Graziano’s attempted synthesis of HOT, GWT, and his own attention schema theory (AST):

Graziano and colleagues see this synthesis as supporting their claim that AST, GWT, and HO theory “should not be viewed as rivals, but as partial perspectives on a deeper mechanism.” But the HO theory that figures in this synthesis only nominally resembles contemporary HO theories of consciousness. Those theories rely not on an internal model of information processing, but on our awareness of psychological states that we naturally classify as conscious. HO theories rely on what I have called (2005) the transitivity principle, which holds that a psychological state is conscious only if one is in some suitable way aware of that state.

This implies that consciousness is introspection.  Admittedly, there is precedent going back to John Locke for defining consciousness as introspection.  (Locke’s specific definition was “the perception of what passes in a man’s own mind”.)  Doing so dramatically reduces the number of species that we consider to be conscious, perhaps down to just humans, non-infant humans to be precise.  I toyed with this definition a few years ago, before deciding that it doesn’t fit most people’s intuitions.  (And when it comes to definitions of consciousness, our intuitions are really all we have.)

It ignores the fact that we are often not introspecting while we’re conscious.  And much of what we introspect goes on in animals (in varying degrees depending on species), or human babies for that matter, even if they themselves can’t introspect it.  It also ignores the fact that if a human, through brain injury or pathology, loses the ability to introspect, but still shows an awareness of their world, we’re going to regard them as conscious.

So HOT doesn’t hold the appeal for me it did throughout much of 2019.  Although new empirical results could always change that in the future.

What do you think?  Am I missing benefits of HOT?  Or issues with GWT?

Posted in Mind and AI | 57 Comments

The Witcher

Poster for The Witcher showing the main characters

I just finished watching The Witcher on Netflix.  This was a series that I initially resisted getting into.  From a distance, it looked too much like a Game of Thrones knockoff.  But after numerous people recommended it, I decided to give it a try.

It turns out that The Witcher is based on a book series, one that actually predates George R.R. Martin’s A Song of Ice and Fire, so it’s definitely not a knockoff.  Indeed, The Witcher seems like part of a much older sword and sorcery tradition going back to the likes of Robert E. Howard and Michael Moorcock.

The world presented in the series is, in many respects, a typical fantasy one, with a roughly high medieval society, but one that also includes elves, dwarves, and various other mythical creatures, including monsters, and where magic works, so there are sorcerers.  Magic in this world is actually pretty common.  Every village seems to have its local sorcerer for various needs.

There are references to a historical event called the Conjunction of the Spheres, which apparently brought humans and the other races and beings together in one land, called The Continent.  This is far from a fairy tale world however.  It’s pretty dark, and its view of humanity is not at all complimentary.  In general, humans in this world mistreat and discriminate against just about anything different from them.

On the other hand, there are likable characters.  And we’re able to see the point of view of even the most ruthless ones.  There does appear to be an overall villain to the series, although his motivations are not yet clear.  Which, of course, means that this first season ends on a cliffhanger.

The show actually takes an episode or two to really get its footing, so don’t give up on it in the first twenty minutes, as I almost did.  It has a lot of surprises and zingers.  If fantasy, particularly dark, gritty fantasy, works for you, then this is well worth checking out.

I hadn’t heard of the book series before.  Although these days I’m more into science fiction than fantasy, waiting a whole year to see what happens next may spur me to check out the books.

Posted in Zeitgeist | 14 Comments

Global workspace theory: consciousness as brain wide information sharing

Lately I’ve been reading up on global workspace theory (GWT).  In a survey published last year, among general consciousness enthusiasts, integrated information theory (IIT) was the most popular theory, followed closely by GWT.  However, among active consciousness researchers, GWT was seen as the most promising by far (although no theory garnered a majority).  Since seeing those results, I’ve been curious about why.

One reason might be that GWT has been around a long time, having first been proposed by Bernard Baars in 1988, with periodic updates all recently republished in his new book.  It’s received a lot of development and has spawned numerous variants.  Daniel Dennett’s multiple drafts model is one.  But perhaps the one with the most current support is Stanislas Dehaene’s global neuronal workspace, which I read and wrote about earlier this year.

All of the variants posit that for an item to make it into consciousness, it has to enter a global workspace in the brain.  This is most commonly described using a theater metaphor.

Imagine a play in progress in a theater.  A light shines down on the stage on the currently most relevant actor or events: the light of consciousness.  The backstage personnel enabling the play, along with the director and other controlling personnel, are not in the light.  They’re in the unconscious dark.  The audience, likewise, is in the dark.  That is, the audience members are unconscious information processing modules.

This last point is crucial, because this is not the infamous Cartesian theater, with an audience of one conscious homunculus, a little person observing events.  Such a notion merely defers the explanation.  If the homunculus provides consciousness, then does it too have its own homunculus?  And that one yet its own, in an infinite regress?  By stipulating that the audience is not conscious, we avoid this trap.

That said, one issue I have with this metaphor is the passivity of the audience.  Consider instead a large meeting room with a lot of rowdy people.  There is someone chairing the meeting, but their control is tenuous, with lots of people attempting to talk.  Every so often, someone manages to gain the floor and make a speech, conveying their message throughout the room.  At least until the next person, or coalition of people, either adds to their message, or shouts them down and takes over the floor.

Most of the talking in the room takes place in low level side conversations.  But the general room “consciousness”, that is, what everyone is commonly aware of, consists only of what’s conveyed in the speeches, even though all the side conversations are constantly changing the tenor and state of people’s opinions throughout the room, and could affect future speeches.

I think this alternate metaphor makes it clearer what it means to enter the workspace.  In all of the theories, the workspace is not a particular location in the brain.  To “enter” it is to be broadcast throughout the brain, or at least the cortical-thalamic system.


Lobes of the brain
Image credit: BruceBlaus via Wikipedia

How does a piece of information, or a coalition of information, accomplish this?  There is a competition.  Various modules in the brain attempt to propagate their signals.  In many cases, actually in most cases, they are able to connect up to one or a few other modules and accomplish a task (the side conversations).  If they do, the processing involved is unconscious.

But in some cases, the signal from a particular module resonates with information from other modules, and a coalition is formed, which results in the information dominating one of the major integration hubs in the brain and brings the competition to the next level.

At some point, a signal succeeds in dominating the frontoparietal network, all competing signals are massively inhibited, and the winning signal is broadcast throughout the cortical-thalamic system, with binding recurrent connections forming circuits between the originating and receiving regions.  The signal achieves what Daniel Dennett calls “fame in the brain”.  It is made available to all the unconscious specialty modules.

Many of these modules will respond with their own information, which again might be used by one or more other modules unconsciously.  Or the new information might excite enough other modules to win the competition and be the next broadcast throughout the workspace.  The stream of consciousness is the series of images, concepts, feelings, or impulses that win the competition.
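The competition-and-broadcast cycle described above can be sketched as a toy winner-take-all loop.  To be clear, this is only an illustration of the general idea, not anything from Baars’ or Dehaene’s actual models; the module names, the random “salience” bids, and the single-winner rule are all invented for the example.

```python
import random

random.seed(0)  # reproducible toy run

class Module:
    """An unconscious specialty processor competing for the workspace."""
    def __init__(self, name):
        self.name = name
        self.inbox = []  # broadcasts received from the workspace

    def propose(self):
        # Each module bids with some activation strength ("salience").
        # Here that's just a random number; in the theory it would
        # depend on stimulus strength, relevance, coalitions, etc.
        return (random.random(), f"{self.name}-content")

def workspace_cycle(modules):
    """One competition: the strongest signal wins and is broadcast."""
    bids = [m.propose() for m in modules]
    salience, content = max(bids)  # winner-take-all; losers are "inhibited"
    for m in modules:              # broadcast to every module, winner included
        m.inbox.append(content)
    return content

modules = [Module(n) for n in ("vision", "memory", "affect", "language")]
# The sequence of winning broadcasts is the toy "stream of consciousness".
stream = [workspace_cycle(modules) for _ in range(3)]
```

The point of the sketch is that nothing marks any one bid as “conscious” in itself: a signal becomes workspace content only by out-competing the others and being made available to every module.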

One question that has long concerned me about GWT: why does simply being in the workspace cause something to be conscious?  I think the answer is it’s the audience that collectively makes it so.

Consider Dennett’s “fame in the brain” metaphor.  If you were to meet a famous person, would you find anything about the person, in and of themselves, that indicated fame?  They might be attractive, an athlete, funny, or extraordinary in some other fashion, but in all cases you could meet non-famous people with those same traits.  What then gives them the quality of fame?  The fact that large numbers of other people know who they are.  Fame isn’t something they exude.  It’s a quality they are granted by large numbers of people, one which often gives the famous person causal influence in society.

Similarly, there’s nothing about a piece of information in the brain, in and of itself, that makes it either conscious or unconscious.  It becomes a piece of conscious content when it is accessible by several systems throughout the brain, memory systems that might flag it for long term retention, affect systems that might provide valenced reactions, action systems that might use it in planning, or introspective and language systems that might use it for self report.  All of these systems end up giving the information far more causal influence than it would have had if it remained isolated and unconscious.

Admittedly, this is a description of access consciousness.  Someone might ask how this implies phenomenal consciousness.  GWT proponents tend to dismiss the philosophical idea that phenomenal consciousness is something separate and apart from access.  I agree with them.  To me, phenomenal consciousness is what access consciousness is like from the inside.

But I realize many people don’t see it that way.  I suspect many might accept GWT but feel the need to supplement it with additional philosophy to address the phenomenal issue.  Peter Carruthers, in his latest book, attempts to philosophically demonstrate how GWT explains phenomenal experience, but since he’s a “qualia irrealist”, I’m not sure many people seeking that kind of explanation will find his account persuasive.

There are a lot of nuanced differences between the various global workspace theories.  For example, Baars most often speaks of the workspace as being the entire cortical-thalamic core.  Dehaene tends to emphasize the role of the prefrontal cortex, although he admits that parietal, temporal, and other regions in the frontoparietal network are major players.

Subcortical structures of the brain

Image credit: OpenStax College via Wikipedia

Baars emphasizes that processing in any one region of the cortical-thalamic core can be conscious or unconscious.  Any region can potentially win the competition and get its contents into the workspace.

Dehaene is more reserved, noting that some regions, particularly executive ones, have more connectivity than others, and that very early sensory regions don’t necessarily seem capable of generating workspace content, except indirectly through later sensory layers.

Both agree that subcortical regions generally can’t contribute directly to the workspace.  Although Baars sees the hippocampus as a possible exception.

Both Dehaene and Baars think it’s likely that many other animal species have global workspaces and are therefore conscious.  Baars seems confident that any animal with a cortex or a pallium has a workspace, which I think would include all vertebrates.  Dehaene is again a bit more cautious, but he sees all mammals as likely having a workspace, and possibly birds.  Peter Carruthers, who converted from his own particular higher order theory to GWT, doesn’t think there’s a fact of the matter on animal consciousness.

A common criticism of GWTs is that they are theories of cognition rather than consciousness.  Since to me, any scientific theory of consciousness is going to be a cognitive one, I don’t see that as a drawback.  And I realized while reading about them that they also function as theories of general intelligence, the holy grail of AI research.  Which fits since GWT actually has origins in AI research.

GWTs also seem able to account for situations where large parts of the cortex are injured or destroyed.  Unlike higher order theories (HOT), most of which seem dependent on the prefrontal cortex, if large parts of the frontal regions were lost, the workspace would be dramatically reduced but not eliminated.  Capabilities would be lost, but consciousness would still exist in a reduced form.

I also now understand why the overview paper earlier this year on HOT classified GWTs as first order theories, since first order representations can win the workspace competition as well as higher order or executive ones.  This allows GWTs to avoid many of the computational redundancies implicit in HOT, redundancies that might seem unlikely from an evolutionary perspective.

And I’ve recently realized that GWT resonates with my own intuition from reading cognitive neuroscience, which I described in a post a while back, that subjective experience is communication between the sensory, affective, and planning regions of the brain.  The broadcasting workspace seems like the medium of that communication.

GWTs are scientific theories, so they’ll stand or fall on empirical research.  I was impressed with the wealth of empirical data discussed in Dehaene’s and Baars’ books.  Only time will tell, but I now understand why so many consciousness experts are in this camp.

What do you think?  Does this theory sound promising?  Or do you see problems with it?  What stands out to you as either its strengths or weaknesses?

Posted in Mind and AI | 55 Comments