Stimulating the central lateral thalamus produces consciousness

(Warning: neuroscience weeds)

[Image: the thalamus.  Image credit: Wikipedia]

A couple of people have asked me about this study, described in numerous popular science articles (such as this one).  A monkey had electrodes installed in its brain that allowed scientists to stimulate parts of its thalamus, the region at the center of the brain that links the cortex to the brainstem and other systems, as well as serving as a relay station for some inter-cortical communication.

Stimulating the central lateral thalamus while the monkey was anesthetized caused it to wake up, look around, and reach for things.  Ceasing the stimulation caused the monkey to immediately lose consciousness.  Notably, this region is heavily interconnected with frontal and parietal regions.

[Image: diagram showing the various regions of the thalamus.  Image credit: Madhero88 via Wikipedia]

Interestingly, stimulating the medial dorsal thalamus, which is heavily connected to the prefrontal cortex, “proved less effective”, and stimulating the central medial thalamus, which projects to the striatum, was also less effective.

In other words, consciousness seemed to be associated with the central lateral thalamus region and its projections to layers in the frontoparietal network.

[Image: lobes of the brain.  Image credit: BruceBlaus via Wikipedia]

One interesting point about this study is that it seems to contradict another study from a year or two ago which ruled out the thalamus as having a role in wakefulness (favoring the basal ganglia instead, if I recall correctly), a reminder that it's not a good idea to hang too much on the results of individual studies.  Another point is the demonstration that the frontoparietal network overall, not just the prefrontal cortex, seemed to be most important for stimulating consciousness.

What does it all mean?  Well, it seems like a dramatic experiment.  And it seems to re-establish the role of the thalamus in wakefulness.  The part about stimulating the regions projecting to the prefrontal cortex not being effective makes me wonder about implications for higher order theories that focus on that region.

All that said, I think we have to bear in mind the distinction between the state of consciousness, that is wakefulness or vigilance, and awareness.  A lot of the information in this experiment seems to be about the state more than awareness.  In that sense, some of the anatomical details are new, but the overall macroscopic picture doesn’t seem to be much affected.

But this is a technical paper and there are probably implications I'm missing.  In particular, the implications for anesthesiology and other clinical situations may be very significant.

Posted in Zeitgeist | 16 Comments

Do qualia exist? Depends on what we mean by “exist.”

The cognitive scientist Hakwan Lau, whose work I've highlighted several times in the last year, has been pondering illusionism recently.  He did a Twitter survey on the relationship between the phenomenal concept strategy (PCS) and illusionism, which inspired my post on the PCS.  (I meant to mention that in the post, but it slipped my mind.)  Anyway, he's done a blog post on illusionism, which is well worth checking out for its pragmatic take.

As part of that post, he linked to a talk that Keith Frankish gave some years ago explaining why he thought qualia can’t be reduced to a non-problematic version that can be made compatible with physicalism.  The video, which has Frankish’s voice but only shows his presentation slides, is about 23 minutes.

In many ways, this talk seems to anticipate the criticism from Eric Schwitzgebel that illusionists are dismissing an inflated version of consciousness, one that Schwitzgebel admits comes from other philosophers who can’t seem to resist bundling theoretical commitments into their definitions of it.  He argues for a pre-theoretical, or theoretically naive conception of consciousness.

Frankish discusses the problems with what he calls “diet qualia”, a concept stripped of the problematic aspects that Daniel Dennett articulated in his attempted takedown of qualia, and one that in some ways resembles what Schwitzgebel advocates for.  But Frankish points out that diet qualia don’t work, that any discussion of them inevitably inflates to “classic qualia” or collapses to “zero qualia” (his own stance).

Just to review, qualia are generally considered to be instances of subjective experience.  The properties that Dennett identified are (quoted from the Wikipedia article on qualia):

  1. ineffable; that is, they cannot be communicated, or apprehended by any means other than direct experience.

  2. intrinsic; that is, they are non-relational properties, which do not change depending on the experience’s relation to other things.

  3. private; that is, all interpersonal comparisons of qualia are systematically impossible.

  4. directly or immediately apprehensible in consciousness; that is, to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.

Illusionists usually point out that qualia can be described in terms of dispositional states, meaning they’re not really ineffable.  For example, the experience of red can be discussed entirely in terms of sensory processing and the various affective reactions it causes throughout the brain.  And doing so demonstrates that they’re not intrinsic or irreducible.

Privacy can be viewed in two senses: as a matter of no one else being able to know the content of the experience, or of no one else being able to have the experience.  The first seems like just a limitation of current technology.  There’s no reason to suppose we won’t someday be able to monitor a brain and know exactly what the content of an in-progress experience is.  The second sense is true, but only in the same sense that the laptop I’m typing this on has a precise informational state that no other electronic device has, a fact that really has no metaphysical implications.

There’s a similar double sense for qualia being directly or immediately apprehensible.  In one sense, it implies we have accurate information about our cognitive states, something that modern psychology has pretty conclusively demonstrated is often not true.  In the second sense, it says that we know our impression of the experience, that we know how things seem to us, which is trivially true.

Seen from an objective point of view, then, qualia in the sense identified by Dennett don’t exist.  So the failure of the diet versions can seem very significant.

But I think there’s a fundamental mistake here.  The dissolving of qualia in this sense happens objectively.  But remember that qualia are not supposed to be objective.  They are instances of subjective experience.  This means that the way they seem to be, their seeming nature, just is their nature, at least their subjective nature.

Of course, many philosophers make the opposite mistake.  They take subjective experience and think its phenomenal nature is something other than just subjective, that its objective reality is in some way obvious from its subjective aspects.  But all indications are that the objective mechanisms that underlie the subjective phenomena are radically different from those phenomena, which is what I think most illusionists are trying to say.

Put another way, qualia only exist subjectively.  But they only need to exist subjectively to achieve the status of being instances of subjective experience.

And they only need to exist that way to be subjectively ineffable and subjectively irreducible.  Yes, the processing underlying qualia can be described in objective terms, but much of that description will involve processing below the level of consciousness, meaning it won’t be describable from within the subjective experience itself, or reducible from that experience.

Looking at it this way allows us to accept qualia realism, but in a manner fully consistent with physicalism.  In other words, there’s nothing spooky going on here.  In many ways, this is just an alternate description of illusionism, but one that hopefully clarifies rather than obscures, and doesn’t seem to deny our actual experiences.

Of course, a hard-core illusionist might insist that subjective existence itself doesn’t count as really existing.  Admittedly, it comes down to a matter of how we define “exists.”  In other words, we’re back to a situation where there is no fact of the matter, just different philosophical positions that people can choose to hold.

Unless of course, I’m missing something?

Posted in Mind and AI | 62 Comments

Alita: Battle Angel

[Image: movie poster for Alita: Battle Angel]

I’m pretty late to the party on this one, but today I finally watched Alita: Battle Angel.  The movie is set in the 26th century and involves a society with a lot of cyborgs in it, including many whose entire body other than their head or brain has been replaced by machinery.

It’s about 300 years after an event known as “the fall”, in which nearly all of the floating cities were destroyed in a war.  Only one of those sky cities remains: Zalem (apparently held up by some kind of anti-gravity technology).  Beneath and around Zalem is another city on the ground: Iron City.

Directly underneath Zalem is a scrapyard, where the detritus from the floating city lands, and where Dr. Dyson Ido finds the head and upper torso of a young girl, who is still alive.

Ido attaches her to a cybernetic body.  When she awakens, she has no memory of her past life.  He gives her the name “Alita”.  What follows is a journey of discovery as Alita learns about her abilities, who she is, where she comes from, and the society she has awoken in.

It’s a society with a sharp class division, with an affluent population living in the sky city Zalem, and the rest of the hardscrabble population living in Iron City.  The people of Iron City dream of finding a way to move up into Zalem, but the options for doing so are very limited.  Life is hard and brutal.

It’s difficult to go into much more detail without getting into spoilers.  I’ll just note that it’s chock full of action, with plenty of battles between cyborgs with all kinds of attached weaponry.  And there are lots of special effects.  A big part of the appeal is the imagery, which this movie handles very well.

I have to admit that I hadn’t heard much about this movie so I wasn’t expecting more than moderately entertaining fluff.  It surprised me by being more than that.  It wasn’t until after finishing it that I learned that it’s based on a classic manga and anime series, and that one of the producers is James Cameron.

It’s not clear if it did well enough financially for sequels.  I hope it did.  It’d be good to see more.  If not, I might have to dig up the anime or manga material.

So, if you’re looking for a couple hours of entertainment, and aren’t turned off by cybernetic body parts and human machine hybrids being ripped apart, and like me have somehow managed to miss this movie until now, I recommend checking it out.

Posted in Zeitgeist | 12 Comments

The phenomenal concept strategy and issues with conceptual isolation

I’ve often pondered that the hard problem of consciousness, the perceived problem of understanding how phenomenal consciousness can happen in physical systems, arises due to the fact that our intuitive model of the phenomenal is very different from our intuitive model of the physical, of the brain in particular.

As is usually the case, anytime you think you’re having an original observation, you should make sure someone hasn’t thought of it first.  In this case, philosophers have.  It’s called the phenomenal concept strategy (PCS).  Peter Carruthers discussed it in his blog posts a few weeks ago, but in a manner that expected the reader to already be familiar with it.  And Hakwan Lau brought it up on Twitter recently, spurring me to investigate.

It’s basically the idea that the explanatory gap between mind and body exists not because there’s a gap between physical and mental phenomena, but because there’s a gap in our concepts of these things.

Part of the value of this strategy is that it supposedly helps physicalists answer the knowledge argument from Mary’s room: the thought experiment where Mary, a scientific expert in visual perception who has spent her entire life in a black and white room, leaves the room and experiences color for the first time.  The question is: does Mary learn something new when she leaves the room?  According to the PCS, what Mary learns is a new phenomenal concept, one which just expresses knowledge she already had in a new way.

At first glance, this view seems to offer a lot.  But as with all philosophical positions, it pays to look before you leap.  Under the view, the reason this works is that phenomenal concepts are conceptually isolated.  This isolation supposedly makes philosophical zombies conceivable.

“Conceivable” in this case is supposed to mean logically coherent, as opposed to merely imaginable.  And the zombies in this case aren’t the traditional ones which are physically identical to a conscious being (and simply presuppose dualism) but functional or behavioral ones, systems that are different internally but behaviorally identical.

The idea is that it’s possible to imagine a being just like you but with different or missing phenomenal concepts.

David Chalmers uses this zombie conceivability to attack the PCS.  His point seems to be that we eventually run into the same gaps in the concepts that we perceived to be in the originals.  Peter Carruthers responds with a discussion involving phenomenal and schmenomenal states that I have to admit I haven’t yet parsed.

But my issue is that the concepts can’t be that isolated, because we can discuss them.  Indeed, it seems dubious that there can be a being that is missing phenomenal concepts who can nonetheless discuss them.

That’s not to say that our concepts of phenomenal content can’t be isolated, but that isolation doesn’t seem inherent or absolute.  It’s something we allow to creep into our thinking.  It’s a failure to ask Daniel Dennett’s hard question:  “And then what happens?”

I personally think qualia exist, but not in any non-physical manner.  They are information, physical information that is part of the causal chain.  There is no phenomenal experience which doesn’t convey information (although it may not be information we need at the moment).  This information is raw and primal, so it doesn’t feel like information to us, but it is information nonetheless.

Consider the pain of a toothache.  How else should the valence systems in the brain signal to the planning systems that there is an issue here which needs addressing?  The only alternative is to imagine some form of symbolic communication (numbers, notations, etc), but symbolic communication is just communication built on top of the primal version, raw conscious experience.

This communication is primal to us because it is subjectively irreducible.  We have no access to its construction and underlying mechanisms (which ironically can be understood in symbolic terms), therefore it seems like something that exists separate and apart from those mechanisms.  In that sense, our concept of it is isolated from our concepts of those underlying mechanisms.  This might tempt us to see that concept as completely isolated.

But it’s only isolated in that way if we fail to relate it to why we have that phenomenal experience, and how we use it.  If we touch a hot stove, our hand may reflexively jerk back due to the received nociception, but we also experience the burning pain.  If we didn’t, and didn’t remember it, we might be tempted later to touch the stove again.  A zombie needs to have a similar mechanism for it to be functional in the same way.  (Maybe Carruthers’ “schmenomenal” states?)

In other words, phenomenal experience has a functional role.  It evolved for a reason (or more accurately a whole range of reasons).  That doesn’t mean it can’t misfire in some situations, leaving us wondering what the functional point of it is, but that’s more a consequence of the fact that evolution can’t foresee every situation, and of how strange the modern world is compared to our original ecological niche in places like the African savanna.

So, I think there is some value to seeing the explanatory gap in terms of concepts, but not in seeing those concepts as isolated in some rigid or absolute manner.  They’re only isolated if we make them so, as we frequently do.  And they’re not so consistently isolated that we need to let in zombies.

Of course, I’m approaching this as someone who most frequently falls within Chalmers’ Type A materialist category.  Apparently the PCS is typically championed by Type B materialists, those who see a hard problem that needs addressing.  So it may be that I was never the intended audience for this strategy.

Unless of course I’m missing something.  Are zombies less avoidable than I’m thinking here?  What do you think of the PCS overall?

Posted in Mind and AI | 74 Comments

Prefrontal activity associated with the contents of consciousness

The other day I bemoaned the fact that the Templeton competition between global workspace theory (GWT) and integrated information theory (IIT) would take so long, particularly the point about having to wait to see the role of the front and back of the brain in consciousness clarified.  Well, it looks like many aren’t waiting, and studies seem to be piling up showing that the frontal regions have a role.

In a preprint of a new study, the authors discuss how they exposed monkeys to a binocular rivalry type situation.  They monitored the monkeys using a no-report protocol, to minimize the possibility that the monitored activity was more about the need to report than perception.  In this case, the no-report was achieved by monitoring a reflexive eye movement that had been previously shown to correlate with conscious perception.  So the monkeys didn’t have to “report” by pressing a button or any other kind of volitional motor action.

The authors were able to “decode the contents of consciousness from prefrontal ensemble activity”.  Importantly, they were able to find this activity where other studies hadn’t because, while those other studies had depended on fMRI scans using blood oxygen levels, this study used electrodes physically implanted in the monkeys’ brains.

These results add support for cognitive theories of consciousness, such as GWT and higher order theories (HOT), and seem to contradict the predictions made by IIT.

Of course, it doesn’t close off every loophole.  There was speculation on Twitter that Ned Block will likely point out that some variation of his no-post-perceptual-cognition protocol is necessary.  In other words, it can’t be ruled out that the activity wasn’t the monkeys having cognition about their perception after the perception itself.  (Which of course assumes that cognition about the perception and conscious perception are distinct things, something cognitive theories deny.)

And as I’ve noted before, I tend to doubt that the prefrontal cortex’s role will be the whole story, which seems necessary for strict HOT.  It seems possible that someone could have sensory consciousness without it, but probably not affect consciousness, and not introspective consciousness.

So, not the last word, but important results.  After the study last week calling into question the role of the P3b wave, it seems to get global neuronal workspace off the ropes.

Posted in Zeitgeist | 34 Comments

Recurrent processing theory and the function of consciousness

Victor Lamme’s recurrent processing theory (RPT) remains on the short list of theories considered plausible by the consciousness science community.  It’s something of a dark horse candidate, without the following of global workspace theory (GWT) or integrated information theory (IIT), though it gets more support among consciousness researchers than among general enthusiasts.  The Michael Cohen study reminded me that I hadn’t really made an effort to understand RPT.  I decided to rectify that this week.

Lamme put out an opinion paper in 2006 that laid out the basics, and a more detailed paper in 2010 (warning: paywall).  But the basic idea is quickly summarized in the SEP article on the neuroscience of consciousness.

RPT posits that processing in the sensory regions of the brain is sufficient for conscious experience.  This is so, according to Lamme, even when that processing isn’t accessible for introspection or report.

Lamme points out that requiring report as evidence can be problematic.  He cites the example of split brain patients.  These are patients who’ve had their cerebral hemispheres separated to control severe epileptic seizures.  After the procedure, they’re usually able to function normally.  However, careful experiments can show that the hemispheres no longer communicate with each other, and that the left hemisphere isn’t aware of sensory input that goes to the right hemisphere, and vice versa.

Usually only the left hemisphere has language and can verbally report its experience.  But Lamme asks, do we regard the right hemisphere as conscious?  Most people do.  (Although some scientists, such as Joseph LeDoux, do question whether the right hemisphere is actually conscious due to its lack of language.)

If we do regard the right hemisphere as having conscious experience, then Lamme argues, we should be open to the possibility that other parts of the brain may be as well.  In particular, we should be open to it existing in any region where recurrent processing happens.

Communication in neural networks can be feedforward, or it can include feedback.  Feedforward processing involves signals coming into the input layer and progressing one way up through the processing hierarchy, toward the higher order regions.  Feedback processing runs in the other direction, with higher regions responding with signals back down to the lower regions.  This can lead to a resonance where feedforward signals cause feedback signals, which cause new feedforward signals, and so on: a loop, or recurrent pattern of signalling.
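The difference can be sketched in a toy network.  This is just an illustrative sketch in Python; the weights, layer sizes, and dynamics are all made up, not anything from Lamme's papers:

```python
import math
import random

random.seed(0)

N = 4  # neurons per layer (arbitrary for illustration)

# Toy two-layer network: a "lower" sensory layer and a "higher" layer,
# with random weights in both directions.
w_up = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]
w_down = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]

def propagate(weights, activity):
    """One pass through a weight matrix, squashed with tanh."""
    return [math.tanh(sum(w * a for w, a in zip(row, activity)))
            for row in weights]

stimulus = [1.0, 0.0, 0.5, 0.0]

# Feedforward only: a single sweep up the hierarchy, then the signal is gone.
higher_ff = propagate(w_up, stimulus)

# Recurrent: feedback from the higher layer re-excites the lower layer,
# which sends a new feedforward signal, and so on -- a sustained loop.
lower = list(stimulus)
for _ in range(10):
    higher = propagate(w_up, lower)
    feedback = propagate(w_down, higher)
    lower = [math.tanh(s + f) for s, f in zip(stimulus, feedback)]

# After the loop, lower-layer activity reflects both the stimulus and the
# higher layer's response to it, unlike the single feedforward sweep.
print(higher_ff)
print(lower)
```

The point of the toy is just that in the recurrent case the lower layer's state is continually reshaped by the higher layer, which is the kind of sustained loop Lamme's stages 3 and 4 describe.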

Lamme identifies four stages of sensory processing that can lead to the ability to report.

  1. The initial stimulus comes in and leads to superficial feedforward processing in the sensory region.  There’s no guarantee the signal gets beyond this stage.  Unattended and brief stimuli, for example, wouldn’t.
  2. The feedforward signal makes it beyond the sensory region, sweeping throughout the cortex, reaching even the frontal regions.  This processing is not conscious, but it can lead to unconscious priming.
  3. Superficial or local recurrent processing in the sensory regions.  Higher order parts of these regions respond with feedback signalling and a recurrent process is established.
  4. Widespread recurrent processing throughout the cortex in relation to the stimulus.  This leads to binding of related content and an overall focusing of cortical processes on the stimulus.  This is equivalent to entering the workspace in GWT.

Lamme accepts that stage 4 is a state of consciousness.  But what, he asks, makes it conscious?  He considers that it can either be the location of the processing or the type of processing.  But for the location, he points out that the initial feedforward sweep in stage 2, which reaches widely throughout the brain, doesn’t produce conscious experience.

Therefore, it must be the type of processing, the recurrent processing that exists in stages 3 and 4.  But then, why relegate consciousness only to stage 4?  Stage 3 has the same type of processing as stage 4, just in a smaller scope.  If recurrent processing is the necessary and sufficient condition for conscious experience, then that condition can exist in the sensory regions alone.

But what about recurrent processing, in and of itself, makes it conscious?  Lamme’s answer is that synaptic plasticity is greatly enhanced in recurrent processing.  In other words, we’re much more likely to remember something, to be changed by the sensory input, if it reaches a recurrent processing stage.

Lamme also argues from an IIT perspective, pointing out that IIT’s Φ (phi), the calculated quotient of consciousness, would be higher in a recurrent region than in one only doing feedforward processing.  (IIT does see feedback as crucial, but I think this paper was written before later versions of IIT used the Φmax postulate to rule out talk of pieces of the system being conscious.)

Lamme points out that if recurrent processing leads to conscious experience, then that puts consciousness on strong ontological ground, and makes it easy to detect.  Just look for recurrent processing.  Indeed, a big part of Lamme’s argument is that we should stop letting introspection and report define our notions of consciousness and should adopt a neuroscience centered view, one that lets the neuroscience speak for itself rather than cramming it into preconceived psychological notions.

This is an interesting theory, and as usual, when explored in detail, it turns out to be more plausible than it initially sounded.  But, it seems to me, it hinges on how lenient we’re prepared to be in defining consciousness.  Lamme argues for a version of experience that we can’t introspect or know about, except through careful experiment.  For a lot of people, this is simply discussion about the unconscious, or at most, the preconscious.

Lamme’s point is that we can remember this local recurrent processing, albeit briefly, therefore it was conscious.  But this defines consciousness as simply the ability to remember something.  Is that sufficient?  This is a philosophical question rather than an empirical one.

In considering it, I think we should also bear in mind what’s absent.  There’s no affective reaction.  In other words, it doesn’t feel like anything to have this type of processing.  That requires bringing in other regions of the brain which aren’t likely to be elicited until stage 4: the global workspace.  (GWT does allow that it could be elicited through peripheral unconscious propagation, but it’s less likely and takes longer.)

It’s also arguable that considering the sensory regions alone outside of their role in the overall framework is artificial.  Often the function of consciousness is described as enabling learning or planning.  Ryota Kanai, in a blog post discussing his information generation theory of consciousness (which I highlighted a few weeks ago), argues that the function of consciousness is essentially imagination.

These functional descriptions, which often fit our intuitive grasp of what consciousness is about, require participation from the full cortex, in other words, Lamme’s stage 4.  In this sense, it’s not the locations that matter, but what functionality those locations provide, something I think Lamme overlooks in his analysis.

Finally, similar to IIT’s Φ issue, I think tying consciousness only to recurrent processing risks labeling a lot of systems conscious that no one regards as conscious.  For instance, it might require us to see an artificial recurrent neural network as having conscious experience.

But this theory highlights the point I made in the post on the Michael Cohen study, that there is no one finish line for consciousness.  We might be able to talk about a finish line for short term iconic memory (which is largely what RPT is about), another for working memory, one for affective reactions and availability for longer term memory, and perhaps yet another for availability for report.  Stage 4 may quickly enable all of these, but it seems possible for a signal to propagate along the edges and get to some of them.  Whether it becomes conscious seems like something we can only determine retrospectively.

Unless of course I’m missing something?  What do you think of RPT?  Or of Lamme’s points about the problems of relying on introspection and self report?  Should we just let the neuroscience speak for itself?

Posted in Mind and AI | 48 Comments

Stephen Macknik’s work on prosthetic vision

This is pretty wild.  In her latest Brain Science podcast, Ginger Campbell interviews Stephen Macknik on his work to develop a visual replacement implant for blind people.  For a quick overview, check out this short video.

One question Campbell asks, that I was wondering myself: how does the light reach the neurons in the LGN nucleus of the thalamus deep in the center of the brain?  Macknik points out that the gene therapy causes the light-sensitive receptor proteins to be expressed throughout the whole neuron, including the axon terminals that reach into the visual cortex, so the implant doesn’t need to reach the thalamus, just the ends of those axons in the visual cortex.

For more details, check out the podcast itself (it’s about 69 minutes).  It gets a bit technical, but it’s a fascinating interview, and a powerful demonstration about how neuroscientific knowledge can be used.

Posted in Zeitgeist | 8 Comments

Star Trek Picard

[Image: poster for Star Trek Picard, showing Picard standing with his dog]

Just watched the first episode of Star Trek Picard.  What follows has spoilers, but only from the early parts of the episode.

It takes place about 15 years after the events of the last Next Generation movie.  Picard appears to be living in retirement in his family vineyard, apparently with a couple of Romulans, presumably refugees from the supernova that destroyed Romulus, an event referenced pretty heavily in the first Star Trek reboot movie.

Picard led an effort to evacuate Romulus, but it seems things went very badly, somehow involving synths (androids) setting fire to Mars and killing large numbers of people, and leading the Federation to enact a general ban on synths.  And Starfleet’s overall response to the Romulus situation apparently was not a good one, leading Picard to resign.

The story gets going when a young woman finds herself on the run, with sudden and unique powers allowing her to escape from danger, and visions of Picard’s face leading her to his vineyard, and shaking him out of his retirement.

While things appear to be as utopian on Earth as always, there’s a sense that the Federation isn’t the idealistic setting it once was.  Many long time Trek fans may dislike this, but I can’t blame the writers too much.  Stories in utopias tend to be boring, which is why most classic Trek takes place outside or on the edges of the Federation.  Making the universe edgier is probably inevitable, but it does deviate from Roddenberry’s optimistic vision.

My initial reaction is, not bad.  Some of you know that I’m a long time Star Trek fan, but was disappointed by Star Trek Discovery.  At first blush, Picard looks much more promising.  They’ve definitely got me for at least one more episode.

If you’ve seen it, what did you think?

Posted in Science Fiction | 20 Comments

Is there a conscious perception finish line?

Global workspace theory (GWT) is the proposition that consciousness is composed of contents broadcast throughout the brain.  Various specialty processes compete for the limited capacity of the broadcasting mechanisms, to have their content broadcast to all the other specialty processes.

Global neuronal workspace (GNW) is a variant of that theory, popularly promoted by Stanislas Dehaene, which I’ve covered before.  GNW is more specific than generic GWT on the physical mechanisms involved.  It relies on empirical work done over the years demonstrating that conscious reportability involves wide scale activation of the cortex.

One of the observed stages is a massive surge about 300 milliseconds after a stimulus, called the P3b wave.  Previous work seemed to establish that the P3b wave is a neural correlate of consciousness.  Dehaene theorized that it represents the stage where one of the signals achieves a threshold and wins domination, with all the other signals being inhibited.  Indeed, the distinguishing mark of the P3b is that it is massively negative in amplitude, indicating that most of it comes from inhibitory action.
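The competitive dynamic being described, where accumulating signals race until one crosses a threshold and the rest are inhibited, can be caricatured in a few lines of Python.  The function, names, and numbers below are all invented for illustration; this is a toy, not Dehaene's actual model:

```python
# Toy "ignition" dynamic: specialty processes accumulate activation, and the
# first to cross a threshold wins the workspace while its competitors are
# suppressed.  All names and parameters here are invented for illustration.

def ignite(inputs, threshold=1.0, gain=0.3, max_steps=100):
    """Return (winner, losers) once a signal crosses threshold, else (None, {})."""
    levels = dict(inputs)  # copy so the caller's dict isn't mutated
    for _ in range(max_steps):
        for name in levels:
            levels[name] += gain * inputs[name]  # evidence accumulates over time
        winner = max(levels, key=levels.get)
        if levels[winner] >= threshold:
            # The winner is broadcast; the rest are inhibited to baseline.
            return winner, {name: 0.0 for name in levels if name != winner}
    return None, {}

# The strongest input eventually dominates; the others are suppressed.
winner, inhibited = ignite({"face": 0.12, "word": 0.08, "tone": 0.05})
print(winner, inhibited)
```

The analogy to the P3b story is only that ignition is a thresholded, winner-take-all event whose signature includes the suppression of the losing signals.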

The P3b has been replicated extensively and is seen as a well-established phenomenon associated with attention and consciousness.  But this is science, and any result is always provisional.  Michael Cohen and colleagues have put out a preprint of a study that may demonstrate that the P3b wave is not associated with conscious perception, but with post-perceptual processing.

The study tested the perception of subjects, showing various images while measuring their brain waves via EEG.  Using a no-report protocol, in half of the trials the subjects were asked to report whether they saw something, but in the other half they were not.  Crucially, the P3b wave only manifested in the report cases, never in the no-report ones, even when the no-report images were exactly the same as the ones that generated affirmative reports.

To control for the possibility that the subjects weren’t actually conscious of the image in the no-report cases, the subjects were given a memory test after a batch of no-report events, checking to see what they remembered perceiving.  Their memories of the perceptions correlated with the results in the report versions.

So, the P3b wave, a major pillar of GNW, may be knocked down.  The study authors are careful to make clear that this does not invalidate GWT or other cognitive theories of consciousness.  They didn’t test for all the other ways the information may have propagated throughout the cortex.  Strictly speaking, it doesn’t even invalidate GNW itself, but it does seem to knock out a major piece of evidence for it.

However, this becomes a more interesting discussion if we ask: what would it mean if all cortical communication beyond the sensory regions were ruled out, if the ability to acquire a memory of a sight required only the local sensory cortices?  It might seem like a validation of views like Victor Lamme’s local recurrent processing theory, which holds that local processing in the sensory cortices is sufficient for conscious perception.

But would it be?  Dehaene, when discussing his theory, is clear that it’s a theory of conscious access.  For him, something isn’t conscious until it becomes accessible by the rest of the brain.  Content in sensory cortices may form, but it isn’t conscious until it’s accessible.  Dehaene refers to this content as preconscious.  It isn’t yet conscious, but it has the potential to become so.

In that view, the content of what the subjects perceived in the non-report tests may have been preconscious, unless and until their memories were probed, at which point it became conscious.

This may be another case where the concept of consciousness is causing people to argue about nothing.  If we describe the situation without reference to it, the facts seem clear.

Sensory representations form in the local sensory cortex.  A temporary memory of that representation may persist in that region, so if probed soon enough afterward, a report about the representation can be extracted from it.  But until there is a need for a report or other usage, it is not available to the rest of the system, and none of the activity, including the P3b, normally associated with that kind of access is evident.

This reminds me of Daniel Dennett’s multiple drafts theory (MDT) of consciousness.  MDT is a variant of GWT, but minus the idea that there is any one event where content becomes conscious.  It’s only when the system is probed in certain ways that one of the streams, one of the drafts, becomes selected, generally one of the ones that has managed to leave its effects throughout the brain, that has achieved “fame in the brain.”

In other words, Dennett denies that there is any one finish line where content that was previously unconscious becomes conscious.  In his view, the search for that line is meaningless.  In that sense, the P3b wave may be a measure of availability, but calling it a measure of consciousness is probably not accurate.  And it’s not accurate to say that Lamme’s local recurrent processing is conscious, although it’s also not accurate to relegate it completely to the unconscious.  What we can say is that it’s at a particular point in the stream where it may become relevant for behavior, including report.

Maybe this view is too ecumenical and I’m papering over important differences.  But it seems like giving up the idea of one finish line for consciousness turns a lot of theories that look incompatible into models of different aspects of the same overall system.

None of this is to say that GWT or any of its variants might not be invalidated at some point.  These are scientific theories and are always subject to falsification on new data.  But if or when that happens, we should be clear about exactly what is being invalidated.

Unless of course I’m missing something?

Posted in Mind and AI | Tagged , , , , , , | 54 Comments

For animal consciousness, is there a fact of the matter?

Peter Carruthers has been blogging this week on the thesis of his new book, Human and Animal Minds: The Consciousness Question Laid to Rest.  I mentioned Carruthers’ book in my post on global workspace theory (GWT), but didn’t get into the details.  While I had been considering taking a fresh look at GWT, his book was the final spur that kicked me into action.

Carruthers used to be an advocate for higher order theories (HOT) of consciousness.  He formulated the dual content version that I thought was more plausible.  As an advocate for HOT, he seemed skeptical of animal consciousness.  But in recent years, he’s abandoned HOT in favor of GWT: the idea that conscious content is the result of processes that have won the competition to have their results globally broadcast to systems throughout the brain.

Most GWT proponents will admit that it’s a theory of access consciousness, that it doesn’t directly address phenomenal consciousness, which usually isn’t seen as a problem because most people in this camp see them as the same thing, with phenomenal consciousness being access consciousness from the inside.   In other words, the idea that phenomenal consciousness is something separate and apart from access consciousness is rejected.

Carruthers isn’t completely outside of this view, but his is a bit more nuanced.  He sees phenomenal consciousness as a subset of access consciousness, the portion of it that includes nonconceptual content, content that is irreducible, such as the color yellow.  (Of course, objectively the content of yellow is reducible to patterns of neural spikes originating from M and L cones in the retina, but only the irreducible sensation of yellow makes it into the workspace.)  This is in contrast to conceptual content, such as the perception of a dog, which is reducible to more primitive experience.

So Carruthers sees phenomenal experience as globally broadcast nonconceptual content, in humans.  Why the stipulation at the end?  He points out that phenomenal experience is inherently a first person account, in that discussing it is basically an invitation for each of us to access our own internal experience.

Asking whether another system has that same internal experience is asking how much like us they are.  Other species may have processes that resemble our global broadcasting mechanisms, to greater or lesser extents, and their collection of competing and receiving processes may resemble our own, again to a greater or lesser extent.  In both cases, the farther we move away from humans in taxonomy, the less like us they are.

Which means that no other species will have the exact same components of our experience.  Whether what they have amounts to phenomenal experience, our first person experience, depends on which aspects of that experience we judge to be essential.  In other words, there isn’t a fact of the matter.

I pointed out to Carruthers that this also applies to many humans, notably brain injured patients, whose global broadcasting mechanism or collection of competing and receiving processes no longer match that of a common healthy human.  Carruthers, to his credit, bites this bullet and acknowledges that there isn’t a fact of the matter when it comes to whether human infants or brain injured patients are phenomenally conscious.

Carruthers’ overall point is that it doesn’t matter, because nothing magical happens at any stage.  Nothing changes.  There are just capabilities that are either present or absent.  In his view, the focus on consciousness is a mistake.  Broadly speaking, I think he’s right.  There is no fact of the matter.  Consciousness is in the eye of the beholder.

But it’s worth discussing why that’s so, arguably the reason why anything ever fails to be a fact of the matter: ambiguity.  In this case, ambiguity about what we mean by terms like “phenomenal experience”, for it to be “like something”.  “Like” to what degree?  And what “thing”?

As soon as we try to nail down a specific definition, we run into trouble, because no specific definition is widely accepted.  The vague terminology masks wide differences.  It refers to a vast, hazy, and inconsistent collection of capabilities we’ve all agreed to put under one label, but without agreeing on the specifics.  It’s like lawmakers who can’t agree on precisely what a law should say, so instead write something vague that can be agreed on, and leave it to the courts to hash out later.

And yet, I think we can still recognize that the ways various species process information will be similar to the way we do, to varying degrees.  As Carruthers notes, there is no magical line, no point where we can clearly say consciousness begins.  But great apes are a lot closer to us than dogs, which are closer than mice, which in turn are closer than frogs, fish, etc., all of which are much closer than plants, rocks, storm systems, or electrons.

And there’s something to be said for focusing on systems that do have irreducible sensory and affective content that are globally integrated into their processing.  This matches the definition many biologists use for primary consciousness.  Primary consciousness appears to be widespread among mammals and birds, and possibly among all vertebrates and arthropods.

But primary consciousness omits aspects of our experience many will insist are essential, such as metacognitive self awareness or imaginative deliberation, capabilities that dramatically expand our appreciation of the contents of primary consciousness.  Such a view dramatically reduces the number of species that are conscious, perhaps only to humans and maybe some great apes.  Which view is right?  To Carruthers’ point, there is no fact of the matter.

Incidentally, even primary consciousness gets into definitional difficulties.  For example, fish and amphibians can be demonstrated to have both sensory and affective content, but the architecture of their brains makes it unclear just how integrated the affective contents are with much of the sensory content.  Does this then still count as primary consciousness?  I personally think the answer is yes since there is at least some integration, but can easily see why many might conclude otherwise.

What do you think?  Is Carruthers being too stingy in his conclusion?  Is there a way we can establish a fact of the matter we can all agree on?  Or is the best we can do to recognize the partial and varying commonalities we have with other species?

Posted in Mind and AI | Tagged , , , , , | 74 Comments