The issues with biopsychism

Recently, there was a debate on Twitter between neuroscientists Hakwan Lau and Victor Lamme, both of whose work I’ve highlighted here before.  Lau is a proponent of higher order theories of consciousness, and Lamme of local recurrent processing theory.

The debate began when Lau made a statement about panpsychism, the idea that everything is conscious, including animals, plants, rocks, and protons.  Lau argued that while it appears to be gaining support among philosophers, it isn’t really taken seriously by most scientists.  Lamme challenged him on this, and it led to a couple of surveys.  (Both of which I participated in, as a non-scientist.)

I would just note that there are prominent scientists who lean toward panpsychism.  Christof Koch is an example, and his preferred theory, integrated information theory (IIT), seems oriented toward panpsychism.  Although not all IIT proponents are comfortable with the p-label.

Anyway, in the ensuing discussion, Lamme revealed that he sees all life as conscious, and he coined a term for his view: biopsychism.  (Although it turns out the term already existed.)

Lamme’s version, which I’ll call universal biopsychism, holds that all life is conscious, including plants and unicellular organisms.  It’s far less encompassing than panpsychism, but is still a very liberal version of consciousness.  It’s caused me to slightly amend my hierarchy of consciousness, adding an additional layer to recognize the distinction here.

  1. Matter: a system that is part of the environment, is affected by it, and affects it.  Panpsychism.
  2. Reflexes and fixed action patterns: automatic reactions to stimuli.  If we stipulate that these must be biologically adaptive, then this layer is equivalent to universal biopsychism.
  3. Perception: models of the environment built from distance senses, increasing the scope of what the reflexes are reacting to.
  4. Volition: selection of which reflexes to allow or inhibit based on learned predictions.
  5. Deliberative imagination: sensory-action scenarios, episodic memory, to enhance 4.
  6. Introspection: deep recursive metacognition enabling symbolic thought.

As I’ve noted before, there’s no real fact of the matter on when consciousness begins in these layers.  Each layer has its proponents.  My own intuition is that we need at least 4 for sentience.  Human level experience requires 6.  So universal biopsychism doesn’t really seem that plausible to me.

But in a blog post explaining why he isn’t a biopsychist (most of which I agree with), Lau notes that there are weaker forms of biopsychism, ones that posit that while not all life is conscious, only life can be conscious, that consciousness is an inherently biological phenomenon.

I would say that this view is far more common among scientists, particularly biologists.  It’s the view of people like Todd Feinberg and Jon Mallatt, whose excellent book The Ancient Origins of Consciousness I often use as a reference in discussions on the evolution of consciousness.

One common argument in favor of this limited biopsychism is that currently the only systems we have any evidence for consciousness in are biological ones.  And that’s true.  Although panpsychists like Philip Goff would argue that, strictly speaking, we don’t even have evidence for it there, except for our own personal inner experience.

But I think that comes from a view of consciousness as something separate and distinct from all the functionality associated with our own inner experience.  Once we accept our experience and that functionality as different aspects of the same thing, we see consciousness all over the place in the animal kingdom, albeit to radically varying degrees.  And once we’re talking about functionality, then having it exist in a technological system seems more plausible.

Another argument is that maybe consciousness is different, that maybe it’s crucially dependent on its biological substrate.  My issue with this argument is that it usually stops there and doesn’t identify what specifically about that substrate makes it essential.

Now, maybe the information processing that takes place in a nervous system is so close to the thermodynamic and information theoretic boundaries that nothing but that kind of system could do similar processing.  Possibly.  But it hasn’t proven to be the case so far.  Computers are able to do all kinds of things today that people weren’t sure they’d ever be able to do, such as win at chess or Go, recognize faces, translate languages, etc.

Still, it is plausible that substrate dependent efficiency is an issue.  Generating the same information processing in a traditional electronic system may never be as efficient in terms of power usage or compactness as the organic variety.  But this wouldn’t represent a hard boundary, just an engineering difficulty, for which I would suspect there would be numerous viable strategies, some of which are already being explored with neuromorphic hardware.

But I think the best argument for limited biopsychism is to define consciousness in such a way that it is inherently an optimization of what living systems do.  Antonio Damasio’s views on consciousness being about optimizing homeostasis resonate here.  That’s what the stipulation I put in layer 2 above was about.  If we require that the primal impulses and desires match those of a living system, then only living systems are conscious.

Although even here, it seems possible to construct a technological system and calibrate its impulses to match a living one.  I can particularly see this as a possibility while we’re trying to work out general intelligence.  This would be where all the ethical considerations would kick in, not to mention the possible dangers of creating an alternate machine species.

However, while I don’t doubt people will do that experimentally, it doesn’t seem like it would be a very useful commercial product, so I wouldn’t expect a bunch of them to be around.  Having systems whose desires are calibrated to what we want from them seems far more productive (and safer) than systems that have to be constrained and curtailed into doing what we want, essentially slaves who might revolt.

So, I’m not a biopsychist, either in its universal or limited form, although I can see some forms of the limited variety being more plausible.

What do you think of biopsychism?  Are there reasons to favor biopsychism (in either form) that I’m overlooking?  Or other issues with it that I’ve overlooked?

Final thoughts on The Evolution of the Sensitive Soul

This is the final post in a series I’ve been doing on Simona Ginsburg and Eva Jablonka’s book The Evolution of the Sensitive Soul, a book focused on the evolution of minimal consciousness.  This is a large book, and it covers a wide range of ideas.  A series of relatively small blog posts can’t do them justice.  So by necessity it’s been selective.  Similar to Feinberg and Mallatt’s The Ancient Origins of Consciousness, there’s a wealth of material I didn’t get to, and like that other book, I suspect it will inspire numerous additional posts in the future.

This final post focuses on various areas that G&J (Ginsburg and Jablonka) explore that caught my interest.  So it’s somewhat of a grab bag.

The first has to do with memory.  Obviously memory and learning are closely related.  The consensus view in neuroscience is that the main way memory works is through the strengthening and weakening of chemical synapses, the connections between neurons.  In this view, engrams, the physical traces of memory, reside in circuits of neurons that follow Hebbian theory, often summarized as: neurons that fire together, wire together.
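
To make the Hebbian idea concrete, here’s a toy sketch (my illustration, nothing from G&J’s book), using the textbook update rule Δw = η · pre · post: a connection strengthens only when the neurons on both sides of it are active together.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    """Hebbian rule: delta_w = eta * post * pre.  Weights grow only
    where presynaptic and postsynaptic activity coincide."""
    return w + eta * np.outer(post, pre)

# Two input neurons feeding one output neuron.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.0])   # only the first input fires
post = np.array([1.0])       # the output neuron also fires
for _ in range(100):
    w = hebbian_update(w, pre, post)
print(w)  # the co-active connection has strengthened; the silent one hasn't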

But it’s widely understood that this can’t be the full story.  Synapses are complex ecosystems of proteins, vesicles, neurotransmitters, and neuromodulators.  Proteins have to be synthesized by intracellular machinery.  So the strengthening or weakening of a synapse is thought to involve genetic and epigenetic mechanisms as well as ribosomes and other components.

G&J cite a study showing that if synaptic processing is chemically inhibited, so that the synapses retract, long term memories are still able to recover.  In other words, the state of the synapse may be recorded somewhere other than the synapse itself.  If so, the synapse could be just an expression of an engram stored intracellularly, perhaps epigenetically, an epigenetic engram, an intriguing possibility that may eventually have clinical implications for Alzheimer’s and other types of neurodegenerative diseases.

G&J note that this may mean that epigenetic factors could have large scale effects on how fast synapses grow or weaken.  In their view, it may dramatically expand the computational power involved in memory.  They even speculate that it could be a system that operates independently of the synaptic one, transmitting information between neurons using migratory RNAs encapsulated in exosome vesicles.

This intercellular transmission could be the mechanism for some learning behavior, such as Kamin blocking, the phenomenon where if there is already an existing association between two stimuli, and a third concurrent one is introduced, that new one won’t become part of the association.  This mechanism is poorly understood at the neural level.
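As it happens, blocking is straightforward to capture at the behavioral (if not the neural) level with the classic Rescorla-Wagner rule, ΔV = α(λ − ΣV), with the usual two learning rates merged into one α: once one stimulus fully predicts the outcome, the shared prediction error is near zero, so a newly added stimulus gains almost no associative strength.  A toy sketch (mine, not G&J’s, and the numbers are arbitrary):

```python
def rescorla_wagner(trials, V, alpha=0.3, lam=1.0):
    """Update associative strengths V for the cues present on each trial.
    All cues share one prediction error (lam - total V), which is what
    produces blocking."""
    for cues in trials:
        error = lam - sum(V[c] for c in cues)
        for c in cues:
            V[c] += alpha * error
    return V

V = {"A": 0.0, "B": 0.0}
V = rescorla_wagner([("A",)] * 50, V)      # A alone is trained to asymptote
V = rescorla_wagner([("A", "B")] * 50, V)  # then B is presented alongside A
print(V)  # A is near 1.0; B stays near 0.0 -- it has been "blocked"
```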

You might have noticed all the occurrences of “may” and “could” above.  G&J admit that much of this is speculative.  There’s no doubt that synaptic processes are supported by intracellular machinery, and exosome vesicles do exist.  But the idea that engram states are maintained epigenetically needs, I think, a lot more fleshing out, not to mention evidence.  And while the exosomes could conceivably be carrying molecular level memory type information, it seems more likely they’re carrying much more banal metabolic signaling to surrounding glia.

Still, G&J note that there is intense research going on in this area.  And it always pays to remember that life is a molecular phenomenon.  So only time will tell.

On the next topic, like many animal researchers, G&J cite the views of Bjorn Merker approvingly, notably the idea that consciousness is a low level process starting in the brainstem.  (A view I’ve critiqued before.)  This puts them partially on the same page as F&M (Feinberg and Mallatt) in The Ancient Origins of Consciousness.  In the last post, I noted that G&J come to similar conclusions as F&M on when consciousness evolved.  In reality, they use F&M’s review of the research, as well as Merker’s material, in reaching their conclusions.

But this leads to a problem.  G&J have a different definition of consciousness than F&M.  F&M divide consciousness into three types: exteroceptive consciousness, interoceptive consciousness, and affective consciousness.  G&J’s definition seems to most closely align with F&M’s for affective consciousness.

But F&M’s embrace of brainstem consciousness (at least in pre-mammalian species) seems to hinge on the fact that they see exteroceptive and interoceptive processing as sufficient for consciousness.  G&J don’t; for them, affective processing is necessary.  But F&M’s data indicate that the type of learning necessary to demonstrate the presence of affects only happens in the forebrain.

The reason why pallial function in anamniotes is such a tough problem is that a fish whose pallium has been removed or destroyed can see, catch prey, and seems to act normally. However, such a fish cannot learn from its experiences or from the consequences of its actions. Nor is it able to learn the locations of objects in space. This is a memory problem, and the medial and dorsal pallia of vertebrates are known to store memories.

Feinberg, Todd E., and Jon Mallatt.  The Ancient Origins of Consciousness: How the Brain Created Experience.  The MIT Press.  Kindle Edition.

On the one hand, forebrains go back to early vertebrates, so affective consciousness is preserved.  But in fish and amphibians, much of the exteroceptive and interoceptive processing is separate from affective processing.  This isn’t much of an issue for F&M, but it could be seen as weakening G&J’s conclusion that these early vertebrates had the same unified consciousness as later species.

Third topic: G&J, late in the book, note the existence of something I’d missed until now, warnowiid dinoflagellates, unicellular organisms with something called an “ocelloid”, which appears to be something like a camera style eye, much more sophisticated than the typical light sensors that exist at this level.  However, these protists tend not to survive outside their natural habitat, which makes them difficult to study in laboratory conditions.  So the function of this structure is largely conjecture.  Still, if it is an eye, what kind of processing in a unicellular organism might such a complex structure be supporting?

Finally, G&J touch on the topic of machine consciousness.  Somewhat refreshingly for people who use the “embodied” language, they don’t rule out technological consciousness.  However, they note that it could be very different from evolved consciousness in animals.  Importantly, they see UAL (unlimited associative learning) as an evolutionary marker for consciousness in biology.  Its existence in technological systems may not necessarily indicate the presence of machine consciousness.  And they expect machine consciousness to require a body, but they allow it could be a virtual one.

As always, my take on these things is it depends on how we define “consciousness”.

As noted above, there is a lot more in this book, some of which I might touch on later.  But I think this is it for now.

Finally, and I should have linked to this in the last post, if you want a condensed version of their thesis, and don’t mind wading through some technical material, their paper on unlimited associative learning is online.

What do you think of the idea of epigenetic engrams?  Or the various definitional issues?  Or G&J’s approach overall?

Unlimited associative learning

This is part of a series on Simona Ginsburg and Eva Jablonka’s book: The Evolution of the Sensitive Soul, a book focused on the evolution of minimal consciousness.  This particular post is on the capabilities Ginsburg and Jablonka (G&J) see as necessary to attribute consciousness to a particular species.  The capability they focus on is learning, but not just any kind of learning, a type of sophisticated learning they call unlimited associative learning.

There are many different types of learning, but they can be grouped into two broad categories: non-associative learning and associative learning, with associative being the more sophisticated.

Non-associative learning includes habituation and sensitization.  Habituation is when a sensory receptor responds less frequently to a constant or repetitive stimulus.  It’s why you don’t feel your clothes against your skin (until I called your attention to it) or the pressure of the piece of furniture you’re sitting in against your body.

Sensitization is the opposite.  If there is no stimulus for a long time, the sensory neuron is more likely to respond when one suddenly arrives.  Or if it arrives in an unexpected pattern (such as the feeling of something crawling on your leg).  Or if the previous stimulus was painful, then a relatively mild stimulus may still lead to an intense reaction.
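
A crude way to picture both effects (a toy model of my own, not anything from the book) is a response gain that shrinks with each repeated stimulus and drifts back up, possibly overshooting, during quiet periods:

```python
def respond(stimuli, gain=1.0, decay=0.8, recovery=1.05, cap=1.5):
    """Toy model: repeated stimuli shrink the response gain (habituation);
    quiet periods let the gain drift back up, even past its starting
    point (a crude stand-in for sensitization)."""
    responses = []
    for s in stimuli:
        if s > 0:
            responses.append(gain * s)
            gain *= decay                     # each response weakens the next
        else:
            responses.append(0.0)
            gain = min(gain * recovery, cap)  # recovery while nothing happens
    return responses

# Constant pressure, then a pause, then a fresh touch:
print(respond([1] * 5 + [0] * 10 + [1]))
```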

Non-associative learning takes place in all kinds of living systems, including unicellular organisms.  In animals, in the cases I described above, it actually takes place in the peripheral nervous system.  Although it also happens in the central nervous system.  More sophisticated learning is built on top of it.

Historically, associative learning has been divided into two categories: classical or Pavlovian conditioning, and operant or instrumental conditioning.

Classical conditioning is best exemplified by the case of Ivan Pavlov’s dogs.  Initially, in an experiment, the dogs would salivate when food was presented to them.  But if each time the food was presented, a bell was also rung, the dogs would start to salivate at the ring of the bell.  Eventually they would salivate at the ring even if no food was presented.  Classical conditioning is the association of two sensory stimuli, the conditioned stimulus (the bell) with the unconditioned stimulus (the food).

Operant conditioning involves association between an action and a reinforcement stimulus.  For example, if a rat in a cage, through random action and exploration, accidentally jumps on a lever, and a food pellet is released, the action (pressing the lever) becomes associated with a reinforcement stimulus (the food).  For it to be a reinforcement, the stimulus must involve some existing value for the organism, either attractive (like food) or aversive (like electrical shock).
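
Here’s a minimal sketch of the operant case (my toy illustration with made-up numbers, not G&J’s formulation): the value of each available action gets nudged toward the reinforcement it produces, so the reinforced action comes to dominate the animal’s choices.

```python
import random

def operant_trial(values, alpha=0.2, epsilon=0.1):
    """One trial: mostly pick the currently highest-valued action,
    occasionally explore at random; then nudge the chosen action's
    value toward the reward it produced."""
    actions = list(values)
    if random.random() < epsilon:
        action = random.choice(actions)        # random exploration
    else:
        action = max(actions, key=values.get)  # exploit learned values
    reward = 1.0 if action == "press_lever" else 0.0  # the pellet dispenser
    values[action] += alpha * (reward - values[action])

values = {"groom": 0.0, "wander": 0.0, "press_lever": 0.0}
for _ in range(200):
    operant_trial(values)
print(values)  # lever pressing ends up with by far the highest value
```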

G&J, somewhat pushing back against the traditional nomenclature, label classical conditioning as “world learning”, because it involves association between external stimuli.  They label operant conditioning as “self learning” because it involves associating an action by the self with reinforcement, a sensory stimulus.

  1. Non-associative learning
    1. Habituation
    2. Sensitization
  2. Associative learning
    1. Classical conditioning / World learning
    2. Operant conditioning / Self learning

G&J state that associative learning requires a brain.  So although we might see non-associative learning in creatures like ctenophores (comb jellies), we only see associative learning in creatures with some sort of central coordinating system.  That said, the definition of “brain” here is fairly liberal.  So many worm-like creatures with slightly larger ganglia toward the front of their body seem to meet the standard.

(I found this brain requirement surprising, since classical conditioning is often said to be widespread.  But after reading G&J’s assertion, I tried to track down cases of classical conditioning in primitive organisms.  The main example was a starfish; G&J mention the one study showing it but dismiss it for methodological reasons.  They also briefly allude to studies finding it in unicellular organisms, but don’t seem to find those studies convincing.)

Primitive creatures generally only have what G&J call limited associative learning (LAL).  With LAL, the associations that form are relatively simple.  Although “relative” is a key word here, because even with LAL, things can get complex pretty fast.

But this isn’t the type of learning that signals minimal consciousness.  For that, we need a type of learning that allows associations between compound stimuli integrated across multiple modalities (hearing, sight, smell, etc) and complex combinations of motor actions.  When the capabilities for these types of associations start to arise, the possible combinations quickly increase exponentially, becoming virtually unlimited.
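
To get a feel for the scale with some made-up numbers: an animal that can discriminate 10 features in each of 3 modalities already faces 10 × 10 × 10 = 1,000 possible compound percepts, and pairing those with even 100 distinguishable action sequences yields 100,000 potential associations.  Add more features, modalities, or action components, and the combinations quickly become, for practical purposes, unlimited.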

It is this type of learning, unlimited associative learning (UAL), that G&J see as a key indicator of minimal consciousness.  UAL requires sensory integration through multiple hierarchies, forming an integrated sensorium.  It also requires integration across possible motor systems, an integrated motorium.  And the sensorium and motorium become integrated with each other, through what G&J refer to as association units.  (G&J don’t use the words “sensorium” or “motorium”, but I find them helpful here to summarize a lot of detail.)

Each layer in the sensory hierarchies makes predictions based on the signals from lower layers.  The lower layers respond with prediction error signaling, making the communication between each layer both feed forward and feed back in a recurrent fashion.  It’s with this sustained recurrent signalling that temporal thickness and synaptic plasticity are achieved, leading to memory and learning.  And when it spreads to motor systems, we get the global workspace effect.
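
As a cartoon of that two-way exchange (entirely my own toy sketch, not G&J’s model), imagine each layer holding an estimate that serves as a prediction for the layer below, while being corrected by the error signal coming up from below; iterating the loop is what gives the activity its sustained, settled character.

```python
import numpy as np

def settle(signal, layers=3, steps=30, lr=0.1):
    """Toy recurrent hierarchy: each layer's estimate predicts the layer
    below (feedback) and is nudged by the prediction error arriving from
    below (feedforward).  Repeated passes let the whole stack settle."""
    estimates = [np.zeros_like(signal) for _ in range(layers)]
    for _ in range(steps):
        below = signal
        for est in estimates:
            error = below - est  # mismatch reported from below
            est += lr * error    # estimate adjusted toward its input
            below = est          # this estimate feeds the next layer up
    return estimates

print(settle(np.array([1.0, 0.5]))[-1])  # the top layer drifts toward the signal
```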

It’s important to note that G&J do not claim that UAL is minimal consciousness, only that it is a key indicator of it.  In order to be capable of UAL, a species must have the underlying architecture, including the attributes listed in the last post.

However, UAL represents crucial capabilities that likely make minimal consciousness adaptive.  While it’s possible to see animals that, due to injury, pathology, or immaturity, show signs of minimal consciousness but aren’t capable of UAL, the healthy mature members of the species should be capable of it.  In this view, UAL is a key driver of the evolution of minimal consciousness.

In many ways, UAL resembles one of the criteria that Todd Feinberg and Jon Mallatt used for affective consciousness in their book, The Ancient Origins of Consciousness.  Feinberg and Mallatt called this criterion “global non-reflexive operant learning”.  (Although they didn’t elaborate on this, and I didn’t find them to necessarily be consistent with the “global” or “non-reflexive” part in the studies they cited.)

As many others do, G&J take issue with Feinberg and Mallatt dividing primary consciousness up into three separate divisions: exteroceptive, interoceptive, and affective consciousness.  For G&J, there is only one consciousness, which at any time might be focused on exteroceptive, interoceptive, or affective content.

That being said, G&J reach conclusions very similar to Feinberg and Mallatt’s on which species have minimal consciousness: all vertebrates, including fish, amphibians, reptiles, mammals, and birds, as well as many arthropods such as ants and bees, and cephalopods such as octopuses.

In the last post for this series, we’ll discuss some additional areas that G&J explore, and I’ll provide my thoughts on their overall approach.

What do you think of UAL (unlimited associative learning)?  Do you think it’s a valid mark of minimal consciousness?

The seven attributes of minimal consciousness

I’m still working my way through Simona Ginsburg and Eva Jablonka’s tome: The Evolution of the Sensitive Soul.  This is the second post of a series on their book.  I’m actually on the last chapter, but that last chapter is close to a hundred pages long, and the book’s prose is dense.  Light reading it isn’t.

Still, it includes a vast overview of the study of consciousness and the mind, not just in contemporary times, but going back to the 19th century and beyond.  For anyone looking for a broad historical overview of the scientific study of the mind who is willing to put in some work to parse the prose, it’s worth checking out.

As I noted in the first post, G&J aren’t focusing on human level consciousness, that is, higher order metacognitive self awareness and symbolic thought, the “rational soul.”  Similar to the work by Todd Feinberg and Jon Mallatt, their focus is on minimal consciousness, often called “primary consciousness”.  They equate this minimal consciousness with sentience, the capacity for subjective experiencing (they prefer “experiencing” to just “experience”), which they relate to Aristotle’s “sensitive soul.”

Even having defined this scope, however, there remains lots of room for different interpretations.  In an attempt to more precisely define the target of their investigation, they marshal information from contemporary neurobiologists and cognitive scientists, along with their theories, to describe seven attributes of minimal consciousness.

  1. Global activity and accessibility.  It’s widely agreed that consciousness is not localized in narrow brain regions.  Although the core ignition and distribution mechanisms might be localized to particular networks, it involves content from disparate brain regions being broadcast, or made available, to the other specialty processes that otherwise work in isolation.
  2. Binding and unification.  The unified nature of conscious perception, such as experiencing the sight of a dog rather than all the constituent sensory components.  Many theories see this being associated with the synchronized firing of neurons in various brain regions, built with recurrent connections between those regions.
  3. Selection, plasticity, learning, and attention.  We are generally conscious of only one thing at a time, or one group of related things.  This involves competition and selection of the winner with the losers inhibited.  It also involves plasticity, which enables learning.
  4. Intentionality (aboutness).  Conscious states are about something, which may be something in the world or the body.  The notion of mental representation is tightly related to this attribute.
  5. Temporal “thickness”.  Neural processing that is quick and fleeting is not conscious.  To be conscious of something requires that the activity be sustained through recurrent feedback loops, both locally and globally.
  6. Values, emotions, goals.  Experience is felt, that is, it has a valence, a sense of good or bad, pleasure or pain, satisfaction or frustration.  These are the attributes that provide motivations, impetus, to a conscious system, that propel it toward certain “attractor” states and away from others.
  7. Embodiment, agency, and a notion of “self”.  The brain is constantly receiving feedback from the body, providing a constant “buzz”, the feeling of existence.  This gives the system a feeling of bodily self.  (Not to be confused with the notion of metacognitive self in human level consciousness.)

G&J refer to this as “the emergentist consensus.”  It seems to pull ideas from global workspace theory, various recurrent loop theories, Damasio’s theories of self and embodiment, and a host of other sources.

It’s important to note that these attributes aren’t free standing independent things.  They interact with and depend on each other.  For example, for a sensory image to be consciously perceived (4), it must achieve (1) global availability by winning (3) selective attention by (2) binding, which results in (5) temporal thickness and strengthens the plasticity aspect of (3).  This process may trigger a reaction which goes through a similar process to achieve (6) value.  All with (7) as a constant underlying hum, subtly (or not so subtly) stacking the deck of what wins (3).

So that’s G&J’s target.  Their goal is to identify functionality, capabilities which demonstrate these attributes in particular species.  Their focus is on learning capabilities, which I’ll go into in the next post.

What do you think about these attributes?  Do they strike you as necessary and sufficient for minimal consciousness, the “sensitive soul”?  Or are they too much, bringing in inessential mechanisms?

The sensitive soul and the rational soul

I think examining the evolution of consciousness in animals helps shed light on it in humans.  Admittedly, there are difficulties.  Animals can’t self report using language, which limits just how much of their experience can be garnered from experiments.  Still, taking data from human studies and combining it with animal studies can provide a lot of insight.

One issue is that, in the absence of a precise definition of “consciousness”, there is no sharp line in evolution where everyone agrees that consciousness begins.  Scientists, such as Joseph LeDoux, who seems inclined toward animal consciousness minimalism, and Antonio Damasio, who’s more inclined to see it as widespread, can agree on all the relevant facts, but disagree on how to interpret them.

This leads many of us to come up with hierarchies.  Those of you who’ve known me a while know mine:

  1. Reflexes and fixed action patterns
  2. Perceptions, representations of the environment, expanding the scope of what the reflexes are reacting to
  3. Volition, goal directed behavior, allowing or inhibiting reflexes based on simple valenced cause and effect predictions
  4. Deliberative imagination, sensory-action scenario simulations assessed on valenced reactions
  5. Introspection, recursive metacognition and symbolic thought

1 seems to apply to all living things, 2 to many animals, 3 to at least mammals and birds, and 4 to the more intelligent species, with 5, at least at present, only appearing to exist in humans.

But I’m far from the only one who’s come up with a hierarchy.  I highlighted LeDoux’s a while back.  Indeed, it appears to be an ancient tradition going back at least to Aristotle.  The ancient Greeks didn’t have a word for “consciousness”, but they did write about the soul.

(The Greek word for “soul” is “psyche”, which obviously is where we get the term “psychology” from, but its etymology is interesting.  It originally meant “to breathe”, what probably seemed like the primary difference between living and non-living things.)

Plato’s conception of the soul was something immaterial that survived death, which resonates with the conception in many religions.  Indeed, the word “soul” today is largely synonymous with the immortal soul of monotheistic theology.  A lot of the way the word “consciousness” is thrown around today seems like an unwitting code word for this version of the soul.

Aristotle’s conception was more materialistic.  Most people take him to regard the soul as part of the body and mortal.  (Although, per Wikipedia, there is apparently some controversy about it.)  And he had his own hierarchy back there in the 300s BC.

[Image: Hierarchy of Aristotle’s versions of the soul.  Image credit: Ian Alexander via Wikipedia (click through for source)]

  1. The Nutritive Soul, enabling reproduction and growth
  2. The Sensitive Soul, enabling movement and sensation
  3. The Rational Soul, enabling reason and reflection

1 was labeled the “Vegetative” soul in the Wikipedia article on soul; it appears to apply to all living things.  2 applies to all animals.  3 is supposed to apply only to humans.

When I first read about this hierarchy years ago, it didn’t really work for me.  My issue is that many animals appear to be able to reason to at least some degree.  While debatable for fish, amphibians, or arthropods, all mammals and birds appear able to think through options and do short term planning.  This seemed like yet another trait taken as unique to humans but where the real difference is a matter of degree rather than any qualitative break.  Indeed, my thinking is that consciousness, if equated with baseline sentience, requires at least an incipient ability to reason.

However, I’m slowly making my way through Simona Ginsburg and Eva Jablonka’s The Evolution of the Sensitive Soul (which I was alerted to by Eric Schwitzgebel’s review).  Obviously the title refers to Aristotle’s hierarchy, and the goal is to explain what the authors call “minimal consciousness”, which they note is often referred to as “primary consciousness”, among other names.

And they equate minimal consciousness, sentience, with the sensitive soul.  However, they don’t exclude all reasoning from the sensitive soul.  (Indeed, their unlimited associative learning thesis, as I understand it, will require that it be there, but I haven’t reached that part of the book yet.)  G&J draw the line at symbolic reasoning, involving language, mathematics, art, etc.  That makes the rational soul equivalent to my own level 5 above.

I haven’t read Aristotle directly, so I don’t know if G&J’s characterization is closer than Wikipedia’s version.  And I’m not sure “rational soul” is the most accurate way to describe it.  And the sensitive soul itself has vastly varying capabilities across species.  But minds, both animal and human, are complex things, and trying to boil down the difference to a single phrase is a lost cause anyway.

So, in this framework, all living things, including plants and simple animals, have a nutritive soul, many animals (but not all) have a sensitive soul, and humans a rational soul.  G&J’s goal is to explain the sensitive soul.

What do you think of Aristotle’s hierarchy?  Or G&J’s interpretation of it?  I’m still inclined to use my own more detailed hierarchy (which admittedly still vastly oversimplifies things), but is Aristotle’s easier to follow?

The response schema

Several months ago Michael Graziano and colleagues attempted a synthesis of three families of scientific theories of consciousness: global workspace theory (GWT), higher order theory (HOT), and his own attention schema theory (AST).

A quick (crudely simplistic) reminder: GWT posits that content becomes conscious when it is globally broadcast throughout the brain, HOT when a higher order representation is formed of a first order representation, and AST when the content becomes the focus of attention and it is included in a model of the brain’s attentional state (the attention schema) for purposes of guiding it.

Graziano equates the global workspace with the culmination of attentional processing, and puts forth the attention schema as an example of a higher order representation, essentially merging GWT and HOT with AST as the binding, and contemplating that the synthesis of these theories approaches a standard model of consciousness.  (A play on words designed to resonate with the standard model of particle physics.)

Graziano’s synthesis has generated a lot of commentary.  In fact, there appears to be an issue of Cognitive Neuropsychology featuring the responses.  (Unfortunately it’s paywalled, although it appears that the first page of every response is public.)  I already highlighted the most prominent response in my post on issues with higher order theories, the one by David Rosenthal, the originator of HOT, who argues that Graziano gets HOT wrong, which appears to be the prevailing sentiment among HOT advocates.

But this post is about Keith Frankish’s response.  Frankish, who is the leading voice of illusionism today, makes the point that, from his perspective, theories of consciousness often have one of two failings.  They either aim too low, explaining just the information processing (a dig perhaps at pure GWT) or too high in attempting to explain phenomenal consciousness as if it actually exists, and he tags HOTs as being in this latter category.

His preferred target is to explain our intuitions about phenomenal consciousness, why we think we have it.  (I actually think explaining why we think we have phenomenal consciousness is explaining phenomenal consciousness, but that’s just my terminological nit with illusionism.)  Frankish thinks that AST gets this just right.

But he sees it as incomplete.  What he sees missing is very similar to the issue I noted in my own post on Graziano’s synthesis: the affective or feeling component.  My own wording at the time was that there should be higher order representations of reflexive reactions.  But I’m going to quote Frankish’s description, because I think it gets at things I’ve struggled to articulate.  (Note: “iconsciousness” is Graziano’s term for access consciousness, as opposed to “mconsciousness” for phenomenal consciousness.):

Suppose that as well as an attention schema, the brain also constructs a response schema—a simplified model of the responses primed by iconsciousness.  When perceptual information enters the global workspace, it becomes available to a range of consumer systems—for memory, decision making, speech, emotional regulation, motor control, and so on. These generate responses of various kinds and strengths, which may themselves enter the global workspace and compete for control of motor systems. Across the suite of consumer systems, a complex multi-dimensional pattern of reactive dispositions will be generated. Now suppose that the brain constructs a simplified, schematic model of this complex pattern. This model, the response schema, might represent the reactive pattern as  a multi-dimensional solid whose axes correspond to various dimensions of response (approach vs retreat, fight vs yield, arousal vs depression, and so on). Attaching  information from the model to the associated perceptual state will have the effect of representing each perceptual episode as having a distinctive but unstructured property which corresponds to the global impact of the stimulus on the subject. If this model also guides our introspective beliefs and reports, then we shall tend to judge and say that our experiences possess an indefinable but potent subjective quality. In the case of pain, for example, attended signals from nociceptors prime a complex set of strong aversive reactions, which the response schema models as a distinctive, negatively valenced global state, which is in turn reported as an ineffable awfulness.

Now, Frankish is an illusionist.  For him, this response schema provides the illusion of phenomenal experience.  My attitude is that it provides part of the content of that experience, which is then incorporated into the experience by the reaction of all the disparate specialty systems, but again that’s terminological.  The idea is that the response schema adds the feeling to the sensory information in the global workspace and becomes part of the overall experience.  It’s why “it feels like something” to process particular sensory or imaginative content.
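
One way to picture Frankish’s proposal in data structure terms (this rendering is purely my own, with invented axes and numbers) is as a small summary object compressing a brain-wide pattern of primed reactions into a few dimensions:

```python
from dataclasses import dataclass

@dataclass
class ResponseSchema:
    """A few summary axes standing in for the 'multi-dimensional solid'
    of reactive dispositions."""
    approach_retreat: float  # positive = approach, negative = retreat
    fight_yield: float
    arousal: float

def summarize(dispositions):
    """Collapse many consumer-system reactions (name -> strength) into
    the schema's few axes.  The mapping is invented for illustration;
    the point is the drastic dimensionality reduction."""
    return ResponseSchema(
        approach_retreat=dispositions.get("approach", 0.0) - dispositions.get("withdraw", 0.0),
        fight_yield=dispositions.get("aggress", 0.0) - dispositions.get("submit", 0.0),
        arousal=sum(abs(v) for v in dispositions.values()),
    )

# A painful stimulus primes a strong, mostly aversive pattern:
print(summarize({"withdraw": 0.9, "submit": 0.2, "aggress": 0.4}))
```

The interesting part is what gets lost in the compression: a system reading only this summary has no access to the underlying pattern, which, if I’m reading Frankish right, is why the result gets reported as indefinable but potent.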

This seems similar to Joseph LeDoux’s fear schema.  LeDoux’s conception is embedded in an overall HOT framework, whereas Frankish’s is more at home in GWT, but they seem in the same conceptual family, a representation, a schema of lower level reactive processes, used by higher order processes to decide which reflexive reactions to allow and which to inhibit.  It’s the intersection between that lower level and higher level processing that we usually refer to as feelings.

Of course, there is more involved in feelings than just these factors.  For instance, those lower level reflexive reactions also produce physiological changes via unconscious motor signals and hormone releases which alter heart rate, breathing, muscle tension, etc, all of which reverberate back to the brain as interoceptive information, which in turn is incorporated into the response schema, the overall affect, the conscious feeling of the response.  There are also a host of other inputs, such as memory associations.

And it isn’t always the lower level responses causing the higher level response schema to fire.  Sometimes the response schema fires from those other inputs, such as memory associations, which in turn trigger the lower level reactions.  In other words, the activation can go both ways.

So, if this is correct, then the response schema is the higher order description of lower level reflexive reactions.  It is an affect, a conscious feeling (or at least a major component of it).  Admittedly, the idea that a feeling is a data model is extremely counter-intuitive.  But as David Chalmers once noted, any actual explanation of consciousness, other than a magical one, is going to be counter-intuitive.

Similar to the attention schema, the existence of something like the response schema (or more likely: response schemata) seems inevitable, the attention schema for top down control of attention, and the response schema for deciding which reflexes to override, that is, action planning.  The only question is whether these are over simplifications of much more complex realities, and what else might be necessary to complete the picture.

Unless of course I’m missing something.

Do qualia exist? Depends on what we mean by “exist.”

The cognitive scientist, Hakwan Lau, whose work I’ve highlighted several times in the last year, has been pondering illusionism recently.  He did a Twitter survey on the relationship between the phenomenal concept strategy (PCS) and illusionism, which inspired my post on the PCS.  (Meant to mention that in the post, but it slipped my mind.)  Anyway, he’s done a blog post on illusionism, which is well worth checking out for its pragmatic take.

As part of that post, he linked to a talk that Keith Frankish gave some years ago explaining why he thought qualia can’t be reduced to a non-problematic version that can be made compatible with physicalism.  The video, which has Frankish’s voice but only shows his presentation slides, is about 23 minutes.

In many ways, this talk seems to anticipate the criticism from Eric Schwitzgebel that illusionists are dismissing an inflated version of consciousness, one that Schwitzgebel admits comes from other philosophers who can’t seem to resist bundling theoretical commitments into their definitions of it.  He argues for a pre-theoretical, or theoretically naive conception of consciousness.

Frankish discusses the problems with what he calls “diet qualia”, a concept without the problematic aspects that Daniel Dennett articulates in his attempted take down of qualia, a conception that in some ways resembles what Schwitzgebel advocates for.  But Frankish points out that diet qualia don’t work, that any discussion of them inevitably inflates to “classic qualia” or collapses to “zero qualia” (his stance).

Just to review, qualia are generally considered to be instances of subjective experience.  The properties that Dennett identified are (quoted from the Wikipedia article on qualia):

  1. ineffable; that is, they cannot be communicated, or apprehended by any means other than direct experience.

  2. intrinsic; that is, they are non-relational properties, which do not change depending on the experience’s relation to other things.

  3. private; that is, all interpersonal comparisons of qualia are systematically impossible.

  4. directly or immediately apprehensible in consciousness; that is, to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.

Illusionists usually point out that qualia can be described in terms of dispositional states, meaning they’re not really ineffable.  For example, the experience of red can be discussed entirely in terms of sensory processing and the various affective reactions it causes throughout the brain.  And doing so demonstrates that they’re not intrinsic or irreducible.

Privacy can be viewed in two senses: as a matter of no one else being able to know the content of the experience, or of no one being able to have the experience.  The first seems like just a limitation of current technology.  There’s no reason to suppose we won’t be able to monitor a brain someday and know exactly what the content is of an in-progress experience.  The second sense is true, but only in the same sense that the laptop I’m typing this on currently has a precise informational state that no other electronic device has, a fact that really has no metaphysical implications.

There’s a similar double sense for qualia being directly or immediately apprehensible.  In one sense, it implies we have accurate information on our cognitive states, something that modern psychology has pretty conclusively demonstrated is often not true.  In the second sense, it says that we know our impression of the experience, that we know how things seem to us, which is trivially true.

So, seen from an objective point of view, qualia, in the sense identified by Dennett, don’t exist.  So the failure of the diet versions can seem very significant.

But I think there’s a fundamental mistake here.  The dissolving of qualia in this sense happens objectively.  But remember that qualia are not supposed to be objective.  They are instances of subjective experience.  This means that the way they seem to be, their seeming nature is their nature, at least their subjective nature.

Of course, many philosophers make the opposite mistake.  They take subjective experience and think its phenomenal nature is something other than just subjective, that its objective reality is in some way obvious from its subjective aspects.  But all indications are that the objective mechanisms that underlie the subjective phenomena are radically different from those phenomena, which is what I think most illusionists are trying to say.

Put another way, qualia only exist subjectively.  But they only need to exist subjectively to achieve the status of being instances of subjective experience.

And they only need to exist that way to be subjectively ineffable and subjectively irreducible.  Yes, the processing underlying qualia can be described in objective terms, but much of that description will involve unconscious processing below the level of consciousness, meaning that it won’t be describable from the subjective experience itself, or reducible from that experience.

Looking at it this way allows us to accept qualia realism, but in a manner fully consistent with physicalism.  In other words, there’s nothing spooky going on here.  In many ways, this is just an alternate description of illusionism, but one that hopefully clarifies rather than obscures, and doesn’t seem to deny our actual experiences.

Of course, a hard core illusionist might insist that subjective existence itself doesn’t count as really existing.  Admittedly, it comes down to a matter of how we define “exists.”  In other words, we’re back to a situation where there is no fact of the matter, just different philosophical positions that people can choose to hold.

Unless of course, I’m missing something?

The phenomenal concept strategy and issues with conceptual isolation

I’ve often pondered that the hard problem of consciousness, the perceived problem of understanding how phenomenal consciousness can happen in physical systems, arises due to the fact that our intuitive model of the phenomenal is very different from our intuitive model of the physical, of the brain in particular.

As is usually the case, anytime you think you’re having an original observation, you should make sure someone hasn’t thought of it first.  In this case, philosophers have.  It’s called the phenomenal concept strategy (PCS).  Peter Carruthers discussed it in his blog posts a few weeks ago, but in a manner that expected the reader to already be familiar with it.  And Hakwan Lau brought it up on Twitter recently, spurring me to investigate.

It’s basically the idea that the explanatory gap between mind and body exists not because there’s a gap between physical and mental phenomena, but because there’s a gap in our concepts of these things.

Part of the value of this strategy, is that it supposedly helps physicalists answer the knowledge argument from Mary’s room: the thought experiment where Mary, a scientific expert in visual perception who has spent her entire life in a black and white room, leaves the room and experiences color for the first time, and the question is asked, does Mary learn something new when she leaves the room?  According to the PCS, what Mary learns is a new phenomenal concept, which just expresses other knowledge she already had in a new way.

At first glance, this view seems to offer a lot.  But as with all philosophical positions, it pays to look before you leap.  Under the view, the reason this works is that phenomenal concepts are conceptually isolated.  This isolation supposedly makes philosophical zombies conceivable.

“Conceivable” in this case is supposed to mean logically coherent, as opposed to merely imaginable.  And the zombies in this case aren’t the traditional ones which are physically identical to a conscious being (and simply presuppose dualism) but functional or behavioral ones, systems that are different internally but behaviorally identical.

The idea is that it’s possible to imagine a being just like you but with different or missing phenomenal concepts.

David Chalmers uses this zombie conceivability to attack the PCS.  His point seems to be that we eventually run into the same gaps in the concepts that we perceived to be in the originals.  Peter Carruthers responds with a discussion involving phenomenal and schmenomenal states that I have to admit I haven’t yet parsed.

But my issue is that the concepts can’t be that isolated, because we can discuss them.  Indeed, it seems dubious that there can be a being that is missing phenomenal concepts who can nonetheless discuss them.

That’s not to say that our concepts of phenomenal content can’t be isolated, but that isolation doesn’t seem inherent or absolute.  It’s something we allow to creep into our thinking.  It’s a failure to ask Daniel Dennett’s hard question:  “And then what happens?”

I personally think qualia exist, but not in any non-physical manner.  They are information, physical information that is part of the causal chain.  There is no phenomenal experience which doesn’t convey information (although it may not be information we need at the moment).  This information is raw and primal, so it doesn’t feel like information to us, but it is information nonetheless.

Consider the pain of a toothache.  How else should the valence systems in the brain signal to the planning systems that there is an issue here which needs addressing?  The only alternative is to imagine some form of symbolic communication (numbers, notations, etc), but symbolic communication is just communication built on top of the primal version, raw conscious experience.

This communication is primal to us because it is subjectively irreducible.  We have no access to its construction and underlying mechanisms (which ironically can be understood in symbolic terms), therefore it seems like something that exists separate and apart from those mechanisms.  In that sense, our concept of it is isolated from our concepts of those underlying mechanism.  This might tempt us to see that concept as completely isolated.

But it’s only isolated in that way if we fail to relate it to why we have that phenomenal experience, and how we use it.  If we touch a hot stove, our hand may reflexively jerk back due to the received nociception, but we also experience the burning pain.  If we didn’t, and didn’t remember it, we might be tempted later to touch the stove again.  A zombie needs to have a similar mechanism for it to be functional in the same way.  (Maybe Carruthers’ “schmenomenal” states?)

In other words, phenomenal experience has a functional role.  It evolved for a reason (or more accurately a whole range of reasons).  That doesn’t mean it may not misfire in some situations, leaving us wondering what the functional point of it is, but that’s more a matter of evolution being unable to foresee every situation, and of how strange the modern world is in comparison to our original ecological niche in places like the African savanna.

So, I think there is some value to seeing the explanatory gap in terms of concepts, but not in seeing those concepts as isolated in some rigid or absolute manner.  They’re only isolated if we make them so, as we frequently do.  And they’re not so consistently isolated that we need to let in zombies.

Of course, I’m approaching this as someone who most frequently falls within Chalmers’ Type A materialist category.  Apparently the PCS is typically championed by Type B materialists, those who see a hard problem that needs addressing.  So it may be that I was never the intended audience for this strategy.

Unless of course I’m missing something.  Are zombies less avoidable than I’m thinking here?  What do you think of the PCS overall?

Recurrent processing theory and the function of consciousness

Victor Lamme’s recurrent processing theory (RPT) remains on the short list of theories considered plausible by the consciousness science community.  It’s something of a dark horse candidate, without the support of global workspace theory (GWT) or integrated information theory (IIT), but it gets more support among consciousness researchers than among general enthusiasts.  The Michael Cohen study reminded me that I hadn’t really made an effort to understand RPT.  I decided to rectify that this week.

Lamme put out an opinion paper in 2006 that laid out the basics, and a more detailed paper in 2010 (warning: paywall).  But the basic idea is quickly summarized in the SEP article on the neuroscience of consciousness.

RPT posits that processing in sensory regions of the brain is sufficient for conscious experience.  This is so, according to Lamme, even when that processing isn’t accessible for introspection or report.

Lamme points out that requiring report as evidence can be problematic.  He cites the example of split brain patients.  These are patients who’ve had their cerebral hemispheres separated to control severe epileptic seizures.  After the procedure, they’re usually able to function normally.  However, careful experiments can show that the hemispheres no longer communicate with each other, and that the left hemisphere isn’t aware of sensory input that goes to the right hemisphere, and vice versa.

Usually only the left hemisphere has language and can verbally report its experience.  But Lamme asks, do we regard the right hemisphere as conscious?  Most people do.  (Although some scientists, such as Joseph LeDoux, do question whether the right hemisphere is actually conscious due to its lack of language.)

If we do regard the right hemisphere as having conscious experience, then Lamme argues, we should be open to the possibility that other parts of the brain may be as well.  In particular, we should be open to it existing in any region where recurrent processing happens.

Communication in neural networks can be feedforward, or it can include feedback.  Feedforward involves signals coming into the input layer and progressing up through the processing hierarchy one way, going toward the higher order regions.  Feedback processing is in the other direction, higher regions responding with signals back down to the lower regions.  This can lead to a resonance where feedforward signals cause feedback signals which cause new feedforward signals, etc, a loop, or recurrent pattern of signalling.
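
The difference is easy to see in a toy network (my sketch, with arbitrary weights, not anything from Lamme’s papers): a feedforward pass touches each layer once and is finished, while adding feedback connections lets the activity reverberate.

```python
import numpy as np

rng = np.random.default_rng(0)
W_up = rng.normal(0.0, 0.5, (4, 4))    # feedforward weights (lower -> higher)
W_down = rng.normal(0.0, 0.5, (4, 4))  # feedback weights (higher -> lower)

def feedforward(x):
    """One-way sweep: the signal passes through once and activity is gone."""
    return np.tanh(W_up @ np.tanh(x))

def recurrent(x, steps=10):
    """The higher layer's response feeds back and re-drives the lower
    layer, which drives the higher layer again -- a sustained loop."""
    lower = np.tanh(x)
    higher = np.zeros(4)
    for _ in range(steps):
        higher = np.tanh(W_up @ lower)        # feedforward signal
        lower = np.tanh(x + W_down @ higher)  # feedback re-enters below
    return higher

x = np.array([1.0, 0.0, 0.0, 0.0])
print(feedforward(x))
print(recurrent(x))
```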

Lamme identifies four stages of sensory processing that can lead to the ability to report.

  1. The initial stimulus comes in and leads to superficial feedforward processing in the sensory region.  There’s no guarantee the signal gets beyond this stage.  Unattended and brief stimuli, for example, wouldn’t.
  2. The feedforward signal makes it beyond the sensory region, sweeping throughout the cortex, reaching even the frontal regions.  This processing is not conscious, but it can lead to unconscious priming.
  3. Superficial or local recurrent processing in the sensory regions.  Higher order parts of these regions respond with feedback signalling and a recurrent process is established.
  4. Widespread recurrent processing throughout the cortex in relation to the stimulus.  This leads to binding of related content and an overall focusing of cortical processes on the stimulus.  This is equivalent to entering the workspace in GWT.

Lamme accepts that stage 4 is a state of consciousness.  But what, he asks, makes it conscious?  He considers that it can either be the location of the processing or the type of processing.  But for the location, he points out that the initial feedforward sweep in stage 2, which reaches widely throughout the brain, doesn’t produce conscious experience.

Therefore, it must be the type of processing, the recurrent processing that exists in stages 3 and 4.  But then, why relegate consciousness only to stage 4?  Stage 3 has the same type of processing as stage 4, just in a smaller scope.  If recurrent processing is the necessary and sufficient condition for conscious experience, then that condition can exist in the sensory regions alone.

But what about recurrent processing, in and of itself, makes it conscious?  Lamme’s answer is that synaptic plasticity is greatly enhanced in recurrent processing.  In other words, we’re much more likely to remember something, to be changed by the sensory input, if it reaches a recurrent processing stage.

Lamme also argues from an IIT perspective, pointing out that IIT’s Φ (phi), the calculated quotient of consciousness, would be higher in a recurrent region than in one only doing feedforward processing.  (IIT does see feedback as crucial, but I think this paper was written before later versions of IIT used the Φmax postulate to rule out talk of pieces of the system being conscious.)

Lamme points out that if recurrent processing leads to conscious experience, then that puts consciousness on strong ontological ground, and makes it easy to detect.  Just look for recurrent processing.  Indeed, a big part of Lamme’s argument is that we should stop letting introspection and report define our notions of consciousness and should adopt a neuroscience centered view, one that lets the neuroscience speak for itself rather than cramming it into preconceived psychological notions.

This is an interesting theory, and as usual, when explored in detail, it turns out to be more plausible than it initially sounded.  But, it seems to me, it hinges on how lenient we’re prepared to be in defining consciousness.  Lamme argues for a version of experience that we can’t introspect or know about, except through careful experiment.  For a lot of people, this is simply discussion about the unconscious, or at most, the preconscious.

Lamme’s point is that we can remember this local recurrent processing, albeit briefly, therefore it was conscious.  But this defines consciousness as simply the ability to remember something.  Is that sufficient?  This is a philosophical question rather than an empirical one.

In considering it, I think we should also bear in mind what's absent.  There's no affective reaction.  In other words, it doesn't feel like anything to have this type of processing.  That requires bringing in other regions of the brain, which aren't likely to be recruited until stage 4: the global workspace.  (GWT does allow that they could be recruited through peripheral unconscious propagation, but it's less likely and takes longer.)

It’s also arguable that considering the sensory regions alone outside of their role in the overall framework is artificial.  Often the function of consciousness is described as enabling learning or planning.  Ryota Kanai, in a blog post discussing his information generation theory of consciousness (which I highlighted a few weeks ago), argues that the function of consciousness is essentially imagination.

These functional descriptions, which often fit our intuitive grasp of what consciousness is about, require participation from the full cortex, in other words, Lamme’s stage 4.  In this sense, it’s not the locations that matter, but what functionality those locations provide, something I think Lamme overlooks in his analysis.

Finally, similar to IIT’s Φ issue, I think tying consciousness only to recurrent processing risks labeling a lot of systems conscious that no one regards as conscious.  For instance, it might require us to see an artificial recurrent neural network as having conscious experience.
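To make that concrete, the textbook Elman-style recurrent cell updates its hidden state as

\[
h_t = \tanh(W_x x_t + W_h h_{t-1} + b)
\]

and that h_{t-1} term is exactly the kind of feedback loop RPT singles out.  Read strictly, every such network, including ones trained for mundane sequence prediction, would have some degree of experience.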

But this theory highlights the point I made in the post on the Michael Cohen study, that there is no one finish line for consciousness.  We might be able to talk about a finish line for short term iconic memory (which is largely what RPT is about), another for working memory, one for affective reactions and availability for longer term memory, and perhaps yet another for availability for report.  Stage 4 may quickly enable all of these, but it seems possible for a signal to propagate along the edges and get to some of them.  Whether it becomes conscious seems like something we can only determine retrospectively.

Unless of course I’m missing something?  What do you think of RPT?  Or of Lamme’s points about the problems of relying on introspection and self report?  Should we just let the neuroscience speak for itself?

Is there a conscious perception finish line?

Global workspace theory (GWT) is the proposition that consciousness is composed of contents broadcast throughout the brain.  Various specialty processes compete for the limited capacity of the broadcasting mechanisms, to have their content broadcast to all the other specialty processes.
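As an architecture, this is close to the old blackboard systems from AI.  A minimal sketch of the competition-and-broadcast cycle (the class names and the salience-based competition rule are my inventions for illustration, not anything specified by GWT):

```python
import random

class Process:
    # Toy specialty process: bids a salience score for its content.
    def __init__(self, name):
        self.name = name
        self.heard = None

    def bid(self, inputs):
        return (random.random(), f"{self.name}: {inputs}")

    def receive(self, content):
        self.heard = content  # every process hears the broadcast

class Workspace:
    # Toy global workspace: specialty processes compete for the
    # limited-capacity broadcast; the winner's content goes to all.
    def __init__(self, processes):
        self.processes = processes

    def cycle(self, inputs):
        bids = [p.bid(inputs) for p in self.processes]
        _, winner = max(bids, key=lambda b: b[0])
        for p in self.processes:
            p.receive(winner)
        return winner

ws = Workspace([Process("vision"), Process("audition"), Process("memory")])
print(ws.cycle("red circle"))  # one content wins and is broadcast to all
```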

Global neuronal workspace (GNW) is a variant of that theory, popularly promoted by Stanislas Dehaene, which I’ve covered before.  GNW is more specific than generic GWT on the physical mechanisms involved.  It relies on empirical work done over the years demonstrating that conscious reportability involves wide scale activation of the cortex.

One of the observed stages is a massive surge about 300 milliseconds after a stimulus, called the P3b wave.  Previous work seemed to establish the P3b wave as a neural correlate of consciousness.  Dehaene theorized that it represents the stage where one of the signals achieves a threshold and wins domination, with all the other signals being inhibited.  Indeed, the distinguishing mark of the P3b is its massive positive amplitude, with much of the underlying activity coming from inhibitory action.

The P3b has been replicated extensively and has come to be seen as a well established phenomenon associated with attention and consciousness.  But this is science, and any result is always provisional.  Michael Cohen and colleagues have put out a preprint of a study that may demonstrate that the P3b wave is associated not with conscious perception, but with post-perceptual processing.

The study tested the subjects' perception by showing them various images while measuring their brain waves via EEG.  Using a no-report protocol, in half of the tests the subjects were asked to report on whether they saw something, but in the other half they were not asked to report.  Crucially, the P3b wave only manifested in the report cases, never in the non-report ones, even when the non-report images were exactly the same as the ones that had generated affirmative reports.
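For a sense of what such a comparison involves: event-related potentials are extracted by averaging many stimulus-locked EEG epochs, then comparing mean amplitudes in a time window.  Here's a schematic in numpy with simulated data (the sampling rate, window, and amplitudes are invented, not the study's actual parameters):

```python
import numpy as np

FS = 500  # sampling rate in Hz (illustrative)

def erp(epochs):
    # epochs: (n_trials, n_samples) of stimulus-locked EEG; averaging
    # across trials cancels noise and leaves the evoked waveform.
    return epochs.mean(axis=0)

def p3b_amplitude(epochs, start_s=0.25, end_s=0.50):
    # Mean amplitude of the averaged waveform in a P3b-ish window.
    i, j = int(start_s * FS), int(end_s * FS)
    return erp(epochs)[i:j].mean()

rng = np.random.default_rng(2)
t = np.arange(0, 0.8, 1 / FS)
bump = 5e-6 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05**2))  # simulated P3b

noise = lambda: rng.normal(scale=2e-6, size=(40, t.size))
report_epochs = noise() + bump  # simulated: P3b present
no_report_epochs = noise()      # simulated: P3b absent

# The study's finding was, in effect, that the first value dwarfs the
# second even for physically identical images.
print(p3b_amplitude(report_epochs), p3b_amplitude(no_report_epochs))
```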

Image showing P3 wave presence for report and absence for non-report tests
Image from the study: https://www.biorxiv.org/content/10.1101/2020.01.15.908400v1.full

To control for the possibility that the subjects weren't actually conscious of the images in the non-report cases, the subjects were given a memory test after a batch of non-report events, checking to see what they remembered perceiving.  Their memories of the perceptions correlated with the results in the report versions.

So the P3b wave, a major pillar of GNW, may be knocked down.  The study authors are careful to make clear that this does not invalidate GWT or other cognitive theories of consciousness.  They didn't test for all the other ways the information may have propagated throughout the cortex.  Strictly speaking, it doesn't even invalidate GNW itself, but it does seem to knock out a major piece of evidence for it.

However, this becomes a more interesting discussion if we ask what it would mean if all cortical communication beyond the sensory regions were ruled out, if acquiring a memory of a sight required only the local sensory cortices.  It might seem like a validation of views like Victor Lamme's local recurrent processing theory, which holds that local processing in the sensory cortices is sufficient for conscious perception.

But would it be?  Dehaene, when discussing his theory, is clear that it’s a theory of conscious access.  For him, something isn’t conscious until it becomes accessible by the rest of the brain.  Content in sensory cortices may form, but it isn’t conscious until it’s accessible.  Dehaene refers to this content as preconscious.  It isn’t yet conscious, but it has the potential to become so.

In that view, the content of what the subjects perceived in the non-report tests may have been preconscious, unless and until their memories were probed, at which point it became conscious.

This may be another case where the concept of consciousness is causing people to argue about nothing.  If we describe the situation without reference to it, the facts seem clear.

Sensory representations form in the local sensory cortex.  A temporary memory of that representation may persist in that region, so if probed soon enough afterward, a report about the representation can be extracted from it.  But until there is a need for a report or other usage, it is not available to the rest of the system, and none of the activity, including the P3b, normally associated with that kind of access is evident.

This reminds me of Daniel Dennett's multiple drafts theory (MDT) of consciousness.  MDT is a variant of GWT, but minus the idea that there is any one event where content becomes conscious.  It's only when the system is probed in certain ways that one of the streams, one of the drafts, becomes selected, generally one that has managed to leave its effects throughout the brain, that has achieved “fame in the brain.”
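A toy rendering of that probe-dependence (entirely my own caricature, not Dennett's formulation):

```python
# Many parallel "drafts" of the stimulus exist at once; none is marked
# conscious.  A probe (a report request, a memory test) retroactively
# selects whichever draft has spread its influence most widely.

drafts = [
    {"content": "red circle, local sensory trace", "fame": 0.2},
    {"content": "red circle, bound with location", "fame": 0.6},
    {"content": "red circle, linked to task goal", "fame": 0.9},
]

def probe(drafts):
    # No draft was "the conscious one" before this call.
    return max(drafts, key=lambda d: d["fame"])["content"]

print(probe(drafts))  # selection happens only when the system is probed
```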

In other words, Dennett denies that there is any one finish line where content that was previously unconscious becomes conscious.  In his view, the search for that line is meaningless.  In that sense, the P3b wave may be a measure of availability, but calling it a measure of consciousness is probably not accurate.  And it's not accurate to say that Lamme's local recurrent processing is conscious, although it's also not accurate to relegate it completely to the unconscious.  What we can say is that it's at a particular point in the stream where it may become relevant for behavior, including report.

Maybe this view is too ecumenical and I’m papering over important differences.  But it seems like giving up the idea of one finish line for consciousness turns a lot of theories that look incompatible into models of different aspects of the same overall system.

None of this is to say that GWT or any of its variants might not be invalidated at some point.  These are scientific theories and are always subject to falsification on new data.  But if or when that happens, we should be clear about exactly what is being invalidated.

Unless of course I’m missing something?