Final thoughts on The Evolution of the Sensitive Soul

This is the final post in a series I’ve been doing on Simona Ginsburg and Eva Jablonka’s book The Evolution of the Sensitive Soul, a book focused on the evolution of minimal consciousness.  This is a large book, and it covers a wide range of ideas.  A series of relatively small blog posts can’t do them justice, so by necessity it’s been selective.  Similar to Feinberg and Mallatt’s The Ancient Origins of Consciousness, there’s a wealth of material I didn’t get to, and like that other book, I suspect it will inspire numerous additional posts in the future.

This final post focuses on various areas that G&J (Ginsburg and Jablonka) explore that caught my interest.  So it’s somewhat of a grab bag.

The first has to do with memory.  Obviously memory and learning are closely related.  The consensus view in neuroscience is that the main way memory works is through the strengthening and weakening of chemical synapses, the connections between neurons.  In this view, engrams, the physical traces of memory, reside in circuits of neurons that follow Hebbian theory, often summarized as: neurons that fire together, wire together.
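Hebb’s rule is easy to state computationally.  Here’s a toy sketch (my own illustration, not anything from the book) of a Hebbian weight update between two small layers of neurons:

```python
def hebbian_update(w, pre, post, lr=0.01):
    """Hebb's rule: the weight between a presynaptic neuron j and a
    postsynaptic neuron i grows in proportion to their joint activity."""
    return [[w[i][j] + lr * post[i] * pre[j]
             for j in range(len(pre))]
            for i in range(len(post))]

w = [[0.0, 0.0], [0.0, 0.0]]
pre, post = [1.0, 0.0], [1.0, 0.0]   # only neuron 0 fires in each layer
for _ in range(100):
    w = hebbian_update(w, pre, post)
print(w)  # w[0][0] ≈ 1.0; all other weights stay 0
```

Only the connection between the pair that fires together strengthens, which is the “wire together” part of the slogan.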

But it’s widely understood that this can’t be the full story.  Synapses are complex ecosystems of proteins, vesicles, neurotransmitters, and neuromodulators.  Proteins have to be synthesized by intracellular machinery.  So the strengthening or weakening of a synapse is thought to involve genetic and epigenetic mechanisms as well as ribosomes and other components.

G&J cite a study showing that if synaptic processing is chemically inhibited, so that the synapses retract, long term memories are still able to recover.  In other words, the state of the synapse may be recorded somewhere other than the synapse itself.  If so, the synapse could be just an expression of an engram stored intracellularly, perhaps epigenetically: an epigenetic engram, an intriguing possibility that may eventually have clinical implications for Alzheimer’s and other neurodegenerative diseases.

G&J note that this may mean that epigenetic factors could have large scale effects on how fast synapses grow or weaken.  In their view, it may dramatically expand the computational power involved in memory.  They even speculate that it could be a system that operates independently of the synaptic one, transmitting information between neurons using migratory RNAs encapsulated in exosome vesicles.

This intercellular transmission could be the mechanism for some learning behavior, such as Kamin blocking, the phenomenon where, if there is already an existing association between two stimuli and a third concurrent one is introduced, the new one won’t become part of the association.  This mechanism is poorly understood at the neural level.
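For what it’s worth, at the computational (if not neural) level, blocking does fall out of the classic Rescorla-Wagner learning rule: once the first stimulus fully predicts the outcome, there’s no prediction error left for the added stimulus to absorb.  A toy sketch, my own illustration rather than anything G&J present:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Update associative strength V for each stimulus present on a trial.
    Learning is driven by prediction error: lam minus the total prediction
    from all stimuli present."""
    V = {}
    for stimuli in trials:
        error = lam - sum(V.get(s, 0.0) for s in stimuli)
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error
    return V

# Phase 1: light alone predicts the reward; Phase 2: light + tone together
trials = [("light",)] * 50 + [("light", "tone")] * 50
V = rescorla_wagner(trials)
print(V["light"], V["tone"])  # tone's association stays near zero: it's "blocked"
```

The tone never acquires any association, because by the time it appears, the light already accounts for the whole prediction.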

You might have noticed all the occurrences of “may” and “could” above.  G&J admit that much of this is speculative.  There’s no doubt that synaptic processes are supported by intracellular machinery, and exosome vesicles do exist.  But the idea that engram states are maintained epigenetically needs, I think, a lot more fleshing out, not to mention evidence.  And while the exosomes could conceivably be carrying molecular level memory type information, it seems more likely they’re carrying more banal metabolic signaling to surrounding glia.

Still, G&J note that there is intense research going on in this area.  And it always pays to remember that life is a molecular phenomenon.  So only time will tell.

On the next topic, like many animal researchers, G&J cite the views of Bjorn Merker approvingly, notably the idea that consciousness is a low level process starting in the brainstem.  (A view I’ve critiqued before.)  This puts them partially on the same page as F&M (Feinberg and Mallatt) in The Ancient Origins of Consciousness.  In the last post, I noted that G&J come to similar conclusions as F&M on when consciousness evolved.  In reality, they use F&M’s review of the research, as well as Merker’s material, in reaching their conclusions.

But this leads to a problem.  G&J have a different definition of consciousness than F&M.  F&M divide consciousness into three types: exteroceptive consciousness, interoceptive consciousness, and affective consciousness.  G&J’s definition seems to most closely align with F&M’s for affective consciousness.

But F&M’s embrace of brainstem consciousness (at least in pre-mammalian species) seems to hinge on the fact that they see exteroceptive and interoceptive processing as sufficient for consciousness.  G&J don’t; for them, affective processing is necessary.  But F&M’s data indicate that the type of learning necessary to demonstrate the presence of affects only happens in the forebrain.

The reason why pallial function in anamniotes is such a tough problem is that a fish whose pallium has been removed or destroyed can see, catch prey, and seems to act normally. However, such a fish cannot learn from its experiences or from the consequences of its actions. Nor is it able to learn the locations of objects in space. This is a memory problem, and the medial and dorsal pallia of vertebrates are known to store memories.

Feinberg, Todd E. The Ancient Origins of Consciousness: How the Brain Created Experience. The MIT Press. Kindle Edition.

On the one hand, forebrains go back to early vertebrates, so affective consciousness is preserved.  But in fish and amphibians, much of the exteroceptive and interoceptive processing is separate from affective processing.  This isn’t much of an issue for F&M, but it could be seen as weakening G&J’s conclusion that these early vertebrates had the same unified consciousness as later species.

Third topic: late in the book, G&J note the existence of something I’d missed until now: unicellular organisms, the warnowiid dinoflagellates, which have something called an “ocelloid”, a structure that appears to be something like a camera style eye, much more sophisticated than the typical light sensors that exist at this level.  However, these protists tend not to survive outside their natural habitat, which makes them difficult to study in laboratory conditions.  So the function of this structure is largely conjecture.  Still, if it is an eye, what kind of processing in a unicellular organism might such a complex structure be supporting?

Finally, G&J touch on the topic of machine consciousness.  Somewhat refreshingly for people who use the “embodied” language, they don’t rule out technological consciousness.  However, they note that it could be very different from evolved consciousness in animals.  Importantly, they see UAL as an evolutionary marker for consciousness in biology.  Its existence in technological systems may not necessarily indicate the presence of machine consciousness.  And they expect machine consciousness to require a body, but they allow it could be a virtual one.

As always, my take on these things is it depends on how we define “consciousness”.

As noted above, there is a lot more in this book, some of which I might touch on later.  But I think this is it for now.

Finally, and I should have linked to this in the last post, if you want a condensed version of their thesis, and don’t mind wading through some technical material, their paper on unlimited associative learning is online.

What do you think of the idea of epigenetic engrams?  Or the various definitional issues?  Or G&J’s approach overall?

Unlimited associative learning

This is part of a series on Simona Ginsburg and Eva Jablonka’s book: The Evolution of the Sensitive Soul, a book focused on the evolution of minimal consciousness.  This particular post is on the capabilities Ginsburg and Jablonka (G&J) see as necessary to attribute consciousness to a particular species.  The capability they focus on is learning, but not just any kind of learning, a type of sophisticated learning they call unlimited associative learning.

There are many different types of learning, but they can be grouped into two broad categories: non-associative learning and associative learning, with associative being the more sophisticated.

Non-associative learning includes habituation and sensitization.  Habituation is when a sensory receptor responds less frequently to a constant or repetitive stimulus.  It’s why you don’t feel your clothes against your skin (until I called your attention to it) or the pressure of the piece of furniture you’re sitting in against your body.

Sensitization is the opposite.  If there is no stimulus for a long time, the sensory neuron is more likely to respond when one suddenly arrives.  Or if it arrives in an unexpected pattern (such as the feeling of something crawling on your leg).  Or if the previous stimulus was painful, then a relatively mild stimulus may still lead to an intense reaction.
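Both effects can be caricatured with a decaying response strength.  This is purely my own toy illustration (the function name and parameters are arbitrary), modeling habituation to closely spaced stimuli and recovery of sensitivity after a long gap:

```python
def respond(stimulus_times, base=1.0, decay=0.7, recovery=10.0):
    """Return a response strength per stimulus: repeated stimuli
    habituate the response, and a long quiet gap restores it."""
    strength, responses, last = base, [], None
    for t in stimulus_times:
        if last is not None and t - last >= recovery:
            strength = base          # long gap: sensitivity restored
        responses.append(strength)
        strength *= decay            # repeated stimulus: habituation
        last = t
    return responses

r = respond([0, 1, 2, 3, 20])
print(r)  # responses shrink (1.0, 0.7, ~0.49, ~0.34), then recover to 1.0 after the gap
```

Real sensitization is richer than a simple reset, but the basic shape (diminishing response to repetition, renewed response after a lull) is captured.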

Non-associative learning takes place in all kinds of living systems, including unicellular organisms.  In animals, in the cases I described above, it actually takes place in the peripheral nervous system, although it also happens in the central nervous system.  More sophisticated learning is built on top of it.

Historically, associative learning has been divided into two categories: classical or Pavlovian conditioning, and operant or instrumental conditioning.

Classical conditioning is best exemplified by the case of Ivan Pavlov’s dogs.  Initially, in an experiment, the dogs would salivate when food was presented to them.  But if each time the food was presented, a bell was also rung, the dogs would start to salivate at the bell ring.  Eventually they would salivate at the ring even if no food was presented.  Classical conditioning is the association of two sensory stimuli, the conditioned stimulus (the bell) with the unconditioned stimulus (the food).

Operant conditioning involves association between an action and a reinforcement stimulus.  For example, if a rat in a cage, through random action and exploration, accidentally jumps on a lever, and a food pellet is released, the action (pressing the lever) becomes associated with a reinforcement stimulus (the food).  For it to be a reinforcement, the stimulus must involve some existing value for the organism, either attractive (like food) or aversive (like electrical shock).

G&J, somewhat pushing back against the traditional nomenclature, label classical conditioning “world learning”, because it involves association between external stimuli.  They label operant conditioning “self learning” because it involves associating an action by the self with reinforcement, a sensory stimulus.

  1. Non-associative learning
    1. Habituation
    2. Sensitization
  2. Associative learning
    1. Classical conditioning / World learning
    2. Operant conditioning / Self learning

G&J state that associative learning requires a brain.  So although we might see non-associative learning in creatures like ctenophores (comb jellies), we only see associative learning in creatures with some sort of central coordinating system.  That said, the definition of “brain” here is fairly liberal.  So many worm-like creatures with a slightly larger ganglion toward the front of their body seem to meet the standard.

(I found this brain requirement surprising, since classical conditioning is often said to be widespread.  But after reading G&J’s assertion, I tried to track down cases of classical conditioning in primitive organisms.  The main example was a starfish; G&J mention the one study showing it but dismiss it for methodological reasons.  They also briefly allude to studies finding it in unicellular organisms, but don’t seem to find those studies convincing.)

Primitive creatures generally only have what G&J call limited associative learning (LAL).  With LAL, the associations that form are relatively simple.  Although “relative” is a key word here, because even with LAL, things can get complex pretty fast.

But this isn’t the type of learning that signals minimal consciousness.  For that, we need a type of learning that allows associations between compound stimuli integrated across multiple modalities (hearing, sight, smell, etc) and complex combinations of motor actions.  When the capabilities for these types of associations start to arise, the possible combinations quickly increase exponentially, becoming virtually unlimited.

It is this type of learning, unlimited associative learning (UAL), that G&J see as a key indicator of minimal consciousness.  UAL requires sensory integration through multiple hierarchies, forming an integrated sensorium.  It also requires integration across possible motor systems, an integrated motorium.  And the sensorium and motorium become integrated with each other, through what G&J refer to as association units.  (G&J don’t use the words “sensorium” or “motorium”, but I find them helpful here to summarize a lot of detail.)

Each layer in the sensory hierarchies makes predictions based on the signals from lower layers.  The lower layers respond with prediction error signaling, making the communication between layers both feed forward and feed back in a recurrent fashion.  It’s with this sustained recurrent signaling that temporal thickness and synaptic plasticity are achieved, leading to memory and learning.  And when it spreads to motor systems, we get the global workspace effect.
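The core of that prediction-and-error loop can be caricatured in a few lines.  This is my own toy illustration of the general predictive scheme, not G&J’s model:

```python
def predictive_loop(signal, steps=50, lr=0.1):
    """A higher layer iteratively refines its prediction of a lower
    layer's signal, driven by the error the lower layer sends back."""
    prediction = 0.0
    for _ in range(steps):
        error = signal - prediction   # error signal sent up from the lower layer
        prediction += lr * error      # prediction adjusted and fed back down
    return prediction

p = predictive_loop(0.8)
print(p)  # converges toward the input signal, 0.8
```

The recurrence is what matters: the same prediction-error exchange, repeated over time, is what lets the higher layer settle into a stable model of what’s below it.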

It’s important to note that G&J do not claim that UAL is minimal consciousness, only that it is a key indicator of it.  In order to be capable of UAL, a species must have the underlying architecture, including the attributes listed in the last post.

However, UAL represents crucial capabilities that likely make minimal consciousness adaptive.  While it’s possible to see animals that, due to injury, pathology, or immaturity, show signs of minimal consciousness but aren’t capable of UAL, the healthy mature members of the species should be capable of it.  In this view, UAL is a key driver of the evolution of minimal consciousness.

In many ways, UAL resembles one of the criteria that Todd Feinberg and Jon Mallatt used for affective consciousness in their book, The Ancient Origins of Consciousness.  Feinberg and Mallatt called this criterion “global non-reflexive operant learning”.  (Although they didn’t elaborate on this, and I didn’t find them to necessarily be consistent with the “global” or “non-reflexive” part in the studies they cited.)

As many others do, G&J take issue with Feinberg and Mallatt dividing primary consciousness up into three separate divisions: exteroceptive, interoceptive, and affective consciousness.  For G&J, there is only one consciousness, which at any time might be focused on exteroceptive, interoceptive, or affective content.

That being said, G&J reach conclusions very similar to Feinberg and Mallatt’s on which species have minimal consciousness: all vertebrates, including fish, amphibians, reptiles, mammals, and birds, as well as many arthropods such as ants and bees, and cephalopods such as octopuses.

In the last post for this series, we’ll discuss some additional areas that G&J explore, and I’ll provide my thoughts on their overall approach.

What do you think of UAL (unlimited associative learning)?  Do you think it’s a valid mark of minimal consciousness?

The sensitive soul and the rational soul

I think examining the evolution of consciousness in animals helps shed light on it in humans.  Admittedly, there are difficulties.  Animals can’t self report using language, which limits just how much of their experience can be garnered from experiments.  Still, taking data from human studies and combining it with animal studies can provide a lot of insight.

One issue is that, in the absence of a precise definition of “consciousness”, there is no sharp line in evolution where everyone agrees that consciousness begins.  Scientists such as Joseph LeDoux, who seems inclined toward animal consciousness minimalism, and Antonio Damasio, who’s more inclined to see it as widespread, can agree on all the relevant facts, but disagree on how to interpret them.

This leads many of us to come up with hierarchies.  Those of you who’ve known me a while know mine:

  1. Reflexes and fixed action patterns
  2. Perceptions, representations of the environment, expanding the scope of what the reflexes are reacting to
  3. Volition, goal directed behavior, allowing or inhibiting reflexes based on simple valenced cause and effect predictions
  4. Deliberative imagination, sensory-action scenario simulations assessed on valenced reactions
  5. Introspection, recursive metacognition and symbolic thought

1 seems to apply to all living things, 2 to many animals, 3 to at least mammals and birds, and 4 to the more intelligent species, with 5, at least at present, only appearing to exist in humans.

But I’m far from the only one who’s come up with a hierarchy.  I highlighted LeDoux’s a while back.  Indeed, it appears to be an ancient tradition going back at least to Aristotle.  The ancient Greeks didn’t have a word for “consciousness”, but they did write about the soul.

(The Greek word for “soul” is “psyche”, which obviously is where we get the term “psychology” from, but its etymology is interesting.  It originally meant “to breathe”, what probably seemed like the primary difference between living and non-living things.)

Plato’s conception of the soul was something immaterial that survived death, which resonates with the conception in many religions.  Indeed, the word “soul” today is largely synonymous with the immortal soul of monotheistic theology.  A lot of the way the word “consciousness” is thrown around today seems like an unwitting code word for this version of the soul.

Aristotle’s conception was more materialistic.  Most people take him to regard the soul as part of the body and mortal.  (Although, per Wikipedia, there is apparently some controversy about it.)  And he had his own hierarchy back there in the 300s BC.

Hierarchy of Aristotle's versions of the soul
Image credit: Ian Alexander via Wikipedia (click through for source)


  1. The Nutritive Soul, enabling reproduction and growth
  2. The Sensitive Soul, enabling movement and sensation
  3. The Rational Soul, enabling reason and reflection

1 was labeled the “Vegetative” soul in the Wikipedia article on soul; it appears to apply to all living things.  2 applies to all animals.  3 is supposed to apply only to humans.

When I first read about this hierarchy years ago, it didn’t really work for me.  My issue is that many animals appear to be able to reason to at least some degree.  While debatable for fish, amphibians, or arthropods, all mammals and birds appear able to think through options and do short term planning.  This seemed like yet another trait taken as unique to humans but where the real difference is a matter of degree rather than any qualitative break.  Indeed, my thinking is that consciousness, if equated with baseline sentience, requires at least an incipient ability to reason.

However, I’m slowly making my way through Simona Ginsburg and Eva Jablonka’s The Evolution of the Sensitive Soul (which I was alerted to by Eric Schwitzgebel’s review).  Obviously the title refers to Aristotle’s hierarchy, and the goal is to explain what the authors call “minimal consciousness”, which they note is often referred to as “primary consciousness”, among other names.

And they equate minimal consciousness, sentience, with the sensitive soul.  However, they don’t exclude all reasoning from the sensitive soul.  (Indeed, their unlimited associative learning thesis, as I understand it, will require that it be there, but I haven’t reached that part of the book yet.)  G&J draw the line at symbolic reasoning, involving language, mathematics, art, etc.  That makes the rational soul equivalent to my own level 5 above.

I haven’t read Aristotle directly, so I don’t know if G&J’s characterization is closer to Aristotle’s meaning than Wikipedia’s version.  And I’m not sure “rational soul” is the most accurate way to describe it.  And the sensitive soul itself has vastly varying capabilities across species.  But minds, both animal and human, are complex things, and trying to boil down the difference to a single phrase is a lost cause anyway.

So, in this framework, all living things, including plants and simple animals, have a nutritive soul, many animals (but not all) have a sensitive soul, and humans a rational soul.  G&J’s goal is to explain the sensitive soul.

What do you think of Aristotle’s hierarchy?  Or G&J’s interpretation of it?  I’m still inclined to use my own more detailed hierarchy (which admittedly still vastly oversimplifies things), but is Aristotle’s easier to follow?

For animal consciousness, is there a fact of the matter?

Peter Carruthers has been blogging this week on the thesis of his new book, Human and Animal Minds: The Consciousness Question Laid to Rest.  I mentioned Carruthers’ book in my post on global workspace theory (GWT), but didn’t get into the details.  While I had been considering taking a fresh look at GWT, his book was the final spur that kicked me into action.

Carruthers used to be an advocate for higher order theories (HOT) of consciousness.  He formulated the dual content version that I thought was more plausible.  As an advocate for HOT, he seemed skeptical of animal consciousness.  But in recent years, he’s abandoned HOT in favor of GWT: the idea that conscious content is the result of processes that have won the competition to have their results globally broadcast to systems throughout the brain.

Most GWT proponents will admit that it’s a theory of access consciousness, that it doesn’t directly address phenomenal consciousness, which usually isn’t seen as a problem because most people in this camp see them as the same thing, with phenomenal consciousness being access consciousness from the inside.   In other words, the idea that phenomenal consciousness is something separate and apart from access consciousness is rejected.

Carruthers isn’t completely outside of this view, but his is a bit more nuanced.  He sees phenomenal consciousness as a subset of access consciousness, the portion of it that includes nonconceptual content, content that is irreducible, such as the color yellow.  (Of course, objectively the content of yellow is reducible to patterns of neural spikes originating from M and L cones in the retina, but only the irreducible sensation of yellow makes it into the workspace.)  This is in contrast to conceptual data, such as the perception of a dog, that is reducible to more primitive experience.

So Carruthers sees phenomenal experience as globally broadcast nonconceptual content, in humans.  Why the stipulation at the end?  He points out that phenomenal experience is inherently a first person account, in that discussing it is basically an invitation for each of us to access our own internal experience.

Asking whether another system has that same internal experience is asking how much like us they are.  Other species may have processes that resemble our global broadcasting mechanisms, to greater or lesser extents, and their collection of competing and receiving processes may resemble our own, again to a greater or lesser extent.  In both cases, the farther we move away from humans in taxonomy, the less like us they are.

Which means that no other species will have the exact same components of our experience.  Whether what they have amounts to phenomenal experience, our first person experience, depends on which aspects of that experience we judge to be essential.  In other words, there isn’t a fact of the matter.

I pointed out to Carruthers that this also applies to many humans, notably brain injured patients, whose global broadcasting mechanism or collection of competing and receiving processes no longer match that of a common healthy human.  Carruthers, to his credit, bites this bullet and acknowledges that there isn’t a fact of the matter when it comes to whether human infants or brain injured patients are phenomenally conscious.

Carruthers’ overall point is that it doesn’t matter, because nothing magical happens at any stage.  Nothing changes.  There are just capabilities that are either present or absent.  In his view, the focus on consciousness is a mistake.  Broadly speaking, I think he’s right.  There is no fact of the matter.  Consciousness is in the eye of the beholder.

But it’s worth discussing why that’s so, arguably the reason why anything ever fails to be a fact of the matter: ambiguity.  In this case, ambiguity about what we mean by terms like “phenomenal experience”, for it to be “like something”.  “Like” to what degree?  And what “thing”?

As soon as we try to nail down a specific definition, we run into trouble, because no specific definition is widely accepted.  The vague terminology masks wide differences.  It refers to a vast, hazy, and inconsistent collection of capabilities we’ve all agreed to put under one label, but without agreeing on the specifics.  It’s like lawmakers who can’t agree on precisely what a law should say, so instead write something vague that can be agreed on, and leave it to the courts to hash out later.

And yet, I think we can still recognize the ways various species process information will be similar to the way we do, to varying degrees.  As Carruthers notes, there is no magical line, no point where we can clearly say consciousness begins.  But great apes are a lot closer to us than dogs, which are closer than mice, which in turn are closer than frogs, fish, etc, all of which are much closer than plants, rocks, storm systems, or electrons.

And there’s something to be said for focusing on systems that do have irreducible sensory and affective content that are globally integrated into their processing.  This matches the definition many biologists use for primary consciousness.  Primary consciousness appears to be widespread among mammals and birds, and possibly among all vertebrates and arthropods.

But primary consciousness omits aspects of our experience many will insist are essential, such as metacognitive self awareness or imaginative deliberation, capabilities that dramatically expand our appreciation of the contents of primary consciousness.  Such a view dramatically reduces the number of species that are conscious, perhaps only to humans and maybe some great apes.  Which view is right?  To Carruthers’ point, there is no fact of the matter.

Incidentally, even primary consciousness gets into definitional difficulties.  For example, fish and amphibians can be demonstrated to have both sensory and affective content, but the architecture of their brains makes it unclear just how integrated the affective contents are with much of the sensory content.  Does this then still count as primary consciousness?  I personally think the answer is yes since there is at least some integration, but can easily see why many might conclude otherwise.

What do you think?  Is Carruthers being too stingy in his conclusion?  Is there a way we can establish a fact of the matter we can all agree on?  Or is the best we can do is recognize the partial and varying commonalities we have with other species?

Peter Carruthers on the problems of consciousness

Peter Carruthers is posting this week at The Brains Blog on his new book, Human and Animal Minds, which I mentioned in my post on global workspace theory.  His first post focuses on two issues: latent dualism and terminological confusion.

I think he’s right on both counts.  On the latent dualism issue, I’m reminded of something Elkhonon Goldberg said in his book on the frontal lobes: The New Executive Brain:

Why, then, have neuroscientists, and certainly the general public, been so committed to the concept of consciousness and to the axiomatic assumption of its centrality in the workings of the mind? My answer to this question is shockingly embarrassing in its implications: because old gods die hard. Instead of representing a leap forward, the quest for the mechanisms of consciousness represents a leap backward. The dualism of body and soul has been rejected in name but not in substance. We no longer talk about soul; we now call it consciousness, just as in some circles people no longer talk about creation, they talk about “intelligent design.” We may feel embarrassed by certain old, tired explanatory constructs, and feel intellectually obligated to discard them, but they are often too ingrained for us to truly purge them from our own mental makeup. We give them different names and sneak them right in through the back door. Like many recent converts, we continue to honor the old gods in secret—the god of soul in the guise of consciousness.

Goldberg, Elkhonon. The New Executive Brain: Frontal Lobes in a Complex World (pp. 35-36). Oxford University Press. Kindle Edition.

Carruthers doesn’t seem to go quite that far, but he does note that tacit dualism makes people take thought experiments like zombies and Mary’s room far more seriously than they should.  Amen.

(I should note that Goldberg, despite his skepticism, summarily describes a neural theory of consciousness that is essentially the global workspace theory.)

On the terminological issue, Carruthers makes distinctions between different meanings of the word “conscious”, distinguishing between:

  1. wakefulness
  2. perception of things in the environment
  3. access consciousness
  4. phenomenal consciousness

For him, it’s not controversial that animals have 1-3, but 4 is more questionable.  In a comment, I challenged him that the notion that access and phenomenal consciousness are something other than different perspectives on the same thing is itself latent dualism, and that we should expect phenomenal consciousness to be present to the extent access consciousness is.

He responded that he’ll address this in later posts this week.  Having read large sections of his book, I’m pretty familiar with what his answer will be.  (And he alludes to it in his response.)  But I’ll hold off commenting until he does address it.

What do you think of his points?  Or of Goldberg’s?

The problem of animal minds

Joseph LeDoux has an article at Nautilus on The Tricky Problem with Other Minds.  It’s an excerpt from his new book, which I’m currently reading.  For an idea of the main thesis:

The fact that animals can only respond nonverbally means there is no contrasting class of response that can be used to distinguish conscious from non-conscious processes. Elegant studies show that findings based on non-verbal responses in research on episodic memory, mental time travel, theory of mind, and subjective self-awareness in animals typically do not qualify as compelling evidence for conscious control of behavior. Such results are better accounted in “leaner” terms; that is, by non-conscious control processes. This does not mean that the animals lacked conscious awareness. It simply means that the results of the studies in question do not support the involvement of consciousness in the control of the behavior tested.

LeDoux makes an important point.  We have to be very careful when observing the behavior of non-human animals.  It’s very easy to see behavior similar to that of humans, and then assume that the same conscious states humans have with that behavior also apply to the animal.

On the other hand, and I’m saying this as someone who hasn’t yet finished his book, I think LeDoux might downplay the results in animal research a bit too much.  It does seem possible to identify behavior in humans that requires consciousness, such as dealing with novel situations, making value trade-off decisions, or overriding impulses, and then deduce that the equivalent behavior in animals also requires it.

LeDoux also makes an excellent point about terminology.  There are wide variances in what we can mean by the word “consciousness”.  In particular, he discusses a distinction between noetic and autonoetic consciousness.  Noetic consciousness appears to be consciousness of the environment and of one’s body.  Autonoetic appears to be consciousness of one’s own thoughts.  He describes the autonoetic variety as providing the capability of mental time travel.

I’m not sure about this distinction, but in many ways it seems similar to the distinction between primary consciousness and metacognitive self awareness.  This always brings to mind a hierarchy I use to think about the various capabilities and stages:

  1.  Reflexes: fixed action patterns, automatic responses to stimuli.
  2. Perceptions: predictive models of the environment built with sensory input, expanding the scope of what the reflexes can react to.
  3. Attention: prioritization of what the reflexes react to, including bottom-up attention (reflexive prioritization) and top-down attention (prioritization from the next layer).
  4. Imagination / sentience: simulations of sensory-action scenarios to resolve conflicts among reflexes, resulting in some being allowed and others inhibited.  It is here that reflexes become feelings: dispositions to act rather than automatic actions.
  5. Metacognition: awareness and assessment of one’s own cognition, enabling metacognitive self awareness, symbolic thought such as language, and human level intelligence.

Noetic consciousness (if I’m understanding the term correctly) would seem to require 1-4.  Autonoetic might only come around with 5.  Although 4 enables the mental time travel LeDoux discusses, so this match may not be a clean one.  If I end up buying into the noetic vs autonoetic distinction, I might have to split 4 up.

But LeDoux’s point is that behavior precedes consciousness, and that’s easy to see using the hierarchy.  Unicellular organisms are able to engage in approach and avoidance behavior with only 1, reflexes.  The others only come much later in evolution.  It’s easy to see behavior driven by the lower levels and project the full hierarchy on them, because it’s what we have.

All of which is to say that I think LeDoux is right: arguing about whether animals are conscious or not, as though consciousness is something they either have or don’t have, isn’t meaningful.  The real questions are how conscious they are and what the nature of that consciousness is.

It’s natural to assume it’s the same as ours.  It’s part of the built-in empathetic machinery we have as a social species.  But just because anthropomorphism is natural doesn’t mean it’s right.  Science demands we be more skeptical.

Unless of course I’m missing something?

Detecting consciousness in animals and machines, inside-out

An interesting paper came up in my feeds this weekend: Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach.  The authors put forth a definition of consciousness, and then criteria to test for it, although they emphasize that these can’t be “hard” criteria, just indicators.  None of them individually definitively establishes consciousness.  Nor does the absence of any one indicator rule it out.  But cumulatively their presence or absence can make consciousness more or less likely.

Admitting that defining consciousness is fraught with issues, they focus on key features:

  1. Qualitative Richness: Conscious content is multimodal, involving multiple senses.
  2. Situatedness: The contents of consciousness are related to the system’s physical circumstances.  In other words, we aren’t talking about encyclopedic type knowledge, but knowledge of the immediate surroundings and situation.
  3. Intentionality: Conscious experience is about something, which unavoidably involves interpretation and categorization of sensory inputs.
  4. Integration: The information from the multiple senses is integrated into a unified experience.
  5. Dynamics and stability: Despite things like head and eye movements, the perception of objects is stabilized in the short term.  We don’t perceive the world as a shifting, moving mess.  Yet we can detect actual dynamics in the environment.  The machinery involved in this is prone to generating sensory illusions.

In discussing the biological function of consciousness, the authors focus on the need of an organism to make complex decisions involving many variables, decisions that can’t be adequately handled by reflex or habitual impulses.  They don’t equate consciousness with this complex decision making, but with the “multimodal, situational survey of one’s environment and body” that supports it.

This point seems crucial, because the authors at one point assert that the frontal lobes are not critical for consciousness.  Of course, many others assert the opposite.  A big factor is whether frontal lesions impair consciousness.  There seems to be widespread disagreement in the field about this, but at least some of it may hinge on the exact definition of consciousness under consideration.

The authors then identify key indicators:

  1. Goal directed behavior and model based learning:  Crucially, the goal must be formulated and envisioned by the system.  People like sex because it leads to reproduction, but reproduction is a “goal” of natural selection, not necessarily of the individuals involved, who often take measures to enjoy sex while frustrating the evolutionary “purpose”.  On the other hand, formulating a novel strategy to woo a mate would qualify.
  2. Brain anatomy and physiology: In mammals, conscious experience is associated with thalamo-cortical systems, or in other vertebrates with their functional analogs, such as the nidopallium in birds.  But this criterion largely breaks down with simpler vertebrates, not to mention invertebrates or artificial intelligence.
  3. Psychometrics and metacognitive judgments: The measurable ability of a system to detect and discriminate objects and, where present, to assess its own knowledge.
  4. Episodic Memory:  Autobiographical memory of events experienced at particular places and times.
  5. Illusion and multistable perception: Susceptibility to sensory illusions (such as visual illusions), a consequence of intentionality, the building of perceptual models.
  6. Visuospatial behavior: Having a stable situational survey despite body movements.
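The authors’ cumulative, soft-evidence approach can be illustrated with a toy sketch.  This is my own illustration, not anything from the paper: each indicator observed as present nudges a plausibility score up, each observed as clearly absent nudges it down, and untested indicators leave it unchanged.  No single indicator decides the matter.

```python
# Toy sketch (my own, not the paper's method): the six indicators treated as
# soft evidence rather than hard criteria.

INDICATORS = [
    "goal-directed behavior / model-based learning",
    "thalamo-cortical anatomy or functional analog",
    "psychometrics / metacognitive judgments",
    "episodic memory",
    "illusion / multistable perception",
    "visuospatial behavior",
]

def plausibility(observations):
    """observations maps an indicator name to True (present),
    False (clearly absent), or None / missing (untested)."""
    score = 0
    for name in INDICATORS:
        status = observations.get(name)
        if status is True:
            score += 1      # presence makes consciousness more likely
        elif status is False:
            score -= 1      # clear absence makes it less likely
        # untested indicators contribute nothing either way
    return score

# Hypothetical bird study: three indicators observed, the rest untested
bird = {
    "goal-directed behavior / model-based learning": True,
    "thalamo-cortical anatomy or functional analog": True,  # nidopallium
    "episodic memory": True,
}
print(plausibility(bird))  # prints 3
```

The point the sketch captures is that the output is a graded plausibility, not a verdict: a high score makes consciousness more likely, a low one less likely, and no single present or absent indicator settles the question.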

I’m not a fan of relying too much on 2, specific anatomy, at least outside cases of assessing whether someone is still conscious after a brain injury.  As I noted in the post on plant consciousness, I think focusing on capabilities keeps us grounded but still open-minded.

I don’t recall the authors making this connection, but it’s worth noting that the same neural machinery is involved in both 1, goal planning, and 4, episodic memory.  We don’t retrieve memories from a recording, but imaginatively simulate and reconstruct past events, which is why memory can be so unreliable, but also why we can fit a lifetime of memories in our brains.

I was initially skeptical of the illusion criterion in 5, but on reflection it makes sense.  Experiencing a visual or other sensory illusion means you are accessing a representation, even if not a correct one, so a system showing signs of that experience does indicate intentionality, the aboutness of experience.

The authors spend some space assessing IIT (Integrated Information Theory) in relation to “the problem of panpsychism”.  They view panpsychism as cheapening the concept of consciousness to the point where the word loses its usefulness, and see IIT as “underconstrained” in a manner that leads to it.  (I saw a comment the other day that IIT gets cited as much in neuroscience papers as other theories like GWT, but at least in my own personal survey, most of the citations of IIT seem to be criticisms.)

Finally, the authors look at modern machine learning neural networks and conclude that they currently show no signs of consciousness.  They note that machines may have alternate strategies for accomplishing the same thing as consciousness, which raises the question of how malleable we want the word “consciousness” to be.

There’s a lot here that resonates with the work surveyed by Feinberg and Mallatt, which I’ve reported on before, although these indicators seem a bit less concrete than F&M’s.  They might better be viewed as guidelines for the development of more specific experimental criteria.

Of course, if you don’t buy their definition of consciousness, then you may not buy their criteria for indicators.  But this is always the problem with scientific studies of ambiguous concepts.

So the question is, do you buy their description?  Or the resulting indicators?

h/t Neuroskeptic

Do boiling crawfish suffer?

Boiled crawfish.
Image credit: Giovanni Handal via Wikipedia

This Easter I visited one of my cousins and, as is tradition for a lot of people this time of year, we had a crawfish boil.  Eating boiled crawfish (crayfish for you non-Cajuns) is an ever-present activity in southern Louisiana, at least when they’re in season, and I’ve had my share over the years.  Although for me it’s mostly a social thing, because I can take or leave crawfish as a food.

Anyway, it had been a while since I observed the actual cooking process.  When the squirming, wriggling mass of crawfish is lowered into the boiling water, I’ve always had a moment of dread and mortification, wondering how much these creatures are suffering in their final moments, and how long they remain alive in that pot.

When I was a boy, I mentioned this once or twice, and was teased for it by both adults and other kids, for essentially being concerned about the welfare of “mud bugs.”  At the time I accepted this as a correction for attributing too much intelligence and feeling to these creatures.  But the disquiet each time I saw it never went away, although I eventually learned to keep my mouth shut.

In retrospect, after seeing other kids get the treatment over the years, I now see the teasing as a defensive reaction.  No one wants to consider that we may be subjecting these creatures to unconscionable suffering.  Far easier to conclude that they have no real sentience, and to squash any sentiment that they might, particularly in kids who might go on to ask difficult questions.

Pain in crustaceans such as crawfish, as well as invertebrates overall, is a difficult issue.  The evolution of vertebrates and invertebrates diverged from each other long before central nervous systems came along, so many of the structures we associate with cognition and pain are either radically different or missing.

Even in vertebrates, we have to be careful.  Vertebrates have specialized nerve cells throughout their peripheral nervous system called nociceptors, which are sensitive to tissue damage.  Signals from these nociceptors are rapidly relayed to the spinal cord and brain, where they usually lead to automatic responses such as withdrawal reflexes or avoidance behavior, as well as changes in heart rate, breathing, blood pressure, and other metabolic functions.

But, as counter-intuitive as it sounds, nociception by itself is not pain.  Pain is a complex emotional mental state.  Neurological case studies show that in humans it happens in the forebrain, the thalamo-cortical system, where the right kind of lesions on pathways to the anterior cingulate cortex can knock it out.  This means that the processing happening in the brainstem is below the level of consciousness, and that the behavior associated with it, when seen in other species, is not by itself an indicator of conscious pain.

This is an important point, because a lot of the material out there confuses nociception with pain, citing things like protective motor reactions and avoidance behavior as evidence for pain.  But pain is a higher cognitive state.  To establish that it’s present requires demonstrating that the animal can engage in nonreflexive operant learning and value trade-off reasoning.

All vertebrates appear to display at least incipient levels of this more sophisticated behavior, indicating that all vertebrates feel pain.  Although in the case of fish, many species are missing a type of nociceptive fiber, C-fibers, which transmit the signals that lead to the long, burning type of pain associated with prolonged suffering.  These fish appear to feel the sharp pain when an injury is incurred, but not the long, burning pain that land animals experience.

However, nociceptors haven’t been found in most invertebrates, of either the fast sharp variety or the long burning kind, which has led many to conclude that they don’t feel pain.  Yet many invertebrates do show reflexive reactions similar to the ones associated with nociception in vertebrates, which suggests they have alternative interoceptive mechanisms for accomplishing similar results.

Perhaps a more difficult issue is whether they show any signs of the cognitive abilities required for pain in vertebrates.  Todd Feinberg and Jon Mallatt, whose book, The Ancient Origins of Consciousness, is my go-to source for this sort of thing, list crayfish as demonstrating global operant learning and behavioral trade-offs.

Following the citation trail, the paper that reaches this conclusion shows that crayfish, while having a pretty limited repertoire of behaviors, can nonetheless inhibit reflexive responses, and change responses depending on value-based calculations.  This is pretty much the same capability that, in vertebrates, is associated with the capacity to experience affective states, such as pain.

That would seem to indicate that crawfish, while possibly not experiencing pain as we understand it, are nonetheless in distress.

I read somewhere that lobsters being boiled can live for up to three minutes.  (The experiments to figure that out can’t have been pretty.)  Hopefully, crawfish, being smaller, die quicker.  And hopefully they lose consciousness quickly.

Some countries ban boiling crustaceans alive, requiring that cooks kill the animal before boiling it.  Apparently there’s a device you can get that will shock the head of a lobster, killing it instantly, or at least rendering it unconscious.  Unfortunately, even if it’s anatomically feasible, using something like that on the hundreds of crawfish about to go into a pot isn’t very practical.  There are just too many packed too closely together.  Some people advocate freezing first, but it’s not clear that’s a humane way to go either, and doing so with a large cache of crawfish is, again, not practical.

So even if people could be convinced that there was suffering to be concerned about here, I doubt there would be much change in the technique, although it might lead to fewer people wanting to eat them in the first place.

There is also the fact that lobsters have only about 100,000 neurons, less than half what fruit flies and ants have, and only about a tenth of what bees or cockroaches have.  I couldn’t find anywhere how many neurons crawfish have, but I suspect it’s comparable to lobsters.  In other words, the resolution and depth of their experience of the world is extremely limited, far more so than that of many other animals whose welfare we typically disregard.

How much of a difference should that make?  Is it right to think of them as conscious?  Does the fact that they themselves have no empathy and couldn’t return ours, matter?  How concerned about this should we be?  Should we follow the example of the countries that outlaw boiling lobsters alive?

Frans de Waal on animal consciousness

Frans de Waal is a well-known proponent of animals being much more like us than many people are comfortable admitting.  In this short two-minute video, he gives his reason for concluding that at least some non-human animals are conscious.  (Note: there’s also a transcript.)

de Waal is largely equating imagination and planning with consciousness, which I’ve done myself on numerous occasions.  It’s a valid viewpoint, although some people will quibble with it since it doesn’t necessarily include metacognitive self awareness.  In other words, it doesn’t have the full human package.  Still, the insight that many non-humans have imagination, whether we want to include it in consciousness or not, is an important point.

As I’ve noted many times before, I think the right way to look at this is as a hierarchy or progression of capabilities.  In my mind, this usually has five layers:

  1. Survival circuit reflexes
  2. Perception: predictive sensory models of the environment, expanding the scope of what the reflexes can react to
  3. Attention: prioritizing what the reflexes react to
  4. Imagination / sentience: action scenario simulations to decide which reflexes to allow or inhibit, decoupling the reflexes into feelings, expanding the scope of what the reflexes can react to in time as well as space
  5. Metacognition: theory-of-mind self awareness, symbolic thought

There’s nothing crucial about this exact grouping.  Imagination in particular could probably be split into numerous capabilities.  And I’m generally ignoring habitual decisions in this sketch.  The main point is that our feelings of consciousness come from layered capabilities, and sharp distinctions between what is or isn’t conscious probably aren’t meaningful.

It’s also worth noting that there are many layers to self awareness in particular.  A creature with only 1-3 will still have some form of body self awareness.  One with 4 may also have attention and affect awareness, each arguably another layer of self awareness.  Only with 5 do we get full-bore mental self awareness.

It seems like de Waal’s point about observing the capabilities in animal behavior to determine if they’re conscious will also eventually apply to machines.  And while machines will have their own reflexes (programming), those reflexes won’t necessarily be oriented toward their survival, which may prevent us from intuitively seeing them as conscious.  Lately I’ve been wondering if “agency” might be a better word for these types of systems, ones that might have models of themselves and their environment, but don’t have animal sentience.

Of course, the notion that comes up in opposition to this type of assessment is the philosophical zombie, specifically the behavioral variety, a system that can mimic consciousness but has no inner experience.  But if consciousness evolved, for it to have been naturally selected, it would have had to produce beneficial effects, to be part of the causal structure that produces behavior.  The idea that we can have its outputs without some version of it strikes me as very unlikely.

So in general, I think de Waal is right.  Our best evidence for animal consciousness lies in the capabilities they display.  This views consciousness as a type of intelligence, which I personally think is accurate, although I know that’s far from a universal sentiment.

But is this view accurate?  Is consciousness something above and beyond the functionality of the system?  If it is, then what is its role?  And how widespread would it be in the animal kingdom?  And could a machine ever have it?

Malcolm MacIver on imagination and consciousness

The latest episode of Sean Carroll’s podcast, Mindscape, features an interview with neuroscientist Malcolm MacIver, one well worth checking out for anyone interested in consciousness.

Consciousness has many aspects, from experience to wakefulness to self-awareness. One aspect is imagination: our minds can conjure up multiple hypothetical futures to help us decide which choices we should make. Where did that ability come from? Today’s guest, Malcolm MacIver, pinpoints an important transition in the evolution of consciousness to when fish first climbed on to land, and could suddenly see much farther, which in turn made it advantageous to plan further in advance. If this idea is true, it might help us understand some of the abilities and limitations of our cognitive capacities, with potentially important ramifications for our future as a species.

The episode is about 80 minutes long.  If your time is limited, there’s a transcript at the linked page.

MacIver largely equates imagination (the ability to plan, to think, to remember episodic memories, and to simulate possible courses of action) with consciousness.  I can see where he’s coming from.  I’ve toyed with that idea myself.  (I don’t use the word “imagination” in the linked post, but that’s what’s being discussed.)

But while I think imagination is an important component of consciousness, meeting a lot of the attributes many of us intuitively associate with it, it doesn’t appear to be the whole show.  This is one reason why I often talk about a hierarchy of consciousness:

  1. Reflexes: survival circuits, primal instinctive reactions to stimuli
  2. Perception: predictive models of the environment based on sensory input, increasing the scope of what the reflexes react to
  3. Attention: prioritization of what the reflexes react to
  4. Imagination / sentience: simulations of possible courses of action based on reflexive reactions, decoupling the reflexes so that they become affective feelings
  5. Metacognitive self awareness / symbolic thought

The consciousness of a healthy mature human contains this entire hierarchy.  Most vertebrates have 1-4, although as MacIver discusses, the imagination of fish is very limited, usually only providing a second or two of advance planning.  Land animals have more, although most can only plan a few minutes into their future.  The more intelligent mammals and birds can plan further.  But to plan weeks, months, or years in the future seems to require the volitional symbolic thought that only humans seem to possess.

But many of us, if presented with an animal who only has 1-3, will still regard it as conscious to at least some degree.  This is particularly true with humans who, due to brain pathologies, may lose 4 and 5.  The fact that they are still aware of their environment and can respond habitually or reflexively to things still triggers most people’s intuition of consciousness.

Which view is right?  Which layers must be present for consciousness?  I don’t think there’s a fact-of-the-matter answer.  Unless of course I’m missing something?

h/t James of Seattle