A correction: LeDoux’s actual views on instrumental learning in vertebrates

I have to make a correction.  In my post on LeDoux’s views on consciousness and emotions, I made the following statement:

Anyway, LeDoux states that there is “no convincing” evidence for instrumental behavior in pre-mammalian vertebrates, or in invertebrates.  In his view, instrumental behavior only exists in mammals and birds.

As it turns out, this is wrong.  In his hierarchy, he makes a distinction between instrumental learning that is habitual versus goal-oriented (action-outcome).  On my first pass reading his description of this, I assumed that a habit could only form after initial goal-oriented learning.  But while checking back on some details, I realized he actually describes learning that leads directly to habits without the goal-oriented stage.

In practice, an animal may engage in random trial and error behavior, some of which leads to a result that reinforces the behavior.  If repeated often enough, a habit develops.  Habitual learning can be distinguished from goal-oriented behavior by seeing what happens when the reward is later removed.  In goal-oriented behavior, the behavior quickly ends, but habits tend to persist for a while.  (Which of course is what a habit is all about.)

Habitual learning is much slower than the goal-oriented version, much more stimulus-response driven, far less flexible, but apparently it does happen.  I’ve dug around a bit in the literature, and it appears to be widely accepted.
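LeDoux’s habit versus action-outcome distinction maps loosely onto the reinforcement learning literature’s split between model-free and model-based control.  Here’s a toy sketch, my own illustration rather than anything from LeDoux, of how a reward devaluation test separates the two:

```python
# Toy sketch of the reward devaluation test: a model-free ("habit") agent
# caches action values, while a model-based ("goal-directed") agent
# re-evaluates each outcome against its current worth.
ACTIONS = ["press", "rest"]

def choose(values):
    return max(ACTIONS, key=lambda a: values[a])  # greedy choice

# Model-free agent: training caches a high value for pressing the lever.
q = {"press": 0.0, "rest": 0.0}
alpha = 0.5  # learning rate
for _ in range(20):  # pressing paid off throughout training
    q["press"] += alpha * (1.0 - q["press"])

# Model-based agent: stores action -> outcome, plus each outcome's worth.
model = {"press": "food", "rest": "nothing"}
worth = {"food": 1.0, "nothing": 0.0}

# Devaluation: the food is paired with illness, making it mildly aversive.
worth["food"] = -0.2

mb_choice = choose({a: worth[model[a]] for a in ACTIONS})  # stops at once
mf_choice = choose(q)                                      # habit persists
```

Only further unrewarded trials would erode the cached value, which fits the observation that habits persist for a while after the reward disappears.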

So to correct the statement above, LeDoux does see instrumental learning as existing in all vertebrates, not just mammals and birds.  However, it is the goal-oriented learning he doesn’t see as having been demonstrated in pre-mammalian vertebrates.  Fish, amphibians, and reptiles he sees as only having the habit forming version.

I’m not sure what to make of this habitual type of instrumental learning.  Habits by and large appear to be largely nonconscious, so would learning them be as well?  Of course, LeDoux doesn’t even see goal-oriented instrumental learning as conscious, so in his view this distinction only amounts to different levels of sophistication in nonconscious learning.

As I mentioned in the other post, Feinberg and Mallatt, in The Ancient Origins of Consciousness, do see instrumental learning as indicating what they call affect consciousness, aka sentience.  And the indications of instrumental learning in all vertebrates drive their conclusion that all vertebrates are sentient.

But Feinberg and Mallatt don’t get into the distinction between habit and goal-oriented instrumental learning.  So I don’t know whether this is a difference they overlooked, disagree with, or accept while still seeing even the habit-learning version as indicating affect consciousness.  A clue might be that, when deciding on their behavioral criteria for affect consciousness, they ruled out “persistence in pursuit of reward” as a criterion, because it “could reflect aroused but unconscious habits.”  (Emphasis added.)

A case could be made that even habit learning, if not habit persistence, requires a valence.  But both LeDoux and the literature make clear that this learning happens in a representation-free or model-free manner, which may not leave much room for it to fall even within primary consciousness.

A question then is, can goal-oriented behavior be demonstrated in fish, amphibians, reptiles, or invertebrates?  LeDoux doesn’t think so, and notes that habit and goal-oriented behavior look alike without explicit tests to distinguish them, although the rapidity of learning might provide clues.

So, this may complicate my new hierarchy, particularly the level where affects begin.  I’m going to have to give this some thought, and do some additional research, but I wanted to get the correction out.

Posted in Mind and AI | 34 Comments

Layers of consciousness, September 2019 edition

A couple of years ago, when writing about panpsychism, I introduced a five layer conception of consciousness.  The idea back then was to show a couple of things.

One was that very simple conceptions of consciousness, such as interactions with the environment, were missing a lot of capabilities that we intuitively think of as belonging to a conscious entity.

But the other was to show how gradual the emergence of all these capabilities was.  There isn’t a sharp objective line between conscious and non-conscious systems, just degrees of capabilities.  For this reason, it’s somewhat meaningless to ask if species X is conscious, as though consciousness is something they either possess or don’t.  That’s inherently dualistic thinking, essentially asking whether or not the system in question has a soul, a ghost in the machine.

I’ve always stipulated that this hierarchy isn’t itself any new theory of consciousness.  It’s actually meant to be theory agnostic, at least to a degree.  (It is inherently monistic.)  It allows me to keep things straight, and can serve as a kind of pedagogical tool for getting ideas across.  And I’ve always noted that it might change as my own understanding improved.

Well, after reading Joseph LeDoux’s account of the evolution of the mind (although I disagree with him on a number of important points), as well as going through a lot of papers in the last year, along with many of the conversations we’ve had, it’s become clear that my hierarchy has changed.

Here’s the new version:

  1. Reflexes and fixed action patterns.  Automatic reactions to sensory stimuli and automatic actions from innate impulses.  In biology, these are survival circuits which can be subject to local classical conditioning.
  2. Perception.  Predictive models built from distance senses such as vision, hearing, and smell.  This expands the scope of what the reflexes are reacting to.  It also includes bottom-up attention, meta-reflexive prioritization of what the reflexes react to.
  3. Instrumental behavior / sentience.  The ability to remember past cause and effect interactions and make goal driven decisions based on them.  It is here where reflexes start to become affects, dispositions to act rather than automatic action.  Top down attention begins here.
  4. Deliberation.  Imagination.  The ability to engage in hypothetical sensory-action scenario simulations to solve novel situations.
  5. Introspection.  Sophisticated hierarchical and recursive metacognition, enabling mental-self awareness, symbolic thought, enhancing 3 and 4 dramatically.

Note that attention has been demoted from a layer in and of itself to aspects of other layers.  It rises through them, increasing in sophistication as it does, from early bottom-up meta-reflexes, to deliberative and introspective top-down control of focus.

Note also that I’ve stopped calling the fifth layer “metacognition”.  The reason is a growing sense that primal metacognition may not be as rare as I thought when I formulated the original hierarchy, although the particularly sophisticated variety used for introspection likely remains unique to humans.

Some of you who were bothered by sentience being so high in the hierarchy might be happy to see it move down a notch.  LeDoux convinced me that what I was lumping together under “Imagination” probably needed to be broken up into at least a couple of layers, and I think sentience, affective feelings, start with the lower one, although they increase in sophistication in the higher layers.

I noted that mental-self awareness is in layer 5.  I don’t specify where body-self awareness begins in this hierarchy, because I’m not sure where to put it.  I think with layer 2, the system has to have a body-self representation in relation to the environment, so it’s tempting to put it there, but putting the word “awareness” at that layer feels misleading.  (I’m open to suggestions.)

It seems clear that all life, including plants and unicellular organisms, have 1, reflexes.

All vertebrates, arthropods, and cephalopods have 2, perception.  It’s possible some of these have a simple version of 3, instrumental behavior.  (Cephalopods in particular might have 4.)

All mammals and birds have 3.

Who has 4, deliberation, is an interesting question; LeDoux asserts only primates, but I wouldn’t be surprised if elephants, dolphins, crows, and some other species traditionally thought to be high in intelligence show signs of it.  And again, possibly cephalopods.

And only humans seem to have 5.

In terms of neural correlates, 1 seems to be in the midbrain and subcortical forebrain regions.  2 is in those regions as well as cortical ones.  LeDoux identifies 3 as being subcortical forebrain, although I suspect he’s downplaying the cortex here.  4 seems mostly a prefrontal phenomenon, and 5 seems to exist at the very anterior (front) of the prefrontal cortex.

Where in the hierarchy does consciousness begin?  For primary consciousness, my intuition is layer 3.  But the subjective experience we all have as humans requires all five layers.  In the end, there’s no fact of the matter.  It’s a matter of philosophy.  Consciousness lies in the eye of the beholder.

Unless of course, I’m missing something?  What do you think?  Is this hierarchy useful?  Or is it just muddying the picture?  Would a different breakdown work better?

Posted in Mind and AI | 73 Comments

Joseph LeDoux’s theories on consciousness and emotions

In the last post, I mentioned that I was reading Joseph LeDoux’s new book, The Deep History of Ourselves: The Four-Billion-Year Story of How We Got Conscious Brains.  There’s a lot of interesting stuff in this book.  As its title implies, it starts early in evolution, providing a lot of information on early life, although I didn’t find that the latter parts of the book, focused on consciousness and emotion, made much use of the information from the early chapters on evolution.  Still, it was fascinating reading and I learned a lot.

In the Nautilus piece I shared before, LeDoux expressed some skepticism about animal consciousness being like ours.  That seems to be a somewhat milder stance compared to the one in the book.  Here, LeDoux seems, at best, on the skeptical side of agnostic toward non-human animal consciousness.  The only evidence for consciousness he sees as unequivocal is self report, which of course only humans can provide.

In terms of consciousness theories, LeDoux regards Higher Order Theories (HOT) and Global Workspace Theories (GWT) as the most promising, but his money is on HOT, and he provides his own proposed theoretical extensions to it.  HOT posits that consciousness doesn’t lie in the first order representations made in early sensory regions, but in later stage representations that are about these first order ones.  In essence, to be conscious of a representation requires another higher order representation.

In typical HOT, these higher order representations are thought to be in the prefrontal cortex.  LeDoux attributes a lot of functionality to the prefrontal cortex, more than most neuroscientists.  Some of what he attributes I’ve more commonly seen attributed to regions like the parietal cortex.  But he presents information on the connections between various cortical and subcortical regions to the prefrontal cortex to back up his positions.

In the last post, I laid out the hierarchy I usually use to think of cognitive capabilities.  LeDoux has a similar hierarchy, which he discusses in a paper available online, although his is focused on types of behavior.  Going from simpler to more sophisticated:

  1. Species typical innate behavior
    1. Reflexes: Relatively simply survival circuits, centered on the brainstem regions
    2. Fixed Reaction Patterns: More complex survival circuits, often going through subcortical regions such as the amygdala
  2. Instrumental learned behavior
    1. Habits: Actions that persist despite lack of evidence of a good or bad consequence
    2. Action-outcome behaviors: Actions based on the remembered outcomes of past trial-and-error learning
    3. Nonconscious deliberative actions: Actions taken based on prospective predictions made on future outcomes
    4. Conscious deliberative actions: Deliberative actions accompanied and informed by conscious feeling states

On first review, I was unsure about the distinction between action-outcome and deliberative action.  Action-outcome seems like simply a less sophisticated version of deliberative action, particularly since episodic memory and imagined future scenarios are reputed to use the same neural machinery.  It seemed like just different degrees of what I normally label as imaginative planning.

But on further consideration, I can see a case that simply remembering a past pattern of activity and recognizing the same sequence is not the same thing as simulating new hypothetical scenarios, specific scenarios the animal has never experienced before.  Put another way, deliberative actions require taking multiple past scenarios and combining them in creative new ways.

Anyway, LeDoux states that there is “no convincing” evidence for instrumental behavior in pre-mammalian vertebrates, or in invertebrates.  In his view, instrumental behavior only exists in mammals and birds.

(This seems to contrast sharply with Feinberg and Mallatt in The Ancient Origins of Consciousness, who cite numerous studies showing instrumental learning in fish, amphibians, and reptiles.  One of the things I’m not wild about in LeDoux’s book is that, while he has bibliographic notes, there are no in-body citations, making it very difficult to review the sources of his conclusions.)

Deliberative action, on the other hand, LeDoux only sees existing in primates, with humans taking it to a new level.  Apparently in this hierarchy, consciousness only comes into the picture with the most sophisticated version.  I think “consciousness” in this particular context means autonoetic consciousness, that is, introspective self awareness with episodic memory.

(Endel Tulving, the scientist who proposed the concept of autonoesis, doesn’t see episodic memory developing until humans.  However, there is compelling behavioral evidence that it developed much earlier, and is present in, at least, all mammals and birds, although it’s admittedly far more developed in humans.)

On emotions, LeDoux starts by bemoaning the terminological mess that exists any time emotions are discussed.  He reserves the word “emotion” for conscious feelings, and resists its application to the lower level survival circuitry, which he sees as non-conscious.  He points out that a lot of published results which claim to show things such as fear in flies, are actually just showing survival circuit functionality.  He sees survival circuits as very ancient, going back to the earliest life forms, but emotions as relatively new, only existing in humans.

In LeDoux’s view, emotions, the conscious feelings, are cognitive constructions in the prefrontal cortex, predictions based on signals from the lower level survival circuitry, reinforced by interoceptive signals from the physiological changes that the lower level circuitry initiates: changes in blood pressure, heart rate, breathing, stomach muscle clenching, etc.

LeDoux’s views are similar to Lisa Feldman Barrett’s constructed emotion theory, and contrast with views such as Jaak Panksepp’s, who saw consciously felt emotion in the lowest level survival circuits.  Barrett also sees emotions as only existing in humans, although she makes allowances for animals having affects, simpler and more primal valenced feelings such as hunger, pain, etc.  I’m not sure what LeDoux’s position is on affects.  He doesn’t mention them in this book.

My view on all this is that I think LeDoux is too skeptical of animal consciousness.  It doesn’t seem like a human without language could pass his criteria.  However, as always, this may come down to which definition of “consciousness” we’re discussing.  Human level consciousness includes introspective self awareness and a far wider ranging imagination, enabled by symbolic thought such as language, than exists in any other species.  If we set that as the minimum, then only humans are conscious, but many will see that as too stringent.  In particular, I think a case could be made that it’s far too stringent for sentience.

On emotion, I do think LeDoux is right that the lower level survival circuitry, the reflexes and fixed reaction patterns in subcortical regions, shouldn’t be thought of as feeling states.  This means we shouldn’t take defensive behaviors in simpler animals as evidence for fear, or aggressive behavior as evidence for anger.

On the other hand, I think he’s wrong that feeling states don’t come around until sophisticated deliberative processing.  It seems like any goal-directed instrumental behavior, such as selecting an action for a particular outcome, requires that there be some preference for that outcome, some valence, input from the lower level survival circuits to the higher level ones that decide whether to pursue a goal or avoid an outcome.

This might be far simpler than what humans feel, perhaps only meeting Barrett’s sense of an affect rather than what Barrett and LeDoux see as the full constructed emotion, but they should be felt states nonetheless.  By LeDoux’s own criteria, that would include any animal capable of instrumental behavior, including mammals and birds.  Admittedly, there’s no guarantee these felt states are conscious ones, but again, definitions.

Comparing LeDoux’s book to Feinberg and Mallatt’s, I’m struck by how much of the disagreement actually does come down to definitions.  The real differences, such as which species are capable of operant / instrumental learning, seem like they will eventually be resolvable empirically.  The differences on consciousness may always be a matter of philosophical debate.

What do you think of LeDoux’s various stances?

Update 9-11-19: The statement above about LeDoux seeing instrumental learning only in mammals and birds isn’t right.  Please see the correction post.

Posted in Mind and AI | 117 Comments

The problem of animal minds

Joseph LeDoux has an article at Nautilus on The Tricky Problem with Other Minds.  It’s an excerpt from his new book, which I’m currently reading.  For an idea of the main thesis:

The fact that animals can only respond nonverbally means there is no contrasting class of response that can be used to distinguish conscious from non-conscious processes. Elegant studies show that findings based on non-verbal responses in research on episodic memory, mental time travel, theory of mind, and subjective self-awareness in animals typically do not qualify as compelling evidence for conscious control of behavior. Such results are better accounted in “leaner” terms; that is, by non-conscious control processes. This does not mean that the animals lacked conscious awareness. It simply means that the results of the studies in question do not support the involvement of consciousness in the control of the behavior tested.

LeDoux makes an important point.  We have to be very careful when observing the behavior of non-human animals.  It’s very easy to see behavior similar to that of humans, and then assume that the same conscious states humans have with that behavior also apply to the animal.

On the other hand, and I’m saying this as someone who hasn’t yet finished his book, I think LeDoux might downplay the results in animal research a bit too much.  It does seem possible to identify behavior in humans that requires consciousness, such as dealing with novel situations, making value trade-off decisions, or overriding impulses, and then deduce that the equivalent behavior in animals also requires it.

But LeDoux makes an excellent point.  There are wide variances in what we can mean by the word “consciousness”.  In particular, he discusses a distinction between noetic and autonoetic consciousness.  Noetic consciousness appears to be consciousness of the environment and of one’s body.  Autonoetic appears to be consciousness of one’s mental thoughts.  He describes the autonoetic variety as providing the capability of mental time travel.

I’m not sure about this distinction, but in many ways it seems similar to the distinction between primary consciousness and metacognitive self awareness.  This always brings to mind a hierarchy I use to think about the various capabilities and stages:

  1.  Reflexes: fixed action patterns, automatic responses to stimuli.
  2. Perceptions: predictive models of the environment built with sensory input, expanding the scope of what the reflexes can react to.
  3. Attention: prioritization of what the reflexes react to, including bottom up attention: reflexive prioritization, and top down attention: prioritization from the next layer.
  4. Imagination / sentience: sensory action scenarios to resolve conflicts among reflexes, resulting in some being allowed and others inhibited.  It is here where reflexes become feelings, dispositions to act rather than automatic action.
  5. Metacognition: awareness and assessment of one’s own cognition, enabling metacognitive self awareness, symbolic thought such as language, and human level intelligence.

Noetic consciousness (if I’m understanding the term correctly) would seem to require 1-4.  Autonoetic might only come around with 5.  Although 4 enables the mental time travel LeDoux discusses, so this match may not be a clean one.  If I end up buying into the noetic vs autonoetic distinction, I might have to split 4 up.

But LeDoux’s point is that behavior precedes consciousness, and that’s easy to see using the hierarchy.  Unicellular organisms are able to engage in approach and avoidance behavior with only 1, reflexes.  The others only come much later in evolution.  It’s easy to see behavior driven by the lower levels and project the full hierarchy on them, because it’s what we have.

All of which is to say that I think LeDoux is right: arguing about whether animals are conscious or not, as though consciousness is something they either have or don’t have, isn’t meaningful.  The real question is how conscious they are and what the nature of that consciousness is.

It’s natural to assume it’s the same as ours.  It’s part of the built-in empathetic machinery we have as a social species.  But just because anthropomorphism is natural doesn’t mean it’s right.  Science demands we be more skeptical.

Unless of course I’m missing something?

Posted in Zeitgeist | 17 Comments

Machine learning and the need for innate foundations

This interesting Nature article by Anthony M. Zador came up in my Twitter feed: A critique of pure learning and what artificial neural networks can learn from animal brains:

Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.

The behavior of the vast majority of animals is primarily driven by instinct, that is, innate behavior, with learning being more of a fine tuning mechanism.  For simple animals, such as insects, the innate behavior is almost the whole thing.  Zador points out, for example, that spiders are born ready to hunt.

By the time we get to mammals, learning is responsible for a larger share of the behavior, but mouse and squirrel behavior remains mostly innate.  We have a tendency to view ourselves as an exception, and we are, to an extent.  Our behavior is far more malleable, far more subject to revision from learning, than that of the typical mammal.

But a lot more human behavior is innate than most of us are comfortable acknowledging.  We have a hard time seeing it because we’re doing so from within the species.  We talk about “general” intelligence as though we were one.  But our intelligence is tightly wound to the needs of a social primate species.

I’m a bit surprised that the artificial intelligence field needs to be told that natural neural networks are not born blank slates.  Although rather than blank slate philosophy, this might simply represent the desire of engineers to ensure that the learning algorithm well has been thoroughly tapped.

But it seems like the next generation of ANNs will require a new approach.  Zador points out how limited our current ANNs actually are.

We cannot build a machine capable of building a nest, or stalking prey, or loading a dishwasher. In many ways, AI is far from achieving the intelligence of a dog or a mouse, or even of a spider, and it does not appear that merely scaling up current approaches will achieve these goals.

Nature’s secret sauce appears to be this innate wiring.  But a big question is where this innate wiring comes from.  It has to come from the genome, in some manner.  But Zador points out that the information capacity of the genome is far smaller, by several orders of magnitude, than what is needed to specify the wiring for a brain.

Although for simple creatures, like C. elegans worms, it is plausible for the genome to actually specify the wiring of their entire nervous system, in the case of more complex animals, particularly humans, it has to be about specifying rules for wiring during development.  Interestingly, human genomes are relatively small compared to many others in the animal kingdom, such as those of fish, indicating that the genome information bottleneck may actually have some adaptive value.
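A rough back-of-envelope comparison makes the gap concrete.  The figures below are commonly cited public estimates, not numbers taken from Zador’s paper:

```python
import math

# Rough public estimates (assumptions for illustration only).
GENOME_BP = 3.2e9                 # human genome length in base pairs
genome_bits = GENOME_BP * 2       # 2 bits per base pair (4 possible bases)

NEURONS = 8.6e10                  # ~86 billion neurons
SYNAPSES = 1.0e14                 # order-of-magnitude synapse count

# Naively naming one target neuron per synapse takes log2(NEURONS) bits.
bits_per_synapse = math.log2(NEURONS)
connectome_bits = SYNAPSES * bits_per_synapse

ratio = connectome_bits / genome_bits
print(f"connectome/genome information ratio: ~10^{math.log10(ratio):.0f}")
```

That’s five-plus orders of magnitude, consistent with the claim that the genome must encode generative wiring rules rather than an explicit wiring diagram.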

This means that brain circuits should show repeating patterns, a canonical circuit that many neuroscientists search for.  I’m reminded of the hypothesis of cortical columns, which seems similar to the idea of a canonical structure.  If so, it would only apply to the cortex itself.

But aside from the cerebellum, most of the neurons in the brain are in the cortex.  Of the 86 billion neurons in the human brain, 69 billion are in the cerebellum, 16 billion in the cortex, and all the subcortical and brainstem neurons fall in that last billion or so.  I would think the subcortical and brainstem regions are the ones with the most innate wiring, meaning that these are the regions that a lot of the genomic wiring rules would have to apply to, but detailed rules for a billion neurons seem easier to conceive of than for 86 billion.

Zador points out that, from a technological perspective, ANNs learn by encoding the structure of statistical regularities from the incoming data into their network.  In the animal versions, evolution could be viewed as an “outer” loop where long term regularities get encoded across generations, and an “inner” loop of the animal learning during its individual lifetime.  Although the outer loop only happens indirectly through the genome.
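The two loops can be sketched in a few lines.  This is a hypothetical toy of my own, not Zador’s model: a “genome” is just an innate starting value, the inner loop is a brief lifetime of error-driven learning, and the outer loop selects on post-learning fitness while inheriting only the innate value.

```python
import random

def fitness(genome, env_target, inner_steps=5, lr=0.3):
    # Inner loop: lifetime learning fine-tunes the innate value.
    w = genome
    for _ in range(inner_steps):
        w += lr * (env_target - w)  # simple error-driven learning
    return -abs(env_target - w)     # closer after learning = fitter

def evolve(env_target, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        # Outer loop: selection acts on post-learning performance,
        # but only the innate genome is inherited (no Lamarckism).
        pop.sort(key=lambda g: fitness(g, env_target), reverse=True)
        parents = pop[: pop_size // 2]
        pop = [p + rng.gauss(0, 0.2) for p in parents for _ in (0, 1)]
    return pop

final = evolve(env_target=2.0)
```

Over generations, selection pushes the innate values toward settings the inner loop can fine-tune quickly, loosely echoing the idea that long-term regularities get encoded in the genome while lifetime learning handles the rest.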

Anyway, it seems like there’s a lot to be learned about building a mind by studying how the human genome codes for and leads to the development of neural wiring.  Essentially, our base programming comes from this process.

But apparently it remains controversial that AI research still has things to learn from biological systems.  It’s often said that the relationship of AI to brains is like the one between planes and birds.  Engineers could only learn so much from bird flight.

But Zador points out that this misses important capabilities we want from an AI.  While a plane can fly faster and higher than any bird, it can’t dive into the water and catch a fish, swoop down on a mouse, or hover next to a flower.  Computer systems already surpass humans in many specific tasks, but fail miserably in many others, such as language, reasoning, common sense, spatial navigation, or object manipulation, that are trivially simple for us.

If Zador’s right, and it’s hard for me to imagine he isn’t, then AI research still has a lot to learn from biological systems.  Frankly, I’m a bit surprised this is controversial.  As in many endeavors, intractable problems often become easier if we just broaden the scope of our investigation.

Unless, of course, there’s something about this I’m missing?

Posted in Zeitgeist | 36 Comments

Ginger Campbell is doing a series on consciousness

I’ve highlighted Dr. Ginger Campbell’s excellent Brain Science Podcast before.  It’s an invaluable resource for anyone interested in the science of the brain.  Many of the books and concepts I’ve highlighted here over the years, I first heard about on her show.  Campbell, a medical doctor, pretty much focuses on neuroscience rather than philosophy, but many of the concepts she explores are relevant to anyone interested in the philosophy of mind.

This week, she started a series on consciousness, reviewing several books in the first episode, some of which I’ve discussed here, including Stanislas Dehaene’s Consciousness and the Brain, Sean Carroll’s The Big Picture, Antonio Damasio’s The Strange Order of Things, Todd Feinberg and Jon Mallatt’s Consciousness Demystified, and Daniel Dennett’s From Bacteria to Bach and Back.

She spends a lot of time on Dehaene’s book in particular, discussing a lot of stuff I didn’t have the space to get into in my post on it.  She also has different takes on some of these books from mine, some of which make me want to go back and look at them again.  And her summations of books I haven’t finished yet, such as Damasio’s and Dennett’s, were very helpful.

Highly recommended if you have the time and interest.  As of this posting, the episode doesn’t appear to be on her site yet, but it looks like this link works publicly, or you can follow the link she tweeted out.

Posted in Zeitgeist | 3 Comments

The reflex and the feeling

Stephen T. Asma and Rami Gabriel have an interesting article at Aeon on emotions.  Their main thesis is that many emotions are biological, universal, and rooted in evolution.  And that they arise through “the strata of consciousness”: the physiological, the experiential, and the conceptual.

They start off casting aspersions on computationalism, evolutionary psychology, and artificial intelligence research, but their main guns are focused on the ideas of Lisa Feldman Barrett and her constructionist view of emotions: that emotions are high level conceptual constructs, concepts that we socially learn.  In Barrett’s view, emotions are best thought of as predictions or interpretations of interoceptive sensations and of low level valences.

Asma and Gabriel’s view is closer to that of their mentor, the late Jaak Panksepp: that emotions arise in layers from sub-cortical regions.  They resist the idea that emotions are conceptually constructed, but insist that they’re built on primal physiological phenomena.  They discuss Panksepp’s seven primary emotions: FEAR, LUST, CARE, PLAY, RAGE, SEEKING and PANIC/GRIEF.

Similar to Panksepp, Asma and Gabriel do allow that there are differences between the low level “physiological” emotions and the higher level cognitive ones.  They don’t deny the more elaborate ones have a social aspect.

It’s worth noting that constructionists like Barrett admit primal drives like the Four Fs (fighting, fleeing, feeding, mating).  That puts her about halfway to Panksepp’s seven primary emotions.  We can equate mating with LUST, fighting with RAGE, fleeing with FEAR, and feeding with SEEKING.  It’s not hard to imagine mammals and birds having additional primal impulses to protect their young (CARE and PANIC/GRIEF) and an urge to prepare for complex activity (PLAY).

So a good part of this difference could be seen as definitional.  There is a difference in which components are seen as conscious, although even this could be seen in terms of how “consciousness” is defined.

Asma and Gabriel see the lower level sub-cortical impulses as conscious ones.  They draw on Ned Block’s distinction between access and phenomenal consciousness.  To them the lower level impulses are part of phenomenal consciousness, but not access consciousness, in other words, they’re not available for introspection or use in reasoning.  Barrett, on the other hand, sees these lower level impulses as survival circuits, unconscious reflexes.

My view is closer to Barrett’s.  A distinction has to be made between the reflex and the feeling generated by that reflex.  The reflex generally happens sub-cortically, being the primary impulses described above (and probably others).  But the feeling happens in the cortex.

I see the reflexes as unconscious and the feelings (affect, emotion, etc) as at least potentially conscious.  Asma and Gabriel’s use of the phenomenal consciousness concept here strikes me as unfortunate, showing the problems with Block’s distinction.  On this view, would any perceptions count as unconscious?

But I also think Barrett is a bit too absolutist in seeing emotions as only high level constructions.  Asma and Gabriel are right that emotions need to be viewed as multilevel phenomena, just not in a way that sees every level as conscious.

Unless of course I’m missing something?

Posted in Zeitgeist | 16 Comments

Recommendation: What Is Real?

Last week I started listening to a Sean Carroll podcast episode, an interview of Adam Becker on his book, What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics.  Before even finishing the episode, I downloaded Becker’s book and read it.

Becker starts out in the early decades of the 20th century, when quantum physics was still being worked out.  He takes us through the early controversies starting at the famous Solvay Conference.  He covers select details of the personal lives of many of the physicists involved, including the fact that many of them were Jewish and starkly affected by the deteriorating situation in Europe in the 1930s.

He describes the effects that World War II had on the physics community: shifting its center from Europe to America, and bringing a postwar influx of big money from the military and corporations.  A small community of philosophically minded physicists became a much larger and more pragmatic field, a shift that likely affected the field’s attitudes toward exploring the foundations of quantum physics.

It’s often said that the Copenhagen Interpretation of quantum mechanics is the default one, but Becker points out that there isn’t really any one interpretation that everyone agrees is the Copenhagen one.  Talk to physicists who accept it and you’ll get a fairly wide variety of viewpoints, with Niels Bohr, generally cited as the primary originator of the interpretation, himself providing a lot of contradictory commentary on it.

Measurement is a key event in most versions.  But what exactly is a measurement?  Bohr insisted that the language describing a measurement must be in ordinary language, and resisted any attempts to be more precise, such as describing the measuring apparatus from a quantum perspective.  To me, this implies an epistemic stance, about the limits on what we can know, but his language is reportedly pretty unclear on this.

From the beginning, there were people who weren’t satisfied with the Copenhagen explanation.  One of the first to provide an alternative was Louis de Broglie.  He proposed an alternative explanation at the 1927 Solvay conference involving a particle and a pilot wave, but wasn’t well prepared to answer criticisms and quickly withdrew it.

Albert Einstein, himself one of the early pioneers of quantum physics, was also not happy.  It’s commonly assumed that Einstein’s chief beef with quantum physics was its lack of determinism, but Becker points out that his real complaint was the lack of locality, and its implications for special and general relativity.  The famous EPR (Einstein-Podolsky-Rosen) paper in 1935 was mostly about the fact that quantum physics, as currently envisioned, involved “spooky action at a distance.”  This is also shown by the fact that, although Einstein knew about the de Broglie-Bohm pilot-wave interpretation, he wasn’t enthusiastic about it.  It preserved determinism, but still had non-local effects.

Speaking of the de Broglie-Bohm interpretation, Becker covers the travails of David Bohm.  Bohm had Marxist leanings, which got him into hot water during the McCarthy era.  He ended up losing his job and having to leave the country.  After independently coming to the same conclusions de Broglie had decades earlier, he was able to clean up the pilot wave interpretation.  But the taint involved with his politics likely affected the reception of his ideas.

Hugh Everett is commonly described as bitterly leaving academia after the chilly reception of his interpretation, the one that would eventually be known as the Many Worlds Interpretation (MWI).  But Becker points out that Everett never planned an academic career, wanting a more affluent lifestyle which a career in the defense industry provided.

Given that the MWI manages to preserve both locality and determinism, I sometimes wonder what Einstein would have thought of it.  He died in 1955, a couple of years before Everett’s paper in 1957.  What would have been his attitude toward it?  It’s worth noting that Erwin Schrödinger mused about a similar possibility in 1952.  Maybe Einstein also thought of it but didn’t consider it worth the conceptual cost.

Becker also covers John Stewart Bell and his theorem.  Bell managed to take a metaphysical debate and turn it into a scientific one with possible experiments to test the idea, experiments which were later conducted.  These experiments verified the non-local nature of quantum physics (for interpretations other than MWI and its cousins).

This is a fascinating book, with a lot of interesting history.  It provides a particularly stark picture of people enduring terrible costs to their careers for daring to explore radical ideas.  But it’s not without its issues.  Becker makes no pretense about being even-handed.  He is a partisan in the interpretation wars.  Much of the book is a sustained attack on the Copenhagen Interpretation, and he seems to gravitate toward somewhat strawmannish versions when describing it.

This extends to his descriptions of its proponents such as Niels Bohr or Werner Heisenberg.  Bohr is often described as a slow thinker and unable to communicate clearly, and in the later parts of the book he becomes something of a nemesis, suppressing alternative ideas as they come up.  Heisenberg is described as a status conscious individual with questionable ethics leading him to work with the Nazis, then trying to spin his involvement after the war.

Heisenberg may indeed have been a piece of work.  But in the case of Bohr, his description in this book seems incompatible with the esteem in which he was held by the physics community.  In my experience, when a historical figure is described as clueless but nevertheless has consistent and ongoing success, it usually means that description is skewed, and that was the sense I got here.  Becker ascribes Bohr’s prestige to his charisma, but I doubt it was only that.  The charisma / personality explanation smacks of rationalizing, that people simply couldn’t have thought his actual ideas had merit.

My suspicion in this regard was also fueled by Becker’s discussion on the philosophy of science.  He (probably accurately) describes the influence logical positivism had on Bohr and his collaborators, but he lumps Karl Popper’s falsifiability in with the verificationism of the logical positivists (even though Popper was an opponent of logical positivism).  And Becker rails against instrumentalism, but his criticism is of a silly version that few modern instrumentalists would subscribe to.

In general, Becker seems impatient with epistemic caution.  For him, physics is about describing the world, and he doesn’t want to be constrained by things like testability.  So he’s enthusiastic for various interpretations even though none of them are uniquely testable, as well as multiverses and all the rest.  He seems unable to see any validity in the discomfort many physicists have with speculation too far removed from experimentation.

All that said, I enjoyed this book and generally do recommend it for getting an idea of the human stories associated with quantum physics, even if it’s often not an objective one.

Posted in Science | 16 Comments

The Anthropocene is a conceit of human exceptionalism

Peter Brannen has an interesting piece in the Atlantic, pointing out that the Anthropocene is more of a geological event than an epoch, at least so far.

Humans are now living in a new geological epoch of our own making: the Anthropocene. Or so we’re told. Whereas some epochs in Earth history stretch more than 40 million years, this new chapter started maybe 400 years ago, when carbon dioxide dipped by a few parts per million in the atmosphere. Or perhaps, as a panel of scientists voted earlier this year, the epoch started as recently as 75 years ago, when atomic weapons began to dust the planet with an evanescence of strange radioisotopes.

Brannen’s point is that human civilization so far is a speck in the geological record: 10,000 years (in the most generous definition of “civilization”) compared to 500 million years of complex life, or about 0.002% of that history, or 0.0002% of Earth’s overall history.
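Those percentages are easy to verify.  A quick sketch, using a rough 4.5-billion-year figure for Earth’s age (the other numbers are from the article):

```python
# Checking the percentages: 10,000 years of civilization against
# ~500 million years of complex life and ~4.5 billion years of Earth history.
civilization_years = 10_000
complex_life_years = 500_000_000
earth_years = 4_500_000_000  # rough assumption for Earth's age

pct_of_complex_life = civilization_years / complex_life_years * 100
pct_of_earth = civilization_years / earth_years * 100

print(f"{pct_of_complex_life:.3f}% of complex life's history")  # 0.002%
print(f"{pct_of_earth:.4f}% of Earth's history")                # 0.0002%
```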

Along those lines, he makes this point.

If, in the final 7,000 years of their reign, dinosaurs became hyperintelligent, built a civilization, started asteroid mining, and did so for centuries before forgetting to carry the one on an orbital calculation, thereby sending that famous valedictory six-mile space rock hurtling senselessly toward the Earth themselves—it would be virtually impossible to tell. All we do know is that an asteroid did hit, and that the fossils in the millions of years afterward look very different than in the millions of years prior.

Similarly, if we manage to wipe ourselves out in the next century or so (by climate destruction, nuclear war, or some other means), or even in the next few millennia, virtually all evidence of human civilization would be gone in a few tens of millions of years due to the Earth’s constant geological erosion, tectonic upheaval, and overall churn.  A geologist one hundred million years from now might be hard pressed to identify that any civilization in our time had actually existed.

(The situation might be a little more hopeful if our future scientists manage to make it to the moon, Mars, or other locations where perhaps some of our artifacts might still be around, although so far those artifacts are very limited in number.)

(It’s also worth noting that the asteroid strike causing the dinosaur extinction is more controversial than it used to be, although it doesn’t change Brannen’s point.)

Brannen finishes with this:

The idea of the Anthropocene inflates our own importance by promising eternal geological life to our creations. It is of a thread with our species’ peculiar, self-styled exceptionalism—from the animal kingdom, from nature, from the systems that govern it, and from time itself. This illusion may, in the long run, get us all killed. We haven’t earned an Anthropocene epoch yet. If someday in the distant future we have, it will be an astounding testament to a species that, after a colicky, globe-threatening infancy, learned that it was not separate from Earth history, but a contiguous part of the systems that have kept this miraculous marble world habitable for billions of years.

It does seem like human exceptionalism causes a lot of intellectual hangups.  Contrary to a lot of misanthropic sentiment, I personally don’t see humanity as depraved in some manner, at least not in any way that other species aren’t.  We have those hangups for evolutionary reasons.  But we do have them.  And our long term survival, existing long enough to be something other than a blip in the geological record, may depend on us finding ways to overcome them.

Posted in Zeitgeist | 53 Comments

Is the ultimate nature of reality mental?

Philosopher Wilfrid Sellars had a term for the world as it appears, the “manifest image.”  This is the world as we perceive it.  In it, an apple is an apple, something red or green with a certain shape, a range of sizes, a thing that we can eat, or throw.

The manifest image can be contrasted with the scientific image of the world.  Where the manifest image has colors, the scientific one has electromagnetic radiation of certain wavelengths.  Where the manifest image has solid objects, like apples, the scientific image has mostly empty space, with clusters of elementary particles, held together in configurations due to a small number of fundamental interactions.

The scientific image is often radically different from the manifest image, although how different it is depends on what level of organization is being examined.  For many purposes, including scientific ones, the manifest image, which is itself a predictive theory of the world at a certain level of organization, works just fine.  For example, an ethologist, someone who studies animal behavior, can generally do so without having to concern themselves about quantum fields and their interactions.

But if the manifest image of the world is how it appears to us, how do we develop the scientific ones?  After all, we only ever have access to our own subjective experience.  We never get direct access to anything else.

The answer is that we start with those conscious experiences, sensory experiences of the world, and we work to develop models, theories, of how those experiences relate to each other.  (We sometimes forget that “empiricism” is just another word for “experience.”  One comes from Greek, the other Latin.)  We judge these theories by how accurately they’re able to predict future experiences.  It’s the only real measure of a theory, or any kind of knowledge, we ever get.

But often developing these theories, these models, requires that we posit aspects of reality that we can’t perceive.  For example, no one has ever seen an electron.  We take electrons to exist because they’re crucial to many theories.  But they’re most definitely not part of the manifest image.

So the theories give us a radically different picture of the world from what we perceive.  Often those theories force us to conclude that our senses, our actual conscious experience, isn’t showing us reality.  The only reason we take such theories seriously, and give them precedence over our direct sensory experience, is because they accurately predict future conscious experiences.

Of course, there are serious issues with many of these theories.  Two of the most successful, quantum mechanics and general relativity, aren’t compatible with each other.  And there’s the measurement problem in quantum mechanics, the fact that everything we observe tells us that there is a quantum wave, until we measure it, then everything tells us there’s just a localized particle.

These are truly hard problems, and solving them is forcing scientists to consider theories that posit a reality even more removed from the manifest image.  It’s why we get things like brane theory, the many worlds interpretation of quantum physics, or the mathematical universe hypothesis.  If any of these models are true, then the ultimate nature of reality is utterly different from the manifest image.

But as stark as the distinctions between the manifest and scientific images are or could be, it’s not enough for some.  Donald Hoffman is a psychologist and philosopher whose views I’ve discussed before.  Hoffman has a new book that he’s promoting, and it’s putting his views back into the public square.  This week I listened to a podcast interview he did with Michael Shermer.

Hoffman’s main point is that evolution doesn’t prepare us to accurately perceive reality.  That reality therefore can be very different from what our perceptions tell us.  But Hoffman is going much further than the typical manifest / scientific image distinction.  He contends that there isn’t even a physical reality out there.  There are only minds.  Our perception of reality is a “user interface” that enables access to something utterly alien in nature.  Even the various scientific images don’t reflect reality.  These are just more user interfaces.

What then is the ultimate reality?  Hoffman appears to believe it’s consciousness all the way down.  In my last post on Hoffman, I labeled him an idealist, in the sense of thinking that the primary reality is mental rather than physical, and I still think that’s the right description.

Although in the Shermer interview, he says he does think there is an objective reality.  Based on what I’ve heard, he sees this objective reality existing because there’s a universal mind of some sort outside of our minds thinking about it, a view that seems similar to the subjective idealism (and theology) of George Berkeley, where objective things exist because God is thinking about them.

How does Hoffman reach this conclusion?  He starts with the fact that natural selection doesn’t seem to favor an accurate perception of reality, just an effectively adaptive one.  He tests this using mathematical simulations which reportedly tell him that there’s zero probability of natural selection selecting for accuracy.
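I don’t have access to Hoffman’s actual simulations, but the basic intuition behind “fitness beats truth” arguments can be sketched with a toy example.  Suppose a resource’s payoff is non-monotonic in its true quantity (too little or too much is bad, a middling amount is best).  A perceiver tuned to payoff then outperforms one that sees true quantities and takes “more is better” at face value.  The payoff function and strategies here are my own illustrative assumptions, not Hoffman’s model:

```python
import random

# Toy "fitness beats truth" sketch (illustrative assumptions, not Hoffman's model).
# Payoff is non-monotonic in the true resource quantity: peaked at 50, zero at
# the extremes, so "more resource" is not the same as "more fitness."
def payoff(quantity):
    return max(0.0, 10 - abs(quantity - 50) / 5)

random.seed(42)
truth_score = fitness_score = 0.0
for _ in range(10_000):
    a, b = random.uniform(0, 100), random.uniform(0, 100)
    # "Truth" perceiver sees true quantities and always takes the larger amount.
    truth_score += payoff(max(a, b))
    # "Fitness" perceiver sees only payoffs and takes whichever pays better.
    fitness_score += max(payoff(a), payoff(b))

print(fitness_score > truth_score)  # True: the fitness-tuned perceiver wins
```

Of course, a toy like this only shows that tracking payoff can beat one naive way of using the truth; it doesn’t by itself establish the “zero probability” claim, which is where the simulations carry the argument.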

Here we come to my issues with this idea.  Hoffman is using an empirical theory (natural selection) along with empirically observed results of simulations, to conclude that empirical observations aren’t telling us about reality.  But if all of reality is an illusion, then how can he trust his own observations?  In the interview, he assures Shermer that he avoids this undercutting trap, but if so, it doesn’t seem evident to me.

The second issue is that Hoffman is taking this insight and apparently making a major logical leap to conclude that it leads to much more than the manifest vs scientific image distinction.  The established scientific images exist because they’re part of predictive models.  Extending these images to another level requires additional models and evidence, and those models must explain the successes of the previous ones.  Hoffman owns up to this requirement, but admits it hasn’t been met yet.

My third issue is that Hoffman’s stated motivation for positing this idealism is to solve the hard problem of consciousness.  Per the hard problem, there’s no way to relate physics to consciousness, so maybe the solution is to do away with all physics.

But there is an easier solution to the hard problem, one that doesn’t require radically overturning our view of reality.  That solution is to recognize what many psychological studies tell us, that introspection is unreliable, including our introspection of experience.

This too is a sharp distinction between the manifest image and the scientific view.  The problem, of course, is that this version isn’t emotionally comforting.  Like Copernicanism, natural selection, relativity, and quantum physics, it takes us ever further from any central role in reality.

Which brings me to my fourth issue with Hoffman’s view.  It’s a radical view that’s emotionally comforting, seemingly positing that it’s all about us after all.  Of course, just because it’s comforting doesn’t mean it’s wrong, but it does mean we need to be more on guard than usual against fooling ourselves.

I’m a scientific instrumentalist.  While I generally think our scientific theories are telling us about reality, I think to “tell us about reality” is to be a useful prediction instrument.  They are one and the same.  There is no understanding of reality which is not such an instrument.

We can’t rule out idealism.  We can only note that any feasible version of it has to meet all the predictive successes of physicalism.  Once it does, it has to then justify any additional assumptions it makes.  It’s not clear to me that we then have anything other than physicalism by another name, or perhaps a type of neutral monism that amounts to the same thing.

But maybe I’m missing something?

Posted in Philosophy | 84 Comments