Final thoughts on The Evolution of the Sensitive Soul

This is the final post in a series I’ve been doing on Simona Ginsburg and Eva Jablonka’s book The Evolution of the Sensitive Soul, a book focused on the evolution of minimal consciousness.  This is a large book, and it covers a wide range of ideas.  A series of relatively small blog posts can’t do them justice.  So by necessity it’s been selective.  Similar to Feinberg and Mallatt’s The Ancient Origins of Consciousness, there’s a wealth of material I didn’t get to, and like that other book, I suspect it will inspire numerous additional posts in the future.

This final post focuses on various areas that G&J (Ginsburg and Jablonka) explore that caught my interest.  So it’s somewhat of a grab bag.

The first has to do with memory.  Obviously memory and learning are closely related.  The consensus view in neuroscience is that the main way memory works is through the strengthening and weakening of chemical synapses, the connections between neurons.  In this view, engrams, the physical traces of memory, reside in circuits of neurons that follow Hebbian theory, often summarized as: neurons that fire together, wire together.
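As an aside, the Hebbian rule itself is simple enough to sketch in toy code.  Here’s a rate-based caricature of my own (an illustration, not anything from the book, and not a model of real synapses), in which connections strengthen with co-activity and slowly decay otherwise:

```python
import numpy as np

# A minimal, rate-based caricature of Hebbian plasticity: the weight
# between two neurons grows in proportion to how strongly they are
# co-active, and a small decay term lets unused connections weaken.
def hebbian_update(weights, pre, post, lr=0.01, decay=0.001):
    """weights: (n_post, n_pre) matrix; pre, post: firing-rate vectors."""
    return weights + lr * np.outer(post, pre) - decay * weights

w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity
post = np.array([1.0, 0.0])      # postsynaptic activity
for _ in range(100):
    w = hebbian_update(w, pre, post)
print(w.round(3))  # only the co-active pre/post pairs have strengthened
```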

But it’s widely understood that this can’t be the full story.  Synapses are complex ecosystems of proteins, vesicles, neurotransmitters, and neuromodulators.  Proteins have to be synthesized by intracellular machinery.  So the strengthening or weakening of a synapse is thought to involve genetic and epigenetic mechanisms as well as ribosomes and other components.

G&J cite a study showing that if synaptic processing is chemically inhibited, so that the synapses retract, long term memories are still able to recover.  In other words, the state of the synapse may be recorded somewhere other than the synapse itself.  If so, the synapse could be just an expression of an engram stored intracellularly, perhaps epigenetically: an epigenetic engram, an intriguing possibility that may eventually have clinical implications for Alzheimer’s and other neurodegenerative diseases.

G&J note that this may mean that epigenetic factors could have large scale effects on how fast synapses grow or weaken.  In their view, it may dramatically expand the computational power involved in memory.  They even speculate that it could be a system that operates independently of the synaptic one, transmitting information between neurons using migratory RNAs encapsulated in exosome vesicles.

This intercellular transmission could be the mechanism for some learning behavior, such as Kamin blocking, the phenomenon where, if an association already exists between two stimuli, a third concurrent stimulus won’t become part of the association.  Blocking is well documented behaviorally, but it remains poorly understood at the neural level.
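For what it’s worth, blocking falls naturally out of the classic Rescorla–Wagner model, in which learning is driven by prediction error.  Here’s a toy sketch (my illustration, a standard psychological model rather than anything G&J propose):

```python
# Rescorla-Wagner: associative strength changes in proportion to the
# prediction error, lambda minus the summed strength of all stimuli
# present on the trial.
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    V = {}  # associative strength per stimulus
    for stimuli in trials:
        error = lam - sum(V.get(s, 0.0) for s in stimuli)
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error
    return V

# Phase 1: a tone alone is paired with the outcome 20 times.
# Phase 2: tone and light together are paired with the same outcome.
V = rescorla_wagner([("tone",)] * 20 + [("tone", "light")] * 20)
print(V)  # tone ends near 1.0; the light stays near 0.0: it is "blocked"
```

Once the tone fully predicts the outcome, the error on the compound trials is near zero, so the light never acquires strength.  The open question is how neurons implement something like this.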

You might have noticed all the occurrences of “may” and “could” above.  G&J admit that much of this is speculative.  There’s no doubt that synaptic processes are supported by intracellular machinery, and exosome vesicles do exist.  But the idea that engram states are maintained epigenetically needs, I think, a lot more fleshing out, not to mention evidence.  And while the exosomes could conceivably be carrying molecular level memory information, it seems more likely that they’re doing something more banal, like metabolic signaling to surrounding glia.

Still, G&J note that there is intense research going on in this area.  And it always pays to remember that life is a molecular phenomenon.  So only time will tell.

On the next topic, like many animal researchers, G&J cite the views of Bjorn Merker approvingly, notably the idea that consciousness is a low level process starting in the brainstem.  (A view I’ve critiqued before.)  This puts them partially on the same page as F&M (Feinberg and Mallatt) in The Ancient Origins of Consciousness.  In the last post, I noted that G&J come to similar conclusions as F&M on when consciousness evolved.  Indeed, they draw on F&M’s review of the research, as well as Merker’s material, in reaching their conclusions.

But this leads to a problem.  G&J have a different definition of consciousness than F&M.  F&M divide consciousness into three types: exteroceptive consciousness, interoceptive consciousness, and affective consciousness.  G&J’s definition seems to most closely align with F&M’s for affective consciousness.

But F&M’s embrace of brainstem consciousness (at least in pre-mammalian species) seems to hinge on the fact that they see exteroceptive and interoceptive processing as sufficient for consciousness.  G&J don’t; for them, affective processing is necessary.  But F&M’s data indicate that the type of learning necessary to demonstrate the presence of affects only happens in the forebrain.

The reason why pallial function in anamniotes is such a tough problem is that a fish whose pallium has been removed or destroyed can see, catch prey, and seems to act normally. However, such a fish cannot learn from its experiences or from the consequences of its actions. Nor is it able to learn the locations of objects in space. This is a memory problem, and the medial and dorsal pallia of vertebrates are known to store memories.

Feinberg, Todd E., and Jon M. Mallatt. The Ancient Origins of Consciousness: How the Brain Created Experience. MIT Press. Kindle Edition.

On the one hand, forebrains go back to early vertebrates, so G&J’s attribution of affective consciousness to those species is preserved.  But in fish and amphibians, much of the exteroceptive and interoceptive processing is separate from affective processing.  This isn’t much of an issue for F&M, but it could be seen as weakening G&J’s conclusion that these early vertebrates had the same unified consciousness as later species.

Third topic: late in the book, G&J note the existence of something I’d missed until now: unicellular organisms, the warnowiid dinoflagellates, that have something called an “ocelloid”, which appears to be something like a camera style eye, much more sophisticated than the typical light sensors that exist at this level.  However, these protists are difficult to study: they tend not to survive outside their natural habitat.  So the function of this structure is largely conjecture.  Still, if it is an eye, what kind of processing in a unicellular organism might such a complex structure be supporting?

Finally, G&J touch on the topic of machine consciousness.  Somewhat refreshingly for people who use the “embodied” language, they don’t rule out technological consciousness.  However, they note that it could be very different from evolved consciousness in animals.  Importantly, they see UAL (unlimited associative learning) as an evolutionary marker for consciousness in biology.  Its existence in technological systems may not necessarily indicate the presence of machine consciousness.  And they expect machine consciousness to require a body, but they allow it could be a virtual one.

As always, my take on these things is it depends on how we define “consciousness”.

As noted above, there is a lot more in this book, some of which I might touch on later.  But I think this is it for now.

Finally, and I should have linked to this in the last post, if you want a condensed version of their thesis, and don’t mind wading through some technical material, their paper on unlimited associative learning is online.

What do you think of the idea of epigenetic engrams?  Or the various definitional issues?  Or G&J’s approach overall?

21 thoughts on “Final thoughts on The Evolution of the Sensitive Soul”

  1. First, thanks for reviewing this book and drawing my attention to it. My copy is still backlogged on delivery.

    When I looked at the paper on the engram you linked to in the previous post, what impressed me the most was how much we are in the dark about how it works. There seem to be a lot of suggestive leads but not much in the way of a comprehensive theory. Even most of the studies you’ve cited seem to be about simple associative learning, not UAL, and I’m not sure whether what we learn from those just scales up when more neurons enter the picture. Probably some of the mechanisms are preserved, but there may be others that come into play.

    Regarding the much discussed brainstem consciousness, I think you rightly point out a problem in their approach if UAL is unavailable with just a brainstem.

    The ocelloid looks interesting, and it’s amazing to think that something structurally so similar to an eye would exist in a single-celled creature. My guess is that it really is an eye with processing similar to, but more primitive than, more advanced eyes. So it probably does the things eyes do, possibly detecting motion and changing light inputs against stable backgrounds. That probably directly triggers various actions related to prey, with no back end processing in a brain.

    1. Thanks James! Man, that book is taking a while to get there. Amazon must be heavily prioritizing household items. I’ve ordered a bunch of that kind of stuff recently, and much of it has been coming in a timely manner. Although nothing is coming in the old two day window we used to have. Hope it gets there soon. There are advantages to using ebooks!

      I’m not sure what to make of the ocelloid. I’d really like to know just how many photoreceptors it has (if it indeed has something like photoreceptors). The higher the number, the sharper the image it can take in, and the more you have to wonder what it’s enabling. My suspicion is it will turn out to be a lot more limited than a casual inspection might imply. There’s just not enough substrate for anything too sophisticated. But I might turn out to be dead wrong.

  2. if synaptic processing is chemically inhibited, so that the synapses retract, long term memories are still able to recover.

    I was just listening to Ginger Campbell’s Brain Science podcast, and the episode was about glial cells. That seems like an obvious candidate for where the “missing” information went to.

    1. Andrew Koob was really promoting a more expansive role of glial cells.

      https://www.scientificamerican.com/article/the-root-of-thought-what/

      KOOB: Originally, scientists didn’t think they did anything. Until the last 20 years, brain scientists believed neurons communicated to each other, represented our thoughts, and that glia were kind of like stucco and mortar holding the house together. They were considered simple insulators for neuron communication. There are a few types of glial cells, but recently scientists have begun to focus on a particular type of glial cell called the ‘astrocyte,’ as they are abundant in the cortex. Interestingly, as you go up the evolutionary ladder, astrocytes in the cortex increase in size and number, with humans having the most astrocytes and also the biggest. Scientists have also discovered that astrocytes communicate to themselves in the cortex and are also capable of sending information to neurons. Finally, astrocytes are also the adult stem cell in the brain and control blood flow to regions of brain activity. Because of all these important properties, and since the cortex is believed responsible for higher thought, scientists have started to realize that astrocytes must contribute to thought.

    2. I think there’s a good chance that’s where the migRNAs in the exosomes were going, perhaps to trigger some kind of maintenance processes. But I’m skeptical the state of a synapse is stored there. I did find the Fields interview interesting, although some of the information was dated. (It was a replay of an interview from 2010.)

  3. One acquires knowledge and know-how and one deploys knowledge and know-how. These are, presumably, evolved natural kinds of raw exercise of intelligence (which might or might not presuppose some equally raw degree of sentience). But wait, one also “learns” how best to acquire and deploy knowledge and know-how in the service of one’s purposes. The transition from pre-reflective to reflective acquisition and deployment of knowledge and know-how better suits a philosopher’s inquiry—epistemic and practical/deliberative norms and all that. We’re interlopers, we arm-chair bio-chemists.

    1. Jeff,
      I should note something I haven’t been good at reminding everyone of, that G&J’s goal isn’t to explain human level consciousness. They’re looking at the pre-reflective variety, very early pre-reflective at that. One thing I didn’t get around to mentioning is that they discuss the emergence of episodic memory, which happened much later than minimal consciousness, but much earlier than self reflective consciousness.

      So we get
      1. Minimal consciousness emerging in the Cambrian, and including most vertebrates, many arthropods, and cephalopods. (500 mya)
      2. Episodic memory rising with mammals and birds (and possibly reptiles). (300-150 mya)
      3. Self reflective consciousness, with humans, and perhaps to a limited degree in great apes. (10-0.2 mya)

  4. I disagree—you’ve done well in emphasizing G and J’s goals (we Commentators, however, are notorious for favoring our own!). G and J claim that a sufficient condition on the attribution of minimal sentience is a demonstration (on the part of whatever creature at hand) of a capacity for unlimited associative learning (as you say, their purpose is not to “explain” sentience). A capacity for unlimited associative learning plausibly presupposes an information-processing architecture of the kind you surmise (exhibiting the seven or so attributes mentioned in a previous post). Such an architecture ideally issues in the integration of one’s “sensorium” with one’s “motorium”—thus enabling (or is it “constituting”?) a “kind of global non-reflexive operant learning” (Feinberg and Mallatt). All of this said, and as you yourself candidly admit, the inference to minimal sentience or consciousness is still a non-sequitur—or at best underdetermined.

    1. Ah, if I’ve given the impression that I don’t think there is an explanation here, my bad. I do think it’s here. But it depends on accepting that explaining the various components adds up to an explanation of the whole, or at least a major portion of it.

    2. I don’t know about G and J but I’m not sure UAL is simply an indicator for an architecture that allows consciousness. It might be that consciousness is required for UAL and the architecture is just present to enable UAL.

      Both Grossberg’s Adaptive Resonance Theory and Baars’s GWT suggest a more definite relationship between learning and consciousness. I would argue that UAL is the evolutionary explanation for consciousness, and that there is something critical in how consciousness works, or what it does, that enables UAL (in living organisms, to avoid any discussion about machine UAL).

      1. I think G&J see the relationship between UAL and minimal consciousness in biology as very close, perhaps inseparable. They see UAL as the reason minimal consciousness evolved, to the extent that if a species’ UAL abilities evolutionarily atrophied, so would their minimal consciousness. That seems very plausible to me.

        But if a machine had UAL, they wouldn’t necessarily see it as a sign the machine was minimally conscious. For me, this hinges on what we consider necessary to warrant the label “conscious”. Does the system need to have biological impulses? If so, I can see their position. But if we allow that a machine consciousness might have radically different motivations from a biological one, then I think machine UAL does mean machine minimal consciousness.

  5. I’ve avoided commenting on your blog for some months now Mike, although I recently replied to a comment of yours on Eric Schwitzgebel’s blog. However, this recent post of yours seems like an opportunity to yet again point out the categorization error that leads you to conclude that consciousness is cortical, i.e., that consciousness “display” is a function of the cortex.

    You’ve helpfully linked to your own critique of brainstem consciousness and it’s in that post that your category error is most explicit, where you write:

    The consciousness hierarchy above highlights how important it is to be clear about which type of consciousness we’re discussing.

    You have actually specified a list rather than a hierarchy, but, that aside, your insistence on the existence of multiple types of consciousness is insupportable. You identify affective, phenomenal and self-reflective as three types of consciousness. Consciousness is, however, a single unified presentation with qualitatively differing contents, so that we may speak of consciousness of affect, consciousness of phenomena, consciousness of self and so on, but those are types of conscious content rather than types of consciousness, which do not exist. Eric Schwitzgebel explicitly agreed with this distinction and I value his viewpoint as a well-informed philosopher of consciousness. I had thought you did also and notice that you did not respond that he was in error.

    Your incorrect assumption of the existence of types of consciousness leads directly to your belief that specific brain regions/structures create/display the differing types. In fact, the most that can be said is that specific brain regions/structures produce and/or contribute to the production of a type of conscious content. That view supports the brainstem consciousness hypothesis: the brainstem is responsible for the integrated ‘display’ of content, which it produces itself and which is considerably enhanced by cortical activity.

    In your brainstem consciousness post, you quote Feynman’s observation:

    The first principle is that you must not fool yourself—and you are the easiest person to fool.

    Oddly, you never seem to consider that this remark might apply to yourself, suggesting a continuation of your apparently unrecognized confirmation bias. I suggest you imaginatively consider for a while the consequences of realizing that “types of consciousness” is a category error. There is no such category. What does that mean for your own hypotheses?

    1. Hi Stephen,
      I remember our interaction on Eric’s blog, but not the particular topic. If I’ve already expressed my position and he (or anyone else) disagrees, reiterating that position feels unproductive unless I have something new to say, so it’s not unusual for me to let him have the last word.

      Obviously I disagree about types of consciousness. We can review the neuroscience and find empirical evidence for what functionality takes place in which locations of the brain, and further research might settle any disputes on that. But which of that functionality is “conscious” depends on which definition of “consciousness” we prefer.

      We can argue about which definition is the one true one, but I can’t see that any one true definition can be established, particularly considering the long history of consciousness discussions and the many amorphous conceptions. Which is why you usually see me list various definitions in a hierarchy.

      1. When I look at your list

        1. Minimal consciousness emerging in the Cambrian, and including most vertebrates, many arthropods, and cephalopods. (500 mya)
        2. Episodic memory rising with mammals and birds (and possibly reptiles). (300-150 mya)
        3. Self reflective consciousness, with humans, and perhaps to a limited degree in great apes. (10-0.2 mya)

        I wonder how much of the difference between 1 and 3 is a difference in consciousness and how much is a difference in unconsciousness. Presumably as we go from 1 to 3 the number of circuits increases, but the majority of the circuits do not represent consciousness – they are unconscious processing. So how would we disentangle the conscious from the unconscious? To what extent is self reflective consciousness qualitatively different vs just being consciousness built on top of a larger unconscious base?

        1. For variations of global workspace theory (which G&J’s seven attributes build on), it’s worth noting that the difference between conscious and unconscious circuits, at least within the thalamo-cortical system, is in what wins the competition to have its contents widely broadcast/made available. Just about any region in that system can win the competition.
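          To make the competition idea concrete, here’s a toy sketch of my own (not anything from Baars or G&J): specialist processes bid for the workspace, and only the winner’s content gets broadcast to all the rest.

          ```python
          from dataclasses import dataclass

          @dataclass
          class Process:
              name: str        # a specialist circuit (vision, memory, etc.)
              salience: float  # its bid for access to the workspace
              content: str     # what it would broadcast if it won

          def workspace_cycle(processes):
              # Winner-take-all competition: the most salient content is
              # broadcast, i.e. made globally available to every process.
              winner = max(processes, key=lambda p: p.salience)
              return {p.name: winner.content for p in processes}

          procs = [Process("vision", 0.9, "looming shape"),
                   Process("audition", 0.4, "distant hum"),
                   Process("memory", 0.2, "similar past event")]
          print(workspace_cycle(procs))  # every process receives "looming shape"
          ```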

          But I think your question is, how much difference is there in the experience of a creature at the various levels? The answer, I think, is that there are substantial ones. This may need its own blog post, but fish and amphibian experience is missing key elements of the experience of later species: the processing for vision, hearing, and touch appears to be separate from the processing for smell and memory systems. It’s the latter which have UAL.

          Things get a little more like us at the basal mammals, where all that gets unified in the t-c system. But before episodic memory, it doesn’t include memory of events or imagination. And before introspection, while there is a sense of bodily self, there doesn’t appear to be any sense of mental self.

            1. But the real question is whether the difference in experience is a difference in consciousness per se or a difference in various unconscious circuits.

            Presumably most of the processing for vision, hearing, and touch in more complex animals is unconscious. So how do you tell if there is a change in consciousness from less complex species, or just a change in what a species is able to be conscious of? If you are saying these are the same, then your list isn’t really a categorization of consciousness. It is a list of general neurological capability (consciousness + unconscious capabilities). It also would mean that self reflection most likely isn’t purely a capability of consciousness but is the process of generating conscious content from unconscious neural circuits.

            2. At a certain level, I am talking about general capabilities. To what extent these abilities affect consciousness depends on how we define “consciousness”. For example, some people say consciousness is the construction of sensory image maps. If so, then any creature with distance senses is conscious. Others insist there must be an affective component, or that we’re not conscious until we have second order awareness of the image and its associated affects. Each camp will give different answers to how consciousness is affected by those general neurological capabilities.

      2. Again, with respect to the inference from information-processing architecture to minimal sentience/consciousness—“…it depends on accepting that explaining the various components adds up to an explanation of the whole.” Well, precisely—but even precision here has its explanatory limits. When someone once posed a tricky mathematical problem to John von Neumann at a Princeton dinner party, he solved it on the spot, remarking that he “had simply summed the infinite series.” If only summation worked that way in “adding up” information-processing components toward an explanation of consciousness.

    2. Hi Mike,

      My comment was not at all about definitions as you seem to believe but, rather, I was addressing your characterization of ‘types’ of consciousness. As to our most relevant January interaction on Eric’s blog, here is the comment thread so you don’t have to look it up:

      MIKE: “… Personally, I think the solution is clarity, which almost always means using the word ‘consciousness’ with qualification: sensory consciousness, affective consciousness, self consciousness, etc. When someone uses the c-word by itself, the usage almost always has a theoretical assertion embedded in it, intentional or otherwise.”

      ME: “… your suggestion about ‘clarity’ … instead appears to support the existence of several different kinds of consciousness, which is confusing and untrue rather than clarifying. Sensory, affective, etc. are distinguishable types of consciousness content, all integrated into a unified streaming experience termed consciousness.”

      MIKE: “I didn’t mean to imply with those terms that they aren’t part of one unifying consciousness. But you’re right, if I talked about it within the context of one system, the onus would be on me to make that clear so people didn’t think I was talking about multiple consciousnesses.”

      SCHWITZGEBEL: “… Stephen’s point in response I endorse. But setting that aside to focus on divisions like ‘access’ vs ‘phenomenal’, also I think that the proliferation of ‘types’ of consciousness may invite confusion rather than forestalling it. The definition of the type can lead people to wonder whether or not the target is the obvious thing we all know we have, rather than some more obscure and debatable thing.”

      Mike, it appears that you agreed with my ‘types’ comment on Eric’s blog (and presumably with Eric’s comment too) and now you wish to disagree with my substantially identical statement above of April 12th. You just wrote, “Obviously I disagree about types of consciousness,” a statement that directly contradicts your statement on Eric’s blog. So which is it?

    3. [PART 2]

      I also take issue with the statement in your comment to me above: “… which of that functionality is ‘conscious’ depends on which definition of ‘consciousness’ we prefer.” In fact, which of that functionality is ‘conscious’ depends on the evidence. No experimental evidence whatsoever exists to support any of the cortical consciousness hypotheses. Instead, all of the relevant experimental evidence for which structure performs that ‘display’ of content points to the brainstem, which itself has very limited content production capability. The most that can be validly concluded from the experimental evidence about cortical activity is that it produces most of the contents of consciousness, perhaps 98% or more in humans.

      Re your “Hierarchy of Consciousness” … you say to consider the list entries to be “various definitions.” That being the case, how can the definitions of words be ranked in a hierarchical fashion? What is the factor that places one word definition lower or higher on the list to render it a hierarchy? To me, it rather looks more-or-less like a listing of consciousness content, not definitions, with primitive content at the top progressing to human content at the bottom.

      Regarding definitions of consciousness, I’m sure you’ve relegated the definition I have proposed (à la Damasio) as ‘feeling’, i.e., “the feeling of what happens,” and “the feeling of being embodied and centered in a world” to your “eye of the beholder” list. But I thought you might be interested to learn that the earliest proposal for that definition I have found seems to have originated with William James in the 1890s. From the excellent James biography by Robert D. Richardson, quoting James:

      “Cognition … is a function of consciousness,” which “at least implies the existence of a feeling.” He explains that he is using the word ‘feeling’ to “designate generically all states of consciousness,” including those sometimes called ‘ideas’ or ‘thoughts.’ Feeling remains for James the most general, most inclusive term for “state of consciousness.”

      1. Stephen,
        My reply back then was that I wasn’t saying these were separate independent systems, but integrated interacting systems, all of which may be present, or only some.

        I disagree with Eric that it adds confusion. It may initially, but that’s always true when people find out something is more complex than they thought.

        On cortical consciousness, what would you accept as evidence for it?

        I’ll tell you what I might accept as evidence for brainstem consciousness. First, damage to the C-T (cortical-thalamic) system would only lead to content loss, a loss the patient would be aware of, that is, agnosias rather than anosognosias. Some cortical damage meets that standard, but a lot is the more profound kind that indicates a hole in their consciousness, particularly if it’s in the frontoparietal network. Second, the wiring would need to support it, which I would think would include evidence for extensive excitatory axons coming down from the C-T complex to the tectum to ship the imagery down. (Merker himself indicated that most of the efferent axons are inhibitory.) And finally, enough substrate and associated EEG activity to indicate a comprehensive “display” of some type was happening down there.

        I’m still not wild about that use of “feeling”. But regardless of whether it means affects or all conscious perception and thought, I think the data indicate it happens in the C-T system.
