I’m still working my way through Simona Ginsburg and Eva Jablonka’s tome: The Evolution of the Sensitive Soul. This is the second post of a series on their book. I’m actually on the last chapter, but that last chapter is close to a hundred pages long, and the book’s prose is dense. Light reading it isn’t.
Still, it includes a vast overview of the study of consciousness and the mind, not just in contemporary times but going back to the 19th century and beyond. For anyone looking for a broad historical overview of the scientific study of the mind, and willing to put in some work to parse the prose, it’s worth checking out.
As I noted in the first post, G&J aren’t focusing on human-level consciousness, that is, higher-order metacognitive self-awareness and symbolic thought, the “rational soul.” Similar to the work of Todd Feinberg and Jon Mallatt, their focus is on minimal consciousness, often called “primary consciousness”. They equate this minimal consciousness with sentience, the capacity for subjective experiencing (they prefer “experiencing” to just “experience”), which they relate to Aristotle’s “sensitive soul.”
Even having defined this scope, however, there remains a lot of room for different interpretations. In an attempt to more precisely define the target of their investigation, they marshal findings and theories from contemporary neurobiology and cognitive science to describe seven attributes of minimal consciousness.
- Global activity and accessibility. It’s widely agreed that consciousness is not localized to narrow brain regions. Although the core ignition and distribution mechanisms may be localized to particular networks, consciousness involves content from disparate brain regions being broadcast, or made available, to the other specialty processes that otherwise work in isolation.
- Binding and unification. Conscious perception is unified: we experience the sight of a dog rather than all its constituent sensory components. Many theories associate this with the synchronized firing of neurons across brain regions, supported by recurrent connections between those regions.
- Selection, plasticity, learning, and attention. We are generally conscious of only one thing at a time, or one group of related things. This involves competition among contents, with a winner selected and the losers inhibited. It also involves plasticity, which enables learning.
- Intentionality (aboutness). Conscious states are about something, which may be something in the world or the body. The notion of mental representation is tightly related to this attribute.
- Temporal “thickness”. Neural processing that is quick and fleeting is not conscious. To be conscious of something requires that the activity be sustained through recurrent feedback loops, both locally and globally.
- Values, emotions, goals. Experience is felt; that is, it has a valence, a sense of good or bad, pleasure or pain, satisfaction or frustration. These attributes provide motivation, an impetus, to a conscious system, propelling it toward certain “attractor” states and away from others.
- Embodiment, agency, and a notion of “self”. The brain is constantly receiving feedback from the body, providing a constant “buzz”, the feeling of existence. This gives the system a feeling of bodily self. (Not to be confused with the notion of metacognitive self in human level consciousness.)
G&J refer to this as “the emergentist consensus.” It seems to pull ideas from global workspace theory, various recurrent loop theories, Damasio’s theories of self and embodiment, and a host of other sources.
It’s important to note that these attributes aren’t free-standing, independent things. They interact with and depend on each other. For example, for a sensory image to be consciously perceived (4), it must achieve global availability (1) by winning selective attention (3) through binding (2), which results in temporal thickness (5) and strengthens the plasticity aspect of (3). This process may trigger a reaction that goes through a similar process to acquire value (6). All with (7) as a constant underlying hum, subtly (or not so subtly) stacking the deck of what wins (3).
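To make that interplay concrete, here’s a toy sketch in Python. To be clear, this is my own construction, not G&J’s model; every name, number, and mechanism in it is invented purely for illustration.

```python
# Toy sketch of the "emergentist consensus" loop described above.
# Entirely illustrative: names, weights, and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Percept:
    features: tuple       # a bound bundle of sensory components   (2: binding)
    about: str            # what the percept refers to             (4: intentionality)
    salience: float       # strength going into the competition    (3: selection)
    valence: float = 0.0  # felt goodness or badness               (6: values)

def bodily_bias(percept, body_state):
    # 7: background feedback from the body subtly re-weights salience.
    return percept.salience + 0.1 * body_state.get(percept.about, 0.0)

def broadcast(percept, processes):
    # 1: the winner is made globally available to otherwise isolated processes.
    for process in processes:
        process(percept)

def conscious_step(candidates, processes, body_state, ticks=5):
    # 3: competition and selection; the losers are simply dropped (inhibited).
    winner = max(candidates, key=lambda c: bodily_bias(c, body_state))
    # 5: temporal thickness; the winner is sustained over recurrent ticks.
    for _ in range(ticks):
        broadcast(winner, processes)
    return winner

# The dog percept wins the competition, helped by a positive bodily bias,
# and is repeatedly broadcast to the (here, trivial) specialty processes.
dog = Percept(("dog-shape", "bark", "fur"), about="dog", salience=0.8, valence=0.5)
leaf = Percept(("green-blur",), about="leaf", salience=0.3)
conscious_step([dog, leaf], processes=[print], body_state={"dog": 1.0})
```

The point isn’t the code itself, but that each attribute shows up as a role in one overall loop rather than as a separable module.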
So that’s G&J’s target. Their goal is to identify functionality, capabilities that demonstrate these attributes, in particular species. Their focus is on learning capabilities, which I’ll go into in the next post.
What do you think about these attributes? Do they strike you as necessary and sufficient for minimal consciousness, the “sensitive soul”? Or are they too much, bringing in inessential mechanisms?
I’ll be interested in the next learning capabilities post.
This paper, which I noted previously in a comment on my long post on learning, has Ginsburg and Jablonka as authors.
https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01954/full
BTW, just noticed the subtitle, “Learning and the Origins of Consciousness”, and decided to buy the book. Needless to say, I think they are on the right track about learning.
Of course, learning and memory are like two sides of the same coin. So it’s interesting to think that, in a way, we are conscious so that we can create memories.
That paper is a condensed version of the second half of the book and gives a good overview of their thesis.
At the pokey pace I’m going, you may finish the book before I do!
Personally I don’t have much use for (most of) these attributes, because the list smacks of speciesism. It seems like the starting point is human consciousness, and the thought is “let’s see which other species do consciousness the way humans do.”
I would prefer to isolate the basic unit of consciousness (which I think is the functional use of a symbolic representation), and then see what various systems/organisms can or cannot do with that.
So to look at the attributes individually:
1. Global accessibility: what would it look like if there were only one member of the global audience?
2. Binding and unification: what are you binding? Why is some amount of binding necessary?
3. Learning and attention: these are definitely features of normal human consciousness, but when one or the other is lost due to injury, do we say consciousness goes away?
4. Intentionality: this is the one I agree with. It’s necessarily there in a symbolic representation.
5. Temporal thickness: how much time, or how many recursions, is enough? And what is recurring?
6. Values, emotions, and goals: one of these things is not like the others. Values and goals are there in the basic unit. But given this grouping, I expect the authors have a more expansive idea of what these are than I am using. Emotions are just another way of organizing the functions involved.
7. Embodiment/agency/“self”: this seems pretty meaningless. I guess I should wait to find out how this could discriminate anything. [Trying to imagine something that has the first six attributes, but not the seventh] [failing]
*
Consciousness seems like an inherently anthropocentric concept, and discussion of it in other species inherently anthropomorphic: not necessarily erroneously anthropomorphic, but anthropomorphic nonetheless. In other words, looking at the consciousness of other animals is essentially asking how like us they are. How could we study consciousness in other species without any reference to the human variety?
All that said, I think you’ll be surprised by how many species have these attributes. It’s not a stingy conception of consciousness. (Although it’s not panpsychic by any measure.)
1. With only one member in the audience, unless that member is a conscious one, there would be no consciousness. The general idea is that all the specialty processes together provide the functionality of consciousness.
2. Consider watching a car drive by. How do you know the shape and the movement belong to the same thing? That the sound relates to the visual information? If you have good feelings about the make and model, what relates those feelings to the visual info? (A toy sketch below illustrates one way this grouping might work.)
3. I’m not sure we can say there is consciousness without attention. You’re right about the learning point, but I’ll go into more detail in another post.
5. Triggering it generally requires a stimulus lasting 50ms or longer (unless masked). Dehaene has a lot of information on the rest of the timeline in his book, although I don’t know that it’s a precisely consistent thing.
7. This is, to a large extent, a shot at brain-in-a-vat thinking. But the authors, unusually for those fixated on such things, do admit that a virtual body might suffice.
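On the binding point (2), here’s a toy illustration, again my own invention rather than anything from the book: if features that fire in sync carry the same phase tag, a downstream process can group them into a single object.

```python
# Toy binding-by-synchrony: features sharing a firing phase get grouped
# into one percept. The features and phase tags are invented for illustration.
from collections import defaultdict

features = [
    ("car-shape", 0), ("left-to-right-motion", 0), ("engine-sound", 0),
    ("bird-song", 1), ("tree-shape", 2),
]

objects = defaultdict(list)
for feature, phase in features:
    objects[phase].append(feature)  # same phase tag -> same bound object

for phase, bundle in sorted(objects.items()):
    print(f"object {phase}: {bundle}")
# object 0 binds shape, motion, and sound into "the car driving by"
```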
Because you’re engaging with the specifics:
1. When I say audience member, I don’t mean homunculus. I mean a mechanism that responds, so, a specialty process. What if there is only one such process? Is exactly two enough?
2. I understand that humans can combine concepts into wholes. That can be (plausibly) explained using just mechanisms of symbolic representation. Such combination may be sufficient, but what about such combination makes it necessary?
3. The idea of attention only comes into play when there is the option of more than one representation being made available, and some mechanism determines which of the possible representations takes precedence. It’s a useful capacity, but I don’t see why it’s necessary.
5. Any process has a temporal component. Certainly there is nothing magical about 50ms. Complicated processes are going to take longer, and human processes are presumably the most complicated. We can expect the equivalent process in silico might take 5ms. Are there grounds on which that would not be “thick” enough?
*
1. So, if you have one sensory unit communicating with one motor unit, then no consciousness, really just a reflex. But what if you have two sensory processes communicating with one motor process? Or twenty sensory processes communicating with a dozen or so association processes and a half dozen motor processes? There’s no bright line, just increasing flexibility.
2. How would it work otherwise?
3. Attention could be viewed as simply deciding which sensory stimuli should receive analytical resources, and which of those should be acted on. If you say it isn’t necessary, then what performs those functions? (See the sketch below.)
5. The temporal aspect is really an empirically observed thing in biological brains. A certain amount of persistence is necessary before subjects can report or act on it. If we move to other substrates, then the rules may well be different.
That last point should be kept in mind for all of these attributes, particularly the lower level ones. They’re meant to be attributes of consciousness in evolved brains, not necessarily a statement of what an AI version of it might need.
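To put the “no bright line” and attention points in concrete terms, here’s a minimal sketch contrasting a reflex arc with a system where an attention mechanism allocates a single analytical resource among competing senses. As with everything above, the processes and numbers are invented; this illustrates the distinction, not anyone’s actual model.

```python
# Toy contrast: a one-to-one reflex arc vs. competitive attention.
# Entirely illustrative; channels, thresholds, and responses are invented.

def reflex(stimulus: float) -> str:
    # One sensory unit wired to one motor unit: nothing to select between.
    return "withdraw" if stimulus > 0.5 else "rest"

def attend_and_act(stimuli: dict, motor_map: dict) -> str:
    # Many sensory processes, one analytical resource: "attention" is just
    # the mechanism deciding which stimulus gets that resource.
    channel, strength = max(stimuli.items(), key=lambda kv: kv[1])
    return motor_map[channel](strength)

responses = {
    "vision": lambda s: f"orient toward sight (strength {s:.1f})",
    "sound":  lambda s: f"turn toward sound (strength {s:.1f})",
    "touch":  lambda s: f"withdraw (strength {s:.1f})",
}

print(reflex(0.7))  # -> withdraw
print(attend_and_act({"vision": 0.2, "sound": 0.9, "touch": 0.4}, responses))
# -> turn toward sound (0.9); the losing channels are simply ignored
```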
1. So you just drew a bright line by saying one is not enough (“no consciousness, really just a reflex”), but two or more is enough, because at that point there is no bright line, just more flexibility. So is there a rationale there, or are you just relying on your intuition that a reflex is not enough?
2. Not sure what you’re asking, but I think this gets back to the reflex discussion. You could have a large set of sensory inputs feeding into a global broadcast mechanism, controlled by some attention mechanism, with a variety of output responses, without ever combining any of the inputs into concepts. You lose the variety that concepts provide, but why is that crucial for consciousness per se?
3. In simpler systems (no bright line) there is no need for attention functions.
5. There will always be some temporal aspect, regardless of substrate. My point is: why should that be any sort of criterion for consciousness?
Maybe that’s a difference in our approaches. I’m interested in what consciousness is “made of”, which processes are constitutive of consciousness, regardless of what system we’re looking at. As you say, where you draw the line is in the eye of the beholder, but I see a basic unit which is included in *everyone’s* (non-dualist, non-panpsychist) conception, and any complex consciousness is just a matter of using those units in different ways. When someone does in fact draw a line somewhere, I want to know the rationale for drawing it there and not somewhere else.
*
Well, as you noted, for me, to a large degree, it’s in the eye of the beholder. There are things that bring most of us closer to the intuition that there’s consciousness there, and things that don’t, or that drive us further from it. For me, attempting to figure out what consciousness is made of is like attempting to understand what fame is made of. You can identify some systematic effects and how they arise, but focusing too much on any one aspect misses that what’s important is the overall pattern.
When it comes to consciousness, I’m finding that the more I learn and think about it, the less I seem to understand. So I don’t really have much to say about this, although I do think JoS has a point about the attributes calling out human consciousness. That struck me while reading them.
OTOH, it’s a place to start.
I find it curious that you guys see the attributes as human-centric. There’s no mention of introspection, language, or symbolic thought. What about them makes you think they’re human-specific? It makes me wonder if I described them right.
Well, #1 explicitly talks about brain regions, and #2 invokes vision and synchronized neuron firing, which may have stacked the deck. In general, though, many points seem to address how the brain functions rather than what consciousness itself seems to be.
FWIW, when I tried to enumerate consciousness, I made no reference to the brain at all.
It may be that they are more interested in the mechanisms of consciousness, whereas I was just approaching it from what it looks like.
Ah, ok. Well, we are talking about an evolutionary biologist and a neurobiologist here, so their analysis is going to be brain centric. (Although they do survey the philosophical literature.) Or, more accurately, it’s nervous system and biology centric.
But their primary interest is in identifying when minimal consciousness shows up in evolutionary history. In that regard, I’ll give you a hint: there’s nothing in here about episodic memory, much less the other human-specific things I noted before.
I’m reminded of a federal judge character who appeared several times on The Good Wife. That judge was very insistent that the lawyers tag all their arguments with, “In my opinion.”
So when minimal consciousness showed up in evolutionary history, in their opinion. 😀
You know my stance. They can establish when certain capabilities arose, but whether those capabilities amount to consciousness is another matter. “Consciousness” is in the eye of the beholder. So I agree. 🙂
I’m a fan. I suppose it might be possible to eliminate almost any one of these features and leave most of the rest intact – I wonder what examples Oliver Sacks could come up with. But for an interesting range of critters these things go together. Even #7, interpreted modestly (without all the linguistic and metacognitive firepower that humans bring to this sort of thing). Thomas Metzinger convinced me in some book (was it Being No One?) that animals with even limited cognition have a pretty robust self/environment distinction.
Hmmm. I wonder if 7 is what’s causing people to see this as human-centric. I did try to make clear it wasn’t the metacognitive self-awareness of humans.
But I think you’re totally right. Building models of the environment is somewhat pointless if they don’t include a model of yourself and your relation to that environment. In fact, the models are egocentric, always from the perspective of an embodied self.
I’m pretty sure nematodes can distinguish between (certain) input caused by the worm vs. caused by the environment. (If not nematodes, then some other worm.) But I don’t see them as having “a feeling of bodily self”. Maybe it depends on what you mean by “feeling”.
*
It’s hard to say what nematodes might have, but they don’t seem to build models of the environment, so their need for a model of their own body in relation to it seems limited. They seem largely stimulus-bound.
From the article I linked to (UAL is their “unlimited associative learning”):
“The nematode Caenorhabditis elegans too does not fulfill the conditions for UAL: it has never been shown to be able to learn in a non-elemental manner; it can only form associations between each underlying feature of a compound stimulus and the reinforcement; hence it does not meet condition i. “