The edge of sentience in animals

This is the third in a series of posts on Jonathan Birch’s book, The Edge of Sentience. This one covers the section on animal sentience.

I think it’s fair to say that this is the section Birch is most passionate about. It’s definitely the one where I feel his activism most keenly.

A concept he introduces in the policy section of the book is the “meta-consensus”. Birch admits that there’s no consensus on which species are sentient, but maps what he sees as the major scientific positions in a hierarchy, in terms of which anatomical structures are necessary for sentience.

  • R1: Sentience requires a primate brain structure, and so is absent outside of primates.
  • R2: Sentience requires a mammalian neocortex, and so is absent outside of mammals.
  • R3: Sentience requires a cortex in mammals, but can be achieved by other structures in non-mammals (such as the pallium in birds).
  • R4: Sentience requires a vertebrate midbrain and so is absent outside of vertebrates.
  • R5: Sentience can be achieved with the midbrain but also with other structures in non-vertebrates.

Birch argues that, given current evidence, it isn’t reasonable to give attention to views more restrictive than R1 (such as sentience requiring language) or more inclusive than R5 (such as plants, unicellular organisms, or rocks being sentient). Of course, biopsychists and panpsychists would disagree.

Birch argues that R4 implies that all vertebrates (mammals, birds, reptiles, fish, etc.) are sentience candidates in the meta-consensus. (Reminder: “sentience candidate” means there’s a “reasonable” chance the species is sentient, even if it can’t be established with certainty.)

Personally, I don’t think anatomy is a reliable indicator. Evolution doesn’t preserve these kinds of nice neat distinctions. It seems like we can only establish what a particular structure does in a particular lineage. What requires a thalamo-cortical structure in humans may be done with other structures in lineages separated from us by hundreds of millions of years. And anatomy gives us little to go on when considering developing organisms, artificial intelligence, or other unexpected systems. For those, behavioral capabilities seem like more reliable criteria.

That may clash with an assertion Birch makes often in the book: that sentience isn’t intelligence. I agree that there’s not a 1:1 relationship here, but he often leaves the implication that sentience has nothing to do with intelligence. However, the team he led at the London School of Economics, which recommended protections for cephalopods and some crustaceans to the UK government, does include behavioral capabilities in its criteria for sentient pain.

The LSE criteria for sentient pain

  1. Nociception: sensory receptors for noxious stimuli.
  2. Sensory integration: a brain region integrating information from various sensory sources.
  3. Integrated nociception: the nociceptor signals participate in 2.
  4. Analgesia: pain medication changes the behavior of the animal.
  5. Motivational trade-offs: the animal can flexibly decide to endure a noxious stimulus for a good enough reward.
  6. Flexible self-protection: the animal shows flexible protective behavior toward an injured body part.
  7. Associative learning: the animal can learn novel ways of avoiding noxious stimuli beyond classical conditioning.
  8. Analgesia preference: the animal learns to self-administer painkillers and, when injured, does so even when it involves giving up other rewards.

Birch admits that the flexible behavior implied in 5-8 is tricky to assess, involving an unavoidable degree of subjective interpretation. (Which I think makes 6 and 7 too abstract and subjective.) Is an animal enduring a noxious stimulus for an anticipated reward? Or are they smelling the reward, with the associated impulses overpowering the avoidance impulses from the noxious stimulus? 8 seems like the clearest indicator, but it requires more intelligence.

(This, incidentally, is what led me toward greater skepticism about a lot of these studies. When I follow the citation trail, I often find the described behavior far less compelling than the headlines imply.)

Birch presents tables showing that octopuses fare well on 5 and 8. But other cephalopod species do less well, with very low scores for 8. And the results for decapod crustaceans show low to very low scores for 5 and 8. The committee seems to have reached its recommendations based on high scores for 6 and 7, which, as I noted, seem very subjective.

When it comes to insects, Birch sees the need to look at other criteria. He recounts the stories long told in biology: that many insects will continue moving and feeding despite catastrophic injuries, and react in other ways very different from how a mammal or bird would respond. Birch notes recent research showing the existence of nociceptors in at least some insects, but admits that nociception by itself isn’t sufficient for sentient pain.

He concludes that focusing on pain “does not serve insects well”. He switches to looking at evidence for moods, working memory, and attention. In the end, he admits that none of these really establishes sentience on its own. (For example, the contents of working memory aren’t always experienced by humans.) But he argues that each of these lines of evidence raises the probability.

Birch also looks at gastropod molluscs, nematode worms, and spiders. He ends up classifying the gastropods as an investigation priority (not rising to the level of a sentience candidate, but worth studying more closely). He seems fairly confident that nematodes are stimulus-response driven. And he isn’t able to come down on whether spiders are sentience candidates, mostly for lack of data, which he finds frustrating.

Finally, he rules out plants and unicellular organisms as sentience candidates or investigation priorities, citing a lack of any credible evidence.

In the final chapter of this section, Birch discusses proportionate responses for the species he does deem sentience candidates (cephalopods, decapod crustaceans, insects). The proportionate aspect means he doesn’t advocate banning crustacean or insect farming, but he does argue that the creatures in these farms deserve some type of protection. (A frequent example is eyestalk ablation used by shrimp breeders, which Birch regards as potentially inhumane.)

He does think cephalopod farming should be banned. Cephalopods are solitary creatures that become extremely distressed when crowded together, and crowding is exactly what cephalopod farming requires to be economical. There doesn’t seem to be any humane way to farm them.

Of course, this would involve new laws and regulations. That may be feasible in Europe, but I have a hard time seeing it in my own country right now. Even with a Democratic administration, contesting economic and human quality-of-life interests would require much stronger evidence than most of what Birch is relying on. Octopus protections might be the easiest sell. But insects, particularly when Birch argues that we should research the effects of pesticides, will require a massive move of the Overton window, I think.

And it’s probably obvious from this post that, outside of mammals, birds, and cephalopods, I’m not on board with much of this myself, at least not yet. Not that I think we should ignore the possible welfare implications for these creatures. But I’m not inclined to be overly concerned about insects that cross my interests, such as the dirt dauber wasps that built a nest in one of my house walls.

And more generally, as I noted earlier in this series, I think it’s a mistake to treat sentience as something that is either all there or not at all. It seems clear to me that insects have more sentience than worms, but still far less than mammals or birds. Any protection measures, I think, need to take that into account. At least, that’s what I think today.

What do you think? Are there reasons to be more concerned about crustaceans and insects that I’m missing? Or are we allowing ourselves to be too concerned with creatures whose experience, to whatever extent it’s there, is far shallower and limited than ours?


40 thoughts on “The edge of sentience in animals”

  1. I had to go back and check the working definition of “sentience.” In your first article in this series, you report that Birch “settles on a definition of sentience as the capacity to have valenced experiences.”

    This could well include plants, which appear to seek light, water, and nutrients, and unicellular organisms, which move toward or away from things according to their interests. I see no reason to exclude valence (a sense of preference, of “good” and “bad”) regarding light, food, and so on from their experiences, if it’s allowed that they have experiences.

    What gets dropped from the list of candidates for sentience beyond “R5” in your list above is getting dropped for some other reason. I suspect the definition of “sentience” might have changed somewhere along the way.


    1. His definition is having the capacity for valanced experience. I suspect he’d argue that there isn’t any evidence of plants having experience. Everything living has valance, but, except for biopsychists and panpsychists, not everything living feels those valences. That requires some capacity for experience, for awareness of the valence, in order for cognition to be affected.

      That said, it does seem like he sets things just right to be able to fight for the species he cares about. His argument is that there are theorists who argue for that liberal interpretation, therefore we should consider it a realistic possibility. But as you point out, there are theorists who argue for interpretations more liberal than fit his purpose. And I agree that he doesn’t make an adequate case for stopping where he does.


      1. Now we have to distinguish between things that feel valences and those that don’t. By the original definition, experiencing valences is enough to qualify for sentience. Is there a difference between experiencing a valence and feeling it? If so, what would it be? Does experiencing a valence mean simply reacting to it?

        In that case it would be more accurate to say that “reacting to a valence” qualifies as sentience.


        1. I was using them synonymously.

          I think there does have to be more than just reacting. With just that, all we have are reflexes, and that would make everything alive sentient, as well as current computing technology. But I think to have a full affect, we need awareness of the reaction, a cognitive model of the reaction, and an ability to override it, albeit with a cost in additional energy. I think that’s what we mean when we use the word “feel” (in affect terms rather than just somatosensory ones).


          1. To me they are not quite synonymous. “Experiencing a valence” could be construed as simply encountering something good or bad and responding appropriately, without actually feeling anything — in the way that someone might experience anger without actually feeling it, as we were saying over at Philosophy and Fiction. Anger that is not felt is by definition not a feeling, but we could still conceive of it as a reaction or reflex. The latter surely requires an experience of something with a valence, or it would not occur.

            Maybe we could say that sentience is “the capacity to have valenced feelings.” If feelings and experiences are synonymous, this substitution can cause no difficulty. But if one can have experiences and react to them without becoming aware of one’s reaction, or modelling it cognitively, or being able to override it, then the substitution is clearer.

            On the other hand it sounds redundant. Is a “valenced feeling” anything more than a feeling? This makes me want to boil the definition of sentience down to “the capacity to have feelings,” where by this we understand an experience that has emerged into something like consciousness. But then sentience is starting to look synonymous with consciousness, or something like it.


          2. Lots of definitional issues here. I think you’re getting at an issue in Birch’s definition of sentience as the capacity for valanced experience. We could consciously be aware of something that seems abstractly good or bad to us, with no discernable feeling. It could be argued that we always have *some* feeling in that case, but to your point, that makes sentience and consciousness synonymous. Although that scenario does seem different from something we have a strong visceral reaction to.

            My take is that sentience is the capacity to feel, which is equivalent to the capacity to have an affect, an automatic reaction to a sensed situation which has an effect on cognition, cognition which can either allow or inhibit the reaction, at least the external aspects of it. So for a reaction to be a feeling requires at least an incipient level of reasoning ability.

            Which raises the question, would we consider a pure reasoner, even one that is aware of valances and changes behavior based on them, to be conscious, or sentient? This seems like it will be an issue with AI, which is the last section of Birch’s book.


          3. “Sentience is the capacity to feel” has a satisfying ring to it. In fact it seems almost tautological. Where I would differ is in the suggestion that something additional is required to become “aware” of a feeling. There’s a hint of homuncular regress in the need to somehow “sense” a feeling, whether through cognition (which is really pretty advanced stuff to demand of all sentient creatures), or whatever else might remove a creature from the feeling enough that it can choose its reaction.

            My instinct is to compact all this into a feeling which may not rise to anything like consciousness, and a response that always involves agency — but may involve such predictable choices as to always appear reflexive.


          4. I don’t see a regress if we take the feeling to be the perception of the reaction. Without that perception, all we have is the reflex or habitual reaction itself. It’s the perception which makes it into feeling.

            If we instead define “feeling” to just be the reaction, then my laptop is sentient. That might be okay for a panpsychist, but I see that definition as too broad, at least for the questions Birch is exploring. So I do agree with Birch that experience has to be part of the definition. My issue with his definition is just adding valance by itself isn’t enough. We need the reaction.


          5. I’m just going to say once for all that a “valance,” spelled so, is some kind of curtain arrangement. I’m not normally a grammar grouchie (OK, that’s not true), but if you plan to talk about valences, you need to know.

            Following Whitehead’s lead, I subscribe to a definition of “feeling” that is well below consciousness, and may or may not rise to consciousness. First we must allow that some kind of “valence response” can precede consciousness, and I don’t think you find this problematic (otherwise there would be no response to become conscious of). Then we have to name it. To me, “feeling” seems economical and relatively neutral. If “sentience” has to mean something other than this pre-conscious feeling, that could be a useful word too. Whether a laptop has either feeling or sentience in these senses is a separate question, but, if it must have one or the other (because it responds to valences, which is also a separate question), then feeling is a less daring candidate. It avoids ascribing consciousness, at least.


          6. Ha! Noted on “valance”. Not sure how I fell into spelling it that way. (Although I’m often pretty loose with grammar, which I’m sure you’ve noticed. For me, it’s just a communication tool.)

            I do buy that valences precede consciousness. Everything living has them, as well as a lot of technology. On “feeling” preceding it, well, that’s a definitional matter, and I concluded a long time ago arguing about those isn’t productive. This whole area (emotion, affect, feeling, etc) is a definitional bog. So I’ll just amend my definition from above to sentience being the capacity for conscious feeling. (Of course, the definition of “consciousness” is a much more difficult matter.)


  2. First off, thanks for reading and summarizing these treatises on consciousness and the like. I’d never read them myself, but enjoy learning about them through you.

    I wonder, when those “real” meat labs, growing flesh in vats or on 3D-printed organic scaffolding, get raided by the “Pain Police” (who state that because electrical shock or a hot poker stimulus causes the “meat” to jerk away in defense, such animal-like products are experiencing pain) and are forced to shut down, will we conclude that Meat is Conscious?

    I remain unconvinced that animated bags of electrified chemicals, i.e. “life”, that have evolved, from the get-go, to self-preserve by avoiding destructive conditions, and therefore experience what we’re calling pain, are actually undergoing something beyond DNA-programmed death-avoidance behavior. Hey, if you’re dead, you can’t create more DNA.

    Anatomy as sentience determination, as you note, does appear overly constrained. What comes to mind, and maybe it’s discussed, is hive-mind mentality, self-sacrifice for the good of the “All-Mind”. I wonder, does that extend to humanity’s altruistic “charge into danger” philosophy? “I know I’m gonna die, but ‘For King and Country!’” Or the trolley conundrum? Would an AGI sacrifice parts of itself in order to persist, even though those parts are sentient in themselves?

    Of course, it’s all moot in the end. Memento Mori.


    1. I don’t go through nearly as many of these as I used to. Once you know a lot of this stuff, the first half of just about every book ends up being redundant. I read fragments of books now a lot more than whole ones. I actually have skipped sections of this one, and had to go back a little to understand some of the back references. The problem is a lot of books aren’t conducive to that approach.

      Yeah, people seem weirded out about lab grown meat. I’m not. I’m waiting for it to be available. Once they get the technology honed, I suspect it’s going to be a lot cheaper. Eventually the idea of eating actual animal meat will be regarded as demented. Although us old farts will probably be in eternal non-consciousness by then.

      Definitely we’re all gene survival machines. Of course, the benefit of being intelligent survival machines is we now have the power to do things not in the interest of our genes (like birth control).

      Would an AGI sacrifice parts of itself to survive? That depends on whether it even cares about survival. I think a better question is whether it would sacrifice parts of itself to continue toward its goals. Seems like it depends on the goals.


  3. The LSE criteria for sentient pain: […]

    5. Motivational trade-offs: the animal can flexibly decide to endure a noxious stimulus for a good enough reward.

    2. Sensory integration: a brain region integrating information from various sensory sources.

    These would be #1 and #1b in the PT criteria for sentient pain (PT=me).

    Birch admits that the flexible behavior implied in 5-8 is tricky to assess, involving an unavoidable degree of subjective interpretation. (Which I think makes 6 and 7 too abstract and subjective.) Is an animal enduring a noxious stimulus for an anticipated reward? Or are they smelling the reward, with the associated impulses overpowering the avoidance impulses from the noxious stimulus?

    The last two questions offer an imaginary choice, like asking, is that Mike Smith, or is that SelfAwarePatterns? At least if we assume that the animal satisfies criterion 2 and the integration center for information is critically involved in the causal path for the behavior.

    Like many things about our world, 6 and 7 are hard to assess. I don’t think they’re subjective.

    I agree that sentience is vague around the edges because the meanings of “sentience” and “affect” and “feeling” are vague. I would also basically agree with your functionalist approach to the moral status of animals, because affects such as “liking” and “disliking” are functionally defined. I disagree with your functionalism about individual sensations – e.g., pain vs itches, both of which are disliked. But I don’t think animals have to feel all the same sensations as we do, to be morally important; they just have to like and/or dislike some things.


    1. On the questions you say offer an imaginary choice, I could have worded that sequence better. One choice involves stimulus-responses. The other involves modeling a result. Birch describes it as heuristics vs simulation. Of course, in both cases we have a system reacting, but in the modeling / simulation it’s reacting to the models / simulations.

      On animals liking or disliking some things, like so much in this space, the words “like” and “dislike” seem ambiguous. Does a flower turn toward the sun because it “likes” sunlight? At one level, sure. But I wouldn’t think so in the sense you’re describing. Which to me requires a reaction, and a model or schema of that reaction, which is used cognitively. I see that model, that awareness of the reaction, as what we mean by “feeling”.

      But I imagine that’s probably too functional for you.


      1. No, that’s exactly the right amount functional. “Liking” should be characterized functionally, involving cognitions of stimulus and reaction.

        On the distinction between stimulus-response vs simulation, the dead giveaway is that an animal capable of simulation will occasionally run representations through its head that differ from the actual current circumstances.


  4. I’m curious as to what makes these assessments objective vs. subjective to you. To me it looks a lot like what’s being distinguished relies on our intuitions about sentience for which we then cook up reasons. I’m not saying it’s impossible to be more or less objective, but that’s just my impression. I think we all have a pretty good idea of what counts and what doesn’t, and that doesn’t always track with whether we think it’s okay to harm creatures. What if we came up with a definition of sentience and then it turned out—perhaps by some new scientific discovery—that plants were sentient too? I suspect we’d alter our definition of sentience to exclude them so that we could continue eating them without feeling bad about it. I hope I’m not being too negative.


    1. Oops, I messed up here…”I think we all have a pretty good idea of what counts and what doesn’t, and that doesn’t always track with whether we think it’s okay to harm creatures.” —By “we” I meant the public at large.

      In the next part I meant “we” as in scientists and philosophers trying to come up with a definition of sentience.


    2. By “subjective” in that context, I meant different people with different priors will look at the same behavior and judge it differently. It seems easier with mammals and birds, because there the behavior is typically too sophisticated to interpret as simple heuristics. It’s easier to see that different paths are being simulated. But in the case of most invertebrates, it’s far more ambiguous.

      Compare that to criterion 8, self-administering analgesics, which is pretty straightforward. If the animal learns that eating a particular food results in less pain, and elects to go for it even when it means forgoing tastier treats, that seems to tell us unambiguously that it’s feeling the pain in at least some sense.

      Of course, the criticism is that 8 requires more intelligence. This is the old refrigerator-light dilemma. For a child trying to figure out whether the refrigerator light stays on when the door is closed, it seems pointless to just keep opening the door to look. If a creature can’t demonstrate some use of feeling, can we know whether it feels?

      Good point about the definitional issues. I think the reality is our intuitions aren’t always consistent, so trying to find a simple straightforward definition is probably always going to be a challenge. Sentience, since it requires conscious experience, inherits all the semantic indeterminacy of that concept. It’s why I think the “sentient or not” question is too simplistic. But as Birch ruefully acknowledges in his first section, if he admits that, it seems to jeopardize most of his approach.


      1. Thanks, Mike. I see what you mean by ‘subjective’ now.

        And I meant to say I agree that anatomy doesn’t seem to be the relevant issue, but it’s really hard to say what is relevant. To me it’s almost like, “you know it when you see it”.


        1. Thanks Tina.

          I’m not sure how much we really know it when we see it. Humans, and animals in general, seem to have an agency detection capability that errs on the side of false positives. There’s a reason we once worshipped storms, rivers, volcanoes, etc.

          But my take, as someone who thinks sentience is semantically indeterminate on the edges, is that in cases where it takes a lot of effort to resist seeing it, we’re better off acting as though it’s there. Overriding our sympathetic intuitions seems like a bad habit to get into. Being cruel to animals makes it easier to be cruel to each other.


  5. I kinda think you (Mike) and Birch are working off different definitions of sentience, even though you specify (in a previous post) Birch’s as “capacity to have valenced experiences.” You’re diverging either at “having” or “experience”, or both.

    So here’s a question: does pain imply suffering? I would say not necessarily. I would say nociception is another word for pain. I think Birch might agree that sentience simply involves positive or negative affect, which means the response to the pattern recognition is directed toward changing future behavior to move toward a goal state.

    Suffering, on the other hand, comes into play when the system has more than one goal (such as satiation and no pain). For me, suffering occurs when response to one valenced input (say pain) produces a response which has a negative effect on one or more other goals (say, finding food), but the response also does not have the desired effect on the first goal (reducing the pain). Your description of “feeling” seems to be consistent with this description of suffering. The response to nociception is “modeled” (pattern recognized producing a sign vehicle) and this modeled response is taken into account when considering what actions to take relative to other goals.

    And then there is the question of value. To make moral decisions you take into consideration the goals, both your own and others’, which will be affected by some action, but you have to place relative values on each of these goals. You mentioned above that there is a spectrum of sentience, and I think part of what you’re referring to is the spectrum of relative values. The rule of thumb developed by nature (so, intuition) would be placing higher value on things closest to us (self, partner, family, clan, …). There’s a general path backwards through evolution, but there is some deviation when we see significant intelligence, such as in octopuses and corvids (as opposed to chickens). Personally I think that comes from recognition that such creatures can develop hierarchical goals, which then take more consideration and get more value.

    *


    1. My personal definition of sentience is definitely more demanding than Birch’s. But even just sticking with his, it hinges on the definition of conscious experience. And that’s where it becomes indeterminate. “Consciousness” can refer to too many different phenomena. Some take sensory integration as sufficient. (Although they usually resist the idea that sensory integration in machines is therefore sufficient.) Others require more sophisticated architectures, such as attention, episodic memory, all the way up to metacognitive self awareness.

      Birch repeatedly acknowledges the distinction between nociception and pain. And that fits most of the literature. Of course it doesn’t fit all of it. The Panksepps and Merkers out there seem inclined to define it in the way you do. But if we’re talking pain in the way we experience it, I think it’s a complex emotional state, which seems to inherently involve suffering. It’s typically caused by nociception, but not always. A lot of chronic pain doesn’t involve any nociception.

      To me, suffering is an automatic reaction that can’t be satisfied. It’s why frustration behavior is often a result. There’s an impulse to just do something even if nothing will fix the situation.

      The idea of a spectrum of relative values makes sense to me. I don’t value insects as much as I value mammals and birds. That doesn’t mean I place no value on an insect, but it doesn’t take much for me to discard it. Although it’s worth noting that my reaction to finding a nest of insects in a shed may not be too different from discovering a rat infestation, even though I know the rats are much more sophisticated.


  6. Great how you discussed Jonathan Birch’s reasoning. I agree with your stance on his arguments. When Birch and his fellow campaigners argue in favour of considering insect welfare in research and teaching settings or whether gastropods should be brought within frameworks to protect animal welfare, it seems to me to be very over the top and unrealistic. Given the way insects treat their own kind, I find the caution about insect welfare quite surreal.

    Birch and his co-fighters rely primarily on observed behaviour (although its interpretation is far from obvious, contrary to what the headlines of the relevant papers suggest). Fish and aquatic invertebrates such as molluscs and decapod crustaceans, as well as their arthropod relatives the insects, have a variety of sensory organs that allow them to respond to various stimuli in their environment and to learn from them to maintain survival and reproduction under natural selection. With a large enough number of criteria, there is a very high probability that no organism will fail to meet the threshold for “some evidence of sentience”, simply because of its ability to sense its environment. However, their responses to potentially life-threatening stimuli don’t necessarily indicate awareness or prove sentience, since we cannot tell which behaviours should be interpreted as indicative of consciousness and which should not until we know what unconscious minds can do. If a certain behaviour is connected with a conscious state, it does not have to be caused by it. Perhaps the same behaviour could have been caused by an unconscious counterpart.

    There are many examples of behaviours that are also present in organisms commonly thought to be non-sentient (unless one adheres to panpsychism), such as plants and protozoa, spinal cords disconnected from brains, decerebrate mammals and birds, and humans in unaware states. These subjects can show approach/withdrawal; react with apparent emotion; change their reactivity with food deprivation or analgesia; discriminate between stimuli; display Pavlovian learning, including some forms of trace conditioning; and even learn simple instrumental responses. Hence, none of these responses provides a robust indicator of sentience.

    If, however, the precautionary principle is invoked on the assumption that an organism could realise pain in an as yet unknown way, then no amount of data on the ways in which pain is realised in other animals can provide a valid counter-argument against an explanation of an organism’s painful behaviour in terms of that organism feeling pain.


  7. What about

    R6. Sentience can be achieved with an allocortex/limbic system in vertebrates but also with other structures in non-vertebrates.

    To me the core factors are who, when, where, and what. Me at a time and place doing things.


    1. I suspect Birch would see your R6 as a variation of his R3. He says “cortex”, but I think he means the cerebrum overall. If not, he’s ignoring everything above the midbrain but below the neocortex (hippocampus, amygdala, basal ganglia, etc).

      But as I noted in the post, I prefer capabilities as criteria, which I think is what you’re getting at in your second paragraph.


      1. The way he has written R3, however, also suggests vertebrates only because only mammals and birds are mentioned.

        My R6 also ties strongly to capabilities because it links directly to memory and learning which may be directly tied to spatial and temporal mapping.


        1. Actually it’s my fault. I condensed his wording. Looking at his, I think he’d slot your view into R5, which he describes as:

          Sentience does not require the neocortex even in mammals and can be achieved in at least a minimal form by integrative subcortical mechanisms crucially involving the midbrain. Moreover, it can also be achieved by other brain mechanisms performing relevantly analogous functions (such as the central complex in insects).

          Birch, Jonathan. The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI (pp. 176-177). OUP Oxford. Kindle Edition.


          1. I’m understanding the midbrain to be what is described here.

            https://qbi.uq.edu.au/brain/brain-anatomy/midbrain

            So, I’m thinking more than midbrain but less than neocortex. Of course, all of this stuff sits on top of the brainstem and midbrain more or less. Additionally I would suppose that the hippocampus/amygdala and their analogs in non-vertebrates likely developed in concert with the need to integrate multiple senses which now are processed in the neocortex in mammals.


          2. Interestingly enough, there is actual sensory integration that happens in the human midbrain (in the colliculi), but most of the axons in the optic nerve, for instance, project to the LGN in the thalamus, connecting to neurons that go on to the occipital lobe in the back of the cortex. That includes all the color opponency axons. So integration happens in both the midbrain and cortex. But we don’t appear to have introspective access to the midbrain integration.


  8. I really do not like the idea of defining sentience based on anatomy. My own interest in this topic has more to do with the search for extraterrestrial life than anything else, so saying something like “sentience requires a mammalian neocortex” really doesn’t help me much. I guess R5 opens the door to sentient extraterrestrial life forms, but just barely.


    1. Good point. If we found an aquatic creature in the underground oceans of Europa or Enceladus, there may be some convergent evolution, but its internal structures would likely be far more alien than an octopus’, and octopus brain anatomy is radically different from ours. It’s either find the functional equivalents or depend on behavior (or both).

