Consciousness must be adaptive

The New York Declaration on Animal Consciousness has been making a lot of headlines.

The declaration itself has somewhat careful language in terms of what it’s asserting, but many of the headlines don’t. The declaration is short, so it’s easy to quote in full.

Which animals have the capacity for conscious experience? While much uncertainty remains, some points of wide agreement have emerged.

First, there is strong scientific support for attributions of conscious experience to other mammals and to birds.

Second, the empirical evidence indicates at least a realistic possibility of conscious experience in all vertebrates (including reptiles, amphibians, and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans, and insects).

Third, when there is a realistic possibility of conscious experience in an animal, it is irresponsible to ignore that possibility in decisions affecting that animal. We should consider welfare risks and use the evidence to inform our responses to these risks.

https://sites.google.com/nyu.edu/nydeclaration/declaration

The ever-growing list of signatories to the declaration includes people like Jonathan Birch, David Chalmers, Peter Godfrey-Smith, Simona Ginsburg, Eva Jablonka, Anil Seth, and others whose work I’ve highlighted over the years.

I think the declaration was released on the same day Daniel Dennett died, which is ironic, because I’m sure Dennett would have questioned the premise of the statement. It seems to take it as a sharp, precise fact of the matter whether certain creatures are conscious. The Quanta writeup on this makes clear the authors are focused on phenomenal consciousness, the “what it’s like” aspect of consciousness, essentially the Cartesian Theater or movie notion I discussed in the prior post.

To me, this highlights the problem with the concept. The idea is that a creature either has or doesn’t have this form of consciousness. Under this binary view, the consequences of getting it wrong are high, since it might cause us to mistreat animals we mistakenly classify as not conscious, or waste efforts on the welfare of creatures we mistakenly classify as conscious.

However, letting go of this notion frees us up to consider the problem from a different perspective, an incrementalist one. To an incrementalist, it’s clear that many of these creatures have some aspects of consciousness to limited degrees, while lacking others.

I’ve discussed the idea of thinking in terms of functional hierarchies of consciousness many times. A simple version might look like this:

  1. Automatic behavior (reflexes and fixed action patterns)
  2. Body and environmental models
  3. Causal models
  4. Introspection

Everything alive has 1, automatic behavior, but so do robotic systems.

2, body and environmental models, is implied by anything with distance senses (sight, hearing, smell), which would include any of the animals discussed in the declaration. It also implies bottom-up reflexive attention, since the system needs a mechanism for focusing its reactions. All of which dramatically expands the scope of what 1 is reacting to.

3, causal models, is where some degree of reason and scenario prediction starts to come into the picture. It can be thought of as increasing the scope of the reactions in time as well as space. It’s here that the reactions become subject to being overridden, based on what a system has learned. It’s also where I think top-down controlled attention starts to enter the picture.

4, introspection, is a system modeling aspects of its own processing in 1-3. At the simplest levels, it might provide added degrees of control. In humans, it enables symbolic thinking, communication, and the sharing of cognitive states in social contexts.

A hierarchy of this type is admittedly an oversimplification. We could instead take these items and use them as dimensions, and talk in terms of how much of each a particular system has. Still an oversimplification, but one that more clearly recognizes the complexity involved.
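
To make the dimensional framing concrete, here’s a minimal sketch in Python. It’s purely illustrative: the profile class and the numeric scores are placeholders of my own for the four capacities above, roughly tracking the assessments later in this post, not measurements from any study.

from dataclasses import dataclass

@dataclass
class CapacityProfile:
    """Graded capacities (0.0 = absent, 1.0 = human-typical) instead of a
    single yes/no consciousness flag. All scores are illustrative guesses."""
    automatic_behavior: float   # 1. reflexes and fixed action patterns
    world_models: float         # 2. body and environmental models
    causal_models: float        # 3. scenario prediction, learned overrides
    introspection: float        # 4. modeling of the system's own processing

# Rough profiles, echoing the discussion below (guesses, not data):
insect = CapacityProfile(1.0, 0.6, 0.1, 0.0)
fish = CapacityProfile(1.0, 0.7, 0.2, 0.0)
corvid = CapacityProfile(1.0, 0.9, 0.7, 0.2)
human = CapacityProfile(1.0, 1.0, 1.0, 1.0)

The point isn’t the particular numbers, but that welfare questions become a matter of which capacities are present and to what degree, rather than a search for a single bright line.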

So, many insects clearly have 1 and 2, but any evidence for 3 seems marginal and subject to interpretation. And there is none that I know of for 4. Among invertebrates, only cephalopods (octopuses and similar species) show a strong degree of 3. Many fish do seem to have a little of 3, but from what I’ve read it amounts to fragmented glimmers compared to what we see in mammals and birds.

When it comes to 4, there do seem to be limited degrees of metacognition in various mammals, such as dogs and rats, although the evidence again seems open to interpretation. It’s stronger in some monkeys and great apes, who seem able to assess the certitude of what they know in situations where treats of different desirability are on the line. All of which is very limited compared to humans.

A key question for many might be when an organism is capable of pain and suffering. A lot here depends on what we mean by “pain” and “suffering”. If we mean adverse reactions, then we have it with 1, but then we have to include plants and robots in our definition. If we mean a more sophisticated mental state, then I’m not sure it exists without some degree of 3, that is, without a system capable of utilizing the feeling.

But that ties into a broader question: how do we know these organisms don’t have these feelings, or even self reflection, but just aren’t showing it in the way more intelligent creatures do? Strictly speaking, we don’t.

However, this problem looks less pressing when we look at it from an evolutionary perspective. Building environmental, causal, or self models is expensive in terms of energy and development. For these capabilities to be naturally selected, they must provide some fitness benefit. Natural selection can’t act on internal mental states, only on capabilities that increase or decrease the organism’s ability to pass on its genes.

In that sense, if nothing in an insect’s behavioral repertoire shows it making use of a feeling of pain, such as learning from it or behaving flexibly in relation to it, and all we get is reflexive withdrawal and avoidance behavior, then there’s no reason for it to have evolved mechanisms we’d be tempted to label “pain”. I think this is why we can largely dismiss the idea of consciousness in plants; how would it increase their fitness?

But, as always, it pays not to be dogmatic with any of this. Science is continually turning up new discoveries. And many scientists seem eager to demonstrate conscious capabilities in animals. I’m totally onboard with them trying, as long as we’re careful about how we interpret the evidence.

In any case, I think an incremental view frees us from an either / or determination on whether to care for an animal’s welfare, to one more oriented around what considerations we should have for it. Under this view, we should care more about mammals and birds than fish, but that doesn’t mean we should completely disregard the welfare of those fish. And while I’ll always try to make it quick when killing insects in the house, I’m not going to be too concerned about their welfare beyond that.

But maybe I’m missing something? Are there reasons to ascribe higher levels of consciousness to many of these animals that I’m overlooking?

75 thoughts on “Consciousness must be adaptive”

  1. It’s good you brought up pain and suffering. We’re all just animated bags of chemicals. Maybe pain only exists because /we/ feel it and can anthropomorphically transfer it to other creatures. Or, in response analysis, determine that it exists as a sensation and reaction.
    But maybe it doesn’t really exist. Our realities are interpretations and projections manifested within a biological computer are they not?
    Or maybe there’s a variant of Pascal’s Wager when it comes to the way we treat animals: assume that they can suffer and avoid it when possible. Or, since every living thing is food for some other living thing, figure out a way to reduce or eliminate suffering — even the thought of suffering.
    WHAM! You’re dead. And you never saw it coming. Now, where is that BBQ sauce?

    1. I think pain definitely exists. (The last couple of years reminded me of it pretty well.) But like anything mental, it’s not fundamental, but a complex phenomenon that doesn’t fit our simple stories about it.

      I actually do hope I never see my death coming. I don’t expect to experience death, but dying might be an experience, one I’d just as soon avoid if possible. I once had a conversation with someone worried about the universe collapsing into a new vacuum state which spreads at the speed of light. We’d never see it coming and would be gone instantly. It actually sounded like one of the best ways to go.

          1. Yeah, but I’ll bet your avoidance of non-existence is based on other positive goals of doing certain things just for the sake of doing them. See my post below. The existence part is not a goal in itself (although it is your purpose, which is a whole ‘nother discussion).

      1. Excellent article, thank you. Regarding seeing death coming – maybe this is why eating animal flesh is increasingly becoming linked to cancers and other disease – because if you have seen the abattoir system, it kills the animals by stunning first – one after another. The one behind sees what is happening to the others and reacts strongly. The intense flow of negative energy (fear mostly) that is generated throughout the animal’s body before its fate is what exists as it is cut up. And then people eat it. And then wonder why they eventually become ill with a high “protein” diet. This would never be proven empirically of course but sometimes we can use our common sense. There are good proteins and bad proteins. That which is mass produced I believe is often bad.

        Even though I am very pleased more and more proof and research is showing all living beings have varying levels of consciousness, I think we already know this intuitively and with our self awareness. Levels of self awareness determine the level of consciousness – how can we possibly know the “self” awareness level of other beings? We can observe conscious and emotional states but, like fellow humans, never accurately determine the other being’s conscious state at any given time.

        I have a questionnaire I use for clients who want to work on EI and use the data to work out an approximate self awareness level. It looks like the chart below. The psychologist who developed the scale was strongly attached to validating his work as much as is possible.

        Low awareness profile:

        6
        5
        4     XX
        3     XX      XX
        2     XX      XX       XX       XX
        1     XX      XX       XX       XX       XX
        ________________________________________
               EN        PS       CE        DF       EM        IA
               (A)       (B)       (C)        (D)       (E)        (F)

        1. Thanks!

          Most of the stuff I’ve read about animal slaughter is that they are herded into an enclosed space and stunned into unconsciousness, and then bled. But they can’t see the dead ones in front of them, and the ones behind them can’t see the stunning. Although being in an unfamiliar place can lead to stress, which can toughen the meat, something the cattle industry tries to minimize.

          A meat-laden diet is definitely associated with higher cancer rates. But I think it’s just the effects of eating that kind of diet. (Although some of the stuff the industry feeds the cattle can be an issue too.)

          Thanks for the chart, but I’m afraid it’s Greek to me. I have done some reading on Emotional Intelligence (if that’s what you meant by EI). The hard part is getting into the habit of checking our emotional state, particularly when we’re upset.

  2. I’m confused. Why would consciousness determine how we treat or mistreat others? The species that we mistreat, kill, maim, poison, etc. the most is Homo sapiens, who I hear have declared themselves to be conscious mammals.

    As to eating the flesh of other species (meat, fish, shellfish, insects, etc.) well we have eaten the flesh of other people for millennia, no?

    1. If not consciousness, what would you see as an alternative?

      Myself, I think consciousness is too vague a concept to be of much use anyway. We have intuitions about certain creatures being fellow beings. It seems a bad idea to override those intuitions, since it’s a habit we could carry over into how we treat each other. But those intuitions favor creatures more like us, so I think some people would like to override them from the other direction.

      That said, I’m not convinced there’s any strict fact of the matter on consciousness or moral propositions, so I’m a hopeless philistine anyway.

        1. I definitely think the definition is a big issue, but not in the sense that there’s one true definition we need to discover, but that there’s no fact of the matter between a range of plausible definitions. In that sense, it’s like trying to define “love”, “religion”, “biological life”, and a host of other phenomena. Understanding these concepts means understanding their haziness.

  3. The avoidance of suffering is my number one priority. David Pearce, the Hedonistic Imperative, and so forth. I have become vegetarian (can’t avoid plants sadly) but may progress to veganism. A personal conviction of course, but … Pascal’s Wager et al.

    1. I have a lot of respect for anyone who can stick to a vegan diet. I’m actually hoping lab grown meat develops and makes it easy for most of us to avoid eating anything that might be sentient (reactionary politicians aside). And I’m onboard with minimizing unnecessary suffering, with a high evidentiary bar for “necessary”. 

  4. I’m working on a review of the sci-fi novel Blindsight by Peter Watts which basically asks the question ‘What is consciousness for?’ Watts’ conclusion is that it’s not clear that it has a function at all. Granted, he is not a neuroscientist (PhD in ecology) but he’s one of the hardest of ‘hard sci-fi’ authors and often backs up his stories with citations. When researching the book he kept asking the question, ‘Can you imagine a non-conscious system doing the same thing?’ and the answer kept being ‘yes.’ In a recent interview he said that the week his book came out about half a dozen papers came out with the same conclusion, that it doesn’t seem to have a purpose from a Darwinian perspective. He quoted one paper by David Rosenthal which said (Watts’ paraphrasing) that consciousness has no function, it’s a ‘side-effect’. You might be more familiar with their arguments than me so what do you think of their conclusion?

    1. I read Blindsight many years ago, so my memory is a bit hazy. I recall it being an interesting book and thought experiment. What if human consciousness is a fluke and not required for intelligence? And given the way a lot of philosophers discuss it, I can see why Watts explores the idea that it’s epiphenomenal. But it seems to largely result from seeing it as something separate from the functionality and capabilities of the brain, rather than a portion of those capabilities.

      For me, the key thing is we have a lot of evidence for the functionality, and none for the extra add-on. Of course, many people say that just means the add-on is non-physical, electromagnetic fields, or some form of exotic physics. My take is we should accept that introspective judgments are as fallible as any other form of perception. If we do hold out for a physical add-on of some type, the idea is that it would be a spandrel, a side effect of some other adaptive trait. My question would be, what is it a side effect of? And if it is a side effect, why wouldn’t we think it’s a typical one of that trait?

      But I’m a functionalist. I see consciousness as functionality, with no strict fact of the matter on exactly which functionality, but usually including the stuff I list in the post. I’m familiar with David Rosenthal as the originator of Higher Order Thought Theory, but not the paper where he discusses consciousness having no function. I have to admit I haven’t read him at length, but I have read a lot of the people influenced by him, like Joseph LeDoux. But they seem to thoroughly situate consciousness in an evolutionary framework.

      Still, it’s an interesting (and disturbing) story idea!

      1. You are right that the story is interesting/disturbing. I think if you took a poll of SF readers on the book that shook them the most, it would be this one. Lots of Book-tubers and reviewers say as much. I’m open to avenues of explanation other than functionality. If we frame the questions in evolutionary terms we’ll get an evolutionary answer. Other philosophical approaches ask different questions. The review of the book is rather long and very meaty! Should be ready in a few days.

  5. This is very much part of what I’ve been cogitating on lately, and have comments on most of the above comments, but I’ll put them all here. Lucky you.

    First, a reminder that I think the basis of consciousness is a specific kind of information processing requiring a two-step process: the creation of a sign vehicle (which bears mutual information) followed by the interpretation of that sign vehicle, where “interpretation” means linking some action to the sign vehicle for a purpose. For “sign vehicle” think of something like a neurotransmitter. What’s being transmitted? Mutual information. So this type of info-processing can go from the very simple to the very hierarchically organized.

    So the question I think you’re raising is what is the relation between consciousness and morality? Clearly some people suggest that consciousness is the source of moral patiency. Thus the moral patiency of animals may depend on their consciousness. I personally think moral patiency is related to consciousness, but is not the source. I think the source of moral patiency is goals. When choosing what action to take, we take into consideration our own goals as well as the goals of other “agents”, and how much impact the action will have on all those goals. A key point here is that the moral agent (the one taking the action) needs to apply separate values to each of the goals under consideration. So you will tend to value your own goals higher than the same goals in others. For us, we highly value the goals of avoiding pain, illness, etc., and we will give some of our goals (say, going to the fridge to get a beer) less value than some goals of other humans and animals. You probably won’t injure your friend or your cat just to get the beer, but you might not worry about stepping on ants on the way.

    So what does consciousness have to do with this? Consciousness (by my definition) implies at least some sort of goal (purpose). The information (sign vehicle) is necessarily interpreted for a purpose. So how does this affect how we treat animals if they’re all conscious? Not all goals have the same value. We tend (at least in modern Western culture) to value intelligence highly. I think many of us are ok with killing animals which don’t seem to display highly involved intelligence traits like play, curiosity, as long as it’s for a good purpose (food) and doesn’t involve excessive suffering. Our mental pictures of cows, pigs, and chickens tend to be them just standing around in a field, maybe eating, whereas we see dogs, cats, horses, etc., running around, playing with things, etc. This is also why showing videos of animals in confined, suffering conditions is effective.

    Finally, people often conflate pain and suffering, but I think those are significantly different. Pain is the signaling of damage, and while we have the goal of avoiding pain (and so, damage), we will sometimes accept pain to achieve a different, more highly valued goal (“no pain, no gain”). For me, suffering requires the following:

    1. To have more than one goal,
    2. to recognize a deviation from a goal,
    3. to take an action to move back to the goal state,
    4. which action is detrimental to one or more other goals, and
    5. the action fails to be effective in achieving this goal

    So, chronic pain is suffering because you take action, say, paying attention to the pain (or maybe releasing system-wide depression-inducing molecules like serotonin?), at the detriment of other goals, such as writing blog posts.

    So are we ready to consider how AI fits in?

    *

    1. As usual, I see a lot of resonance in our views. It seems like your goals fit with the automatic reactions and behavior at my first level, and your interpretation is provided by my levels 3 and 4. 

      The only reason I don’t talk in terms of goals is that it seems a bit ambiguous. Are we talking about the personal goals of the system, what it comprehends as its goals? Or are we talking about the ultimate evolutionary adaptation?

      For example, I get hungry and desire to eat. Often then my goal is to satisfy that hunger. I know eating is necessary for health, but there are many things I should do for health that I often neglect. But I don’t neglect eating, because the hunger is there to make sure I remember to do it. (It’s also damned inconvenient when I’m on a diet, that is when I know my health goal isn’t completely aligned with the satisfying-hunger one.) So when I go to eat lunch, my goal is to satisfy hunger. But the reason I’m hungry is because my body needs to maintain homeostasis, a goal I only vaguely comprehended before I was educated.

      In the case of suffering, I think what’s required is that the goal, the reaction, isn’t volitional. It’s automatic, resulting in reflexive physiological changes we can’t avoid, such as elevated heart rate, heightened alertness, and other stresses to the system. There’s a reason we have frustration, and often need to physically do something when in that state. Our system is primed for action, but typically it’s not an action we can indulge in, we don’t know what the right action is, or there is no productive action to take. So we’re stuck in that amped up state, our stomach in knots, until the parts of our brain that drive those reactions eventually habituate to the situation (the famed “acceptance” stage).

      On AI, the big thing for me is that those systems don’t have our evolutionary background, a background which typically is not going to be conducive to what we want them to do. So they do have goals, but the nature of how those goals are implemented seems very different from the physiological reactions we often have to contend with. That doesn’t mean we couldn’t build an AI that operated similarly to us, but I don’t know how much of a market there will be for them. We want the intelligence without the baggage, if we can get it.

      1. Lots to unpack here.

        First, you’re missing something if you think the “interpretation” I’m referring to only happens at higher levels. Yes, I’m using a non-standard definition of “interpretation”, but that’s just the closest word I know. The very simplest form of consciousness, what I call the psychule, necessarily includes the response to the sign vehicle. I’m calling that response the interpretation. The sign vehicle itself has some amount of mutual information with every thing in its causal chain. The response determines which part of that causal chain is being responded to, because the response is selected for some purpose (goal). I think what you’re referring to in your levels is just responses to pattern recognitions further up in a chain of causation.

        I also have a definite, non-standard definition of “goal”. A system has a goal if

        1. There is a state of the world (internal or external or both) such that
        2. the system can detect a deviation from that state, and
        3. the system responds to that detection by taking an action which is intended to (selected to) move the world toward the goal state

        So, you are a system made up of lots of subsystems. Some of these subsystems have goals (actually, I expect they all do). One of them is the system that regulates hunger. You also have a subsystem, your autobiographical self (the one that has a global workspace, attention schema, etc.) which can generate sub goals. Sometimes these sub goals are created in response to a signal (sign vehicle) from another system (“time to get food … there’s a tasty cookie right there”), and sometimes these sub-goals are in opposition (“I want to be healthy, … I’ll go find a carrot”). In both cases the response was arranged by selection, but the hunger response was selected by evolution, whereas the “eat healthy” response was selected by your autobiographical self system. I need to point out that the goal of the hunger response is to maintain satiation. The purpose of the response comes from the goal of the system selecting the response, so the goal of the system that selected the hunger response is homeostasis. The hunger response, with its goal, is just a mechanism which works toward that higher goal, along with the other mechanisms, like the pain response.

        I think I agree with your remark on suffering. The physiological “do something” response is number 3 in my list.

        *

        [gonna make my AI response separate]

        1. On interpretation, right. I was focusing on where we agree. But I do remember that you apply your framework at a much lower level. And I know your solution to the conscious / unconscious divide is that the unconscious processes are themselves conscious, just not the consciousness of us typing these comments. I don’t find that a productive use of “consciousness”.

          But I don’t doubt those processes exist in some fashion, and consciousness is in the eye of the beholder. So I can’t tell you your version is wrong, just that my interest is in our consciousness and the consciousness of systems similar to us. The “consciousness” of sub-personal mechanisms is interesting in terms of how they produce their effects and contribute to the whole, but not in the sense of fellow beings.

      2. On AI, the current AIs, which are the chatbots, do not have any significant goals other than to produce a response. That response mechanism was selected by the training parameters. But I should point out that my Roomba has goals, including homeostasis (it goes and finds the charger when low on juice).

        But I think we will definitely be making AGI’s that not only will have multiple goals, but those goals will include taking into account the goals of other entities. These will be moral AI’s, and there will definitely be a market. In warehouses there will be various kinds of robots with various agendas, but I’m also kinda hoping I’ll be able to afford a robot that will cook and clean, among possibly other things (medical diagnosis?) in the not too distant future.

        I have hope for these kind of robots, because that’s what the VersesAI folks are working on. See, eg., this: https://mdpi-res.com/d_attachment/systems/systems-12-00163/article_deploy/systems-12-00163.pdf?version=1714828142. (I think I tweeted this recently)

        *

        1. As I noted in the post, there’s no doubt that robots have the level 1 I listed, so I’m onboard to that extent. 

          On the rest, while I don’t doubt we’ll eventually have robots that can do the things you list, I think they’ll approach those tasks from a completely different orientation than we do. In short, human and animal minds are a tiny slice of the space of possible minds. AIs, if we call them “minds”, will come from places very different. 

          Although to the extent they are social participants, I don’t doubt there will be “moral” AIs. There will also be some that don’t work right and turn out “immoral” from our perspective. But the underlying impulses that bring them there will be very different from ours. 

          (Unless we go out of our way to make artificial life. If we then try to use that life as machines, we’ll likely get what we deserve. But I don’t see any particular reason for us to go down that path. Though it does make for good science fiction.)

      1. That’s cool, but I’m suggesting that conscious affect is necessarily indicative of goals. If you have affects, it’s because you have goals, both innate (avoid pain, hunger) and secondary (wealth, social status, etc.).

        *

  6. I take issue with your conflation of “what it’s like” with a Cartesian theatre. The Cartesian theatre is one account of “what it’s like,” and not a very good one. But would you say that because there’s no Cartesian theatre, there’s no “what it’s like”? This is what many thought Dennett was doing, and quite rightly they gave him grief for it, because obviously there is a “what it’s like.” We just have to find a better account.

    The idea of levels of consciousness seems reasonable at first, but there are some problems. Because some robotic systems have sensors, they would fall into Level 2. Others arguably have powers of prediction and response, and would fall into Level 3. Whether anything has introspection seems unknowable, in the way that you don’t really know that I have introspection. Also, everything alive has some kind of sensory apparatus, so 1 and 2 have to be collapsed into a single level. I don’t see why touch should be excluded, especially given its intimate connection with pain.

    There probably are different levels of consciousness, and even different kinds, but figuring out what they are is tough because we have so little information. When spiders run away from us, do they feel fear? When plants turn toward the sun, is it because the warmth or the light or the pure energy feels good? I agree that if a plant can’t respond to an injury there would be no point in feeling pain, but when a stem is broken off and regenerates, does the plant feel something and respond?

    Who the hell knows? When something happens to us and we do something about it, we quite comfortably assume a “what it’s like” is involved. I think most of us naturally extend the courtesy to other living things. Trying to prove that it’s wrong to do this is a curious activity, perhaps more psychology than science.

    1. I’d say denying the Cartesian Theater does deny at least a common account of “what it’s like”, plausibly the most common. But the problem with that phrase is its ambiguity. It’s often stated as though it’s a precise term, but it’s not clear what someone means when they say it. If the version being denied is a poor one, what’s a better one?

      In my hierarchy, you could collapse the first two if your criteria is solely about whether the responses and actions are automatic. But the second level acknowledges the strong intuition we have when we see an animal with eyes, that there’s more going on than in a worm. And there definitely is more going on. The processing is far more sophisticated, enabling a wider repertoire of behavior, although not as sophisticated and wide as an organism with temporal models.

      There’s no doubt that studying animal cognition is hard. I take my hat off to the ingenuity many scientists bring to the endeavor. But engaging in it means questioning our intuitions, and being open to powerfully counter-intuitive answers.

      As I noted in the post, neural processing is biologically expensive. For some of it to be dedicated to feeling, it likely is because there’s some adaptive benefit, and so some observable capabilities associated with it. We have to be careful not to project our own experience on different systems.

      “Trying to prove that it’s wrong to do this is a curious activity, perhaps more psychology than science.”

      Just a reminder that you’ve taken me to task for similar statements. Although in my case, I’m used to being considered a hopeless philistine. :-)

      1. I don’t think anybody is using “what it’s like” as a precise term. For that matter, I don’t think anyone’s using “consciousness” as a precise term. And the idea of “more going on” is difficult to quantify. There is more sheer stuff in a 70-year old human than a three-year old, but that doesn’t mean there’s more going on. There’s pretty much the same amount of stuff in a sleeping cat and a cat stalking a bird, but arguably more going on in the stalking cat (but maybe not!). “Going on” is another imprecise term. This is just the state of the art. “What it’s like” will have to do; it’s a vague gesture, but if someone denies that they know what it means, I’ll assume they’re just being argumentative, unless of course they actually are an automaton or a zombie.

        On the question of “psychologizing,” I admit to mixed feelings. When it’s used to suggest that someone is basically too infantile and cowardly to think for themselves, I’ll object on the grounds that this is not an argument. That may have been the context of our earlier discussion. When it’s used to suggest that everyone has an agenda, as Nietzsche uses it, then I’ll accept it as fair game, in the event that someone thinks they’re above having an agenda. It’s an interesting topic; maybe there’s a post in it.

        1. Well, I for a fact don’t know what “what it’s like” means. I thought I did when I first heard it. But scrutiny revealed I didn’t, and I suspect my initial mistake is common. I often ask people to elaborate what they mean by it. The “you’re just being difficult” response is all too common. 

          When people do actually answer, they seem to give a variety of answers, often matching different conceptions of consciousness, which we’re agreed is itself a vague concept. Of course, the people who respond most clearly are the skeptics, but their clarity only seems to invite accusations they’ve got the wrong idea, almost always with no attempt to describe the right one.

          1. I want to tell you you’re just being difficult, but in your case I’ll extend the benefit of the doubt. In the past you’ve discussed colour perception, and I don’t know how you could say anything useful without knowing what it’s like. You may be overthinking this, and in need of a dose of Wittgenstein.

          2. Grateful for the goodwill!

            I actually think the widespread idea that conscious perceptions are simple is a serious barrier to understanding them. One of the things I drill into in the color post is what color is from a functional perspective. This is complex stuff, often touted as the most complex thing in nature. Not appreciating the complexity, I think, is why consciousness is thought to be such an intractable problem. But any problem is intractable if we start by ruling out possible solutions.

          3. Alfred North Whitehead says in Modes of Thought that “the emphasis upon the higher sense percepta, such as sights and sounds, has damaged the philosophic development of the preceding two centuries” (p. 74), and in Process and Reality that there is “a fundamental misconception to be found in Locke. . . Locke assumes that the utmost primitiveness is to be found in sense-perception. . . The more primitive types of experience are concerned with sense-reception, and not with sense-perception. This statement will require some prolonged explanation. But the course of thought can be indicated by adopting Bergson’s admirable phraseology, sense-reception is ‘unspatialized,’ and sense-perception is ‘spatialized.'”

            I’ll spare you the prolonged explanation, which does not go in your direction at all (it heads off into process panpsychism, obviously). But granted that sense-perception isn’t basic, it’s still there to be reckoned with. To talk about colour without even knowing what colour is like would be to indulge the absurd. Why would such a conversation even come up?

          4. I go a different way than Whitehead, I think, because I see the most primitive elements from the subjective perspective, whatever we take them to be, to themselves be vast complex processes. They’re not complex from that internal perspective, because it’s never been evolutionarily adaptive for us to understand how the sausage is made (quite the opposite). It’s similar to how software has no need to deal with the inner workings of a single bit, but it doesn’t mean the bit is fundamental when we look at it at the hardware level, typically a transistor.

            The sensation of color seems to be a complex calculation of distinctiveness and salience, involving, among other things, the wavelength of light hitting our photoreceptors, the retinal state from other recent stimuli, the surrounding context of lighting and shadows, the focus of attention, a constellation of triggered associations both innate and learned over a lifetime, and introspective feedback mechanisms.

            The questions that occur to me: how much of that is essential to understanding what color is? How meaningful is it to talk about it in another animal with only a subset of those processes or different processes and affordances? Which portion of it would a robot need to have to know what color is?

          5. This line of thinking suggests a “what it’s like” that a robot might not have. To that extent, I think you do know what it means.

            How it works is a different question. For what it’s worth, Whitehead also regarded the so-called primitive elements of our subjective experience as the result of vast complex processes. He just drew a stronger connection between their responsiveness to the environment and our own.

          6. My own takeaway is that there’s no one true answer. There are only the capabilities we have, and the capabilities of other systems which may be more or less similar to ours. So instead of like something or what it’s like, I think in terms of how much like us it is, which is going to be more of a matter of degree than any bright line.

            It seems like how it works is the set of available solutions, and setting them aside is what makes the issue look like an intractable mystery. It’s like asking how we can have a delicious meal in front of us, but refusing to consider things like ingredients, kitchen appliances, and cooking instructions.

            Granted, if someone just wants to ponder the human condition, then the dirty details may not be germane.

          7. Before we can consider “how much like us it is,” we need to understand what “it” means. Whatever that may be, to say that it’s “like us” is strangely vague. I think what you’re trying to say, or maybe trying not to say, is “how much like our experience ‘what it’s like’ is [for other systems].”

            Humans experience degrees of “it” in deliriums, in dreams, in certain types of anesthesia. We can talk about degrees of “what it’s like” without rendering the expression meaningless.

            I’m not sure you’ve grasped that Whitehead is also interested in how things work. You seem to think he’s merely pondering the human condition and wants to avoid the “dirty details” (did you mean the unwashed vegetables?).

          8. When I use “like us”, I’m thinking in terms of any dynamic system. So fellow humans are more like us than chimpanzees, which are more like us than dogs and cats, who are more like us than frogs and fish, who are more like us than worms. A Roomba is like us in the very limited fashion of goal directed movement, but less like us than any of the examples above. But it’s more like us than a hurricane, or a rock.

            I think the difference between that phrase and “what it’s like” or “something it is like” is the referents are more clear. Nagel’s phrase seems to imply a standard that draws some kind of bright line in nature, but doesn’t identify that standard. I think it’s a tag that most people think they recognize in consensus, until forced to elaborate and the answers are compared. Earlier you said you thought I might be overthinking this. But from my perspective, most people don’t focus enough thought on it.

            I wasn’t thinking of Whitehead in particular with my remark. Just that I don’t think taking a reductionist stance dismisses or trivializes the human condition. I haven’t read Whitehead enough to comment on him in particular.

          9. Thanks, I misunderstood your phrasing. I thought it had some bearing on “what it’s like.” Of course we’re more like chimpanzees than dogs or cats, but we can say that without any reference to “what it’s like.” The real difference between the phrasings is that yours invites us to look elsewhere, where “the referents are more clear.” But turning our attention away from something does not necessarily make it disappear (unless you’re an idealist). To work on easier problems may just be to refuse to look at hard problems.

            If we assume that “what it’s like” is some sort of referent, this leads us to thoughts of ghosts or qualia, and of a “bright line” that separates such mysterious referents from ones that are more clear and distinct. The frame is Cartesian: everything is a substance, and “what it’s like” must be some kind of substance. This brings such difficulties that we’re tempted to talk as if there is no “what it’s like.” That’s a pragmatic choice, and one you seem to have made.

            Yet when we talk about something being delicious, we’re talking about what it’s like. Its deliciousness surely has to do with ingredients, kitchen appliances, and cooking instructions, but if someone asks “How does the soup taste?” it’s no good pointing at them. Nor is there some ethereal “deliciousness” we need to point at. How can we talk about deliciousness, if there’s no clear referent? It’s a hard problem. Silence is one option, but if we stick our heads in the sand, we shouldn’t be satisfied that the problem has gone away. I think this is Nagel’s point in reminding us that there is something it’s like to be a bat.

          10. I wouldn’t say I’m inviting us to look elsewhere, but to look at the question from a different perspective (actually a range of perspectives). It’s when we insist this can only be solved through the subjective perspective, with all its limitations and blind spots, that I think consciousness remains an intractable problem.

            I do deny the existence of the version of “what it’s like” that’s taken to refer to the Cartesian framework you discuss. However, for the other versions people say they mean with those phrases, I take Pete Mandik’s advice and remain quietist about them. ( https://philarchive.org/rec/MANMAQ ) I try to avoid those phrases in my own descriptions, because it’s never clear whether they refer to the Cartesian ideas, something functional, or something else.

            Talking about sensory experiences, like something being delicious, can be difficult. That difficulty arises because, as I noted above, there’s an enormous amount of information processing going on in that experience, much of which we don’t have access to. It shouldn’t surprise us it’s difficult to put into words. But when we do talk about it, we typically compare it to other things we’ve tasted (referents), or our reaction to it (it’s good, will have again, or it’s gross, never again). The key is remembering why taste evolved and its role in the overall causal framework. I think Nagel’s insistence on considering it outside of that framework is one of the places he goes wrong.

          11. I’m pleased to see the mention of perspectives; as you know, I’ve been blogging lately about “perspectival realism.” I agree it would be a mistake to rely solely on the subjective perspective, but it’s also a mistake to minimize it or try to do without it.

            I think you’ve said before that you lean towards “quietism.”

            To say that the taste of a soup is difficult to put into words because the information processing is so complicated, is not just to understate the situation. It is to be misled by a model— as if by fathoming the complexities we could theoretically describe the taste to everyone’s satisfaction. The way to convey the taste of soup is with a spoonful of it, and then it’s quite simple. We forget this perspective at our peril. If it lies outside the framework, then the framework has missed something. I believe that’s what Nagel is saying.

          12. I thought the perspective point might resonate with you. And I definitely don’t advocate dismissing the subjective perspective. I just don’t think we should privilege it, particularly when it’s our only source of information, or it outright conflicts with scientific data.

            Your point about how we characterize soup tasting, I think, gets to the heart of the difference between the fundamental and reductionist camps. One assumes that describing taste, or any sensation, is an ontological limitation in principle. That stance implies there’s something fundamental about experiential properties. It seems to make property dualism or panpsychism hard to avoid. The other assumes it’s more of a practical limitation, akin to trying to track all the cream molecules just mixed into a cup of coffee.

            I could argue for my side, but I suspect we won’t settle it here. 🙂

          13. It sounds like you’re saying that if we can just investigate someone’s brain thoroughly enough, we can actually begin to experience for ourselves what they’re experiencing. You can’t mean using a microscope, or any other usual observing instrument. You might be thinking of a helmet that does things to the receiver’s brain so that the receiver experiences the sender’s mind. Then the receiver is in a sort of theatre, watching the sender’s brain activity as an experience. But if this is possible, why did we ever give up on the Cartesian model? We can make it work!

          14. No, we can’t have the experience just by studying it, in the same way that my laptop cannot be in the same informational state as my iPhone, even if it has that entire state within it in a virtual simulator. But we can, in principle, have enough information that the experience would carry no surprises. I should note that “in principle” may mean a Laplace’s demon level of understanding. In practice, it will always be easier to just have the experience if we can. If we can’t, then we have to be satisfied with an approximate knowledge (to varying degrees) of the experience. So we can never know precisely the experience of a bat, but we can get ever closer approximations.

          15. Looking at « what it is like » (WiL) for robots somehow brings the subject from AI to ALife.
            Humans and animals have a WiL performance. Today robots are made of matter which came before life and humans in evolution (energy => matter => life => humans).
            Life being much simpler than the human mind, it may be logical to investigate ALife possibilities first, and then look at how such an understanding could “apply” to humans. Relating silicon to a biological cell as interdependent entities could provide an entry point to ALife.

          16. I think most people assume that something must be alive before it can be conscious. This implies that if a robot is conscious, it must be alive. But some people have an intuition that a robot could be conscious without being alive. This implies that other things could be conscious without being alive. It’s an interesting point.

    2. “I take issue with your conflation of “what it’s like” with a Cartesian theatre.”

      Agree with that. The more I hear the expression, the less it means to me.

      If it has any meaning whatsoever, it seems to apply to the view of some group of philosophers who want to take their conscious experience and magnify it into another class of reality from the physical. Maybe that was the crux of what Dennett was trying to get at.

      1. I would say the same of “consciousness.” The more I hear it in certain circles, the less it seems to mean. This is especially true of certain philosophers who would prefer not to bother with it at all, unless they can fit it into their preconceptions about what the world is like.

  7. Your hierarchy at one point made sense to me but now I’m not so sure.

    To have a model as in 2, you need space and time awareness. Time awareness requires memory. Once you have memory you have, at least potentially, a causal model and learning potential. So, I see 2 and 3 so closely tied in that you can’t really have 2 without 3. One implies the other.

    This is largely the basis of my explanation for why spacetime mapping and episodic memory ended up in the same anatomical vicinity and became key to enabling unlimited associative learning.

    As for 4, isn’t that really just memory too? Consciousness may not be an illusion but introspection certainly could be.

    The “pain” or “feelings” argument is a red herring. There is no reason feeling pain, remorse, love, or hate is an absolute requirement for consciousness. The main requirement for consciousness is your 2+3. It is spacetime mapping, memory, and learning. A bee may feel no pain but it must have a model of its physical and social world and the individual organism’s location in it. Bees typically forage within a few kilometers of their hive but can range up to seven or eight kilometers if nearby resources are unavailable. Without memory no organism could find its way in the environment and be able to return to any given spot.

    1. You can get overly reductive and deflationary and simply describe all the layers above the first one as predictions, with the layers only increasing the scope of the predictions. But then my phone’s ability to recognize (predict a match of) my face counts as being conscious.

      I’ll grant there is a time component for environmental maps, particularly navigational ones. But I used the label “causal models” for a reason. There’s a difference between semantic memories, which even in navigation scenarios can be relatively simple associations (turn left at this landmark), and episodic memories and imagination. 

      I’m not up on the latest findings with bees, but if we look at flies and ants, they seem to be heavily dependent on environmental heuristics, such as the pheromone trails ants follow.

      In general, we have to be careful about assuming animals navigate the same way we do. 

      1. “But then my phone’s ability to recognize (predict a match of) my face counts as being conscious.”

        Not really. I didn’t say anything about prediction. Prediction, even in biological organisms may be mostly or entirely unconscious. You need to be careful distinguishing between what the brain does and what consciousness does. Consciousness may be nothing more than a feedback mechanism to a primarily unconscious brain – in other words, something like introspection in minimalist form – a sort of merging of memory and updated information mapped to a spacetime grid implemented in neurons.

        1. Right, the key is always what distinguishes conscious processing from unconscious processing. If introspection in minimalist form is present, it should make an observable difference in capabilities. If you say it’s just memory and a navigational grid, then why don’t the same criteria apply to a Roomba or self-driving car?

          1. Are you saying only neurons can provide consciousness? What about them gives them, and only them, that capability? Is it only human neurons, vertebrate neurons, or do any biological neurons count? What about neuromorphic neurons? Or software neurons (which the Roomba probably has)?

          2. “Consciousness must be adaptive”

            Are Roombas subject to Darwinian evolution? Do their circuit boards evolve through natural selection?

            You are commenting on a declaration involving living organisms, so why bring in Roombas? Compare an organism that searches out its own fuel, can reproduce, and possibly produces an internal experience (that is the topic, isn’t it?) with a circuit board with motor and wheels. Not even apples and oranges. It’s more like apples and rocks.

            “Do any biological neurons count?”

            I’ve been fairly specific that neurons in the limbic system that track place and time, that also seemed to be tied to the development of episodic memory, are the minimal foundation.

          3. In evolved systems, generally any functionality has to be adaptive. That doesn’t mean the same functionality can’t exist in designed systems. Consider that movement in animals is adaptive. By the logic you’re implying here, robots shouldn’t be able to move. Obviously the adaptive requirement doesn’t apply to designed systems.

            I think I’ve shared this paper on the evolution of episodic memory with you before. Has robust evidence been discovered since then for episodic memory outside of mammals and birds?

            https://www.pnas.org/doi/full/10.1073/pnas.1301199110

          4. Nothing I said implies robots can’t move. You titled your post “Consciousness must be adaptive” but now have exempted robots. My argument isn’t that robots can’t move. It is that they are not conscious when they do.

            You can define “episodic” narrowly enough that you probably can only find it in fairly advanced vertebrates. It’s a bad term. The key is whether future behavior is guided by past experience. Being able to use past experience requires memory.

            Cephalopods include the octopus and cuttlefish. Just a little searching will find several papers that address it.

            Episodic-like memory is preserved with age in cuttlefish
            Alexandra K. Schnell, Nicola S. Clayton, Roger T. Hanlon and Christelle Jozet-Alves
            Published: 18 August 2021. https://doi.org/10.1098/rspb.2021.1052

            Insects might raise the most doubt since the number of neurons is so relatively small. However, this paper makes some strong arguments for consciousness in insects based on the same criteria I am using.

            What insects can tell us about the origins of consciousness
            Andrew B. Barron and Colin Klein

            https://www.pnas.org/doi/full/10.1073/pnas.1520084113

            quotes from paper follow

            Here we propose that at least one invertebrate clade, the insects, has a capacity for the most basic aspect of consciousness: subjective experience. In vertebrates the capacity for subjective experience is supported by integrated structures in the midbrain that create a neural simulation of the state of the mobile animal in space. This integrated and egocentric representation of the world from the animal’s perspective is sufficient for subjective experience. Structures in the insect brain perform analogous functions. Therefore, we argue the insect brain also supports a capacity for subjective experience. In both vertebrates and insects this form of behavioral control system evolved as an efficient solution to basic problems of sensory reafference and true navigation.

            Some of the central-place foraging ants and bees have remarkable navigational skills and spatial memory and are clearly able to organize their behavior with respect to more than simply their immediate sensory environment. They will perform targeted searches in appropriate locations and at appropriate times (97) for resources they have experienced previously. Several insect species have been shown to be able to plot novel routes based on learned landmarks and goals, evidencing a spatial relation of landmark information (98, 99). The honey bee dance communication system requires a dance follower to determine a flight vector relative to celestial cues from symbolic and stereotyped dance movements (100). All these behaviors require a form of neural modeling of space.

          5. I don’t think there is any problem with the concept of episodic memory. It’s a philosophical decision whether we regard it as necessary or not for consciousness.

            Cephalopods are definitely an exception. Based on what I’ve read, it does seem like they have episodic memory in a capacity similar to many mammals. But my question was around vertebrates.

            In terms of insects and many other invertebrates, when I follow the citation trail to the actual evidence for higher forms of cognition, I find it meager and very open to interpretation. But I’ll admit I haven’t followed every citation in every paper, particularly the newer ones.

          6. Have you tried Google Scholar?

            https://scholar.google.com/scholar?hl=en&as_sdt=0%2C11&q=insect+consciousness&btnG=

            Probably a number of those papers doubt insect consciousness, but you will also find a wide range of papers from different fields and perspectives that accept some form of it.

            Of course, we understand the anthropomorphism problem: observing behavior and inferring that something like human consciousness accounts for it.

            But there is also a reverse anthropomorphism problem. Consciousness might have a similar physical implementation across many diverse species yet still be radically different from human consciousness. Insects could be conscious because they have a mapping mechanism similar to that of more complex organisms, while completely lacking any feeling of pain or suffering. We don’t have a problem thinking a blind human is conscious, but imagining that an insect with no sense of pain is conscious becomes a stretch for many.

          7. I have used Google Scholar, but a lot of what I’m referring to was using straight Google to follow the citations from Feinberg and Mallatt in The Ancient Origins of Consciousness, and Ginsburg and Jablonka in The Evolution of the Sensitive Soul. All of these authors are very sympathetic to early consciousness. At first I took their citations at face value, but after digging up a few I started getting more skeptical.

            Of course, Feinberg and Mallatt define consciousness in such a way that their thesis remains fairly solid. For them, exteroception is sufficient to call it consciousness. But Ginsburg and Jablonka insist on affects being present, and they used Feinberg and Mallatt’s review of the literature for their analysis. It was the citations I checked for these claims that seemed underdetermined relative to the conclusions those books drew from them.

            But most of the papers I’ve seen that assert consciousness in insects define it in such a manner that it’s true. Most are clear that they’re not talking about human-level consciousness. That’s why I talk in terms of hierarchies or dimensions.

            Both anthropomorphism and anthropocentrism are problems anyone studying this has to be aware of. Myself, I think talk of consciousness is inescapably anthropocentric. We’re better off just studying the capabilities of each species and letting the results speak for themselves, rather than trying to force them into a vague, protean, pre-scientific category. But any paper with the word “consciousness” in it is going to get a lot more attention, which is an incentive to use that word as much as possible.

            None of that means we should mistreat animals under our control, even when their capacity to suffer is slight or absent. Overriding intuitive sympathies can be a habit that bleeds over into how we treat each other.

  8. What do you think the motivation is behind this declaration? Be kind to animals because scientists say they may be conscious?

    “if all we get is reflexive withdrawal and avoidance behavior, then there’s no reason for it to have evolved mechanisms we’d be tempted to label “pain”.”

    But I would think pain is the reason it avoids harm. How could a creature know what to avoid without pain? Or am I being too commonsensical?

    1. Yeah, as usual it’s complicated. There’s a difference between nociception, which is a part of the body signaling a noxious stimulus to the spinal cord and brain, and the evaluation done in the brain we experience as pain. All that’s needed for a withdrawal reflex and simple avoidance behavior is nociception. 

      We don’t need to experience our hand burning when we touch a hot stove in order to withdraw it. Typically, by the time we’re conscious of it, we’ve already yanked our hand back. We experience it to learn not to put our hand there next time, and to spur thinking about how to deal with the damage.

      If you’re interested in the details, the National Library of Medicine has a guide for recognizing pain in research animals (warning: some of the material might be disturbing): https://www.ncbi.nlm.nih.gov/books/NBK32655/

      1. Ah, I see what you mean now. It’s funny about yanking our hands back before actually feeling the pain. There is something we feel, but it’s hard to say what. I have wondered about that with burns and cuts in particular (while running water over my hand with the expectation that it will soon begin to hurt). Sometimes I’ve even psyched myself out and pulled away from something that couldn’t hurt me, thinking it was something else.

        1. It’s kind of interesting studying how nociception and pain can be dissociated. Philosopher Richard Brown (I think) once recounted how he was walking with his sister on the beach when they were kids. He noticed she was limping and asked her about it. She looked down at her leg, only then noticing a large gash, and immediately began crying out from pain she hadn’t noticed until that moment.

          In my own case, when I was very young, the tops of my feet started itching, so I started scratching. I kept scratching even when the skin was broken and my feet were a bloody mess. It never felt painful while I was scratching; all I felt was relief. But it hurt like hell once they figured out the allergy I had and I had to wait for my feet to heal.

          Interestingly, preterm babies often can’t be given anesthesia; it’s too risky for their immature systems. But it was found that they tolerate painful treatments much better, with heart rate and breathing staying much calmer, if the procedure is done while they’re lying on their mother’s chest.

          And of course there’s phantom pain, pain without nociception. My cousin was once addicted to opiates, which he had started taking for back pain that seemed to continue long after his back should have healed. It turned out the opiate addiction itself was triggering the feeling of pain. Once he was through withdrawal, the pain disappeared.

          Pain is a complicated thing.

          1. It is indeed complicated. I feel like I saw something a while back about using video games to help people suffering from chronic pain. I guess the idea was just to stop thinking about it. Strange stuff. You wouldn’t think it would work, but apparently it did.

  9. Very helpful comments!

    The Declaration’s background web site (presented with many engagingly beautiful and touching pictures) explains that it is about “phenomenal consciousness” or “sentience”, namely the question of which animals can have subjective experiences such as sensations, emotions and feelings of pleasure or pain. The key issue underlying the Declaration is the recognition of the potential for animals to be conscious, and the call for their welfare and ethical treatment to be considered in the light of these findings.

    Ten studies are cited that report certain cognitive or behavioural capacities observed in various non-mammalian species, such as learning, memory, planning, problem solving, and self-awareness, which, it is argued, strongly suggest that a wide range of animals, including vertebrates and many invertebrates, may have subjective experiences.

    The authors argue that it is quite adequate to interpret these cognitive or behavioural responses as evidence of consciousness in cases where the same behaviour, if found in a human or other mammal, would be well explained by conscious processing.

    This seems questionable to me, because “deciding whether a non-verbal behavior reflects conscious vs. unconscious cognitive processes requires not only that the behavior be explainable in terms of conscious processes, but also that non-conscious explanations are inadequate”, as Joseph LeDoux and Richard Brown put it.

    There is a very instructive study on this issue (Mason GJ and Lavery JM (2022) What Is It Like to Be a Bass? Red Herrings, Fish Pain and the Study of Animal Sentience; Front. Vet. Sci.).

    Mason and Lavery stress that while many researchers use observed cognitive and behavioural responses in animals to infer sentience, not all responses necessarily indicate the presence of (phenomenal) consciousness (P-consciousness).

    They identify types of measures that should not be used to infer sentience because they can be performed by organisms or preparations that can reasonably be assumed not to be sentient: organisms lacking a nervous system (such as plants and protozoa), spinal cords disconnected from the brain, decerebrate mammals and birds, and humans in unconscious states (for instance when anesthetized or asleep) or responding to subliminal cues they report as undetectable, all of which occur without P-consciousness.

    As Mason and Lavery show, these individuals exhibit unlearned avoidance or approach responses, such as various unconditioned behavioral responses to noxious stimuli, and such responses can be modulated, including by analgesics, suggesting that the modulation of avoidance behavior by affectively relevant manipulations does not require sentience. Likewise, “blindsighted” people who cannot see because of damage to the visual cortex are still able to avoid walking or reaching into obstacles, and to visually track or grasp stimuli that they report they cannot see.

    Discrimination between presented stimuli can also occur without awareness. For example, humans exposed to cues they have no awareness of can still discriminate between them if asked to choose: they report feeling as if they are completely guessing, yet they respond correctly significantly above chance. Thus discriminating between available stimuli does not require P-consciousness. Furthermore, discrimination during some learned tasks can also occur without awareness, as illustrated by many examples; hence simple Pavlovian conditioning, in which subjects associate a predictive cue with a reinforcer, generally does not require P-consciousness. The same holds for instrumental learning, in which innate responses to unconditioned stimuli are modified in form and timing, and which can even work through the spinal cord alone: spinally transected rodents can learn to retract their hind legs for a period of time to avoid shocks to the foot.
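
    To make that point concrete, here is a minimal sketch of my own (in Python; the parameters and trial values are arbitrary illustrative assumptions, not anything from Mason and Lavery) of the Rescorla-Wagner delta rule, the textbook model of simple Pavlovian conditioning. The cue-reinforcer association is nothing more than an error-driven weight update, and nothing in the mechanism requires awareness.

    # Minimal sketch of the Rescorla-Wagner delta rule for Pavlovian
    # conditioning. Learning rates and trial values are illustrative only.
    def rescorla_wagner(trials, alpha=0.3, beta=1.0):
        # Track associative strength V across (cue_present, reward) trials.
        v = 0.0
        history = []
        for cue_present, reward in trials:
            if cue_present:
                # Error-driven update: V moves toward the obtained reward.
                v += alpha * beta * (reward - v)
            history.append(round(v, 3))
        return history

    # A cue consistently paired with reward: the association simply accumulates.
    print(rescorla_wagner([(True, 1.0)] * 10))

    The update loop models the cue only as predictive, not as felt, which is precisely why conditioning of this kind is a poor marker of sentience.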

    It would certainly be interesting to review the extent to which the studies listed on the background web site of the Declaration rely on cognitive or behavioural responses that can be found in the absence of P-consciousness.

    Finally, the document states that for other vertebrates (reptiles, amphibians, and fish) and many invertebrates (cephalopod molluscs such as octopus and cuttlefish, decapod crustaceans such as hermit crabs and crayfish, and insects such as bees and fruit flies), the possibility of conscious experience is high enough to warrant serious consideration of their welfare. With regard to such aspirations, James Rose has aptly observed (in relation to fish, but certainly also applicable to the other species mentioned): “Considerations of the welfare of an individual fish seem very hypothetical and academic when the normal existence of fishes is considered. The lifestyle of billions of individual fishes is one of predation, involving perpetual eating of fishes by other fishes and by many other vertebrate or invertebrate predators. In addition to this, disease is commonplace, as are threats from adverse changes in habitat. Most fishes do not survive to reproductive maturity, but the reproductive adaptations of fishes compensate for this. Individual survival is obviously not as relevant as the transmission of a species’ genotype.”

    1. Thanks for that overview! I missed the background site. Although to your point, I’ve learned not to take those kinds of claims at face value. Following the citation trail and reading the actual study usually reveals that the evidence is far more limited and open to interpretation than the press releases imply.

      But as I noted in the post, I think the whole p-consciousness standard is meaningless anyway. It’s better to focus on observable capabilities. And to Rose’s point, it’s not like their life in nature is sweetness and light. 

      But I do think we should strive not to do anything that leads to unnecessary suffering in animals under our control. Even in cases where their capacity to suffer is absent or very limited, if we get in the habit of being callous toward systems we intuitively feel empathy for, those habits can bleed into how we treat each other.

    2. I originally saw this in Quanta Magazine and had the thought that it should be possible to go beyond simple behavioral observation to find similar neural structures in conscious organisms. I think good candidates are the structures found in the hippocampus of vertebrates; similar structures exist in arthropods and cephalopods. Essentially, they are grid-like neural patterns that account for place and spatial location. Time-coding neurons have also been found. These structures seem to map more than simple coordinate-like locations; they also encode social relations and memories, including what may be mental rehearsals of future actions. They may be related to the capacities for learning and memory.

  10. Where did affect go in your four-level classification system? It’s needed for level 3 to have any biological payoff, because there’s no point in all that modeling if you’re not going to evaluate options. But still, it was nice having an explicit mention of affect in earlier posts.

    Separately, I see no reason to think that an incremental view of consciousness makes it any less important. More complicated, sure, but that’s different.

      1. My problem with keeping 2 and 3 separate is that I think it would be hard to have 2 without some awareness or sense of time (at least an ordering of events, which may be what time is anyway). Once time is in the picture, there is a potential for memory. You could argue that time is another way of thinking about memory. I go back to Barbour’s The End of Time, where he talks about time capsules: the only way we know time has passed is by consulting a record that reflects a past. In an organism, those records are memory. Once there is time and memory, you can have a causal model and learning. And, again, it would seem logically difficult to have a body and environment model without an ordering of events (memory) and learning, because there would be no way to create the associations required for a model.

        My argument would simply be that having a body and environment model requires learning and memory. Aspects of the model could be hard-coded, but if the entire model were hard-coded it would be impossible to distinguish from a reflex. There would be no learning or adaptation to circumstances.
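
        As a rough illustration of that distinction (a toy sketch of my own, in Python; the stimulus names and numbers are just assumptions for the example, not from anything discussed above), a fully hard-coded stimulus-response mapping behaves as nothing more than a reflex table, while even a minimal learned mapping has to keep a memory of past outcomes:

        # Toy contrast: a hard-coded reflex versus a minimal learned mapping.
        # Stimulus names and values are illustrative assumptions only.
        REFLEX_TABLE = {"noxious": "withdraw", "food": "approach"}
        def reflex_agent(stimulus):
            # Fixed lookup: no memory, no adaptation, ever.
            return REFLEX_TABLE.get(stimulus, "ignore")
        class LearningAgent:
            def __init__(self):
                self.value = {}  # memory of outcome estimates per stimulus
            def act(self, stimulus):
                return "approach" if self.value.get(stimulus, 0.0) >= 0 else "avoid"
            def learn(self, stimulus, outcome, rate=0.5):
                # Remembered outcomes shift future behavior.
                old = self.value.get(stimulus, 0.0)
                self.value[stimulus] = old + rate * (outcome - old)
        agent = LearningAgent()
        print(agent.act("red"))         # "approach" before any experience
        agent.learn("red", -1.0)        # the red flower turned out to be bitter
        print(agent.act("red"))         # now "avoid": behavior shaped by memory
        print(reflex_agent("noxious"))  # always "withdraw", whatever has happened

        The reflex table never changes, while the learner’s behavior is a record of its history, which is the sense in which a genuine body and environment model implies memory.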

        Affects really can get entangled with the anthropomorphism problem, so I can see why Mike would want to avoid the issue. Unfortunately, affect in a broad sense of the term drives the learning process. No species will pick the red over the green unless the red provides some payoff that can be remembered and for which a causal association has developed. Affect, even in its human form, probably boils down to pheromones and neurotransmitters.

        “Pheromones are involved in almost every aspect of the honey bee colony life: development and reproduction (including queen mating and swarming), foraging, defense, orientation, and in general the whole integration of colony activities, from foundation to decline.”

        https://www.ncbi.nlm.nih.gov/books/NBK200983/

    1. I stopped labeling “affect” explicitly because, as the experience of emotion, it presupposes consciousness. And people argue about what is required for that label. Many argue that we have it with 1 and 2, often influenced by seeing affect displays such as facial grimacing, crying, teeth baring, etc. But they’re usually unwilling to apply this standard to machines.

      I agree it only makes sense to talk about affect with 3 (although maybe it doesn’t require a sophisticated form of 3). In this view, an affect is part of the causal modeling related to 1. An affect that can’t potentially be overridden (with effort) isn’t an affect, but a reflex. Evolutionarily, it only makes sense to feel if the feeling can be used in reasoning to some degree.

      Of course, human affects usually happen with 4 in place, allowing us to predict, discuss, and analyze them in social interactions. Some would insist that we only get “true affects” then. But that rules them out for possibly all non-human animals. And I always remind myself that we consider ourselves conscious even when we’re not actively introspecting.

      In the end, it depends on your philosophy. I don’t think there’s any strict fact of the matter where the “real affects” are.

      1. Almost all concepts have fuzzy boundaries, but yes, Virginia, there is a fact of the matter about affect. The world contains clusters of properties that go together. This very much includes the biological world, and minds in particular. The way the world guides language use is not generally transparent to the language user, resulting in disputes that aren’t just linguistic but have roots extending into the real world.

        1. I think when old terms that refer to complex phenomena start being stretched outside their original context, it pays to keep an open mind, particularly around edge cases. “Affect” seems to have begun in philosophy and been taken up by psychology, but within a human context. When we start talking about animals that have only a subset of human capacities, or a very different context from the human one, there’s going to be a lot of gray area on whether the same word applies.
