The benefits of functionalism for animal welfare

Last week, Science News had an article about the difficulty of studying animal emotions, of understanding what an animal in a particular situation is really feeling. It’s an interesting article, although not one with much new information for many of you. However, I want to focus on one point raised by one of the researchers interviewed.

But there’s an important caveat, Mendl says. The experiment, called a judgment bias task, points to whether an animal is experiencing something in its life positively or negatively. However, the task doesn’t demonstrate something more basic — whether an animal can have subjective experiences to begin with.
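For readers unfamiliar with the paradigm, here’s a toy sketch of the logic of a judgment bias task (my own construction; the cue values and bias numbers are invented, purely illustrative):

```python
# Toy model of a judgment bias task (invented numbers, purely illustrative).
# The animal first learns that one cue predicts a reward and another predicts
# something mildly aversive. It's then probed with an ambiguous, intermediate
# cue. Animals in a putatively positive state tend to respond "optimistically".

REWARD_CUE_KHZ = 2.0  # tone trained to predict a treat
PUNISH_CUE_KHZ = 8.0  # tone trained to predict a mild aversive

def respond(cue_khz: float, optimism_bias: float) -> str:
    """Return the animal's response to a cue, shifted by its affective bias."""
    boundary = (REWARD_CUE_KHZ + PUNISH_CUE_KHZ) / 2  # neutral midpoint
    # A positive bias widens the range of cues treated as reward-predicting.
    return "approach" if cue_khz < boundary + optimism_bias else "avoid"

print(respond(5.0, optimism_bias=0.5))   # approach: reads the ambiguity positively
print(respond(5.0, optimism_bias=-0.5))  # avoid: reads the ambiguity negatively
```

Note that the task only measures the direction of the bias. Nothing in it speaks to whether there’s anything it is like to be the responder, which is exactly the caveat.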

Animal welfare studies assume that animals are sentient, because if they weren’t, talking about their well-being wouldn’t make sense, Mason says. “But none of the measures we use can assess or check that assumption, because we simply don’t yet know how to assess sentience,” she notes.

It seems to me that Mendl’s concern arises from considering phenomenal consciousness to be something separate and apart from access consciousness, that is, from assuming that subjective experience is an add-on bolted on top of the functionality, the capabilities of the animal, and something that is either present or absent like a light that is either switched on or off.

As I’ve noted before, this stance is problematic because it essentially manufactures the hard problem of consciousness, the idea that consciousness can’t be explained physically. If we pre-emptively exclude all the functional explanations from consideration, then the problem starts to look intractable.

Of course, the people making this move argue that they’re not excluding anything, that the functional explanations just don’t get us there. Yet we know that if we remove certain functionality, phenomenality is affected. And the reverse is also true: if we lose phenomenality, as in cases of blindsight, we also lose functionality, such as the ability to know whether we’re able to make visual detections and discriminations, and to act accordingly. Even David Chalmers admits that phenomenal experience closely coheres with functionality.

François Kammerer, in a recent paper, argues against the idea that we should use phenomenal consciousness as any kind of ethical guide. He’s coming at the issue from an illusionist perspective, the view that phenomenal consciousness either doesn’t exist (strong illusionism) or is different from how it seems (weak illusionism).

Kammerer makes an argument, also put forth by Peter Carruthers and others, that whether phenomenal consciousness is present will always be scientifically indeterminate, even for an omniscient observer. I agree with this argument, although I come at it from the perspective that the difficulty is in agreeing on what functionality is necessary and sufficient for the label “phenomenal consciousness”, rather than from any stance that it doesn’t exist. (Admittedly, the distinction between my stance and weak illusionism largely amounts to language choices.)

Kammerer concludes that we should reach our ethical conclusions based on functionality and capabilities, such as the demonstrable desires of animals and to what extent those desires might be frustrated. I think this is headed in the right direction, with the caveat that additional criteria are needed (such as learning) to know we’re not just dealing with reflexes.

The key is, once we decide to focus on observable capabilities, we can stop agonizing over whether an animal has “the lights on”. The “lights on” standard has long been problematic, because it tends to shift with public sentiment. For centuries, people were sure animals didn’t have the lights on, and therefore could be mistreated without moral consequence. It’s only in the last century or so that general opinion has shifted on this. Grounding our ethics in observable capabilities may help guard against it shifting back.

Unless of course I’m missing something.

35 thoughts on “The benefits of functionalism for animal welfare”

  1. “whether phenomenal consciousness is present will always be scientifically indeterminate, even for an omniscient observer”

    Yes, but if you carry this to its logical conclusion, the existence of any phenomenal consciousness other than my own is also scientifically indeterminate. How can we even rely on self-reports? They only demonstrate that something is producing the motor activity of speaking.


    1. True. Kammerer in his paper touches on this, but then concludes it’s beyond his scope.

      That said, it’s easier for us to infer each other’s experience because we’re the same species with the same capabilities. It becomes less certain for brain-injured patients, other mentally compromised individuals, infants, and other special cases. We can use brain scans and behavioral cues other than self-report, but those are only useful after the observed patterns have already been correlated with self-reports in other individuals.

      But if someone buys the idea of philosophical zombies, I don’t know if there’s any evidence that can be conclusive.


  2. Is this like arguing whether consciousness is a spectrum or not? If it is, then as functional consciousness increases, so does phenomenal consciousness. If it’s not, then is it “lights out” for everyone but humans?
    Thinking about this along the lines of “what can I eat”… I’ll eat salmon because their threshold of functionality, in my mind, has not been met. But I won’t eat dog, because it has. But if I’m not a functionalist, and the lights are out in every creature but humans, then all creatures are on the menu. Is that right?
    Regardless of whether consciousness is present, couldn’t what a creature wants also be framed as what it doesn’t want? Namely, pain. At what pain threshold do you draw the line for what you’ll eat? After all, animal welfare, aside from zoos or laboratories, is all about raising critters for consumption, no?
    “Ah, Porky had a good life. He enjoyed the mud, ate acorns and gruel. Then I killed him and consumed his flesh. I made sure to limit his pain, in the end, with a heavy blow from the knacker’s hammer.”


    1. A spectrum is the way I look at it, a dimmer rather than a binary switch. On one end is us, on the other things like rocks. A chimpanzee is more conscious than a dog, which is more conscious than a mouse, which is more conscious than a frog, fish, etc. Although, strictly speaking, there’s no fact of the matter. So we could look at it as only humans are conscious, but then we’d have to decide how to regard mentally compromised, immature, or senescent humans. Unless we just want to be speciesist about the whole thing.

      But a spectrum allows us to recognize that non-human animals do have some moral status. Just how much depends on each species’ specific mental capabilities. Although arguably we shouldn’t put any of them in unnecessary distress.

      On eating those different species, the thing to remember about most non-human animals (emphasis on “most”) is that they have very limited autobiographical models of self. (At least to the extent anyone so far has been able to determine.) So as long as they’re treated humanely while they live, and put down humanely, you’re arguably not depriving them of anything they can imagine. (It’s possible this is just rationalizing, but if so I haven’t been able to find the hole in it yet.)

      I noted “most” above, because there are species where reportedly more caution is warranted: great apes, elephants, dolphins, whales, etc.


      1. Like a farm full of animals all living in terror that they’re going to be killed and eaten someday…
        “Oh, but pigs DO have a purpose. One could say the most important purpose,” purred the evil cat.
        As long as they don’t catch on… Light up the BBQ.


  3. It seems to me that Mendl’s concern arises from [A]considering phenomenal consciousness to be something separate [B]and apart from access consciousness, [C]that is, from assuming that subjective experience is an add-on bolted on top of the functionality, the capabilities of the animal

    I marked these ideas A, B, and C to call your attention to the fact that they’re all different. A does not imply B or C, nor does the combination A+B imply C. (This last point is tricky, but note that C implies that access comes first.) Anyway, I want to focus on the fact that A doesn’t imply C.

    My car can go because it has an internal combustion engine. Take internal combustion away, and the car is only useful for shelter. But that doesn’t mean you can’t have a functioning automobile without internal combustion! Of course you can. Go buy an electric car. Internal combustion is separate, or at least separable, from automobility (automobile-ness?), but it’s not apart from automobility. And it certainly isn’t bolted ON TOP of automobility.
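    If it helps, here’s the same point as a toy Python sketch (the names are mine, purely illustrative): the car’s automobility depends only on something filling the propulsion role, not on which implementation fills it.

```python
# Toy sketch of role vs. implementation (illustrative names, my own construction).
from abc import ABC, abstractmethod

class PropulsionSource(ABC):
    """The functional role: turn stored energy into torque at the wheels."""
    @abstractmethod
    def torque(self) -> float: ...

class GasEngine(PropulsionSource):
    def torque(self) -> float:
        return 300.0  # via internal combustion

class ElectricMotor(PropulsionSource):
    def torque(self) -> float:
        return 300.0  # via electromagnetic induction

def drive(source: PropulsionSource) -> str:
    # "Automobility" only cares that the role is filled.
    return "the car goes" if source.torque() > 0 else "the car is shelter"

print(drive(GasEngine()))      # the car goes
print(drive(ElectricMotor()))  # the car goes
```

    Swap the implementation and the role persists; remove it without a replacement and the role goes unfilled. Separable, but not bolted on top.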


    1. So you’re saying phenomenality comes from the implementation details? Maybe it does, but that presupposes that those details have no effect on the functionality. We seem to be back to epiphenomenal properties. I can’t see any way to demonstrate that it isn’t true. On the other hand, I can’t see any way to demonstrate that it is.


      1. Return to the analogy. A gasoline engine is not epiphenomenal! It has a significant, actually vital, role to play in my car! Likewise for the particular phenomena a human experiences.

        Now, this doesn’t mean you can’t build a different organism, or robot, that has different phenomenal experiences to achieve extremely similar objectives. An electric motor – back to the cars – is still a motor, after all. It’s just not a gasoline motor. Likewise, different implementation details yield different phenomenal feels, but they are still some kind of phenomenal feels provided that they play a certain role. That role would include relative immediacy in cognition, and lower levels of doubt than inferences about the world external to the organism. (Not zero doubt; once you describe an experience you face the possibility of misdescription; just lower.)


        1. There are observable differences between a car with a gas engine and one with an electric one. They require different inputs (gas vs charging) and different types of maintenance. In other words, they provide the same overall functionality, but there are a range of micro-differences, observable functional differences.

          If you’re saying phenomenal consciousness is like that, then we’re just talking about more functionality, which then should be measurable. Or am I missing something?


          1. Well it depends what you mean by “functionality”. Normally it means something more than “observables”. The word “function” usually suggests purpose: as for example, the purpose of a cup is to hold water. Insofar as that’s true, it doesn’t matter if the cup is ceramic or plastic. The difference between ceramic and plastic is very much observable, but I don’t think it’s helpful to call it a “functional” difference.


          2. Consider, would you give a young child a plastic cup or a ceramic one? Which are you more likely to take on a picnic, or to the beach? Which is more likely to be used for a ceremonial occasion? Which one is better for hot liquids, like tea or coffee?


          3. And who are you going to love more, a human being or an advanced robot? Every physical difference can have a purpose to someone who thinks that feature is important. If you generalize to all observable features as “functional”, you collapse the distinction between functionalism and physicalism. I’m happy to be a “functionalist” if that just means I’m a physicalist, but I thought the former was supposed to be more specific.


          4. The distinguishing feature of functionalism is that the mind is about what the brain does rather than what it is: activity rather than substance. That distinguishes it from identity theory, which is more about the is. From the SEP article:

            Functionalism is the doctrine that what makes something a thought, desire, pain (or any other type of mental state) depends not on its internal constitution, but solely on its function, or the role it plays, in the cognitive system of which it is a part. More precisely, functionalist theories take the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states, and behavior.
            https://plato.stanford.edu/entries/functionalism/

            Of course, differences in substances (plastic vs ceramic) can result in differences in function. But it’s the causal role which counts.
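            As a rough illustration (my own toy caricature, not anything from the SEP entry), the functionalist move types a state as “pain” by its causal profile alone, regardless of what it’s made of:

```python
# Caricature of functionalist state-typing (my own toy construction).
# A state counts as "pain" based solely on its causal relations:
# what typically causes it and what it typically causes.

def classify_state(typical_causes: set, typical_effects: set) -> str:
    if "tissue_damage" in typical_causes and \
       {"withdrawal", "avoidance_learning"} <= typical_effects:
        return "pain"  # the role is filled, whatever the substrate
    return "unclassified"

# Carbon neurons, silicon circuits, or anything else: if the causal
# profile matches, the functionalist types the state the same way.
print(classify_state({"tissue_damage"}, {"withdrawal", "avoidance_learning"}))
```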


          5. “solely on its function, or the role it plays, in the cognitive system of which it is a part”

            Isn’t that building the thing being defined (“cognitive system”) into the definition of what you are trying to define or explain (“consciousness”)? It’s nonsensical to say the “function” of thought, desire, pain or other mental state is its “function in the cognitive system”, because “cognitive system” implies mental states.


            Yeah, if I’d written that, I probably would have left off the “cognitive” adjective and just gone with “system”, or maybe “information processing system”, but the author is probably trying to be more theory-neutral. I think we owe her at least some interpretational charity though.


          7. I finally figured out what bothers me about what you said here – it leaves something important out when you say “what the brain does rather than what it is”. After all, all distinctive “is” leads to distinctive “does”, even if the latter is only “Fluoresces strongly at 8.04 keV in response to higher energy X-rays”.

            As the SEP author says, function a la functionalism is the role something plays in the cognitive system. Any other “function” is moving the goal-posts.

            Your earlier comment “We seem to be back to epiphenomenal properties” relies on moving the goal posts. Now would be a good time to admit that was a mistake.


          8. Certainly any substance reduces to processes. (At least above fundamental physics.) But identity theories don’t talk about those processes. (At least not the ones I’ve read.) If someone wants to talk about those lower level processes and their causal role, I’d find that interesting, but now it seems like we’re back in functional waters.

            I’m sorry if I misconstrued your view. But I think I still need your help to see what I missed. On the distinction you made between A, B, and C, let’s focus on A as a separate phenomenal consciousness. Let’s say A’ is the non-conscious aspect of A. What causal effects does A have that A’ doesn’t?


          9. A is phenomenal consciousness, B is access consciousness, and A’ is supposed to be “the non-conscious aspect(s) of A”. (A phrase that cries out for clarification!) A’ is, I think, going to be a weird construction that would make Republican gerrymandered districts look like solid commonsensical groupings of territory.

            So that makes it hard to reason about, but we can still compare some known facts about phenomenal consciousness that potentially contrast with A’. Namely, A plus some other common brain states (say X) leads to B, access consciousness. And when combined with yet other common brain states (Y), A leads to emotional impacts E. We don’t know how often A’ (without A) and X yield B, or A’ and Y lead to E. But it seems like a good bet that it would be less often – a lot less. Kinda like removing all internal combustion from my car would lead less often to significant acceleration of the car.

            Now, if you go about replacing A with some other process that can do some of the same jobs, you might get better results. Just as, if you replace my internal combustion with a compatibly geared electric motor, you can have a good car again. But the very fact that you had to replace it shows that it was doing something important.


          10. Ok, thanks.

            For A’, I meant it to represent early pre-access sensory processing. (Sorry, I should have clarified that.) So:

            A’=early sensory pre-access processing
            A=phenomenal consciousness
            B=access consciousness
            E=affective feeling

            If I understood you correctly, A exists before and is crucial for B, which in turn is crucial for E. (Which I think matches Ned Block’s ontology.)

            Some questions.
            1. What is the difference between A and A’? What does A provide B that A’ doesn’t?
            2. How do you know the answer to 1? What access to A do you have other than through B?

            Just to put my own cards on the table, my answer to 1 is that A can’t be pre-access, because it requires B and E to be A. Pre-access, we’re only dealing with A’. Which makes 2 moot for me.


          11. Corrections to:

            If I understood you correctly, A exists before and is crucial for B, which in turn is crucial for E. (Which I think matches Ned Block’s ontology.)

            I would not commit to saying A exists before B, although parts of it must. B is highly impactful for E, but E can probably happen to some degree without B. Even A’ possibly has some small impact on E that could bypass both A and B, as far as I know.

            I don’t have a great answer to (1); that requires more neuroscience than I know. Or possibly than anyone knows yet? As far as I know, Ned Block may be wrong and (A) requires global workspace activation, automatically bringing (B) along. But that may not be true – which is why it’s worth it to me to dispute this point.

            To (2), the primary answer is evolutionary theory. No expensive activity can survive if it isn’t helping the organism somehow. And all brain activity is expensive. Phenomenal feels must be doing something for us (and marking valuable and disvaluable states is probably an important part of that).

            Secondarily, sometimes we (seem to) remember a feeling that we didn’t notice at the time because we were too distracted, or too shocked. Now if you want to call that “delayed access consciousness”, I guess that’s fair game; but a simple elegant explanation would be that phenomenal consciousness and access are two different things, and that sometimes when we miss the first chance for access, we get lucky and get another chance later.


          12. I definitely agree that phenomenal feels are adaptive. We learn to not touch a hot stove after the phenomenal experience of touching it and feeling and remembering the burn. We learn whether a particular food has energy by the sweetness of its taste. And the reds and yellows of ripe fruit call our attention to it. That doesn’t mean it always works right, or that there may not be the occasional spandrel. But in general, phenomenal experience provides crucial adaptive functionality.

            I think your final paragraph sums up one of the central problems with a separate phenomenal consciousness. Is the fridge light on when we don’t have the door open? Is phenomenal consciousness phenomenal when we’re not accessing it? Or is it the accessing of it itself that makes it phenomenal?

            The other problem, in my view, is that access provides a plausible mechanism for what makes sensory and affective processing phenomenal. Remove access, and it becomes very mysterious. It’s like trying to understand pizza after preemptively ruling out any talk of recipes, ingredients, ovens, and cooking methods.

            But a lot of intelligent people seem convinced it’s separate. So maybe I’m missing something.


          13. There is always a possibility that further scientific investigation will shed new light on the interrelationships. There are many causal layers and groupings in the brain; it may or may not become natural to identify some of them as responsible for verbal reports (V), for example. While others may provide strong intermediation between early sensory processing (your A’) and emotional reactions (E), and between A’ and some further module which in turn informs V. Maybe not; maybe it will ignite verbal disputes or a collective shrug of shoulders. The only way to find out is to do the science.


            I’m definitely onboard with doing the science. But if I had to bet, the science will underdetermine this. It will likely elaborate on what we already know, that early recurrent sensory processing is only selectively accessed. Philosophically, that could be regarded as a form of consciousness that overflows access, or as a form of pre-conscious processing, only some of which makes it into consciousness. In the end, science likely won’t be able to establish a fact of the matter. Verbal disputes or collective shrugs may be a good description.


    2. I think part of the problem is how the function of “automobility” or consciousness is defined.

      As always, there are sometimes vague boundaries, but “automobility” might be considered as much related to steering and wheels as to propulsion.

      The same sort of issue exists with consciousness. Is its function simply to produce some set of behaviors that makes us think the organism or device is conscious? In that case, we probably already have rudimentary conscious machines: Furbies, various spidery robots, jumping and climbing human-like robots. If consciousness is a spectrum, then these things may be on the low end, but they would have to be judged to have it.

      On the other hand, if consciousness is a specific implementation used in biological organisms, then its ability to produce controlled motor actions by organic muscles based on information from the environment is the function. Devices would be excluded.


  4. Our consciousness is like the white light we experience: a mental construct that has evolved for a purpose, to address the behavior control problem and allow us to focus our attention. At least that’s how I understand it from Graziano.


    1. I’m a fan of Graziano’s work, particularly his approach. I especially like his “standard model” efforts to reconcile his attention schema theory with global workspace and higher order thought theories.


  5. I see you are approaching a new guide to ethics, but I’m not sure you’re ready to make the full jump. I’ve (somewhat recently, past few years) come to the conclusion that whether something is a moral patient (deserving of moral consideration) depends simply on whether it has goals. Note: “having goals” is a very broad category, because the having of goals is determined by the “creator” of the entity. (This is related to final cause, btw.) Thus, pretty much any artifact, anything that can be said to have a function, has the goal of its creator. So the ethical thing to do is recognize goals where you can and cooperate with those goals to a reasonable extent, which means comparing and balancing your own goals against those you recognize in others, and compromising your own to a smaller or larger extent, depending.

    You can see how this would apply to living things, which have at the least the goal of staying alive. Some animals have additional goals, like staying not hungry and not in pain. But as I said it applies to artifacts as well, which is why the wanton destruction of things, like breaking windows, is immoral unless there are overriding goals, like escaping a burning building.

    So are you on board with this?

    *
    [don’t know if I need to mention that consciousness implies goals]


    1. I can’t really say this is any new approach to ethics for me. My views overall still lean pretty Epicurean: don’t cause unnecessary suffering, with a high evidential bar for “necessary”.

      I think we’ve discussed this before, but goals, by themselves, aren’t enough for me. There needs to be a goal or value system that interacts with a reasoning system, with the reasoning system able to predict how the value system will respond, but unable to ignore the value system’s (sometimes contradictory) impulses except by expending a lot of energy. Put another way, there needs to be a system for quick and dirty draft evaluations of situations, ones that affect the entire physical state of the system, but that can be overridden (albeit not easily) by the part of the system that runs predictive simulations. A minimal sketch of what I mean is below.
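      Here’s that sketch in toy Python form (the names, costs, and numbers are all hypothetical, my own construction):

```python
# Minimal sketch of the two-part architecture described above
# (hypothetical names and numbers). A fast value system issues quick
# and dirty appraisals; a slower reasoning system can predict and,
# at an energy cost, override them.

def value_system(situation: set) -> float:
    # Reflexive draft evaluation: crude, immediate, system-wide.
    return -1.0 if "apparent_threat" in situation else 0.5

def reasoning_system(situation: set, effort: float) -> str:
    appraisal = value_system(situation)
    OVERRIDE_COST = 1.0  # overriding an impulse isn't free
    # Predictive simulation can veto a negative appraisal it judges
    # to be a false alarm, but only by spending energy.
    if appraisal < 0 and "judged_false_alarm" in situation and effort >= OVERRIDE_COST:
        return "proceed (override, effort spent)"
    return "withdraw" if appraisal < 0 else "proceed"

print(reasoning_system({"apparent_threat"}, effort=2.0))
# -> withdraw
print(reasoning_system({"apparent_threat", "judged_false_alarm"}, effort=2.0))
# -> proceed (override, effort spent)
```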

      It’s worth noting that this is pretty specific to how animals evolved. I think it’s definitely possible for it to exist in an engineered system, although I don’t know how useful it will be. For example, once a robot’s central systems have been apprised of the damage to one of its limbs, there’s no reason for the signal of limb damage to keep coming, or for it to continue escalating the readiness of the system the way it often does in animals.

      So sorry, not quite on board.

      I think consciousness implies learned intermediate goals in service of the innate ones.

      Good questions James, as always.


      1. Okay, you made me think about this some more, and things changed just a little. It’s still about goals, but it goes back to the maker. With respect to an artifact, the real moral patient is the agent that created/arranged/is using/will be using it. I guess that would be your reasoning system.

        But my question now is, how much of a reasoner does it have to be? Do ants count?

        *


        1. Hmmm. I’m not sure there’s really a fact of the matter answer to this. For my intuition, it would have to be a noticeable reasoner, one that could be detected somehow. For example, it’s easy to see the reasoner operating in most mammals. It gets harder with amphibians and fish, and very hard with arthropods. Essentially what we’re talking about is volition.

          In that sense, ants are tough. I’m not sure how much volition they have. I think it was E.O. Wilson who made ants spell his name by writing it in pheromone trails. If they have a reasoner, it seems very limited.

