Last week, Science News had an article about the difficulty of studying animal emotions, of understanding what an animal in a particular situation is really feeling. It’s an interesting article, although not one with much new information for many of you. However, I want to focus on one point raised by one of the researchers interviewed.
But there’s an important caveat, Mendl says. The experiment, called a judgment bias task, points to whether an animal is experiencing something in its life positively or negatively. However, the task doesn’t demonstrate something more basic — whether an animal can have subjective experiences to begin with.
Animal welfare studies assume that animals are sentient, because if they weren’t, talking about their well-being wouldn’t make sense, Mason says. “But none of the measures we use can assess or check that assumption, because we simply don’t yet know how to assess sentience,” she notes.
It seems to me that Mendl’s concern arises from considering phenomenal consciousness to be something separate and apart from access consciousness, that is, from assuming that subjective experience is an extra bolted on top of the functionality and capabilities of the animal, something either present or absent, like a light that is either switched on or off.
As I’ve noted before, this stance is problematic because it essentially manufactures the hard problem of consciousness, the idea that consciousness can’t be explained physically. If we pre-emptively exclude all the functional explanations from consideration, then the problem starts to look intractable.
Of course, the people making this move argue that they’re not excluding anything, that the functional explanations just don’t get us there. Yet we know that if we remove certain functionality, phenomenality is affected. And the reverse is also true: if we lose phenomenality, as in cases of blindsight, we also lose functionality, such as the ability to know whether we’re making visual detections and discriminations, and to act accordingly. Even David Chalmers admits that phenomenal experience closely coheres with functionality.
François Kammerer, in a recent paper, argues against the idea that we should use phenomenal consciousness as any kind of ethical guide. He’s coming at the issue from an illusionist perspective, on which phenomenal consciousness either doesn’t exist (strong illusionism) or is different from how it seems (weak illusionism).
Kammerer makes an argument, also put forth by Peter Carruthers and others, that whether phenomenal consciousness is present will always be scientifically indeterminate, even for an omniscient observer. I agree with this argument, although I come at it from the perspective that the difficulty lies in agreeing on what functionality is necessary and sufficient for the label “phenomenal consciousness”, rather than from any stance that it doesn’t exist. (Admittedly, the distinction between my stance and weak illusionism largely amounts to language choices.)
Kammerer concludes that we should reach our ethical conclusions based on functionality and capabilities, such as the demonstrable desires of animals and to what extent those desires might be frustrated. I think this is headed in the right direction, with the caveat that additional criteria are needed (such as learning) to know we’re not just dealing with reflexes.
The key is that once we decide to focus on observable capabilities, we can stop agonizing over whether an animal has “the lights on”. The “lights on” standard has long been problematic, because it tends to shift with public sentiment. For centuries, people were sure animals didn’t have the lights on, and therefore could be mistreated without moral consequence. It’s only in the last century or so that general opinion has shifted. Grounding our ethics in observable capabilities may help guard against that opinion shifting back.
Unless of course I’m missing something.