This talk by David Chalmers on the relationship between consciousness and moral status is pretty interesting. You don’t have to watch the video to follow this post, but it’s in response to arguments he makes in the talk.
The video is 75 minutes long, but the talk itself lasts about 50 minutes, followed by a Q&A.
Chalmers discusses two views. The first says that anything with phenomenal consciousness should have moral status, that is, be something whose welfare we should be morally concerned about. The second says that only sentient systems, those capable of feeling affects such as pain, pleasure, happiness, sorrow, hunger, or suffering of some kind, should have moral status. He argues for the first view and against the second.
Right off the bat this raises the question of what we mean by “phenomenal consciousness”. Chalmers falls back on the old standby, that it’s like something to be that system. Unfortunately, the phrase “like something” is really just a synonym for “phenomenal consciousness.” Using it doesn’t provide any insight into what we’re actually talking about.
After a discussion about philosophical zombies and their lack of moral status (if you buy p-zombies as a concept), Chalmers discusses philosophical Vulcans. P-Vulcans are like the Star Trek variety except more severe. In Star Trek, Vulcans have a culture where they have iron control of their emotions. But a p-Vulcan simply has no emotions, or affects of any kind. Chalmers argues that p-Vulcans should have moral status.
But what does it mean to say that something is conscious without affect consciousness, that is, without sentience? Such a system would have awareness of its environment, its body, and maybe even be able to introspect. It would have somatosensory feelings, but not emotional ones. In other words, in the parlance often used to describe the hard problem of consciousness, it wouldn’t feel like anything to be that system.
Chalmers contends that such a system could still have motivations, just not affective ones. Affects are normally thought of as having two or three dimensions: valence, arousal, and motivational intensity. (Valence refers to seeing something as good or bad; arousal to how jacked up the system is: heart rate, breathing, muscle tension, etc.; and motivational intensity to how strongly the system is inclined to respond in a certain way.) Chalmers seems to be saying we can have just the motivation without the other aspects. That may be plausible.
But if we remove affects from the mix, then what we have left would be sensory awareness and reasoning. The question then becomes, what separates such a system from something like a self-driving car, a Mars rover like Curiosity, or some other sophisticated autonomous robot? Certainly these systems remain far less sophisticated than a human, or even any mammal. But they arguably are approaching the ability to navigate their environment as well as many simple animals, animals that many people are tempted to consider conscious.
Yet no one really considers the robotic systems to be conscious. One reason is that they’re not biological, but part of that distinction is they don’t have the same motivational systems that animals have. In other words, they don’t have affects. They take in information from the environment and about themselves, and use that information for decision making, but all without affect.
Before concluding that the answer is simply to require affects for consciousness or moral status, consider humans with conditions that Chalmers mentions, such as anhedonia (the inability to feel pleasure) or pain asymbolia (feeling pain without finding it unpleasant). These can come about due to brain injury or pathology. Another condition is akinetic mutism, where people appear to be largely affectless, with no motivation to do much of anything. Despite their low or absent affect, no one really questions whether people with these conditions are conscious.
All of which is to say that this is a more difficult question than it might appear.
What do you think? Is there something more to an affect-less phenomenal consciousness than sensory awareness, reasoning, and introspection? If so, what? Or should we insist that only a system with affects can be considered conscious? If so, what does that say about people with the brain pathologies above? Are there even fact-of-the-matter answers to these questions?