Peter Hankins at Conscious Entities has a post looking at the morality of consciousness, a commentary on a piece at Nautilus by Jim Davies on the same topic. I recommend reading both in their entirety, but the overall gist is that the question of which animals or systems are conscious has moral implications, since only conscious entities should be of moral concern.
From Peter’s post:
There are two main ways my consciousness affects my moral status. First, if I’m not conscious, I can’t be a moral subject, in the sense of being an agent (perhaps I can’t anyway, but if I’m not conscious it really seems I can’t get started). Second, I probably can’t be a moral object either; I don’t have any desires that can be thwarted and since I don’t have any experiences, I can’t suffer or feel pain.
Davies asks whether we need to give plants consideration. They respond to their environment and can suffer damage, but without a nervous system it seems unlikely they feel pain. However, pain is a complex business, with a mix of simple awareness of damage, actual experience of that essential bad thing that is the experiential core of pain, and in humans at least, all sorts of other distress and emotional response. This makes the task of deciding which creatures feel pain rather difficult…
I left a comment on Peter’s post, which I’m repeating here and expanding a bit.
I think it helps to consider what an organism needs to have in order to experience pain. It seems to need an internal self-body image (Damasio’s proto-self) built by continuous signalling from an internal network of sensors (nerves) throughout its body. It needs to have strong preferences about the state of that body so that when it receives signals that violate those preferences, it has powerful defensive impulses, impulses it cannot dismiss and can only inhibit with significant energy.
We could argue about whether it needs to have some level of introspection so it knows that it’s in pain, but it’s not clear that newborn babies have that capability, yet I wouldn’t be comfortable saying a newborn can’t feel pain. (Although it used to be a common medical sentiment that they couldn’t, few people seem to believe that today.)
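To make those criteria a bit more concrete, here’s a minimal toy sketch in Python. It isn’t a claim about how nervous systems actually work, and everything in it (the dictionary body image, the 10-unit inhibition cost) is illustrative. The point is just the shape of the design: the defensive impulses are recomputed from the body image on every cycle, so the system can’t delete them; it can only act on them or pay to suppress them.

```python
class PainCapableSystem:
    def __init__(self, preferred_state):
        self.body_image = {}            # proto-self: internal model of the body
        self.preferred_state = preferred_state
        self.energy = 100.0

    def sense(self, readings):
        """Continuous signalling from internal sensors updates the body image."""
        self.body_image.update(readings)

    def violated_preferences(self):
        # Recomputed from the body image on every call; nothing in the
        # system can delete these impulses while the violation persists.
        return [part for part, state in self.body_image.items()
                if state != self.preferred_state.get(part)]

    def act(self, inhibit=False):
        impulses = self.violated_preferences()
        if not impulses:
            return "normal behavior"
        if inhibit:
            self.energy -= 10.0 * len(impulses)   # inhibition is possible but costly
            return "enduring (impulses still present)"
        return f"defensive response: protect {impulses}"


creature = PainCapableSystem(preferred_state={"leg": "intact"})
creature.sense({"leg": "damaged"})
print(creature.act())               # defensive response: protect ['leg']
print(creature.act(inhibit=True))   # enduring; energy drops, impulse remains
```

The design choice doing the work is that the impulses are derived state: as long as the body image shows damage, the impulse is there, and enduring it costs something.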
When asking if plants feel pain, you could argue that they can be damaged, and may respond to that damage, but I can’t see any evidence that they build an internal body image. They do seem to have impulses about finding water, catching sunlight, spreading seeds, etc., but none of it seems to amount to anything above robotic action, very slow robotic action by our standards.
Things get a little hazy with organisms that have nervous systems without any central brain, such as the worm C. elegans. These worms will respond to noxious stimuli, but it’s hard to imagine they maintain any internal body image in their diffuse and limited nervous systems. You could argue that their responses to stimuli constitute preferences, but these again seem like largely robotic impulses, although ones subject to classical conditioning.
But any vertebrate or invertebrate with distance senses has a central brain or ganglia. They build image maps, models of the environment and its relation to themselves. Which means they have some notion of themselves as distinct from that environment, and likely have at least an incipient body image. Coupled with the impulse responses they inherited from their worm forebears, it seems like even the simplest of these species have the necessary components.
I often read that insects don’t feel pain, but when I spray one, it sure buzzes and convulses like it’s in serious distress, enough so that I usually try to put it out of its misery if I can. Am I just projecting? Perhaps, but I prefer to err on the side of caution (admittedly not to the extent of letting the bug continue to live in my house).
I think people resist the idea of animal consciousness because we eat them, use them for scientific research, or, in many cases, eradicate them when they cross our interests, and taking the stance that they’re not conscious avoids having to deal with difficult questions. Myself, I don’t think the research or pest control should necessarily stop, but we should be clear about what we’re doing and carefully weigh the benefits against the cost.
But what about something like an autonomous mine-sweeping robot? It presumably has sensors to monitor its body state, and I’m sure its programming, given the option, directs it to maintain its body’s functionality as long as possible. When it’s damaged by setting off a mine, is there any basis to conclude that it’s in pain?
I did a post on the question of machine suffering last year. My thoughts now are much the same as then, that unless we engineered the machine’s information processing systems with a certain architecture, it wouldn’t undergo what we think of as suffering.
Above, I said that to feel pain, a system would need strong preferences about the state of its body image, resulting in impulses it could not dismiss and could only inhibit with significant energy. I think that’s what’s missing in the robot example. It can presumably monitor its body state and take corrective action when there’s an opportunity, but when there isn’t, it can log the issue, calmly adjust to its current state, and continue its mission as best it can.
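For contrast, here’s the same kind of sketch for the robot. It’s purely hypothetical, since we don’t know any actual mine-sweeper’s software, but it shows the difference in shape: damage either gets repaired or becomes a log entry, after which nothing in the system keeps pressing for attention.

```python
class MineSweeperRobot:
    def __init__(self):
        self.body_state = {}
        self.issue_log = []
        self.repairable = {"tread"}     # assume only some parts are self-repairable

    def sense(self, readings):
        self.body_state.update(readings)

    def handle_damage(self):
        for part, status in list(self.body_state.items()):
            if status != "damaged":
                continue
            if part in self.repairable:
                self.body_state[part] = "intact"    # corrective action taken
            else:
                self.issue_log.append(part)         # the damage becomes a record...
                self.body_state[part] = "degraded"  # ...and the robot adjusts
        return "continue mission"


robot = MineSweeperRobot()
robot.sense({"arm": "damaged"})
print(robot.handle_damage())   # continue mission
print(robot.issue_log)         # ['arm'] -- a log entry, not a standing impulse
```

Once the issue is logged, no routine keeps regenerating an impulse from it; the robot simply moves on.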
Living systems obviously don’t have this capability. We don’t have the option of deciding whether the pain is useful and, if not, making the distress it conveys go away. (At least not without drugs.)
The robot is also missing another important quality. It isn’t a survival machine in the way that all living organisms are. It likely has programming to preserve its functionality as long as possible, but that’s only in service to its primary goal, which is finding mines. It has no dread of being damaged or of being destroyed entirely.
Which brings us back to the original question that Hankins and Davies were looking at. Regardless of how intelligent it might be, could we ever regard such a robot as conscious? If not, what does this tell us about our intuitive feeling of what consciousness fundamentally is?
I’ve done a lot of posts on this blog about consciousness. Much of what I’ve described in those posts (models, simulations, and the like) could be said to amount to a description of intelligence. I’ve mentioned to a few of you in recent conversations that this realization is bringing me back to a position I held when I first started this blog: that consciousness, intuitively, is intelligence plus emotions, that is, intelligence in service of survival instincts.
But maybe I’m missing something?