The system components of pain

Image credit: Holger.Ellgaard via Wikipedia

Peter Hankins at Conscious Entities has a post looking at the morality of consciousness, which is a commentary on a piece at Nautilus by Jim Davies on the same topic.  I recommend reading both posts in their entirety, but the overall gist is that the question of which animals or systems are conscious has moral implications, since only conscious entities should be of moral concern.

From Peter’s post:

There are two main ways my consciousness affects my moral status. First, if I’m not conscious, I can’t be a moral subject, in the sense of being an agent (perhaps I can’t anyway, but if I’m not conscious it really seems I can’t get started). Second, I probably can’t be a moral object either; I don’t have any desires that can be thwarted and since I don’t have any experiences, I can’t suffer or feel pain.

Davies asks whether we need to give plants consideration. They respond to their environment and can suffer damage, but without a nervous system it seems unlikely they feel pain. However, pain is a complex business, with a mix of simple awareness of damage, actual experience of that essential bad thing that is the experiential core of pain, and in humans at least, all sorts of other distress and emotional response. This makes the task of deciding which creatures feel pain rather difficult…

I left a comment on Peter’s post, which I’m repeating here and expanding a bit.

I think it helps to consider what an organism needs to have in order to experience pain.  It seems to need an internal self-body image (Damasio’s proto-self) built by continuous signalling from an internal network of sensors (nerves) throughout its body.  It needs to have strong preferences about the state of that body so that when it receives signals that violate those preferences, it has powerful defensive impulses, impulses it cannot dismiss and can only inhibit with significant energy.
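
To make that component list concrete, here is a minimal toy sketch in Python. Everything in it (the class, the region names, the unit costs) is a hypothetical illustration of the architecture I'm describing, not a model of any real nervous system:

```python
# A toy sketch of the components described above. All names and numbers
# are illustrative assumptions, not a model of any real nervous system.

class Organism:
    def __init__(self):
        self.body_image = {}       # proto-self: last reported state of each region
        self.preferred = {"leg": "intact", "skin": "undamaged"}
        self.active_impulses = []  # defensive impulses currently firing

    def sense(self, region, state):
        """Continuous signalling from internal sensors updates the body image."""
        self.body_image[region] = state
        if self.preferred.get(region, state) != state:
            # A preference violation triggers an impulse that can't be
            # dismissed; it keeps firing as long as the violation persists.
            self.active_impulses.append(("withdraw", region))

    def inhibit(self, energy_budget):
        """Inhibition doesn't delete the impulse; it pays an ongoing cost."""
        cost = 1.0 * len(self.active_impulses)  # assumed cost per timestep
        if energy_budget < cost:
            raise RuntimeError("impulse wins: defensive reaction executes")
        return energy_budget - cost  # suppressed for this timestep only
```

The key feature is that nothing here ever deletes an active impulse; the organism can only keep paying, timestep after timestep, to hold it down.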

We could argue about whether it needs to have some level of introspection so it knows that it’s in pain, but it’s not clear that newborn babies have that capability, yet I wouldn’t be comfortable saying a newborn can’t feel pain.  (Although it used to be a common medical sentiment that they couldn’t, few people seem to believe that today.)

When asking if plants feel pain, you might argue that they can be damaged, and may respond to that damage, but I can't see any evidence that they build an internal body image.  They do seem to have impulses about finding water, catching sunlight, spreading seeds, etc., but it doesn't seem to amount to anything above robotic action, very slow robotic action by our standards.

Things get a little hazy with organisms that have nervous systems without any central brain, such as C. elegans worms.  These types of worms will respond to noxious stimuli, but it's hard to imagine they have any internal image in their diffuse and limited nervous system.  You could argue that their responses to stimuli constitute preferences, but these seem, again, like largely robotic impulses, although subject to classical conditioning.

But any vertebrate or invertebrate with distance senses has a central brain or ganglia.  They build image maps, models, of the environment and its relation to themselves.  Which means they have some notion of themselves as distinct from that environment, and likely have at least an incipient body image.  Couple that with the impulse responses they inherited from their worm forebears, and it seems like even the simplest such species have the necessary components.

I often read that insects don’t feel pain, but when I spray one, it sure buzzes and convulses like it’s in serious distress, enough so that I usually try to put it out of its misery if I can. Am I just projecting?  Perhaps, but I prefer to err on the side of caution (admittedly not to the extent of letting the bug continue to live in my house).

I think people resist the idea of animal consciousness because we eat them, use them for scientific research, or, in many cases, eradicate them when they cross our interests, and taking the stance that they’re not conscious avoids having to deal with difficult questions.  Myself, I don’t think the research or pest control should necessarily stop, but we should be clear about what we’re doing and carefully weigh the benefits against the cost.

But what about something like an autonomous mine sweeping robot?  It presumably has sensors to monitor its body state, and I'm sure that, given the option, it's programmed to maintain its body's functionality as long as possible.  When it becomes damaged from setting off a mine, is there any basis to conclude that it's in pain?

I did a post on the question of machine suffering last year.  My thoughts now are much the same as then, that unless we engineered the machine’s information processing systems with a certain architecture, it wouldn’t undergo what we think of as suffering.

Above, I said that to feel pain, the system would need to have strong preferences about the state of its body image, resulting in impulses it could not dismiss and could only inhibit with significant energy.  I think that's what's missing in the robot example.  It presumably can monitor its body state and take action to correct it if there's an opportunity, but if there isn't, it can log the issue and then calmly adjust to its current state and continue its mission as best it can.
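
By contrast, here's the robot from the example above in the same toy style (again, every name is hypothetical). For it, damage is a dismissable event: note it once, adjust, and move on, with no standing signal left over that needs inhibiting:

```python
# The robot's version of damage: a one-shot event, not a standing impulse.

class MineSweeper:
    def __init__(self):
        self.log = []

    def on_damage(self, region, state):
        self.log.append((region, state))  # record the issue once
        self.replan(avoid=region)         # adjust and continue the mission
        # Nothing keeps firing after this returns, so there is nothing
        # to suppress and no ongoing cost resembling distress.

    def replan(self, avoid):
        pass  # placeholder: route around the degraded component
```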

Living systems obviously don't have this capability.  We don't have the option of deciding that the pain isn't useful and having the distress it conveys go away.  (At least not without drugs.)

The robot is also missing another important quality.  It isn't a survival machine in the way that all living organisms are.  It likely has programming to preserve its functionality as long as possible, but that's only in service to its primary goal, which is finding mines.  It has no dread of being damaged or of being destroyed entirely.

Which brings us back to the original question that Hankins and Davies were looking at.  Regardless of how intelligent it might be, could we ever regard such a robot as conscious?  If not, what does this tell us about our intuitive feeling of what consciousness fundamentally is?

I’ve done a lot of posts on this blog about consciousness.  A lot of what I’ve described in those posts (models, simulations, etc.) could often be said to amount to a description of intelligence.  I’ve mentioned to a few of you recently in conversations that this realization is bringing me back to a position I held when I first started this blog: that consciousness is, intuitively, intelligence plus emotions, that is, intelligence in service of survival instincts.

But maybe I’m missing something?

29 thoughts on “The system components of pain”

  1. I don’t think a brain is required to feel pain. In humans, pain often stimulates movement to avoid it without involving the brain: the nerve signals go to the spinal cord and come straight back to actuate muscle responses, if I am not mistaken. A brain is required to consider pain and to ascribe nuances, but the sensation is simply one associated with a particular kind of nerve (pain nerves, as opposed to pressure or temperature nerves).

    1. This gets into how we define “pain”. If a patient is brain dead, but still has spinal withdrawal reflexes, is there any pain being felt when their finger is pricked or burned? Is pain actually pain if there’s nothing to put the sensation into some kind of context?

  2. I have a hard time with the whole insects don’t feel pain thing. I get that their nervous systems are supposedly different than ours, and so they probably don’t feel pain as we understand it. But when they’re injured, they sure do act like they’re feeling something.

    1. I read something the other day by somebody making a distinction between “defense circuits” and “fear”. I suspect that’s the stance taken by a lot of biologists, that insects have such circuits but not “higher order processing”. While I doubt an insect grasps the full implications of its injuries for its survival, I agree they sure seem like they’re feeling their own version of pain.

  3. Hi,
    my comments are always very short and sporadic, but I follow your postings. After thinking about consciousness, though, I couldn’t really explain why we aren’t all philosophical zombies.
    The only explanation I was left with, as I wrote before, is that life without consciousness would be like playing poker without money. There must be something at stake; otherwise, why give consideration to each other? A society would not function.
    Isn’t this the same as the overall gist of the article? Consciousness exists because of its moral implications. This is its evolutionary reason.
    greetings

    1. Thanks, and good hearing from you!

      I’m not sure I’d put it that way, but I think I get your point. It seems like the two issues, the definition of consciousness and the definition of a moral subject, are tightly intertwined, with the desire for wellbeing being at the center.

  4. Discussing C. elegans worms, you flirt with the idea that there needs to be a central location where incentives are weighed against each other. I think you should not just flirt, but embrace this idea.

    Morsella writes:

    phenomenal states play an essential role in permitting interactions among supramodular response systems—agentic, independent, multimodal, information-processing structures defined by their concerns (e.g., instrumental action vs. certain bodily needs).

    If you’re carrying a scorching plate to the dinner table (Morsella’s example) rather than dropping it, so as not to waste the food, you’re painfully aware of the tradeoff. That paper is well worth a read.

    1. Thanks for the link! Reading the abstract, I like where he’s going. I’ll try to take a closer look at the paper when I get a chance.

      I’m open to the possibility that a distributed system could have the same functionality, but evolution seems to favor the centralized approach, indicating that it’s probably the less costly, more efficient solution. I don’t think integration is the full answer, but it is definitely a crucial aspect, and integration is a lot easier when all the processing areas are close to each other.

  5. I think it’s the opposite – it’s actually about how much we don’t feel pain/feedback from senses that report damage or potential damage events.

    We think we are feeling it when we are actually feeling it less, because the pain is a feedback to our thinking. Consider the act of stealing honey from bees – yes, you have the pain of stings, but the caloric payoff is immense! Pain is part of a cost/benefit analysis by the brain. Is the pain worth it for the gain? Of course, if the pain of stings just stopped you from gathering the honey, made you averse, you’d never gain it. So an absolute ‘pain/don’t do it’ response misses so many opportunities. Whereas if you REDUCE the pain’s effect from an absolute no-go to varying levels of negative feedback, now you can potentially learn to take the stings in exchange for the big honey payoff!

    Your feeling pain is actually a matter of how much you don’t feel pain. Real pain would stop you from doing something, instantly. What you deal with is watered down pain.

    1. Hmmm. Not sure if I’m catching your point. Certainly pain comes in various intensities, and I definitely think the brain weighs the pain of a scenario against any potential reward. It’s why it can inhibit the primal instinctive reaction, because it is instinctively attracted to a reward, such as the honey. But the higher the intensity of the pain, the more energy is required to inhibit the reaction, so that inhibiting great pain needs the perception of great reward (or the avoidance of a sufficiently bad outcome).

      Or perhaps I should ask: which statement in particular of mine is this observation in opposition to? It seems like an observation that corroborates a nuance of what I said.

      1. Unless you’ve committed to the idea that it is a question of how much you are not feeling pain, you’re writing from the regular default position of talking about feeling pain. When have you considered an alternative to ‘I feel pain’? So why would you have written any specific observation on the matter that I could point out? Likewise, where have you made an observation that you’re human? I can’t pick that out in particular either, but it’s there in the text – without a specific example of it being there.

        But the higher the intensity of the pain, the more energy is required to inhibit the reaction

        The more virtual energy. It’s not like it takes real energy.

        But in the end, pain can only be overridden to the extent it is not felt – if there’s ten bucks but you have to reach into some fairly icy water to get it, it’s not so much you’re feeling pain from the icy water as you feel so very little pain from it. If you felt absolute pain you just couldn’t do it. People think in terms of them feeling pain, but it’s kind of like if someone stapled their finger and then said they feel death. Most people would say that’s a bit extreme to call that minor injury a matter of ‘feeling death/dead’. It’s really ignoring the abundance of remaining life that waters down the effect of that damage. But when it comes to pain, people report they feel pain, ignoring the abundance of watering down of pain involved.

        1. So, if I’m only feeling 40% of the maximum possible pain I could be feeling, what you’re saying is I’m not feeling 60% of the pain? If so, I agree that’s true, but I would think most people’s attention is going to be on the 40%, wishing they were at 0% or at least some lower level. I know when I’m in pain, it’s hard to have the glass-half-full outlook.

          Virtual energy? I’d agree it requires mental energy. But I see everything mental as physical, so overriding the electrochemical firing from the brain’s emotional circuits requires electrochemical firing from other circuits (probably from the prefrontal cortex). It seems like a higher intensity of signalling (number of neurons, firing rates, etc.) from those emotional circuits would require stronger signalling to override. All this signalling requires more neurons to fire at a higher frequency than the baseline, which burns through more ATP, the primary energy source for cellular activity. This makes sense when you consider that willpower is subject to fatigue.

          It seems like the pain I’m going to feel with the icy water will be the same, but whether I decide to endure it will come down to how badly I want the $10. If it’s $1000, I’m much more likely to endure it. But while my hand is in the water, wouldn’t I be fighting the urge to withdraw it, burning ATP in the process? And if retrieving the money takes a while, I might eventually reach a point where I can’t take it anymore and withdraw my hand, at least for a recovery period. But how long I endure the cold may change depending on how much time I have. It seems like there are multiple dimensions of sliding scales here.
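
          To put toy numbers on those sliding scales (every value here is made up, just to show the shape of the tradeoff):

          ```python
          # Toy model of the icy water tradeoff; all numbers are invented.
          def seconds_endured(reward, pain_per_sec, willpower=100.0, time_limit=60):
              motivation = reward / 10.0  # $10 buys 1x inhibition, $1000 buys 100x
              t = 0
              while willpower > 0 and t < time_limit:
                  willpower -= pain_per_sec / motivation  # stronger pain drains faster
                  t += 1
              return t

          print(seconds_endured(reward=10, pain_per_sec=5))    # 20: gives up early
          print(seconds_endured(reward=1000, pain_per_sec=5))  # 60: lasts the full time
          ```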

          Sorry if I’m still missing your point.

          1. The idea is to look at what is used to dilute absolute pain. It’s a way of looking backstage, while saying ‘I feel pain’ is being in the audience, I’d say.

            Virtual energy?

            Yes. I mean, if I have a computer with the number 1000 in it, I don’t use up more electricity to have that number as opposed to 1 (not in any serious way). It’s not like having 1000 marbles to represent 1000 and one marble to represent 1 – the energy cost difference between the two isn’t like that at all. Yet testing if 1000 > 1 results in true, even if 1000 marbles/bits of energy were not used. Why would your pain somehow be represented as a genuine amount of energy in scale with that pain when 1000 is not represented with more energy than 1? Why wouldn’t your pain just be a virtual value? More to the point, which would be more efficient, in evolutionary terms: genuinely using up energy to represent pain, or using virtual values?
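
            For instance (a quick illustration, assuming fixed-width integers):

            ```python
            import struct, sys

            # Fixed-width representation: 1 and 1000 take the same four bytes.
            print(len(struct.pack(">i", 1)))     # 4
            print(len(struct.pack(">i", 1000)))  # 4

            # Even CPython's variable-length ints don't grow between 1 and 1000.
            print(sys.getsizeof(1), sys.getsizeof(1000))  # 28 28 on a 64-bit build
            ```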

            While fatigue would use up virtual currency itself – some runners have literally had muscle meltdowns. They’ve managed to let their virtual system get out of sync with their physical system’s actual capacity.

    2. “Yes. I mean, if I have a computer with the number 1000 in it, I don’t use up more electricity to have that number as opposed to 1”

      I agree. But I think the comparison is complicated. If I’m trying to recognize an object, say an object that could turn out to be a dog or a cat, I doubt the pattern recognition that takes place in my brain will use much additional energy whichever pattern, dog or cat, comes out on top. The amount of electrochemical and ATP energy usage is probably not that different for each. Of course, recognizing a pattern is much more complicated than comparing two registers in the computer, but the point is that the energy usage in both scenarios shouldn’t change based on the outcome.

      But you’ve probably had the experience of your computer doing something that led to it getting hotter and the fan coming on. (It could be a virus scan, search index, or anything along those lines. I once saw a computer’s CPU spike every time a certain printer was plugged into its USB port.) This happens because the extra work that the CPU is performing increases its temperature, eventually triggering the sensor that activates the fan. Your computer is using more electrical energy in this state.

      When we’re in pain, our brain is receiving signals which it interprets as a violation of its preferred body image. That triggers emotional circuits and releases hormones that increase our heart rate and brain wave frequencies, along with many other physiological symptoms. Resisting the default impulses that go along with this requires the prefrontal cortex to send inhibiting signals. But it can’t do that once and then be done. It has to keep doing it the entire time the impulse is being triggered, and if the activation of the impulse is intense and frequent, it will require an inhibition that is comparably intense and frequent. Otherwise the inhibition signal may be overwhelmed.

      All this activation and inhibition burns a lot of energy. It’s two processes working in tension against each other. It’d be a lot more comfortable for us if it worked more like the pattern recognition or number comparison, like it could work for an engineered robot. Unfortunately, that’s not the hand nature dealt us.

      On runners, I wonder if the condition you’re describing has any relation to the second wind runners experience on long runs. That’s always sounded suspicious to me, like maybe it was something very much like what you describe, the brain getting out of sync with the state of the body.

      1. All this activation and inhibition burns a lot of energy. It’s two processes working in tension against each other.

        At best, you have processes trying to access memories toward doing the task while, at the same time, accessing memories in regard to not doing the task. But the activations and inhibitions themselves – I don’t think they burn a lot of energy. I don’t think it’d make evolutionary sense for them to do so, and most likely in a young child’s mind, where there are far fewer memories to call up on either side, you’d probably see very little in the way of energy use when it comes to inhibition.

        1. Hi Callan,
          A little late, but you seem to have missed out on some negative life experience. Happily for you, I would say. I think last month there was an interesting article in the Economist science section about the negative health outcomes for people who keep secrets. Those things you describe as virtual can shorten your life span. Just google secrets and health.
          Pain can wear you out. People commit suicide because of pain. That doesn’t fit your theory either.
          Also, your idea of pain being just absolute and unbearable, as if there is only one real pain, by definition unbearable, and the rest is watered-down pain. I don’t know; I can’t really get into that concept.
          greetings

          1. Hello Oscar,

            Why don’t they fit the theory? You seem to be treating ‘virtual’ as non-existent. The text you are reading right now isn’t ink on paper – it’s virtual. It’s an LCD pixel grid construct of the written word – it’s not the written word, but it is something. It’s virtual.

            Also, your idea of pain being just absolute and unbearable, as if there is only one real pain, by definition unbearable, and the rest is watered-down pain. I don’t know; I can’t really get into that concept.

            The point is we think we deal with pain, when really we deal more with watering. In fact, ‘water’ is the only thing that lets us ‘deal’ at all. Without it we are pure reaction. Anyone in agonizing pain generally loses control of themselves because they become pure reaction. Someone holding their hand six inches above a lit candle might report ‘I feel pain’, but clearly they don’t lose control of their entire body in a pain spasm. So obviously they are not just feeling pain.

  6. I left this comment, which also addresses your notes here on plants:

    Regarding plants, although not cognitively aware of the sensation of pain, plants (from 3.5-billion-year-old algae to angiosperms) not only experience suffering in the form of chemical panic felt by the entire organism via electrical impulses transmitted across the plasmodesmata, but it is now known that they live in fear of their ferociously peculiar understanding of pain (See Nasir, J., 2001: Paranoia in plants. Clinical Genetics. 59(5): 302-303).

    Located deep inside the plant genome, isolated within the first intron of MPK4, lie three ancient genes (PR1, PR2, PR5) that have revealed to researchers that MPK4 is devoted to negative regulation of PR gene expression. This gene expression is anticipatory. It is expectant. It is preparatory. It is suspicious. It is, in a word, fearful. If translated to the human experience, the PR gene expression is what a human observer would identify as a deep-rooted, physiologically hardwired anxiety; a most ancient paranoia.

    1. Thanks John! It sounds interesting. I dug up the paper. Unfortunately, although it’s brief, its terminology was hopelessly inside-baseball for me.

      It seems like a lot depends on how loosely we want to define terms like “suffer”, “fear”, or “pain”. These terms, to me, imply perceptions and emotions, at a minimum. The latter set of terms you use, “anticipatory”, “expectant”, and “preparatory”, I could see being used without those cognitive capabilities, but I could also see them being used for technological systems. But I’m not sure I’d buy that using them in that way adds up to the first set.

      But maybe I’m missing some key concepts here?

      1. Oh no, you’re not missing anything. It’s a huge stretch to say “fear,” but I use that example as one proof for the existence of The Owner of All Infernal Names. Natural Theology is fun to play with 🙂

    1. ontologicalrealist,
      What does it mean to be a subject or have subjectivity? My answer is it means an entity is aware of its environment and itself, and has strong preferences and desires about its state, preferences and desires it can’t dismiss or ignore, and can only inhibit with significant energy. I’m pretty sure a rock doesn’t have those attributes, but that, for instance, a dog does.

      Totally agree with your last sentence.

  7. Well, I don’t know how we can determine whether something feels pain outside of a comparison to ourselves in some fashion, but I’m with you on your attitude toward bugs. Plus, I see their behavior as strong evidence that they do feel pain, regardless of their physical similarity to us, or lack thereof. Since they wiggle and squirm and seem to be in serious distress when injured, their behavior is enough evidence to conclude that they do, even if their physical characteristics don’t give us much else to go by for now.

    1. I think you’re right. We have no objective way to ever determine if another creature is in pain. We can only judge by their behavior, and possibly their heart rate and hormone levels (if tracking that is at all practical in bugs). But it does seem like if we saw a mammal going through what a sprayed bug typically goes through, we’d be horrified.

      1. That’s a good point. An insect just doesn’t evoke much sympathy from us, at least not compared to other animals, and I suspect it’s because they’re so alien. It’s sort of a continuum for me with really horrifying insects (black widow spiders and scorpions) at the lower end of the sympathy scale. Really, any spider is disgusting for me, but if I know it can cause harm, that gives me less reason to feel bad about killing it. I know some people who catch these critters and let them loose outside, but I’ve also heard of them getting bitten or stung while making such attempts.

        1. It’s also worth noting that insects don’t return our empathy, or generally have any for each other. (I suppose ants might have some for other members of their nest, but I haven’t seen any evidence for it.) Still, while my empathy doesn’t extend to going through the effort to carry them outside, I do try to give them a quick death, if I can.
