The complex composition of pain

When I was very young, the tops of my feet started itching, so I started scratching.  The itching continued for weeks and months, with me constantly scratching.  My poor mother, seeing my red and scratched feet, implored me to stop.  But the itching was relentless and I was maybe five or six, so I kept scratching.  Until my feet were a bloody, torn-up mess.  Eventually I was diagnosed with a milk allergy.

The thing is, I don’t remember ever feeling pain when I scratched the itch, only relief, even when my fingers came away bloody.  To be sure, there was pain, but not when I was scratching, only in the periods when the itching subsided.  And of course, there was plenty of pain as I healed once the itching was gone.

I recalled this episode while reading Jennifer Corns’ The Complex Reality of Pain.  I became interested in Corns’ book after Richard Brown interviewed her on his Consciousness Live podcast.  Corns is a scientific eliminativist toward pain: she thinks pain exists, and that it remains a productive concept in our everyday “folk” language, but that it isn’t a productive concept in scientific and medical investigations.  This makes her position less stark than Daniel Dennett’s; he is a full eliminativist about pain, in that he doesn’t see it as productive to retain the concept even in everyday language.

Now, your first reaction may be to wonder whether these people have ever had a headache, a toothache, a stubbed toe, or a broken bone.  But there is a logic to their position.

Consider our folk version of pain.  When we suffer some damage to the body, we feel it as a brute and unanalyzable experience.  Under this understanding, nociceptors in the body detect damage, or potential damage, and send their signal to the brain, where it is felt as pain.  The End.

Except, it turns out pain is poorly correlated with actual tissue damage, or even threat of damage.  There is my personal story above, where I didn’t feel something as pain that under different circumstances clearly would have been painful.  (My experience is somewhat explainable under gate-control theory.)

There are also plenty of cases where people feel pain in the absence of any damage.  I have a cousin, let’s call him “Jake”, who had serious back problems, leaving him in terrible pain.  Jake was prescribed opioids.  Eventually he had back surgery to fix the issue.  But the pain continued, with Jake having to take ever higher dosages of the opioids, until he overdosed and nearly died.  He was then pulled off the opioids, and suffered agonizing pain.  But as the withdrawal symptoms eased, so did the pain.  Jake’s pain, after the actual body issues had been corrected, was caused by the addiction.

But aside from such addictions, post-healing pain is not unusual.

Corns points out that even viewed purely from a phenomenological perspective, pain isn’t a simple thing.  The pain of a stomach ache is very different from the pain of stepping on a nail.  Pains can be throbbing, shooting, stabbing, pinching, cramping, burning, or stinging, among many other disparate sensations.

And the science shows that there is no single mechanism that accounts for pain, no single pathway that can be traced from the sensory receptors all the way through the brain.  There are in fact innumerable mechanisms.  Corns points out that pain is an extremely idiosyncratic process, with each individual instance involving a unique combination of mechanisms.  These mechanisms eventually converge on our categorizing the experience as pain and communicating about it.  But prior to that convergence, there is no single unifying mechanism that can be objectively pointed to.

Apparently in recognition of this situation, the International Association for the Study of Pain (IASP) defines “pain” as:

An unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage.

In the recently updated version on their site, they expand this definition with key notes, some of which include:

  • Pain is always a personal experience that is influenced to varying degrees by biological, psychological, and social factors.

  • Pain and nociception are different phenomena. Pain cannot be inferred solely from activity in sensory neurons.

  • Through their life experiences, individuals learn the concept of pain.

  • A person’s report of an experience as pain should be respected.

In other words, pain seems to exist only subjectively.

But Corns is not even satisfied with this definition.  She attacks the notion that pain is an evaluated negative threat affect, an emotional feeling related to an interoceptive (internal body) sensation.  She points out that the affect and the sensation can be dissociated, with people insisting that they are in pain, but not bothered by it.

So, pain doesn’t exist?  While the steps that bring us to this conclusion seem to make sense, it feels like something must have been missed.

My take is that while Corns is right to reject Dennett’s full-throated eliminativism, her position of scientific eliminativism has its own issues.  If the term “pain” remains useful in everyday language, we should examine why it remains useful, and strive to account for that usefulness scientifically.

Now, it seems plausible that pain doesn’t exist at certain levels of organization, similar to how chairs and tables don’t exist in particle physics.  Pain may be something we talk about at the psychological level that maps to only disparate mechanisms at the neurobiological level.  But again, I think we want to see those mappings accounted for.  Dismissing pain from science entirely feels like overkill.

I also think Corns is hasty in rejecting the affect/sensation pairing.  It is true that we’re not talking about just one affect here, but more a category of disparate threat-prediction affects that we associate with our pain concept.

And yes, the affect and sensation can be dissociated, but the dissociation seems to involve lobotomies, cingulotomies, various pathologies, or use of drugs such as opioids.  In other words, the dissociation happens in brains that are damaged, or whose workings are being interfered with.  When a healthy brain feels pain, there seems to be both an interoceptive sensation and an affect about that sensation.  If one or the other is missing, it doesn’t seem like we consider that to be pain.

In cases of dissociation, the person may feel a sensation they remember previously being painful, but without the negative affect, and so say they’re in pain but not bothered by it.  But if they had never had a negative affect associated with that sensation, it’s doubtful they ever would have categorized it as pain.

That doesn’t mean the interoceptive sensation necessarily originates from sensory neurons in a body location.  Jake’s opioid addiction resulted in him feeling an interoceptive sensation about his back that, after he was healed, no longer originated from his back.

Where then did it come from?  Warning: this is my own speculation.  It’s grounded in the neuroscience I’ve read, but I haven’t seen it explicitly within the context of pain research, and there may be complications I’m ignorant of.

First, we have to acknowledge that a pain affect is an extremely complex and idiosyncratic psychological state, with many causal factors including memories, perceptions, levels of fatigue, and mood, in addition to the interoceptive sensation itself.  That means that once “learned”, that affect could be triggered by those other causal factors, not just the sensation coming in from the peripheral nervous system.

But wouldn’t that just leave us with the affect itself without the sensation?  Here I think it’s important to remember that most of the connections between regions in the cortex are reciprocal.  So if region A fires a pattern that excites region B, region B probably has connections back to A, enabling it to recurrently excite A.  This allows activity in the two regions to reinforce and bind with each other, which probably happens when a sensation leads to a negative affect.  But it also means that a negative affect caused by non-sensory factors can effectively back-propagate to the sensory cortices, inducing them to fire as though they had received a signal from the body region.
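
The back-and-forth excitation described above can be sketched with a toy numerical model.  To be clear, this is purely my own illustration (the unit names, weights, and update rule are all made-up assumptions, not anything from Corns or the pain literature), but it shows how a purely top-down input to the affect unit can leave the sensory unit firing as though it had received a bodily signal:

```python
# Toy sketch (my own illustration, not from Corns' book): two units
# standing in for a sensory region S and an affect region A.  On each
# update, every unit decays, receives its external input, and is
# excited by the other unit through a reciprocal connection.

def simulate(sensory_input, affect_input, steps=200,
             w_forward=0.5, w_back=0.5, decay=0.3):
    """Run the two-unit loop to (near) steady state; return (S, A)."""
    s = a = 0.0
    for _ in range(steps):
        s, a = (decay * s + sensory_input + w_back * a,     # A excites S
                decay * a + affect_input + w_forward * s)   # S excites A
    return round(s, 2), round(a, 2)

# Bottom-up case: a body signal drives S, which recruits the affect A.
print(simulate(sensory_input=1.0, affect_input=0.0))  # (2.92, 2.08)

# Top-down case: non-sensory factors (memory, mood, context) drive A
# alone, yet the sensory unit still ends up active: activity that
# looks like a body signal, without one.
print(simulate(sensory_input=0.0, affect_input=1.0))  # (2.08, 2.92)
```

With these particular weights the loop settles to a stable level, since the feedback gain is below the runaway threshold; crank the reciprocal weights higher and the two units would amplify each other without bound, which is one crude way to picture a pain state that outlives its trigger.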

In other words, a psychological state can lead to real pain in the absence of actual body damage, or it can lead to signals from actual body damage being inhibited.  This would explain placebo effects, as well as their reverse: nocebo effects.

However it happens, pain can happen in the absence of body damage, and there can be bodily damage without pain.  Corns points out that a lot of chronic pain sufferers aren’t suffering from a body condition, but a brain one.  That doesn’t make their pain any less real, or any less tragic in the effects it can have on their lives.

It’s important to emphasize that these factors are generally not conscious ones.  In other words, this isn’t an argument that people consciously choose to be in pain.  But the mind, primarily the unconscious mind, has a powerful effect on what we do feel.

It’s a very counter-intuitive conclusion, but we get those when we study the mind.

Unless of course I’m missing something?

53 thoughts on “The complex composition of pain”

  1. Pain is one of the oldest biological mechanisms, so it’s not surprising it would manifest in myriad ways. Also not surprising that a complex brain would take the physical aspects to psychological levels.

    That even something as fundamental as pain is so hard to unpack shows just how complex living systems are!

    1. Saying pain is one of the oldest mechanisms is actually a major assumption. Joseph LeDoux would argue that threat detection and survival mechanisms are actually far older. (Going back to unicellular organisms.) And the science seems to show that those survival mechanisms and pain aren’t necessarily correlated.

      I don’t think that means pain isn’t ancient, just not nearly as ancient as what underlies it.

      Interestingly, one of the things Corns discusses is Indian hook-swinging ceremonies (think “A Man Called Horse”), where the men undergoing the ritual claim not to experience pain, and show no signs of it. If true, then they’ve managed to convince themselves out of feeling pain, actually making it an opinion. o_O

      1. “Saying pain is one of the oldest mechanisms is actually a major assumption.”

        You’re reacting as if I’d claimed it was the oldest. “one of” just means it’s ancient. I’m pretty sure we agree completely on that.

        “…the men undergoing the ritual claim not to experience pain, and show no signs of it.”

        I’m not sure about the claim, but it is possible to control one’s reaction to it.

        Fighters, and athletes in general, learn to manage pain. On some level, it is just a signal coming into the brain. One can choose to not respond to it consciously. There are reflexes involved one can’t control, but a lot of our reaction to pain is learned and psychological.

        Little kids are a good example. If they hurt themselves in some minor way, they look to adults for how to react. If adults show great concern, the kid has a strong reaction. If the adult is casual about it, the kid often just moves on accepting it. Dogs are kinda that way, too.

        Modern society, especially, makes a big deal about pain. A lot of it literally is in our heads.

        1. Oops, sorry. Missed your qualifier. My bad. I was too eager to talk about pain vs survival circuits.

          I actually would say that all pain is in our heads. In a lot of cases it’s triggered by sensory reception, but that’s not guaranteed.

          What’s interesting about fighters, athletes, and excited kids, is that often when the adrenaline is flowing, they don’t even feel the pain, or they feel it in some reduced manner. It’s only afterward, when they have a chance to come down, that the pain sets in.

          The problem is when the pain is only in our heads, dismissing it is far from simple. I wonder how the Indians do it, if any drugs are involved, or it’s only based on spiritual beliefs.

          1. I’m familiar with how adrenaline, or other distraction, can mask pain. That’s not what I’m referring to. On a general level, I mean the sort of pain that can be denied — it’s not debilitating.

            Pain can be so extreme we have to respond to it. In some cases our body will do it automatically. But think about those people who force themselves to hold their hand over a flame. The Amer-Inds weren’t the only culture with pain rituals. I saw a documentary about a group with a male initiation ritual involving biting ants worn into a glove the young man had to wear for a period. Apparently the ant bites were extraordinarily painful. And then there are people who’ve wired pain inputs to mental pleasure circuits.

            So, yeah, pain is a hugely complicated subject.

            I’m talking about the kind of pain that can be denied. A kid falling down is usually harmless. If adults react like it’s a big deal, the kid reacts likewise. What I find interesting there is that the level of pain allows that choice. As adults, we sometimes forget that and get wrapped up in every little pain — something we pass on to our kids.

            Another level up are fighters and athletes who learn to ignore pain (to a certain level). It’s not masking by adrenaline, it’s treating pain as an opinion your body is expressing. And ignoring that opinion.

            When it comes to spiritual belief, the power of the mind is strong enough, I think it’s possible people truly don’t experience the pain. As with sexual masochists, they translate it to religious rapture. Which really says a lot about how our brains can process pain.

            With the Amer-Inds, or those modern day tribes with the ants (and based on what I saw there), it’s really a matter of just manning TF up. It clearly hurts like hell. It’s proving you can play through the pain — proves you are worthy.

          2. It seems like we’re talking about four levels here (although maybe I’m missing some):
            1. No stimuli indicating damage or potential damage and no experience of pain.
            2. Stimuli with no experience of pain.
            3. Stimuli and experience of pain that is consciously ignored.
            4. Stimuli and experience of pain that is given in to.

            Definitely 3 happens. We’ve all been in situations where we simply overrode our pain to continue doing something. As you note, the intensity matters for whether that can succeed.

            But Corns says that the Indians in the hanging ceremonies claim and appear to be in 2. Based on what I cover in the post, it is conceivable they’ve psychologically convinced themselves into really being in 2.

          3. Did Corns actually speak with men who’d gone through the hanging ceremony? Did she witness any? What is the provenance of data that the men actually felt no pain?

            The ceremony I saw with the ants was clearly a trial of pain. I saw another documentary about a tribe that was gathering honey by raiding huge honeybee honey combs that were hanging from a cliff. A few trained tribe members scaled down the cliff on ropes to reach the site. Members below used burning grass to smoke out the bees, but some bees hung around anyway.

            The point is, the raiders got stung routinely, and I saw a shot of one guy, hanging off a rope, casually pluck a bee that stung his cheek off his face and toss it away. Close shots showed these guys had a number of scars and wounds.

            It’s another case of your #3: playing through the pain. As I mentioned, athletes experience this regularly. For that matter, my morning walks, an hour long with as brisk a pace as I can manage, and with lots of hills where I live, are not, by any means, comfortable. It’s a bit of pain I put myself through because I see good reasons for it. (Gets me outside, keeps my CV system healthy, keeps my mind more clear, etc.) As with most regular exercise, the trick is developing a bit of an addiction to it. (I overslept this morning and missed it. It feels weird. Definitely won’t miss tomorrow.)

            That all said, it’s also possible those Indians worked themselves into a form of religious rapture. But I’d want good evidence. The problem here is that personal testimony would have good reason to be false — the ethic of the ceremony seems to demand denial of feeling pain. (You mentioned earlier about wanting to read papers to dig into claims made. This would be a case like that for me.)

Good point on wanting to see the citations. Corns doesn’t mention any personal experience, and she doesn’t really make any citations on this that I can find, which is unfortunate. And quick googling around didn’t immediately get me anything on it, at least in regards to the pain, or lack of pain experience.

She mentions the ceremony in a list of examples for #2. Others include episodic analgesia (such as not feeling pain during a game or other event, something I’ve personally experienced), congenital insensitivity to pain, trepanation in East African societies, couvade in New Guinea and Brazil, and self-injurious behavior (SIB) such as self-cutting.

            Much of this might amount to a placebo effect. If you believe enough that whatever procedure you’re undergoing is good for you, it affects your psychological state, and changes your attitude, at both a conscious and unconscious level, toward the stimuli. Maybe when the fear is removed, a large part of the affect disappears, which in turn reduces the recurrent feedback between affect and sense, so you literally don’t feel it as much as someone not into the ritual, game, or whatever would.

          5. I think those hanging ceremonies are archaic (and possibly illegal in places), which is why I wondered what personal reporting or observation Corns might have. From what I can recall of A Man Called Horse (not much; saw it in a drive-in, and my date and I weren’t really watching the movie) pain was very much an element of the process. The ceremony was all about enduring the pain to prove worthiness.

            I would certainly take issue with conflating trepanning, episodic (or religious) analgesia, and self-cutting. For one thing, from what I understand, the whole point in self-cutting is to feel something. It’s that remapping of pain to pleasure I mentioned earlier.

            Psychological state definitely affects the perception, no question about that. Fear does amplify perceived or imagined pain. I think I’ve mentioned I used to quietly panic inside when I had to have blood drawn or get an injection. Needles and razor edges being one of my few phobias. But getting into skydiving for a while put it in perspective — the actual pain involved is trivial. The panic was all driven by the phobia. But why am I afraid of a common medical procedure but willingly jumping out of airplanes? That was upside down, so my mind began to see it differently. I still don’t love needles, but the panic is mostly (truly) gone.

As a boy, I was actually deathly afraid of shots. I remember having a knot in my stomach every time we went to the doctor’s office, because there was a good chance I wasn’t getting out of there without a shot.

Then, once when I was in the hospital and blubbering about a shot I was about to receive, my Dad asked, “Does it really hurt that bad, son?” I suddenly realized it didn’t hurt that bad, that it was in fact a very minor sting, and that nothing bad had ever come about from getting a shot. I did have to mentally rehearse to myself for a time that it wasn’t a big deal, but it seems like within a year or so, shots ceased being anything noteworthy.

            Going back to your point about kids taking their cue from parents, my Mom had shown me nothing but sympathy when getting the shots, and so they had remained a traumatic thing. My Dad simply pointing out it wasn’t a big deal seemed to break some cognitive block.

  2. I really like your analysis. I am reminded of an article over 20 years back in which the author explained that there was a large traffic of neural feedback regulating pain in rats. It is far from a bottom-up, periphery-to-central process. So your hypothesis about Jake seems entirely plausible.

    I like the IASP’s definition of pain, too. Clearly these folks recognize, if only implicitly, that definitions come late in scientific inquiry.

    1. Thanks!

I like the IASP definition too. In some ways, it’s a reaction to earlier practices where there was a narrow definition of pain, and a lot of people’s reports of pain that didn’t fit didn’t get taken seriously. But as very pragmatic guidance, it probably ends up capturing the reality pretty closely.

It seems that if we look at something closely enough, it disappears. This also seems natural. If you stick your nose into a crack in a large tree, can you still see the forest? (Back when I was professing, I likened college education to people shoving your face into the bark of a tree and shouting in your ear: “Can you see the forest now?”)

    I also see a trend that when we find out something we thought was simple is actually complex, we never just slap our heads and say, “Gee I thought it was simple, now I do not.” We go whole hog and say things like free will is an illusion and pain doesn’t exist.

    We are funny, but we don’t laugh enough.

    1. Good points. I have to say, in just about every college course I took that I thought was going to be interesting, I did feel like my face had been shoved into that bark. And for the ones I had no particular interest in, it left me wondering when I’d ever use that bark.

      Definitely agreed that we don’t laugh enough!

  4. Mike,
    You mentioned that Richard Brown podcast to me weeks ago, as well as that you were going to review Corns’ book. I provided a rough assessment of her position based upon that interview (found in this comment https://selfawarepatterns.com/2020/07/12/kurzgesagt-on-intelligence-and-prospects-for-engineered-intelligence/#comment-85094). I’m pleased to find that your final assessment very much corresponds! I was worried that she’d drive you closer to some of the theorists who I consider misguided and/or overly skilled in the art of rhetoric.

    Other than our agreement that the “pain” term will remain useful in science, did you find any of her less controversial accounts informative?

It seems to me that in an epistemic sense, there are two distinctly different varieties of brain function. One of them harbors no “phenomenal” component. Here the brain exists as a computer — it accepts input information and processes it for output function. No teleological dynamic exists here however, and so this form of function should tend to fail under the more “open” circumstances which thus can’t effectively be programmed for. Thus for life which needed to succeed under more open circumstances, evolution seems to have added a relatively tiny teleological dynamic, or the phenomena (such as “pain”) by which you and I experience existence.

    Furthermore given the social tool of morality, apparently the field of psychology remains unable to formally acknowledge this teleological dynamic’s purpose so far (which is to feel as good as it can for as long as it can, I think). Thus not only should psychology remain a “soft” variety of science, but our mental and behavioral sciences in general should remain this way given that void in effective overarching theory. I’ve often mentioned this to you in the past, though at some point it might begin making sense.

    1. Eric,
      “Other than our agreement that the “pain” term will remain useful in science, did you find any of her less controversial accounts informative?”

      I think my main takeaway is how mainstream in pain science it is that pain is a cognitive construction. (Note: “cognitive” here does not mean “conscious”.) It’s reflected in the IASP’s definition and key points. If something so primal seeming as pain is such a construction, it seems to provide a lot of support for the theory of constructed emotions and related views. I actually went back to Lisa Feldman Barrett’s book to review what she said about pain, and it pretty much lined up with the pain science.

      As I’ve noted before, it’s not so much that the primal emotion people are wrong, it’s that their terminology is different. What they’re calling an “emotion” is a lower level action program, and the consciousness they see at that level is anoetic consciousness, processing in humans that is below the reach of introspection and therefore usually thought of as unconscious.

      1. Mike,
        I presume that the house you live in was constructed by means of people with building materials. I wouldn’t say that it was “cognitively constructed”, that is unless I meant merely “imagined”. And if so then I wouldn’t say that your house otherwise exists. Would you agree with me that if your house were not constructed somehow by means of associated building materials, then it wouldn’t exist, except perhaps conceptually?

        If so, when you say it’s mainstream in pain science that pain is “cognitively constructed” (and yes stipulate that “cognitive” doesn’t equate with “conscious”), what are you saying that these modern scientists believe? How are you defining “cognitively constructed”?

        1. Eric,
By “cognitive”, I mean it’s not a lower level process, but one that can be affected by numerous psychological factors, including memory, non-somatic perceptions, beliefs, overall mood, etc. If someone gives you a sugar pill that you believe is an analgesic, it has real effects on how much pain you feel. If you’re distracted or excited, you literally may not feel an injury until later.

          On the other hand, if you suffered an injury that has healed, but harbor fears about the pain you felt, and there are context cues that put you in the same psychological state you were in when you were in pain before, that can literally put you in pain again, despite being healed. That’s a large part of the chronic pain suffering population.

          1. Mike,
            Given that you didn’t explicitly disagree, I take it that you agree that if you had a house that was “cognitively constructed”, or thus had no non-mental material composition, then it would merely be an imagined house. Furthermore the definition for “cognition” that you’ve just provided matches my own, as in “psychological factors, including memory, non-somatic perceptions, beliefs, overall mood, etc”. So here we’re square.

            Furthermore I’m not going to discount the position that there can be certain cognitive effects associated with the sorts of things that we feel. Fooled people might so strongly believe that a placebo will cure their chronic migraines, that there will be some measurable effects in a given study of many subjects. That’s why we administer them — not as treatment, but rather to help account for the biases which people bring into these studies. They aren’t needed for cat studies specifically because cats have no idea what we’re doing to them anyway. And the opposite should be the case when scientifically documented pain relievers are provided to various believers in the contrary. Human biases tend to mess up our studies (not to mention, science in general).

            My problem with Barrett’s theory of constructed emotion, is that it shouldn’t be tenable for emotions like sadness, curiosity, frustration and so on, to be learned given base affects (whatever those might be), because then we should see extreme variations in what causes us to feel such things on the basis of chance learning. Evolution must have required that specific dynamics evoke “sadness”, for example, beyond environmental conditioning effects alone.

I suspect that scientists have found no circuits for the various emotions that we feel (including the seven that Jaak Panksepp proposed, as well as whatever Barrett proposes as base affects today), because not one of them exists. Instead try conceptualizing all qualia, whether a headache, or curiosity about what I’m trying to say, or the scent of a flower, to exist somewhat the way pixels light up on your computer screen. As you know, there are no electronic circuits that produce specific types of screen images. Instead your computer provides information to your screen, and this information animates the function of those pixel mechanisms in ways that produce a vast spectrum of images. That’s essentially the way that I consider all qualia to exist. The brain provides information to qualia producing mechanisms, thus creating all that we feel. (And it seems to me that “qualia” exists as a very effective definition for “consciousness” itself. This is to say that anything with it displays consciousness, though not otherwise.)

            So what in the brain might exist as qualia producing mechanisms? Beyond the electromagnetic radiation associated with neuron firing, I don’t know what other mechanism might harbor sufficient informational fidelity. Note that unlike the informationism position which holds that thumb pain qualia can exist when certain information on paper is converted to other information on paper, this would instead provide a causal solution to the quandary.

            Furthermore note that from Barrett’s proposal, base affective circuits should exist in the brain, though no such circuits have yet been found. So I seem to have beat her at her own game. From my proposal those circuits haven’t been found, specifically because they don’t exist any more than various screen image circuits exist. And yes, this model does fit in well with the psychology based dual computers model of brain function that I’ve developed, or the basic variety of theory which modern psychologists make do without.

          2. Eric,
            If I don’t explicitly disagree, you shouldn’t take that as agreement. You shouldn’t take it as disagreement either. It only means I didn’t respond to it. Sometimes I overlook it, other times I choose not to respond. Sometimes I do disagree but explaining why would involve too much explanation.

Honestly, the distinction between cognitive and non-cognitive processing in the brain has always been a hazy one for me, and lots of reading in neuroscience has made it even hazier. Really at this point when I use that word, it usually refers to non-sensory cortical processing. Given the highly interconnected nature of the cortex though, that processing tends to have lots of the causal factors we mentioned. Of course, as we’ve discussed before, even sensory processing in the cortex has many causal factors, not all of them from the incoming signal from the sensory organ.

            Your discussion about placebos assumes that relief from placebo isn’t “real” relief, that it’s not addressing the “real” pain. But that’s failing to heed the science. There is no “real” pain. There is only the evaluation in your brain. Knocking out a particular mechanism may lead to relief, but it may not. And if the brain changes its evaluation for reasons other than change in the mechanism, that is still real relief.

            Barrett doesn’t argue against base affects. She just argues that they’re only a subset of the causal factors in our felt emotions.

            My experience is there is indeed a lot of variation in what causes sadness in people. Culture has a lot of power in affecting what emotions we feel in certain situations. An ancient Roman politician having an affair with another male would have had a very different reaction to it becoming public knowledge than an American politician in the same position in 1950.

            Jaak Panksepp claimed to have found identifiable circuits for his primary emotions. His evidence came from affect displays after he stimulated those circuits, which he equated with actual affects. But his camp believes in anoetic consciousness, a type of consciousness below the level of introspective access. Barrett and LeDoux consider what’s happening at this level to be unconscious survival circuits. The Panksepp camp uses the words “emotion” and “conscious”, but means different things by them. It’s all part of the terminological mess in this field that LeDoux lamented.

          3. Mike,
            So then what’s your answer for the “house” question? I presume that the house you live in (and as I recall from an old post, was once nearly flooded out), was constructed by means of people who used building materials. I wouldn’t say that it was “cognitively constructed”, that is unless I meant merely “imagined”. And sure, I do think that it does tend to be cognitively constructed by you and others also. So do you agree, disagree given some kind of explanation, choose not to answer, have an explanation that would be too involved to provide at the moment, or something else?

            On how the distinction between cognitive and non-cognitive processing in the brain seems hazy to you regardless of how much neuroscience you’ve read, I consider this entirely appropriate. It seems to me that no amount of neuroscience will ever provide anyone with such information, given that that sort of thing should reside at a higher level of abstraction. Theoretically “Spanish” must in some sense exist in the brain of a person who speaks it, though studying such a language by means of neuroscience should not be effective. That’s an extremity of course, but my point is that we need higher-level answers for higher-level questions. It seems to me that “cognitive versus non-cognitive” is a question which resides above neuroscience, and specifically in psychology.

            I propose a quite distinct difference between cognitive and non-cognitive processing. Here the entire brain functions non-cognitively, though it can produce a cognitive dynamic by means of certain mechanisms. It’s quite like the way that your computer creates screen images by animating the function of pixels. And by means of what sort of mechanism does cognition exist? I suspect certain types of electromagnetic waves associated with neuron firing.

            I do agree with you and James Cross that cognitive stuff (through either your information conception or my mechanism conception) does tend to alter other cognitive stuff. Here’s an obvious example. If I have a reasonably itchy foot but also get smacked hard in the thumb, I shouldn’t feel that itchiness anymore, given the far greater input to my conscious processor from the whacked thumb.

            I realize that in the past I’ve mistakenly interpreted Barrett to deny basic affects without associated teaching. Today I realize that she does not. But given that no basic affective circuits have been found, I’m also able to note that her theory is thus vulnerable to my own. My account says that basic affective circuits haven’t been found because they don’t exist. Instead I propose that all qualia require mechanical instantiation, as I’ve discussed.

            I certainly agree that sadness and such is highly dependent upon cultural dynamics.

            Jaak Panksepp’s seven emotions of “PLAY”, “PANIC/GRIEF”, “FEAR”, “RAGE”, “SEEKING”, “LUST” and “CARE” are ideas that he hoped would help straighten things out, but they haven’t. I don’t consider any of them fundamental, but rather to exist under a vast field of potential qualia. Regardless, is it true that he was able to find ways to elicit each of these through electrical signals in various parts of the brain? I suspect not.

            I do support his position that non-introspective dynamics may be useful to call “conscious”. This is to say that if there is something it is like for something to exist, or “qualia”, then consciousness will exist for the experiencer. It seems to me that LeDoux and Barrett should try to get on board with this, and so help make terms in the field more uniform.

          4. Eric,
            I don’t disagree with your house point, but I don’t see it as relevant in this discussion. The implication is that a cognitive construction is less “real” than non-cognitive neural construction, a distinction I don’t agree with.

            My point about cognitive vs non-cognitive is that I’m not sure how coherent the distinction is. I shared a talk a while back by someone pointing out that cognition was a concept developed to replace the part of the mind we once thought resided in an immaterial soul, something that happens between sensory and motor processing. But I think the distinction is an artificial one we project onto the system. The fact is, sensory processing, at least in the cortex, is heavily influenced by processing we refer to as “cognitive” and “motor”.

            Of course, your model is a dualistic one (a naturalistic physical dualism). So I can see you mapping the old distinction back into it. Although it’s now widely accepted that a lot of cognition is nonconscious. Not sure if you mean to drag nonconscious processing into your second computer. Although if I recall correctly, you see it as “semi-conscious”.

            I’m not totally on board with the distinction Barrett makes between affects and emotions. It seems to equate “affect” with the survival circuits, although that’s an equivalence a lot of animal researchers make.

            As I noted, Panksepp successfully showed a correlation between some circuits and affect-display. Is that equivalent to the felt emotion? Depends on your definitions of “consciousness”, “affect”, “felt”, and “emotion”.

            What would you say is the difference between non-introspective consciousness and nonconscious processing?

          5. Mike,
            On my “house” scenario (which I now understand you don’t disagree with), the point is that sometimes things can exist, and indeed sometimes must exist, non-cognitively (pace idealists and perhaps panpsychists). I believe that evolution required the human to have a wide spectrum of emotions given specific varieties of circumstances that it needed to deal with. Furthermore I of course believe that past cognition will tend to affect future cognition.

            What I perceive Barrett to be selling, however, is that a human cannot become “sad” until after somehow being taught sadness by someone else, and even needs spoken words for that education. Let me know if you think I’ve got her wrong.

            Furthermore I don’t believe that human emotions simply emerged in the human and perhaps a tiny number of similarly evolved creatures. Instead sentient creatures in general (which may even include the ant) should have developed such traits over maybe a half billion years of evolution. The human may have added a bit more to the pot as well given its own circumstances, which is to be expected. So when Barrett and LeDoux define “consciousness” such that it only exists in creatures with the capacity to reflect upon their own thought, I’m forced to assess them as part of the problem rather than the solution. I consider that definition unhelpful in general.

            I agree that “cognitive” versus “non-cognitive” may not be a very useful distinction under theorists whom you respect. Under my own theory, however, the distinction is perfectly clear. Here the entire brain is non-conscious (just as a computer that we build is), but it also creates a conscious form of computer on which to base some of its function. This auxiliary form of computer exists as the medium by which you and I perceive existence. On my psychology-based model, consciousness harbors three varieties of input, one variety of processor, and one variety of output.

            (In the future when you reference my model, could you do so under the “dual computers” label rather than “dualism”? Even with your “natural” stipulation, this is clearly a straw-man move. Chalmers refers to his own position this way as well, and I consider it full substance dualism. I do realize that you must not like how I characterize informationism as less than a natural position, though I am at least quite able to defend that characterization.)

            On “semi-conscious”, no, I don’t use that term. What I think you’re referring to is my dislike for the “unconscious” term. I consider it used in too many ways that tend to get mixed up. Instead I use “non-conscious” (for non-sentience-based function), “quasi-conscious” (when a mixture between the two varieties of computer is being referenced), and “altered states of consciousness” (to reference degraded states such as sleep or intoxication). That all fits in just fine under my model.

            Non-introspective consciousness here will exist for anything with qualia. This is the stuff that I believe brains produce by means of associated mechanisms (perhaps EM fields), and you believe exists by means of certain information that’s properly converted into other information. In either case I don’t consider it necessary for there to be anything “functional” here. Instead evolution should have used such physics to create a teleological dynamic so that life could better deal with more open circumstances. Then as for non-conscious processing, this is displayed by the computer that I’m typing on, and by the processing of all non-sentient function.

          6. Eric,
            On Barrett’s view, it is true that she sees our felt emotion of sadness as something we learn. But remember, she doesn’t deny basic innate affects. So if we define “sadness” as a negative valence with low arousal, Barrett would say it’s possible for that to happen without learning. But most of what triggers it in a mature human is learned.

            Generally I agree that emotions are broader than humans. I don’t agree with Barrett or LeDoux on this. But even in other animals, there’s a learned aspect to what they feel. Although it’s simpler than in humans. I think it’s more elaborate in mammals and birds, but far simpler in amphibians, fish, and arthropods, to the extent of barely being present.

            On Barrett and LeDoux’s definition of consciousness, LeDoux subscribes to higher order thought theories, but I’m not sure what Barrett’s preferred theory is, if she has one. She discusses prediction a lot, but that’s compatible with a lot of theories.

            I understand no one likes having their views referred to in a way they see as strawmanning. I feel much the same way when you describe mine as supernatural, but if it’s what you really think, I’d rather hear about it and argue against it. That said, I’ll try to remember not to use the d-word for your theory.

            Ah, it’s “quasi-conscious” instead of “semi-conscious.” Hopefully you noticed that I did use “non” rather than “un”.

            On non-introspective consciousness: so if you can’t introspect it, how do you know it’s conscious? You say if qualia are present, but how would you know they are in fact present? For example, from a purely phenomenological perspective, what separates that processing from the processing that regulates your blood sugar or hormones?

          7. Thanks Mike. As I’ve said before, you’re a man who displays tremendous integrity on these matters. This isn’t what I’d expect from any of the big names who adhere to the informationism stance. I can imagine the extent to which Dennett would spin things rather than admit that from his position certain information on paper that’s converted into other information on paper, will create what he knows of as “thumb pain”. (Of course he’d regress into his “qualia doesn’t actually exist” routine.)

            Regarding my position that it isn’t useful to mandate introspection for the creation of qualia / consciousness, let’s even take this from your stance rather than mine. Of course you believe that there is a kind of physics by which brains produce qualia, and it boils down to certain information processed into other information. Thus imagine us building a machine that produces a highly negative opposable-digit qualia in this way. (Note that I’d go along with whatever information medium you’d like to use to build such a machine.)

            Once we succeed, should the experiencer of this qualia also “introspect”? Unless we define qualia itself to inherently be introspected, then no, it should not introspect. By design our pathetic machine should just be something that suffers rather than exist as an evolved conscious entity. And when you ask me how it can know that it’s conscious given that it can do nothing more than suffer, and thus shouldn’t be able to effectively contemplate its own existence, my answer runs like this. Because the pain itself damn well hurts it! The existence of qualia itself should create the conscious entity, even one as pathetically primitive as this. It should reflectionlessly suffer.

            If you ask me what separates this processing from the processing that regulates blood sugar or hormones, I’ll tell you “not much”. Each of them should exist by means of non-conscious processes rather than conscious processes. For conscious processing we should need to create a far more advanced machine, or one that “thinks”. This is to say that it would need to interpret inputs such as its qualia, as well as construct scenarios about what to do to potentially make itself feel better. Evolved conscious entities in general should have this capacity.

          8. Thanks Eric.

            I don’t think you’re being fair to Dennett, but he’s always been a controversial figure, so he’s an easy target.

            I wasn’t asking if introspection is a necessary component of consciousness. We could choose to define it that way, and some do. After all, it is the traditional way we distinguish conscious from non-conscious processing in humans. But we can also recognize processing that humans can introspect happening in other animals, which we can then consider to be conscious, even if it happens without introspection.

            But my question was more about, in humans, if processing is happening below the level of introspection, how do we know if it’s conscious?

          9. Mike,
            I suppose that one good way to demonstrate whether Daniel Dennett is more the slick politician that I consider him to be, or the man of integrity that you hope he is, would be to present him with my “thumb pain” thought experiment. As an informationist, would he submit that if certain information on paper were properly processed into other information on paper, something in that shuffle would then experience what he does when his thumb gets whacked? If so then I’ve very much underestimated his integrity. Unfortunately we shouldn’t have the opportunity to try unless or until some version of my thought experiment becomes popular enough to reach people of his station. (And wow, I see that no less than Mark Bishop has commented on your new post! It seems to me that he could potentially help with that…)

            I guess I didn’t quite address your question last time, though that answer should still provide a helpful foundation. So here we’ve built a machine with qualia which is otherwise functionless. And as you’ve implied, many evolved varieties of life should demonstrate such physics as well, and even use it functionally for the survival of their various species.

            To address your actual question as I perceive it, let’s consider a healthy and educated language speaker like yourself. If you experience any quale, then it should theoretically be possible for you to acknowledge it and thus say to yourself something like, “I felt that quale”. So that would demonstrate introspection. Biological processes without qualia, which is to say without phenomenal consciousness, should not be possible for you to introspect. Why? Because if you don’t feel them then there shouldn’t be anything for “you” (as a conscious entity) to reflect upon. As I define it, that’s not how consciousness works.

            (I think I’ve mentioned before that I consider this “introspection” business to be highly anthropocentric. What use might it be, if it doesn’t address function before the language speaking human?)

            Regardless I think that I’ve otherwise provided an answer which resonates with yours. If something happens which an introspector can’t possibly introspect, then it should be effective to say that it lies beyond the realm of consciousness.

          10. Eric,
            Okay, good. So now, swinging back to the idea of anoetic consciousness. Anoetic processing, seemingly by definition (although getting clear definitions on any of this is always a challenge), is beyond the reach of introspection in humans. We might have access to its effects, but not to the processing itself. So using the introspection criteria, how would it be conscious in humans?

            And if it’s not conscious in humans, why should we regard it as so in other animals?

          11. Mike,
            I’m not all that comfortable using the “noetic” and “anoetic” terms here. Imagine the joke that the hard science of physics would have become if it hadn’t continually reduced redundant and misguided terms into the core terms used by physicists today. Without such reductions the field would surely look far more like our mental and behavioral sciences currently do, or “soft”.

            Unless by “noetic” you mean something other than “conscious”, or a term which I consider quite usefully defined as “function which is sentience based”, then let’s not overcomplicate our discussion. Or if we are going to use this term, then let’s make sure that this is exactly what we mean, or specifically clarify otherwise. For example, if “anoetic” indeed means “not conscious”, or “not sentience based”, then “anoetic consciousness” would be a clear contradiction in terms.

            Anyway yes, under normal conditions as an intelligent language speaker you have the potential to introspect any quale that your brain produces. Though you will have such access, you’ll have no access to what creates any quale — that sort of thing would exist beyond consciousness. (Thus isn’t my “dual computers” analogy effective in this regard? Here we have a basic non-conscious mode of function which creates an auxiliary conscious mode of function. Rather than attempt to combine the two into a single mode of function, you might give this a try yourself.)

            So you ask me, if the stuff that we can’t introspect isn’t conscious for the human, then why should we consider that sort of thing conscious in lower forms of life? But we both shouldn’t and don’t, for example, consider the physics which creates pain in rats to be something which rats are conscious of. In all cases it’s the quale that something can be conscious of, not what creates the quale.

            I guess the mixup that some people make in this regard (like Joseph LeDoux) is to decide that if introspection helps experimentally define what’s conscious in a standard language-speaking human, then any creature which does not have the advanced ability to introspect must not be effective to call “conscious”. If these people were instead to realize that introspection should be considered an extra trait which the modern human possesses, rather than a useful way to define the “consciousness” term, then perhaps the mental and behavioral sciences wouldn’t have taken this particular detour.

          12. Eric,
            I can understand not being comfortable with Endel Tulving’s terminology: anoetic (unknowing), noetic (knowing), and autonoetic (self knowing). And I agree that anoetic consciousness seems like a contradiction in terms. But that’s what people in the Panksepp, Merker, Solms camp seem to be postulating as baseline consciousness. Without it, what Panksepp calls primary emotions become nonconscious reflexive survival circuits, part of the causal melange that eventually feeds into emotional experiences, but not the composition of them.

            At this point, we’re now close to Barrett and LeDoux’s view. However, unlike them, I don’t think that view mandates that only humans have emotional experiences. In Barrett’s case in particular, I think it comes down to terminology, with her making a distinction between affects and emotions, a distinction I’d agree with you is anthropocentric.

            As I noted above, we can choose to define consciousness in such a way that introspection is required, but it’s not necessary. We can also choose to view the processing accessible by introspection, which animals also possess, as still being conscious, even if most of them can’t themselves reflect on it.

            Which view is the “correct” one? I don’t think there is a fact of the matter. But admittedly, that depends on whether you see phenomenal consciousness as an objective thing.

          13. Mike,
            First I’ll get into the full breadth of why I consider Endel Tulving’s terminology to be misguided. In a minor sense, just as Jennifer Corns would eliminate the “pain” term except for folk usages (which you and I disagree with), I’d highly restrict the “knowledge” term beyond folk usages. Technically when we say that we “know” things, for the most part what we mean is that we “believe” them with some level of conviction. Beyond tautologies and such the only thing that I can ever “know” to exist, is that “I” do somehow. Thus here I must bow to the great René Descartes.

            Even when Tulving’s “knowledge” term is interpreted as “belief”, however, I consider it not to be a basic enough idea, which really screws things up once it’s expanded upon. Instead of things which believe, and/or have self-belief, or conversely have no such capacity, consider using the concept of sentience, which is to say things which exist by means of qualia production. This directly references an effective definition for conscious existence, or lack thereof, as well as the creation of “self” itself through qualia production. The hypothetical machine that I earlier spoke of us creating would not be conscious, though it would be armed with physics from which to create a conscious entity (whether under your plan or mine). When powered up, this machine would not create something which believes, or perhaps even believes that it exists, but rather something that suffers the qualia commonly known as “thumb pain”. It is this physics, I think, that evolution used to create the teleological mode of function which is generally referred to as “consciousness” today, or the medium through which existence is perceived.

            I’m not going to justify everything which resides in the Panksepp, Merker, and Solms camp, but whenever they reference sentience based function in evolved creatures, then this will concern more than “non-conscious reflexive survival circuits”. Here they should be talking about a mode of function by which there is something it is like to exist. Unlike our pathetic thumb pain machine, for an evolved creature this should exist as something more like a teleological mode of computation.

            Phenomenal consciousness will exist for a given subject, which is to say a conscious entity, when qualia is produced. It seems to me that this sort of thing can only be experienced subjectively, though such physics itself should ultimately exist objectively. Apparently evolution implemented such physics to create sentient forms of life thus able to better deal with more open circumstances. Does that sound about right to you?

          14. Eric,
            I think of “knowledge” as just reliable belief. I don’t require absolute certitude to use the word “know”. That would indeed render the word useless, but I think it’s still a useful word as long as we remember its limitations. But as I noted above, I have my own issues with Tulving’s terminology.

            My understanding of consciousness is not of the brain “creating” a conscious entity, as though that entity were separate and apart from it. That’s your model. Mine is that the brain functions as a conscious system, just as the heart functions as a pump.

            The question is whether the low-level processes that Panksepp, Merker, and Solms are referring to count as sentience, with qualia, etc. I don’t think they do. Defining things subjectively is fine as far as it goes, but it seems to give little insight into the objective reality, a reality I think involves numerous complex and idiosyncratic mechanisms.

          15. Just a second, Mike; you seem to have created a false dichotomy here. When I say that the hypothetical machine we build would create a conscious entity, it’s only “separate and apart” in the sense that an epistemic distinction is being made in a causal realm. Similarly, a tree might create leaves, or a heart might create the flow of blood through your veins. So I’m not actually talking about anything different from what you are. I could just as well use your terminology and say that our hypothetical thumb-pain machine would function as a conscious system (pathetically so, but still).

            Clearly our mental and behavioral sciences need effective reductions for the mountain of material which exists in them today. You use a hierarchy to try to help manage that vast collection of emergent verbiage. Ultimately however science should need to begin from a solid foundation and work its way up to the human from an experimentally solid position, or the way that things occur in our hard sciences. To me it’s crazy that people instead begin from notions regarding the amazingly complex human, and then try to work backwards from those notions to potentially grasp the nature of this amazing creature.

            I suppose that I should weigh in on your new “Dimensions of animal consciousness” post in order to illustrate this as more of a backward-thinking defense than the foundational sort of perspective which should ultimately be needed to harden these fields up.

  5. Is pain only complex for humans because we have so many other capabilities to interpret it and modify it?

    I’m guessing placebos for managing pain only work on humans. I don’t know how you would give a cat a placebo.

    Pain might not be so complex for non-humans.

    For a while (and maybe still now), dashboards were all the rage in software. Everybody thought them useful to consolidate and simplify data, calling out the important stuff so it isn’t lost in the details. Pain seems like a sort of dashboard indicator: an indication that something is wrong, but something requiring more investigation to understand what it is – one of those red signals on a dashboard.

    1. James,
      Theoretically I suppose you could have some substance that looks and smells like cat food but actually provides zero nutrition. So the cat would eat it as if it were the mimicked food: a placebo versus the real thing for a study. Well, at least theoretically…

      To me, the way that some people today consider placebos to be actual treatment is hilarious. By definition, this is stuff with no effect! Placebos are obviously used to assess the results of actual treatments in studies, given that we know we’re being provided with something to treat a condition, and so are biased observers of ourselves.

      And yes, I’m sure that cats simply consider pain to feel horrible. If only modern scientists were similarly sensible.

        1. The cat would need to have some expectation from the food beyond sustenance. Most of my cats would suspect they were being fooled anyway and stop eating the fake stuff. For that matter, they are picky enough that they would probably stop eating it even if it was real food, in the expectation of something new.

        But placebos have a real effect; they aren’t used just to assess the results of actual treatment. We know that because we can compare the results of no treatment versus placebo treatment versus actual drug treatment. Frequently, for many pain drugs, placebos work almost as well as the actual drugs.

        “These data demonstrate that cognitive factors (e.g., expectation of pain relief) are capable of modulating physical and emotional states through the site-specific activation of μ-opioid receptor signaling in the human brain”.

        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6725254/

        Since the only active agent is the mind itself, it is hard to conclude other than that the mind itself has a certain degree of direct control over the brain. This, of course, aligns with neuro-feedback studies also suggesting that consciousness can mediate in the control of neurons down to the single-neuron level.

    2. The complexity in mechanisms is present throughout the nervous system, so I don’t see any reason to suspect it’s any less in other species. It might be that the number of psychological influences is smaller, but I still think they’d be there. It wouldn’t surprise me if mammals, at least, have both placebo and nocebo conditions, although figuring out how we could effect that seems like it would be a real challenge.

      The dashboard comparison is interesting, in that it represents a convergence of disparate mechanisms, with each unique problem coming from a unique idiosyncratic combination of those mechanisms.

  6. As usual, I liked your post and the comments/responses around it. Two issues were raised in my mind from reading your post. #1: would you suspect that the channels of pleasure parallel the channels of pain in our minds from the PoV of stimuli, affects, and feedback mechanisms? #2: I’ve always suspected there’s a cut-off mechanism in each of us (though not necessarily at exactly the same point) that prevents the level of pain from increasing toward infinity. It seems reasonable that the level of stimulus-pain might continue to increase while experience-pain levels off or drops off (or we lose consciousness). I’m not sure whether or not this is analogous, but when I was in high school, I played clarinet and saxophone, composed music, and had relative pitch (give me a concert C on a piano and I could tell you what the next 8 or 10 notes played after it were). I used to imagine a concert C and then imagine going up the scale, note by note, for the next 4-5 octaves, until I couldn’t imagine or play in my mind a higher note, although, in theory, the scales can go up an infinite number of octaves.

    1. Good question on pleasure. Given how idiosyncratic the pain mechanisms are, I doubt there are parallel channels. I suspect they’re actually all entangled, with pleasure just as complex and idiosyncratic. It’s not hard to see stimuli that under certain circumstances map to pain, but map to pleasure in a different context. Sometimes it might be because we simply associate the pain with progress, such as in a workout, but other times it might be like my experience, where scratching a bloody wound feels like relief rather than pain.

      Boy, I hope there is some kind of cut-off mechanism, but I’m not sure if there’s evidence for it. A guy who had large scale burns on his body once told me that the pain in the hospital with daily bandage changes was far worse than the pain he remembered from the initial burns. But from the story he told me, his adrenaline was sky high in the accident, while in the hospital he had nothing to do but agonize while he was healing.

    2. Mike Stone,
      It seems to me that on your octaves question, we do at least get a limit regarding the frequencies that the human is able to hear. Apparently dogs go farther up the scale. My perception is that there are no natural infinities in this world however.

      It seems to me that for the standard human, pain has the potential to get insanely bad. Thus either an evil god created us, or an indifferent evolution that, in this regard, can simply be horrible. I don’t know of anything positive that I’ve ever felt which I’d classify as even 1/100th the intensity of my worst pain. But maybe under experimental conditions, pleasure could go the full way?

      For his research, Jaak Panksepp would wire electrodes into pleasure-producing parts of the rat brain. When the rat was permitted to operate a pleasure lever, it would keep pressing it, without stopping for normal needs, until its muscles simply gave out. I also recall reading about this being done a bit in the 70s to train institutionalized homosexual men to instead be attracted to women. The results of hooking them up to pleasure switches were simply shocking!

      I guess direct pleasure experiments aren’t done anymore, or at least not for humans. They do however help demonstrate the validity of my amoral theory that qualia constitutes all that’s valuable to anything, anywhere.

  7. Thank you, Eric, for your informative and well-considered response. As for your “limit regarding the frequencies that the human is able to hear”, I was talking about my ability to imagine notes, not to hear them. There was no audible stimulus involved; I don’t think I even sub-vocalized the tones in my mind, but I can’t be sure of that (many people sub-vocalize while reading without being aware they are doing so). I do know that I can imagine the sounds of notes higher than I am capable of sub-vocalizing. I do remember reaching a limit beyond which I couldn’t imagine a higher tone. So anyway, I guess the consensus here is that there are no limits on how much pain we can experience and that pleasure and pain are not parallel networks.

    1. Mike Stone,
      You bring up an interesting question. When you think a note, is this entirely different from hearing such a note? In each case I’d say that there will be qualia. The qualia of hearing requires sense organ input, and conversely I’d say that thinking a note requires memory input. Though one will be far richer than the other, each of them should have recognizable similarities which you know of as that note. So it could be that the thought note is essentially a pared-down version of the other, and thus the brain does some of the same things to produce either example of qualia.

      To me this suggests that if a given note cannot be created through sound and then heard, then it cannot be imagined either. But what do you think? Can you think the sound of a dog whistle, which is not possible for a human to hear?

      1. Thanks, Eric, for pursuing this digression with me. I couldn’t imagine the sound of a dog whistle because I’ve never heard one. The way that I imagine such high notes is by starting off with a tone that I’ve heard recently and then calculating the next note after that one, then the next, and so on. Remember that I have fairly decent relative pitch. The difference between perfect pitch and relative pitch is that with perfect pitch you can identify any tone without having an initial tone (like a concert-C) to compare it to, while with relative pitch you can identify any tone after having an initial tone played for you. I can’t sing worth a damn, but I can hear if somebody plays off-key by much less than a quarter-tone. So the bottom line is that I don’t think I’m remembering some of those very high notes; rather, I’m calculating them relative to the first note I heard and then producing them in my imagination (up to a certain point, which is the limit of my ability to produce such a note in my imagination). It’s not like trying to count up to infinity, which I could, in theory, do until I drop. Anyway, this is something I’ve wondered about since I first started imagining musical scales in high school. I’m 73 years old now.

        1. Well then Mike Stone, I guess to test your position we could detect the frequency of a dog whistle, drop that frequency down by exact notes until you could hear it, and then from there you could try working your way up to your imagination of what a dog whistle would be like to hear. Is that right?

          If so, I guess I’m skeptical. It seems to me that the pitch that you do hear should provide relatively rich qualia when compared with the qualia of your thought based representation of it. From here it seems to me that properly scaling up to your imagination of what a dog whistle would sound like, should leave you with no imagined sound at all, or the same thing which you’re unable to hear. But then I guess you could try something like this and tell us if you’re able to imagine a sound which is too high for you to actually hear. Or perhaps you have?

          1. Interesting point, Eric. I suppose I could slow down the frequency of a dog whistle (if I had the equipment to do so) and then scale it back up until I couldn’t produce the sound in my imagination. I’m not sure whether I could imagine a sound that is too high for me to actually hear, but I am sure that I’ve imagined sounds higher than I’ve ever heard before (perhaps I could have heard it but I don’t remember having done so).

      2. “qualia of hearing requires sense organ input”

        Can we hallucinate a sound? I think so. I think I have. I know we can hallucinate visual objects, even entire scenes and landscapes, because I have. This is not imagining but actual vision.

        What about dreams?

        You could say that without a sense organ we cannot learn to see or hear.

        I think qualia are the product of the brain – that is, they are created by the brain and the sensory input is only absolutely required during learning. So a given note can be created without sound.

        1. James,
          When I say that the qualia of hearing requires sense organ input, I just mean that otherwise it might be referred to better with other terms, such as “imagination”, or perhaps “memory”.

          On hallucinations and dreams, I consider these to exist under “altered states of consciousness”. (This is one of the three more specific terms I use in place of “unconscious”.) During sleep the brain continues to function, though in a degraded way by which coherent thought regresses into commonly illogical dreams. Furthermore there are all sorts of substances that can be taken, and conditions that can exist, by which waking consciousness becomes deluded into hallucinations.

          There are also times where we aren’t confident about a given sense input for whatever the reason. Perhaps it was too faint, or perhaps we were engrossed in something else? Thus we might be uncertain whether or not we heard the door bell ring for example. Perhaps it was a hallucination?

          I also consider seeing and hearing to be learned skills. Furthermore I do agree that the qualia of a given note can be produced by my brain without sense input. I tend to find such qualia far more rich when the sense input is present however.

          1. Responding to your assertion, “During sleep, the brain continues to function, though in a degraded way by which coherent thought regresses into commonly illogical dreams”, I believe that rational and irrational thoughts are subsets of the brain’s capability to think. It can think rationally when appropriate and irrationally when appropriate (I put myself in an irrational mode of thinking when I want to be creative, e.g., when writing poetry). The brain constantly interprets whatever internal or external stimuli are presented to it, as when we view a Rorschach inkblot and attempt to interpret what we see. There is no degraded mode during dreaming. The brain just does what its circuits are “designed” (no implication of a “Designer” intended) to do. I do remember reading some sleep research about the transit along the pathways from stimulus to memory being reversed (memory-to-stimulus circuitry) during dreaming.

          2. Yes Mike Stone, if we can think both rationally and irrationally, then each must be a subset of the brain’s capability to think.

            I believe it’s become at least somewhat accepted in science today that conscious forms of function require this dynamic to largely shut down periodically for recuperation. Here we still “think”, though somewhat as if we’ve been drugged. Thus we might believe, for example, that what we’re thinking about is actually happening! In my own dreams all sorts of implausible scenarios make sense to me. It wouldn’t surprise me if dreams serve an evolutionarily adaptive role in themselves. At this point however I can’t discount the possibility that they’re a mere product of something else, or just degraded thought while consciousness recuperates.

            I certainly agree that dreams should largely play off memory. If we’re not using sense organs for thought input, then memory of past consciousness should be required to take that role.

          3. “Unconscious” and “altered state of consciousness” are different.

            Hallucinations can seem as real as, or even more real than, actual qualia generated by real objects. Hallucinations under DMT/ayahuasca frequently involve vivid and intricate constructions of entire landscapes, with parts alive and moving. How they originate is somewhat of a mystery, but any connection to memory must be very distant. fMRI studies find that the visual cortex is actually firing during these experiences, so something similar to vision is happening. Describing them as “imagination” doesn’t really capture their quality. I can testify to this from personal experience.

            The basic point here is that our realities, whether triggered by real objects or manufactured completely by the brain, are all in a sense hallucinations, because the representations of consciousness are all manufactured by the brain.

            György Buzsáki, in The Brain from Inside Out, argues that the brain actually comes pre-equipped with thousands of patterns, and that learning is a matter of matching patterns to external sensory input. While he doesn’t talk about this, I can imagine that the intricate hallucinations of drugs and some dreams might in part be products of the patterns we are born with combined with fragments of memories.

          4. I very much agree James. Nothing that we perceive exists merely because of outside input (“noumena”); ultimately it exists because the brain produces it, hallucinatory or not (“phenomena”). Of course it’s generally adaptive when perceptions correlate with what’s out there in at least some capacity, but not always.

            Regardless, I like to divide the “unconscious” term into three more specific terms. There is “non-conscious” to denote a complete absence of consciousness, “altered states of consciousness” when a degradation of standard thought is being referenced, such as in sleep or intoxication, and “quasi-conscious” when non-conscious influences are being referenced, like a given bias.
