The layers of emotional feelings

One of the ongoing debates in neuroscience concerns the nature of emotions: where they originate, where they are felt, and how much of them is innate versus learned.

One view, championed by the late Jaak Panksepp and his followers, sees emotions as innate, primal, and subcortical.  They allow that the more complex social emotions, such as shame, involve social learning, but see states such as fear, anger, and grief as innate and physiological.  They generally do not make a distinction between the feeling of the emotion and the underlying reflexive circuitry.

The other view, championed by Lisa Feldman Barrett and Joseph Ledoux, sees emotions as cognitive constructions, something we learn throughout our lives.  Ledoux in particular recently wrapped up a series of blog posts on this subject that is well worth checking out.

In many ways, this view resembles the classical James-Lange theory, which held that a stimulus causes physiological changes, which we then interoceptively feel and interpret, with that interpretation being the felt emotion.  But modern constructivists are not adherents of James-Lange.  The brain has extensive connections between the regions that initiate the physiological changes and the ones where the feeling of those changes occurs.  The interoceptive resonance is undoubtedly an important input to the experience, but it’s only a part of it.

In the past, when discussing this debate between basic emotions and constructed ones, I’ve noted that much of it, perhaps all of it, comes down to definition disputes.  Ledoux himself seemed to acknowledge this in a podcast interview, where he noted that he and his friend, Antonio Damasio, who is more in the basic emotion camp, agree on all the scientific facts.  They just disagree on how to interpret them.

In other words, it may be that there isn’t a fact-of-the-matter answer to this debate.  When faced with these scenarios, I think there’s value in laying out the different positions and their relations.  Yes, we’re talking layers and hierarchy again.  These frameworks are a simplification, perhaps an oversimplification, but they help me keep things straight.

As I noted when discussing the layers of consciousness, this is not any kind of new theory.  I fully confess to not having the expertise for that.  It’s really just a way of relating the major views.

The hierarchy of emotional feeling:

  1. Survival circuits: A stimulus comes in and triggers reflexive survival circuits.  This causes physiological changes: heart rate, blood pressure, breathing rate, arousal levels, etc.  If not inhibited by higher level circuitry, it may lead to automatic action.
  2. Communication from the survival circuitry to higher level circuitry:  The signals rise up from subcortical regions in the brainstem and limbic system, including the amygdala, to cortical regions.  It’s important to note that this communication is two-way.  The higher level circuitry, particularly the prefrontal cortex, has the ability to inhibit the motor action aspects of the survival circuits.
  3. Interoception loop: The effects of the physiological changes are interoceptively sensed, adding to and reinforcing the signal in 2.  (Note: “interoception” refers to sensing the internal state of the body.)
  4. Construction of the representation: A mental representation of 2 and 3, along with what they might mean in a broader autobiographical context, is built.  Ledoux calls this a “schema”, positing that we have a fear schema, an anger schema, etc.  Whatever we call it, it’s a model, or galaxy of models, related to the signal from the reflexive survival circuits.
  5. Utilization of the representation: The representation from 4 is available for cognitive access.  In humans, this typically involves using it in action-sensory scenario simulations, although it may be used for much simpler processing: a single prediction of cause and effect.  The result of this is that some survival circuit actions are inhibited and some allowed to continue.  Note that sometimes all of them are inhibited. (The animal freezes.)
  6. Introspective access to the representation:  For a species that has introspection (humans and possibly some great apes and other relatively intelligent species), this allows knowledge that the feeling is being experienced.

Note that this list is in the typical “feed forward” order where a feeling is externally stimulated.  It’s possible for someone to initially have 4 and 5 in the absence of 1-3, which can cause a “feed back” signal back down to 1, and then up again through the layers.  In other words, thinking about something that makes you angry can set up the loop that makes you feel angry.

So, where in this is “the emotion”?  A basic emotion advocate will see it happening in 1, although whether it is a conscious feeling at this stage depends on which one you ask.  Damasio seems to see this stage as pre-conscious.  He explicitly defines “emotion” as the survival sequence, the “action program.”

But many in the Panksepp camp do see some form of consciousness at this stage, although they see it as an anoetic form of consciousness, that is, phenomenal consciousness but not access consciousness, a sensation we don’t have introspective access to.

Constructivists like Barrett and Ledoux seem to only see the conscious emotional feeling as existing in layer 6.  But this seems to require consciousness in the full autonoetic (meta-aware) fashion that only humans and perhaps a few other species possess.  In other words, in their view, only humans and maybe a few other species have emotions.

My own view is that the conscious feeling of the emotion happens in 5, whether or not it’s being introspectively accessed.  This substantially widens the range of species that can be regarded as having emotional feelings, inclusive of all mammals and birds, although the complexity of the feeling varies tremendously depending on the intelligence of the species.  A mouse’s emotional feelings are far simpler than a chimpanzee’s.

To me, it feels like the Panksepp camp’s attribution of consciousness to layer 1 is stretching the concept of “consciousness” too far.  On the other hand, Barrett’s and Ledoux’s requirement for full autonoetic consciousness goes too far in the other restrictive direction.  And they seem reluctant to admit that 2 does provide a link between the higher level representations / schema / concept and the lower level impulses.

My view is that consciousness is composed of cognition that, in humans, is within the scope of introspection.  Much of that same cognition also exists in other species, with varying levels of sophistication, even if they themselves can’t introspect it.  That means that a dog can be angry, although anger doesn’t have the same scope of meaning for them as it does for us.

But the thing to understand is that these are philosophical conclusions, not scientific ones.  As far as I know, what I laid out in the layers represents the current scientific consensus of both camps.  Consciousness is in the eye of the beholder, and so, it appears, are conscious feelings.

Unless of course I’m missing something?  What do you think about the layers?  Where in them do you see the emotion, and if separate, the conscious feeling?  And what makes you see it that way?

This entry was posted in Mind and AI.

41 Responses to The layers of emotional feelings

  1. Erm … I’m having some trouble with your hierarchy.

    A stimulus comes in and triggers reflexive survival circuits

    What does the path of a typical “survival circuit” look like? I’m thinking sense neurons —> subcortical routing structure (thalamus)—> cortex (recognition)—> subcortical structure (routing stimulus to action) —> cortex (action plan) —> cerebellum (muscle coordination). So does this count as a survival circuit, or is it already level 2? Are there survival circuits that don’t pass thru the cortex at all?



    • Based on what I’ve read, the path of a typical survival circuit comes in from sensory neurons, goes through the brainstem, then to various subcortical structures, often including the hypothalamus and amygdala. They often reflexively release hormones. If not inhibited, they stimulate motor neurons in the brainstem, which cascade to peripheral neuromuscular junctions.

      If you’d like some specific examples:
      https://www.princeton.edu/~ndaw/ld18.pdf (page 7 on)
      or
      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3625946/ (Defense as an example section)

      The thalamo-cortical system is usually considered to be outside of the survival circuits, although it can affect them.


      • Thanks for those links. I got halfway thru the first, printed it out, and started this response.

        I recognize there are some sensory inputs that go directly to the brainstem, but these seem to be touch/interoceptive and also some sound. So, sudden pokes and loud sounds generate a startle response via the brainstem. Maybe there is also one for sudden flash of light. But I’m thinking more about things like response to predators. In order to respond to a predator you have to recognize a predator. I’ll wager that recognition necessarily goes thru visual cortex before signals get sent to the amygdala.

        [looking up second link now]


        • About 10% of the axons from the optic nerve actually go directly to the superior colliculus in the midbrain (part of the brainstem). All of these axons are from rods rather than cones. This creates low resolution colorless imagery in the midbrain region, enough to provide rough information for reflexive and habitual reactions. (Spookily, this is imagery we have no introspective access to.)

          The role of the sensory cortices appears to be more about providing detailed information for discrimination and deliberation, in other words, for the higher level circuitry that determines which reflexive survival circuits to allow or inhibit.

          That said, this is the brain, where nothing is ever cut and dried and just about everything has pathways to everything else. It wouldn’t surprise me if signals from the sensory cortices affect the subcortical survival circuits.


  2. Wyrd Smythe says:

    I’m afraid neuroscience, for me, may be like baseball for you. Not a blog post topic that really grabs us much. Not any kind of value judgement, of course, just an alignment of interests.

    Funny thing is, earlier today I saw this TED talk that kind of applies here (at least in terms of the role of emotions in consciousness):

    It’s a cool video, and it addresses a consciousness topic I’ve mentioned before: What theory of consciousness accounts for awe?

    As with many other emotions, it may have primitive roots, but consciousness elevates those to complex tapestries. Just think how obsessed the human race has always been with love, how much has been written, both analytical and poetic.


    • On neuroscience, I can understand that. And there are some posts where I really get into the weeds, although it’s been a while since I’ve done that. I originally got interested in neuroscience as research for story writing, but long since passed the point where I had enough for that.

      I’ll have to catch the video tomorrow. In general, my view towards emotions is functional. Awe and love have their adaptive roles. Certainly they occupy crucial aspects of our experience, which undoubtedly is why we dedicate so much art and philosophy toward them.


      • Wyrd Smythe says:

        “In general, my view towards emotions is functional. Awe and love have their adaptive roles.”

        With a lot of them it seems more clear what the original function was (e.g. notions of physical beauty are rooted in mating drives), but awe stands out a little to me.

        And, as I said, regardless of their origins, our emotions are complex and evolved — lots of associations and semantics. Looking at them strictly functionally might be like trying to understand our bodies in terms of our earliest ancestors. It’s helpful in understanding where they came from, but less helpful in understanding what they are now.


        • Emotions seem like colors, occurring in a spectrum of valence, arousal, and disposition, with all kinds of unique combinations that we give names to. Awe strikes me as a combination of fear and enthusiasm for the possibilities. With all that combining, together with the fact that we live in a world very different from our ancestral hunting grounds, we probably shouldn’t be surprised that not all of the combinations have a clear function.


  3. Steve Ruis says:

    I love your posts! I like the layers … somewhat but I think the problem is that there is too little actual data and too much “explanation” so far.

    I think consciousness doesn’t enter the layers without education, at least that is my experience. As youths, all of that stuff seemed to be subconscious, and only when it leaked out were conscious thoughts involved. (Children are constantly being corrected for acting without “thinking.”)


    • Thanks!

      Children are an interesting case, because their prefrontal cortex is not yet mature. As a result, their ability to think before they act isn’t at the level of an adult yet. This is particularly true for young children. (Something adults seem to frequently forget.)


  4. Paul Torek says:

    I agree entirely with your analysis, but I would add more terminology. I put “phenomenal consciousness” at level 2, and “intentionality” (the ability to have thoughts and feelings about particular things) at level 5 in your hierarchy. Self-consciousness, level 6, doesn’t seem at all necessary for an animal to feel emotions. Some scientists and philosophers focus on self-consciousness too much, probably because it makes humans nearly unique. Well, yes it does, but that doesn’t mean that all of what’s important lies there.


    • Level 2 for phenomenal consciousness strikes me as more plausible than level 1, where many of the Panksepp camp would place it. But I personally don’t think it happens until level 5. I do agree with you that level 5 is where intentionality is. I think the difference between us is I see intentionality as necessary for phenomenality.

      Totally agreed about level 6. It gives us the ability to anticipate and manage our emotions, but it’s not necessary to feel them.


  5. I’ve been bitching about the ideas of Lisa Feldman Barrett for years. I agree with Mike that she and Ledoux use a far more restrictive definition for the “consciousness” term than seems productive, but my own criticism goes much further. In latter stages of a Brain Science interview (about minute 49 http://hwcdn.libsyn.com/p/6/7/b/67bc06238541fb18/135-bsp-barrett.mp3?c_id=16217102&cs_id=16221542&expiration=1574863433&hwt=71d14196e47514cef583e2b1eebe86fd ) Barrett implies that various standard emotions that we feel, only exist in us because we’ve been socially taught to feel them! She even implies that language is needed for such teaching. If crappy ideas can be this successful here, then what hope is there that people in the field can grasp the good of good ideas?

    Of course everyone knows that the emotions of babies cannot be the emotions of adults. Babies have not been conditioned over years of experience or possess fully developed cognitive function in support. Nurture clearly exists beyond nature. But just as evolution provides different people with different arm lengths, vision, and so on, evolution should provide different people with different proclivities for fear, anger, sadness, curiosity, sympathy, empathy, and so on. Why? Because such traits are crucial to our function. Without solid evidence I’ll refuse to believe that evolution leaves the existence of such traits to serendipitous social teaching! If she is right, where are the people who’ve never been orally trained to feel sadness, fear, and so on? (Her evidence is that neural correlates for traits like sadness have not been found.)

    It wasn’t long ago that the realm of consciousness was strictly barred from neuroscientific exploration. Given what’s happened since that time, I’m beginning to sympathize. Mike is trying his best to delineate modern ideas by means of a six layered hierarchy, but the organization of theory doesn’t cause any given example of it to explain things better. Permit me to propose an idea which isn’t similar to any of them. Note that instead of beginning with the human, as some do, this one begins from the beginning.

    Once life developed central organism processors there should have essentially been non-conscious computers in charge of such organisms, or the sort of thing which runs our robots. And presumably they would have faced catastrophic failures in more open environments just as our robots do, given that evolution shouldn’t have been able to program them for enough contingency situations.

    To get around this apparently evolution developed a sentient operator which is not brain, but rather produced by brain. Here the brain punishes and rewards this entity given various circumstances (such as body damage), and it’s up to the experiencer to think about what to do so that it might feel better. Theoretically the brain uses the desires of the conscious entity as agency from which to more effectively operate non-consciously.

    Beyond the sentience motivation (some call this phenomenal consciousness), there are informational senses such as vision (some call this access consciousness), as well as the reemergence of past conscious states, generally known as memory. The experiencer naturally interprets them and constructs scenarios about how to make itself feel better, somewhat like electric charge evenly distributes over a conducting surface. Beyond thinking, the only thing which the conscious entity seems to do is operate muscles (or more accurately, the brain operates associated muscles given such decisions).

    Some will say, “Yea, but where’s the neuroscience?” I provide brain “architecture” rather than brain “engineering”. Neuroscientists can interpret their data by means of my architecture if they like, or continue on as they have.


    • Eric,
      Just for reference, here’s an excerpt from the transcript of that interview:

      Dr. Campbell: Somebody is probably going to object that it certainly doesn’t—no pun intended—feel like that’s what’s happening. I mean it really feels like emotions… I mean, how do babies appear to have emotions from the very beginning?

      Dr. Barrett: Well, they don’t have emotions, though. So, one of the things I talk about in the book is I cover the research showing, pretty convincingly, that newborn babies have these feelings of pleasure and distress, of feeling worked-up and feeling calm—what scientists call ‘affect.’ But when a parent looks at an infant and experiences that infant as angry, or as sad, or as afraid, or as happy, and so on, the parent is using their own predictions, their own conceptual system in their own brain to make sense of that baby’s affect, to make sense of that baby’s distress.

      So, the parent is guessing at the mental state of the baby. But babies are not born with brains that are like miniature adult brains, they’re born with brains that are equipped to wire themselves to the physical and social surroundings that they grow in. So, they have the capacity to feel pleasure and pain, they have the capacity to be worked-up and agitated or calm, but they don’t have the capacity to experience adult-like emotions until they have learned emotion concepts. And this is something that I explain in my book.

      As I noted in the post, this comes off to me as a definitional dispute, about whether “affects” count as “emotions”. Everyone agrees the mental life of babies is not that of adults. Is there a contention here other than what to call their feeling states?

      Whatever we call it, I do agree with her that we have a strong tendency to project our own complex mental states on babies, or children in general, as well as animals.


      • Mike,
        Yeah that passage was part of it, but just a bit further she really locks it in with a language stipulation where babies are taught to feel emotions by means of language. The implication is not only that babies feel no fear, but a human that grows up without language would never feel such a thing. This would make the lingual human pretty special in the animal kingdom. Here dogs would never be afraid, but rather just feel bad or good from time to time given that they did not evolve the tool of language.

        One thing about projecting our emotions to other people, including babies, as well as animals of all sorts, is that this often seems to work pretty well. The best parents tend to be the ones with the best theory of mind skills. Similarly the best animal trainers also tend to be the ones with the best theory of mind skills. I doubt you’ll find a competent professional who works with animals on a daily basis, that will tell you different.

        Conversely when I read the presented Joseph Ledoux article, I find myself nodding in agreement. For example, “For me, all emotions arise from non-conscious cognitive processing.” Yep. (I’d have cringed if he said “unconscious” there, not that I’d have taken his meaning differently.) When you interpret inputs in a conscious capacity, there should be associated non-conscious processes activated on the basis of various situations. This may be heart rate, and a great number of things that we aren’t aware of.

        His “fear schema” idea seems to reflect how past experiences leave an imprint upon us, thus making sure that we all have unique experiences to a given situation. Yep.

        Then there’s how people who use different languages tend to mean somewhat different things by means of existing emotion terms. Yep.

        He says danger, rather than fear, is universal. I take this to mean that fear is just a term we use in English to represent how we tend to feel sometimes, and so is cultural. Conversely the situation of being in danger, or something beyond what’s felt, would be universal. Conceptually yes, not that the “danger” term is universal either.

        So apparently I can go along with constructivists, that is as long as they don’t make outrageous claims.


        • Eric,
          Ledoux’s wording is a bit more careful and precise than Barrett’s, but I think they’re largely saying the same thing. The way I bridge it is that experience teaches us what to fear, be angry about, etc, and language teaches us how to categorize those various feelings.

          Her point is that the various states we label “anger” are a category rather than one objectively distinct state, same for fear, sadness, etc. This actually fits with a statement you made recently that what we really have are smoothly varying valences and levels of arousal. I think Barrett would agree with that.

          (I agree somewhat. But I do think there are distinct dispositions toward various actions involved that make the basic states a bit more distinct. Affects / emotions evolved from reflexes.)

          Ledoux, like you, dislikes the word “unconscious”. If you’ll notice, he pretty consistently keeps to “nonconscious”. Like you, he wants to stay away from Freud’s conceptions. I have to admit, after reading him, I’ve finally started eschewing the word “unconscious” the same way I had started avoiding “subconscious”. I now try to stick to “nonconscious”, although I don’t think most people have noticed the shift.


        • Wyrd Smythe says:

          “Similarly the best animal trainers also tend to be the ones with the best theory of mind skills.”

          I don’t have formal training with animals, but my love of dogs has caused me to read a fair bit of the literature (and make my own observations over the years). What I would take as the general view is that dogs have extremely simple emotions, probably similar to babies, but older dogs are often more nuanced than, say, puppies.

          The behaviors are driven by hunger, fear of the new, fear of the startling, happiness at comfort, all basic, but very primitive, “emotional” responses. Understand them however you like, but the terms “anger” and “fear” and “happiness” are useful handles for whatever dogs (or babies) do feel.

          Surely there’s no doubt they feel — i.e. experience — something.

          I think the point Barrett is making is that humans, as our minds learn, create a vast tapestry of associations, so those primitive feelings become rich with meaning. She seems to feel, in terms of human experience, it’s only those rich associations that are human “emotions.”

          As Mike said above, it’s mostly a matter of definition.

          FWIW: I am sympathetic to the idea that human emotions, because of our powerful brains, are rich and complex despite primitive origins. I think fully understanding them now requires considering them in all that detail.


      • Mike,
        It’s not exactly my disdain for the ideas of Freud which has me eschew the “unconscious” term, though I suppose this might contribute in a “quasi-conscious” sense. It’s that the unconscious term seems used in too many unconnected ways to be effective. As I’ve just noted, one of the terms that I use to replace it is quasi-conscious, as in “somewhat-”. There shouldn’t be much uncertainty about what this modifier is meant to reference, as opposed to the quite plain “un” term.

        Then note that sometimes the term is used to reference being asleep, drugged up, and that sort of thing. Instead of unconscious I call these “altered states of consciousness”. This can even refer to the use of caffeine given its neural effects.

        Then there’s the unconscious interpretation that I consider most problematic, or the opposite of conscious that some of us now refer to as “non-conscious”. Here I’m able to make a sharp distinction between things that possess affective states and things that do not. A door will not be conscious any more than my phone will be, so they’re referred to as “non-conscious” rather than the old “unconscious” standard. A dead person, or even one that’s perfectly anesthetized, may also be referred to as “non-conscious”. Furthermore just as a lightbulb produces light under the proper conditions (rather than exists as light), I can say that my non-conscious brain produces consciousness under the proper conditions (rather than exists consciously).

        I suppose that I should ease up on Barrett, and especially if she views affect in terms of a spectrum rather than specific individual states of existence. I’d rather people not use the “feelings” or “emotion” terms in academic speculation, since they might instead address the full concept by means of terms such as affect and sentience. If adopted this should put your concerns about definition disputes here to rest. We needn’t dicker over whether something does or does not qualify as “emotion”, that is as long as something is being referenced which feels positive to negative, or consciousness itself as I define it.

        There are two essential ways that I’d have us differentiate general conscious inputs (that is beyond the memory form that I’ll also get into here). Let’s say that your knee itches. Here there should be an “affect” component given associated irritation, and a “sense” component because the itchiness should give you location information regarding that particular issue. Or perhaps someone has forced my hand into a bath of ice which quickly becomes unpleasant. This unpleasantness may be referred to as “affect”, while various other associated forms of qualia may be referred to as “sense” given that information is provided about what’s going on. So because I also have memory of similar past conscious experiences to relate this to (which tend to automatically come up given similar neural firings), the input experiences of wetness and so on is something which I might indeed associate with “an ice bath”. As far as I can tell, affect, sense, and memory are the exclusive forms of input to the conscious form of function.

        What if I were to say something really stupid on your blog, and you were to thus make me look appropriately foolish for saying such a thing? To me this should feel humiliating, and presumably because evolution has found it effective to punish people for being made fun of given that they thus may not get their genes propagated quite as well.

        There should be an amazingly diverse spectrum of things which makes the human feel good and bad, and presumably evolution chose the specific affects which we tend to have because they help motivate us in more productive ways than other affects would in such situations. And it’s not just the human that should have a vast range of affective motivations, but non-human mammals, birds, reptiles, and I suspect even fish (though surely not nearly as nuanced). And even if far less nuanced, should insects possess conscious inputs to process as well? Possibly so, that is at least the ones that are able to navigate more open environments.

        Could evolution program a flying insect to do the proper things to survive in a purely non-conscious way, given a vast assortment of conditions to deal with? Perhaps to an extent such an insect must think about what would feel better to it given non-consciously provided conscious inputs. Here non-conscious programming would be able to use these decisions as at least a partial guide from which to work.


        • Eric,
          On your last point, whether insects, fish, and amphibians are conscious remains a controversial question. They clearly have sensory or image based representations. Many people take that as consciousness, but it really only amounts to sensory consciousness.

          Whether those sensory systems are integrated with anything other than a reflexive system remains in dispute. They’re capable of reinforcement learning, but it’s not clear whether that learning rises above rule based stimulus-response policy adjustments. If they do have affects, it seems like they would be at an incipient level, a tiny glimmer compared to what mammals and birds have.


      • Agreed Mike. Good consensus does not yet exist in science regarding such life. The consensus is likely “no” for insects. Furthermore my own model suggests that sensory consciousness alone should not be termed “conscious” whatsoever. Note that a Roomba has input senses, so unless we want to burden the “consciousness” term with something like this (and I presume that Mitchell has far grander visions), we’ll need to demand an affect dynamic as well.

        But then if we piped the computational resources of a vast supercomputer to a small flying robot, could we get it to do the sorts of things that a dragonfly is able to do? And I suppose this would be with millions of times the computation and billions of times the required energy? If not however then I suspect evolution has been cheating by means of phenomenal experience, agency, teleology, and so on. I’ll keep in mind a term that you’ve recently used for this, or anoetic (which seems to be from anoesis).


        • Eric,
          Just to be clear, I used the “anoetic” term to describe how some in the primary emotions camp see consciousness existing in places like the midbrain. I was using their own terminology in an attempt to fairly present their view.

          But I can’t see the idea of anoetic consciousness as coherent. When I read descriptions of it, it sounds like nonconscious processing. To me, consciousness requires noesis, that is, the term is meaningless without at least incipient levels of cognition.

      • We’re getting into some seriously funky and esoteric terms here Mike, which probably isn’t helpful. I was interpreting “anoetic” to be somewhat in the vein of purpose driven function, or affect based stuff. But I guess this particular term implies a lack of cognition as well, unlike the affect term.

        Nevertheless I will say that as I define it, something anoetic would inherently be conscious, though not functionally so. By definition there will be something that it is like to exist here, and even if I’m speaking of feeling without any cognitive grasp of what’s otherwise going on. I believe that the very first conscious beings must have been anoetic, and that evolution eventually mutated this dynamic until functional consciousness emerged from it. Some say that octopus consciousness arose from a different line than sentient life in general, so it should have had its own anoetic origins as well.

        In today’s world however, surely there are countless babies, some human though the vast majority not, that suffer horribly initially, though lack any cognitive grasp whatsoever of the world that they enter in the minutes or days before they mercifully die. But at least ignorance is bliss, or a feature which I do take advantage of. I seek to understand what’s real, not to permit my understandings to harm me.

        There but for the grace of Nature, go I…

        • Eric,
          I think the difference here is I consider consciousness itself to be functionality. So to me, saying something is conscious without being functional, is a bit like saying something can perform certain functions without being functional. Which is why I find it incoherent.

          I realize not everyone is a functionalist, but I see lots of evidence for functionality, and nothing for anything else. And positing something else sends us down the mysterian rabbit hole. I’d go there if the evidence pushed me toward it, but it doesn’t, and going there seems like it brings in a lot of unnecessary complications.

          On some babies suffering horribly, how do you know they suffer horribly? It’s not like we can ask them about their experience, or remember our own experiences as a baby. All we have to go on is their behavior, but their behavior could be driven by reflex.

          In any case, mammalian babies are born with a working cortex. It’s immature and doesn’t dominate a newborn’s behavior, but it’s there nonetheless, and MRI scans show that it’s functional, just not as functional as a more mature brain.

          So, to the extent they don’t have cognition, it’s not clear that they do suffer. And to the extent they do suffer, it’s not clear it’s without cognition. Setting aside cases like the hydranencephalics (who are missing most of their brain), I tend to think they are capable of suffering, but it happens with latent cognition.

      • Mike,
        No human has solved the hard “How?” of phenomenal experience, but you and I do take opposing positions here in a conceptual sense. My position is that there must be some material based physics associated with the production of sentience, and therefore it’s quite possible that this physics could exist alone. As I understand it, your position is that cognitive information must come first, or even be the means of producing sentience. And indeed, I suppose this has to do with your beliefs about conscious experiences being produced by means of information processing alone. Thus any machine at all which properly processes the correct information could understand Chinese or feel thumb pain in the manner that a Chinese person can.

        I should say that even if your information based hard problem solution does happen to be the case, this shouldn’t invalidate the brain architecture which I’ve developed. I’d still consider the entire brain to exist as a non-conscious machine which produces consciousness, somewhat as a lightbulb produces light (or I see recently from James Cross, as a car produces movement). Here there would still be a tiny second form of function that the vast non-conscious brain takes its cues from. From your modification an informational input, or cognition, would need to exist before any sentience exists, though I’d still say that this would be the motivation which drives the conscious form of function. Then there’d also be a potential memory input component which provides the conscious entity more such information to work with. This sentient entity (rather than the brain which produces it), would naturally interpret these inputs and construct scenarios about what to do to make itself feel better. And in these efforts when such an entity decides to operate muscles, the non-conscious brain would tend to comply.

        So even if you’re right that sentience exists exclusively as a functional product of cognition, and so something which lacks cognition cannot feel good or bad, I’ve still got brain architecture for you to potentially develop a working level grasp of.

        • Eric,
          On the hard problem, you’re right, no one has solved it. And I don’t think anyone is going to solve it, at least not directly. But there are solid possible explanations for the meta-problem, the problem of why we think there is a hard problem. I’m currently reading Michael Graziano’s new book which discusses his solution.

          On developing a working grasp of your theory, I don’t know if you saw my last reply to James, but I’m not really satisfied with emergent explanations by themselves. I want to know how it emerges. So that would be my challenge to you. Explain how this other entity you postulate emerges.

  6. Let’s get into this a bit further Mike. Firstly as I understand it, David Chalmers phrases his hard problem in terms of both “How?” and “Why?” questions regarding the existence of phenomenal experience. So it’s actually two questions rather than one. Furthermore on the “Why?” side of it, I have an answer which you and others here seem not to dispute. It’s essentially that phenomenal experience exists because it creates agency from which to consider what to do under more open environments — non agency based stuff doesn’t seem sufficiently programmable. But even if I have essentially solved the first half, the second seems like a real doozy!

    You’ve agreed that the hard “How?” hasn’t been solved, and surprised me by saying that you suspect no one will directly solve it. That’s more like something I’d say. But then I see that you’ve also just told James Cross “I want to know how it emerges, and I think learning it is achievable.” Ah, that’s more what I thought! And I agree that this should technically be achievable, though I tend to be more pessimistic than you given my respect for what life is able to do, as well as disrespect for what the human is able to do.

    Then as for the question of why we think there are these two hard problems, to me this seems pretty clear. Most people aren’t familiar with the “Why?” answer which I provide for phenomenal experience. Without a good answer I suppose that they presume it’s hopeless. As for the “How?” side however, here we should be expected to demonstrate the validity of such an answer by building something which suggests that it can feel good and bad, or a true agent. Well that’s a tough one!

    You provide a partial answer, or that information alone should be sufficient. Then I provide an opposing partial answer, or that certain material based physics should be required. Neither of us shall be experimentally demonstrating that we’re right, that is, beyond any acceptance of funky thought experiment scenarios.

    So if no one seems to have a clue about this, does this not provide a good answer for the meta hard problem? This is to say that we think that there’s a hard “How?” problem of consciousness, because we have no practical clue what it takes to build this sort of thing? Surely Michael Graziano and others aren’t suggesting an answer nearly this obvious.

    Apparently you’ve acknowledged to James that emergence exists for phenomenal experience (and in an epistemic rather than ontological sense of course). But you’re also now telling me that you’re not happy with emergence proposals. Thus you’re challenging me to explain how phenomenal experience arises, or the very thing which I’ve always told you that I have no clue about. So no, I won’t be providing you with such an answer.

    What I can provide you with (and beyond what I consider to be an effective “Why?” answer) is what I consider to be an extremely effective “What?” answer. It is this model that I’d like you to gain a working level grasp of. Note that people and other animals do exist right now. Wouldn’t it be somewhat helpful to reduce the function of such creatures back to humanly grasped ideas? That’s my potential contribution. And note that if you are still interested in the hard “How?” of phenomenal experience, shouldn’t it be helpful in your quest to reduce what you’d like to build back to manageable ideas? If you want to copy something, wouldn’t it be helpful to grasp the nature of that particular thing?

    • Eric,
      I agree that the “why” problem isn’t particularly problematic (although I’m not sure that’s what Chalmers actually meant with his “why”). But you seem unaware of how many people have already figured out that answer. And in recent decades, many have gotten more specific, seeing consciousness as a prediction system. Of course, this ends up overlapping with views about intelligence, but then I see consciousness as a type of intelligence.

      I think your discussion of the “how” part conflates Chalmers’ “easy” problems with the “hard” problem. The easy problems are scientifically tractable, and steady progress is being made on them. The only question is whether there remains a hard problem once all the “easy” stuff has been accounted for. I don’t think there is.

      But I recognize that most of the people troubled by the hard problem find that dubious. That’s where the meta problem comes in. Why are people so convinced there is a hard problem? Chalmers himself identified several possible answers, many of which render the hard problem moot.

      Graziano’s solution is that we build an internal model of our attention, but it’s a simplified model, calibrated for effectively controlling attention rather than providing an accurate account of how it works. But this simplification makes it seem like something ethereal and otherworldly, as though it’s something separate and apart from the relevant physics. Our model of it is very different from our model of the associated physics. The two seem irreconcilable, hence the hard problem. But Graziano’s point is we’re taking that internal model too literally.

      The issue I see with the you-don’t-understand-it-until-you-can-build-it argument is that it involves a simplistic idea of understanding. Understanding is not something we either have or don’t have. We aren’t either completely ignorant of something or know everything about it. If that’s the standard, then we understand nothing. A more pragmatic view is that understanding is a spectrum.

      In that sense, the information processing paradigm does give us substantial insights into what is going on. And the scientific data coming in only solidifies that paradigm. The major scientific theories, except IIT, are information based.

      As I’ve indicated several times, I’d give more credence to the “certain material based physics” being required if someone could give a plausible account of what that physics might be. The vagueness of this proposition makes it seem more like an appeal to mysterianism rather than any actual attempt at understanding.

      • Mike,
        If there are people in academia who grasp that consciousness should exist as a second form of function which provides an organism with agency from which to deal with more open circumstances than generic programming can sufficiently address, then I’d like for them to plainly state this. The field seems in desperate need of good premises from which to build. I consider this such a premise.

        Furthermore I’d like to find people who believe that consciousness exists as an affect based agent, including sense and memory inputs generally, that interprets them and constructs scenarios about what to do to make itself feel better. Yes prediction is needed to effectively do so. But who goes beyond this simple prediction observation to state a full model for assessment such as mine?

        On Graziano, is his point essentially that because our perceptions of reality do not reconcile with reality itself, that there only appears to be a hard problem? Regardless, to me the “How?” of affect is simply not open to the question of “Maybe the issue here doesn’t actually exist?” I get the impression that Graziano is trying to take what’s apparently an inconvenient question and explain it away like it doesn’t matter anyway. And is this also the case for Chalmers? If affect does exist, then this will mandate that it does so by means of other dynamics which may or may not be understood by the human. Regardless of how much people might want to bypass this issue, it shall remain.

        You’ve certainly demonstrated how strongly most professionals today believe that affect requires no specific material based physics — apparently the notion of generic information has been quite popular. Consider the following as well:

        What are some of the things which mandate specific material based physics? How about the phenomena of radiation, gravity, temperature, and the dynamics of elements and molecules. Furthermore in a causal universe, in the end we should expect everything which exists to rely upon material based physics in order for such things to function as they do.

        What are some of the things which we do not consider to mandate specific material based physics? Some that I’ve mentioned in the past are the tale of Pinocchio, MS Windows, and mathematics. Why? Because each of them is a human construct. And note that without a conscious entity to grasp such things, specific examples of them should fall back to material based physics. A computer running MS Windows should continue functioning in a void of humanity, though now as a specific display of material based physics rather than as a patented property which may thus be realizable under an assortment of platforms. You and I may see the tale of Pinocchio in generic terms, though in a realm without conscious comprehension any given portrayal of this story should only reflect material based physics. This could be in terms of written words, oration through a speaker for no one to grasp, or whatever.

        My point is that materially dependent stuff will not rely upon the human for its existence, while non materially dependent stuff (such as the tale of Pinocchio), will rely upon human convention. Thus the question becomes, would it be effective to say that affect should be considered as a human construct?

        In that case apparently affect will not exist beyond us, but then how could we be a product of it? This just wouldn’t make sense. Conversely affect could exist as a material based dynamic beyond human constructions, as most things seem to be.

        Apparently your task is to support the notion that affect exists as information alone, though not simply as a human construct which is thus multi realizable. So see if you can find some examples of things which do not exist as human based conventions, and also don’t depend upon specific material dynamics. If affect does exist by means of generic information alone rather than associated physics, what would be some non-human construct examples of this sort of thing?

        If I decide that under the current environment my position here will not prevail, then my plan is to alter my position to be more systemically marketable by means of the standard “affect exists as information alone” position. To me this business is simply “bath water”, so I’ll certainly try to preserve “the baby”.

        • Eric,
          There are plenty of people who plainly posit that consciousness exists to provide non-reflexive behavior for complex open environments. See the paper I highlighted in the latest post as an example. This role for consciousness is pretty widely accepted. (It’s not universally accepted, but nothing in consciousness studies is.) Of course, the more of your specific solution you draw into the picture, the less widespread the agreement will be.

          On Graziano, the key factor is that our model of our internal processes is not reliable. This, in and of itself, isn’t controversial, at least not in scientific circles. It’s well attested in various psychological studies. So when our unreliable introspective model tells us there’s this magical-like thing that needs to be accounted for, but no corroboration for it from other sources, then it makes sense to be suspicious of that model. The question then is, why does the model tell us that? Graziano’s theory aims to provide a fully naturalistic answer.

          The issue with radiation, gravity, and temperature is that those are relatively simple physical phenomena, not functional systems. Consider the heart. It’s a physical system. But it’s also a system that we consciously interpret as a pump, a blood pump. That interpretation is an abstraction, a human construct. But we can build a technological device, another physical system, that implements that abstraction, which can then replace a damaged heart. The heart’s functional role can be fulfilled with alternate physics.

          My question is, why can the heart’s role be replaced by alternate physics, but the brain’s can’t? Or if your assertion is that it can, but that technological systems just don’t have the right stuff, then we come back to the question I keep asking: what are they missing?

      • Mike,
        I’m pleased that some or even many do believe that consciousness exists to provide the non-reflexive function from which to better deal with more open environments. And I did listen to a few minutes of that mentioned paper on my drive home. But notice that in it, non-reflexive function is not presented in terms of what motivates it. I presume this is essentially considered advanced brain function, and so requires none?

        From my perspective one should not sufficiently grasp the significance of “internally generated representations of events” without connecting them with a non-conscious brain punishing/rewarding a different entity, or the one which fosters non-reflexive function. So unfortunately I don’t quite consider these people on my path yet.

        I agree that our models of internal processes aren’t reliable, of course. But I also can’t say that my own introspection mechanisms have ever suggested to me that there’s a magical-like thing that needs accounting for. I’ve been a reasonably extreme naturalist from about the age of 12.

        Apparently some people interpret the hard “How?” of affect to imply supernaturalism. I noticed an old Massimo Pigliucci article from that perspective on your Twitter feed recently. But I instead take the question literally — it’s simply a “hard” one. I certainly won’t be going the way of Descartes! Furthermore I do appreciate the point of Chalmers that no matter how much we learn here, this dynamic may always seem really strange to us. I consider affect/consciousness to exist as the most amazing stuff in the universe. I know of nothing similar.

        On replacing a human heart with a technological machine, it seems to me that the same physics applies, or the fluid pumping kind of physics. The difference would be in the mechanisms, which is fine. And I don’t even mind proposing that whatever physics the brain invokes to produce affect, a human made machine could conceivably do so as well through separate mechanisms, that is if the nature of that physics is ever made apparent to us. Does this reassure you?

        Note that here we’re now discussing material based dynamics rather than something which is reproducible by any machine that’s able to process the proper information into other information. The naturalist in me is offended by that stance.

        I just read the recent James Cross post on the potential for consciousness to be associated with EM radiation. https://broadspeculations.com/2019/12/01/em-fields-and-consciousness/

        I must say that I’m intrigued! As you know I’m not really into the engineering side of this, though “the binding problem” does concern me even still. If there are all sorts of things which happen in my brain to produce a unified experiencer, by what means could that come together each moment? Thus maybe the experiencer is radiated. And note the relevance of my little analogy of “the brain produces consciousness like a lightbulb produces light”. I can’t say I meant that literally, but it’s an interesting thought nonetheless. No new forces, no Chinese rooms, and no binding problem…

        • Eric,
          “Apparently some people interpret the hard “How?” of affect to imply supernaturalism.”

          I think for many, explicitly or implicitly, the hard problem is the problem of relating their deep intuition of dualism (which everyone has, even children) with the lack of scientific evidence for it. That’s why it’s so important to understand how unreliable introspection is. Once that’s understood, the hard problem comes down to an intuition we shouldn’t trust.

          “that is if the nature of that physics is ever made apparent to us. Does this reassure you?”

          Time to move on. I’ll just note that neuroscience already knows a lot about the physics, and while there remains an enormous amount to learn, what we know constrains the possibilities in ways that don’t leave much hope for those pining for exotic answers.

          “Note that here we’re now discussing material based dynamics rather than something which is reproducible by any machine that’s able to process the proper information into other information. The naturalist in me is offended by that stance.”

          Sorry, it’s not clear to me what’s offensive here.

          I personally don’t think the binding problem is the big deal many make of it. It reflects Cartesian theater thinking, that it all comes together for a movie passively experienced somewhere. The reality, based on what I’ve read, is the individual pieces are actively assembled, pulled in as needed. This mostly happens pre-consciously, so it gives us an impression of a theater, but one that’s illusory.

      • Mike,
        I wasn’t aware that we all have a deep intuition of dualism. That’s quite a proposal! Anyway if it’s common to interpret the hard problem of consciousness more in terms of “beyond causality of this world” rather than the literal “a challenging question”, then I’ll stop saying that there’s a hard problem of consciousness.

        Furthermore since apparently evolution has utilized the physics of affect, to me this means that such physics should be possible to implement by means of a non living system as well. So we’re in agreement here. Note that I’ve not proposed anything exotic as an explanation, or any explanation at all. I simply presume that physics is required.

        On why my sense of naturalism would be offended if affect exists by means of information processing alone, I perceive this to subvert material causal dynamics. Let me know if I’m missing something here:

        Computers function on the basis of physics, of course. Therefore they should only produce the sorts of things which associated physics permits them to produce. But saying that the brain functions as an affect producing computer (which I certainly agree with), though any other computer can produce the same affect as long as it properly processes the same information, seems to sidestep the physics of various specific computers. How might a computer with no screen, for example, produce a screen image? So I perceive the “information processing alone” argument to fail by means of a subversion of physics. This is to say that because the brain does produce affect, this production should occur by means of associated mechanisms. Thus a computer which lacks those mechanisms should be missing the associated physics. I may give up this observation entirely however, given how important it seems to many that affect, unlike other stuff, requires no associated physics. Here my models do remain relatively unscathed.

      • Mike,
        You may be right that dualism is an inherent danger here. Notice that each of us would be mortified to be perceived as a dualist, and yet we’re also arguing that the other is suggesting some less than natural things. You interpret me this way given that I consider affect so amazing that we may never understand this stuff well enough to build it. (Or let me know if there’s more.) I interpret you this way given that you consider affect to exist as information processing alone rather than a mechanism based output of such processing. Furthermore there is this:

        “Beyond that, all I know to do is fall back on the evidence from neuroscience on what happens in the brain, which is all about selective and recurrent propagation of electrical and chemical signalling (information processing). Maybe we’ll eventually get evidence for something else, but until then, my views are based on the evidence we do have.”

        So beyond obvious externalities such as heat or entropy, we have no evidence of anything else that the brain does? But the post that I mentioned earlier from James Cross does bring up something else. Apparently the electrical function of neurons also produces associated electromagnetic radiation. In fact measuring this stuff can even tell us some interesting things about the brains which produce such radiation.

        Since brains produce something related to information processing which is actually an output of such processing, it could be that some of this radiation effectively exists as the experiencer of existence. This could be what you’ve been asking me about for years, or a platform from which the brain architecture which I’ve developed may indeed be realized.

        Conversely a computer outputting valence without any valence output mechanisms, may be somewhat like the Virgin Mary getting pregnant. To me neither make sense. Just as Mary probably got knocked up in the normal fashion, brains might have output mechanisms from which to produce affective states. Even if there are all sorts of crackpots supporting the electromagnetic radiation proposal (and remember that you’ve got a few of them on your side too), I am intrigued by this thought. Though “the engineering side” has not been my focus, it seems to me that this proposal does deserve some careful consideration.

    • Lee Roetcisoender says:

      As I survey the current landscape of discourse, consciousness is a word we need to learn how to use. As it stands now, we do not know how to use the word correctly, therefore the entire topic becomes a convoluted menagerie. Our current knowledge is limited to two types of phenomena, mind and matter. Subject/object metaphysics (SOM) draws an ontological distinction between mind and matter, whereas reality/appearance metaphysics (RAM) makes no such ontological distinction. Mind and matter are therefore the same thing, both of which by definition, must conform to the same laws.

      Mike, unless you have since changed your mind, you have agreed that when it comes to motion and form, the fundamental mechanisms of matter which result in form have to be the same fundamental mechanisms of mind, which also result in form. In the physical world, motion is responsible for the form of physical constructions. Likewise, in the mental world of mind, motion is also responsible for the form of intellectual constructions.

      The only thing we have direct access to is our own phenomenal experience. Wouldn’t it make sense to look within to determine the cause of motion within our own mind? I’m not talking about the content of mind, but the very mechanism responsible for the motion, motion which for all practical purposes results in the form of intellectual constructions, regardless of what those constructions are. Those constructions include, but are not limited to, the following: an accurate construction of reality we can then use for prediction; or inaccurate constructions, which are delusions; or creative imaginative constructions we create to entertain ourselves.

      Peace

      • Lee,
        The problem with looking within to determine the very mechanism of the cause of motion in our own mind, is that introspection simply didn’t evolve to provide that level of insight. Actually, if there’s one hangup that seems to cause more confusion about consciousness and the mind than anything else, it’s the inability to accept that introspection is not reliable.

        https://aeon.co/ideas/whatever-you-think-you-don-t-necessarily-know-your-own-mind

        That doesn’t mean we should ignore introspection, only that it has to be seen as just one source of data, an unreliable source. Before reaching any conclusions based on it, we need to find corroborating evidence from other sources. When we can’t find that corroborating data, we should be cautious.

        In the end, it all comes down to whether our theories enhance or reduce the accuracy of our predictions. Depending only on introspection seems to reduce that accuracy. Using it in conjunction with other sources seems to help.

        • Lee Roetcisoender says:

          You misunderstand what I mean Mike. I’m not talking about relying upon introspection, because I too believe that introspection is not reliable. I’m talking about the ability to distance oneself from the experience of self and look at mind as an object, just like any other object. It’s all the same shit, cause and effect, motion and form. Now that being said, I am also convinced that homo sapiens do not have the capacity to distance themselves from the irony of being a solipsist self-model, simply because that is what my models suggest. So in the end, I think part of the reason I participate in blogs is to determine if my models are correct. To date, my models are proving to be rock solid.

          Peace

          • “I’m talking about the ability to distance oneself from the experience of self and look at mind as an object, just like any other object.”

            It sounds like you’re talking about the ability to be objective rather than subjective, although I know you dislike the “subjective” term. Arguably, that’s what scientific methodology is all about. It’s why many experiments are conducted with double blind protocols, to minimize human bias as much as possible. It doesn’t always succeed, but the effort gets us closer than we’d otherwise be.

          • Lee Roetcisoender says:

            Correct me if I’m misunderstanding here Mike, but it sounds like you believe that it is impossible for homo sapiens to be objective about the phenomena of mind simply because homo sapiens are themselves a mind. This is in spite of the fact that it can be demonstrated that mind is an object. That’s what my own models assert. Although I have to say that, in spite of the models and the overwhelming evidence which supports the predictions of my models, even I do not want to believe it.

            Peace

          • Lee,
            I think the human mind evolved to maximize our chances of survival and procreation, not to enable us to understand it accurately. And many of our most intuitive, most primal models, again useful for getting through day-to-day life, actually mislead us when trying to understand the mind itself. So we start with a major disadvantage. That’s not to say we can’t piece together objective theories, just that it isn’t easy.

          • Lee Roetcisoender says:

            “That’s not to say we can’t piece together objective theories, just that it isn’t easy.”

            The only thing, and I must repeat for the sake of posterity; the only “thing” that stands in the way of intellectual progression is prejudice, bigotry and bias. The problem is not with the data, nor is it with the inability to use logical consistency, the problem is prejudice, bigotry and bias.

            Peace
