The construction of feelings

I’ve had a number of conversations lately on the subject of feelings: the affective states that attach valences to conscious perception, such as fear, pain, joy, hunger, etc. Apparently a lot of people view feelings as a very mysterious phenomenon. While I’ll definitely agree that there are a lot of details still to be worked out, I don’t see the overall mechanism as that mysterious. But maybe I’m missing something. Along those lines, this post expresses my understanding of mental feelings and gives my online friends a chance to point out where I may be overconfident in that understanding.

To begin with, I think we have to step back and look at the evolutionary history of nervous systems.  The earliest nervous systems were little more than diffuse nerve nets.  Within this framework, a sensory neuron has a more or less direct connection to a motor neuron.  So sensory signal A leads to action A.  No cognition here, no feelings, just simple reflex action.  The only learning that can happen is by classical conditioning, where the precise behavior of the neural firings can be modified according to more or less algorithmic patterns.
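To make the “more or less algorithmic” point concrete, here’s a toy sketch of classical conditioning on a single reflex connection. Everything in it (names, learning rate, threshold) is invented for illustration; it’s not a biological model.

```python
# Toy classical conditioning on a single reflex connection.
# All names and numbers here are illustrative, not a biological model.

LEARNING_RATE = 0.2

class ReflexArc:
    def __init__(self):
        self.association = 0.0  # learned strength of a neutral signal

    def respond(self, unconditioned, neutral):
        """Fire the motor action on the hardwired signal, or on a
        neutral signal whose association has grown strong enough."""
        return unconditioned or (neutral and self.association > 0.5)

    def condition(self, unconditioned, neutral):
        """Strengthen the association whenever the two signals co-occur."""
        if unconditioned and neutral:
            self.association += LEARNING_RATE * (1.0 - self.association)

arc = ReflexArc()
for _ in range(10):  # repeated pairings of the two signals
    arc.condition(unconditioned=True, neutral=True)

print(arc.respond(unconditioned=False, neutral=True))  # True: a conditioned response
```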

As time went on, animals evolved a central nerve cord running along the midline of their bodies.  This was the precursor to the vertebrate spinal cord.  All (or most) of the sensory neural circuits went to this central cord, and all (or most) of the motor circuits came from it.  This centralization allowed for more sophisticated reflexes.  Now the fact that sensory signal A was concurrent with sensory signal B could be taken into account, leading to action AB.  This is still a reflex system, but a more sophisticated one.
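In the same toy spirit, a central cord can be pictured as a stage where concurrent signals are combined before any motor command is chosen. The signal names and the action table below are invented purely for illustration.

```python
# Toy "central cord": a reflex lookup keyed on combinations of concurrent signals.
# The signal names and action table are invented for illustration only.

REFLEX_TABLE = {
    frozenset({"A"}): "action_A",
    frozenset({"B"}): "action_B",
    frozenset({"A", "B"}): "action_AB",  # the joint response to A and B together
}

def central_cord(active_signals):
    """Pick a reflex based on which sensory signals are concurrently active."""
    return REFLEX_TABLE.get(frozenset(active_signals))

print(central_cord({"A"}))       # action_A
print(central_cord({"A", "B"}))  # action_AB
```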

Generic body plan of a bilaterian animal. Credit: Looie496 via Wikipedia

As more time went on, animals started to evolve sense organs, such as a light-sensing photoreceptor cell, or sensory neurons that could react to certain chemicals.  These senses were more adaptive if they were at the front of the animal.  To process the signals from these senses, the central cord started to swell near the front, becoming the precursor to a brain.

The new senses and processing centers would still initially have been reflexive, but as the senses gained resolution, the nascent brain could start making predictions about future sensory input.  These predictions expanded the scope of what the reflexes could react to.  A mental image of an object, a perception, is a prediction about that object: whether it is a predator, food, or irrelevant to the reflexes.
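One way to picture a perception as a prediction is as a crude classifier whose output is a guess about what the object will turn out to be, which the reflexes can then key off. The features, thresholds, and categories below are made up for illustration.

```python
# Toy "perception": predict what an object is from crude sensory features,
# so that reflexes can react to the prediction rather than the raw signal.
# Features, thresholds, and categories are invented for illustration.

def predict_object(size, movement):
    """Return a predicted category for a sensed object."""
    if size > 0.7 and movement > 0.5:
        return "predator"   # prediction: approaching is dangerous
    if size < 0.3:
        return "food"       # prediction: approaching is rewarding
    return "irrelevant"

REFLEX_FOR_PREDICTION = {
    "predator": "flee",
    "food": "approach_and_consume",
    "irrelevant": None,
}

percept = predict_object(size=0.2, movement=0.1)
print(percept, "->", REFLEX_FOR_PREDICTION[percept])  # food -> approach_and_consume
```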

Up to this point, there are still no feelings, no affects, no emotions, just sensory predictions coupled to a more or less algorithmic reflex system.  This is where many autonomous robots are today, such as self-driving cars: systems that build predictive maps of the environment, but are still tied to rule-based actions.  (Although the organic systems were still able to undergo classical conditioning, something technological systems likely won’t have for quite a while.)

But with the ever higher volume of information coming in, the animal’s nervous system would increasingly have encountered dilemmas, situations where the many incoming sensory signals or perceptions led to multiple reflexes, perhaps contradictory ones.  An example I’ve used before: a fish sees two objects near each other.  One is predicted to be food, triggering the reflex to approach and consume it, but the other is predicted to be a predator, triggering the flight reflex.

The fish needs the ability to resolve the dilemma, to make predictions about what would happen if it approaches the food versus what would happen if it flees, and what its reflexive reactions would be after each scenario.  In other words, it needs imagination.  To do this, it needs to receive the information on which reflexes are currently being triggered.

Consider what is happening here.  A reflex, or series of reflexes, is being triggered, and the fact of each reflex’s firing is being communicated to a system (sub-system, whatever) that will make predictions and then allow some of the reflexes to fire and inhibit others.  In the process, this imaginative sub-system will make predictions for each action scenario, each of which will itself trigger more reflexes, although with less intensity since these are simulations rather than real-time sensory events.
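To make that loop concrete, here’s a deliberately crude sketch. Every structure, number, and outcome value in it is invented for illustration; it’s not meant as a claim about the actual neural implementation.

```python
# Toy action planner resolving the food-vs-predator dilemma from above.
# Every structure, number, and outcome value is invented for illustration.

SIMULATION_ATTENUATION = 0.5  # simulated reflexes fire with less intensity

# Reflexes currently being triggered, with their intensities (the signals the
# planner receives).
triggered_reflexes = {"approach_and_consume": 0.8, "flee": 0.9}

# Imagined outcomes of each candidate action, as predicted reflex reactions.
simulated_outcomes = {
    "approach_and_consume": {"satiation": 0.6, "pain": 1.0},  # eats, but may be eaten
    "flee": {"hunger": 0.6},                                  # safe, but stays hungry
}

VALENCE = {"satiation": +1.0, "pain": -1.0, "hunger": -0.4}

def evaluate(action):
    """Score an action by its simulated (attenuated) reflex reactions."""
    return sum(VALENCE[reaction] * intensity * SIMULATION_ATTENUATION
               for reaction, intensity in simulated_outcomes[action].items())

def plan(candidates):
    """Allow the best-scoring reflex; the others are, implicitly, inhibited."""
    return max(candidates, key=evaluate)

print(plan(triggered_reflexes))  # 'flee' under these made-up numbers
```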

This sub-system, which we could call the action planner, or perhaps the executive center, is receiving communication about reflexive reactions.  It is this communication that we call “feelings”.  So, feelings have two components, the initial reflex, and the perception of that reflex by the system which has the capability to allow or override it.

In other words, (at the risk of sounding seedy) feelings involve the felt and the feeler.  The felt is the reflex, or more accurately the signal produced by the reflex.  The feeler is the portion of the brain which evaluates reflexive reactions to decide which should be allowed and which inhibited.  In my mind, the reflex by itself is not the feeling.  It’s a survival circuit that requires separate circuitry to interpret and interact with it to produce the feeling.

Embryonic brain
Credit: Nrets via Wikipedia

In vertebrates, the brain is usually separated into three broad regions: the hindbrain, the midbrain, and the forebrain.  The hindbrain, equivalent to the lower brainstem in humans, generally handles autonomic functions such as heartbeat, breathing, etc.  The midbrain, often referred to as the upper brainstem in humans, is where the survival circuits, the reflexes that are in the brain rather than the spinal cord, typically reside.  And the forebrain, equivalent to the cerebrum in mammals, is where the action planner, the executive, resides, and therefore where feelings happen.

(Many people are under the impression that prior to mammals and birds, there wasn’t a forebrain, but this is a misconception.  Forebrains go back to the earliest vertebrates in the Cambrian Explosion.  It is accurate to say that the forebrain structure is far larger and more elaborate in mammals and birds than it is in fish and amphibians.)

On feelings being in the forebrain, this is the view of most neurobiologists.  There is a minority who question this view, arguing that there may be primordial feelings in the midbrain, but the evidence they typically cite strikes me as evidence for the reflexes, not the feelings.  A decerebrated animal shows no signs of imagination, of having functionality that can use feelings, only of the reflexes.

So, that’s my understanding of feelings.  My question is, what makes feelings a mystery?  If you saw them as a mystery before reading this post and still do, what about them am I missing?


Edited per suggestions in the comments, changing references to “sensations” to “sensory signals” to clear up possible confusion.  MS

124 thoughts on “The construction of feelings”

    1. It depends on what parts of the system you’re talking about. I use the word “sensation” in the sense of being the raw signal from the peripheral nervous system. Sensations lead to the creation of perceptions. The reflexive reaction to either a sensation or a perception leads to the creation of affective feelings. “Emotions” typically refers to a more developed mental state built on those affective feelings, although admittedly word usage here is maddeningly inconsistent in the literature.


    2. Mike,
      Apparently like Paul I was also thrown by “So sensation A leads to action A. No consciousness here, no feelings, just simple reflex action”. Would you instead consider using the term “input”? We don’t normally say that the computer has an associated “sensation” when a key is pressed, but rather “input”, so that term seems more appropriate. And indeed, I don’t even consider that particular scenario to illustrate algorithmic function, since here it’s one to one. So if I could suggest a second friendly amendment, no classical conditioning is possible quite yet either.


      1. Thanks Eric. I get that “sensation” might be confusing. Per your suggestion, I edited the post, referring instead to “sensory signals”.

        I didn’t change the part about classical conditioning because I still think it’s true. Note that we’re not talking about non-reflexive learning here. Very simple organisms show the ability to do classical conditioning, including decerebrated animals.


    3. Mike,
      Yes “sensory signals” seems far more appropriate than “sensations”. But let me also get a bit deeper into the second issue.

      As I understand it you began your model where a given sensory neuron incites one and only one motor neuron. Well that’s like a mechanical typewriter where each key does one and only one thing. There should be no computation here as normally defined. I certainly don’t see any classical conditioning potential in any number of one-to-one input/output functions.

      But then the next stage on the Feinberg and Mallatt hierarchy seems to get us to non-conscious computation. What happens when sensory neurons A and B don’t go directly to their respective motor neurons? What happens when they go to the same place to thus provide that receptacle with either nothing, or just A, or just B, and so on? Now we have the potential for motor neuron output that isn’t quite as singular as before. It’s once various inputs come together to a single place for processing, effectively the bilateral area that you mentioned, that we get the potential for algorithmic output function. So I think that you jumped the gun just a bit initially. Classical conditioning could indeed occur once inputs go to a single place for processing. And of course that’s how life with central processors is structured today, whether or not something as advanced as a cerebrum happens to be severed. It’s like when you said, “Now the fact that sensory signal A was concurrent with sensory signal B could be taken into account, leading to action AB. This is still a reflex system, but a more sophisticated one.” Here we should get into algorithmic function.


      1. Eric,
        Given the specific way I described the nerve net, I can see your point. Unfortunately, I over-simplified. Even a nerve net doesn’t have a sensory neuron with only a connection to one motor neuron. There are cross connections. Otherwise it’s not a net but just a bunch of crisscrossing independent connections. (I did hedge a bit with the “more or less” remark, but I’ll admit that could have been better.) Those cross connections allow for associated signalling, but probably to a far lesser degree than what happens with a central nerve cord. So the classical conditioning is nascent, but it’s still there.

        I will point out that individual neurons are dynamic systems, so the mechanical typewriter analogy doesn’t really fit for them. Even a single neuron can undergo changes such as habituation, where it is simply less responsive to repeated stimuli, or more responsive to other types of stimuli. I think you’d have to drop down to individual proteins to get down to the level of straight mechanical action. And I’m not even sure about it there; proteins can change shape and function under varying conditions. Biology is complicated.


    4. Okay, I see what you mean Mike. Apparently there could be cross connections and just plain advanced individual neuron function which provide algorithmic processing, even without central terminations. But then beyond your proclivity for engineering, also consider my own proclivity for architecture.

      Before the Cambrian Explosion 541 million years ago, surely there were far less advanced events happening. Early nerve systems should at least have begun with a one-to-one input-to-output function, which is to say non-algorithmic. But what happened first? Did such neuron systems develop small scale interconnections and advanced protein dynamics before there was an associated central point for sense signals to be processed together? Or did the central termination point generally come first and the rest later? Surely you’re more knowledgeable about this than I am.

      Regardless, “classical conditioning” may not be the best term to use for nascent neuron system function, since it’s often associated with conscious forms of function. A dog may salivate when it hears a dinner bell, for example, because it thus thinks it’s getting fed. But would you say that you’ve ever “classically conditioned” a computer? 😉

      Anyway for me it’s certainly convenient to believe that this form of computation began in a central location, complementing the genetic form of computation that I consider the first. But surely there were at least random precursors if not much more.


      1. Eric,
        You’re getting into the evolution of the neuron itself, which I’ll admit I haven’t read anything on. I’m not sure how much is known about it. I have read speculation that neuroreceptors might have evolved from hormones, but I recall that being a pretty speculative proposition. All of which is to say, I don’t know. The narrative you described seems plausible, but I can see other possible scenarios as well.

        On classical conditioning, I’m taking my cue from the books on neurobiology that I’ve read, where authors do use the “classical conditioning” phrase to refer to non-conscious systems. I agree that it’s a bit jarring the first time you see it in that context, but the most interesting facts are always the more jarring ones. 🙂

        Beware of believing things because they’re convenient. It can make you unwilling to give inconvenient evidence its due, and too easily slide into believing things because you like them.


  1. Mike,
    As far as I can tell your readership is mostly made up of well-educated naturalists. Do they largely consider feelings mysterious because they wonder if perhaps there’s something supernatural going on? I doubt it. No, instead I suspect that they realize how woefully few accepted understandings exist in science here today, and also understand that western philosophy has been kicking this stuff around for two and a half millennia without reaching any generally accepted understandings at all. (Such epistemic dualism should only add to the mystery.)

    You don’t consider feelings mysterious given that you’ve provided a rough model regarding their existence. And I don’t given my own quite extensive set of models. But until we have a respected community of professionals which is able to reach some generally accepted understandings in this regard, the topic will indeed remain quite mysterious for educated naturalists in general. I’d be far more concerned if your readers were instead apathetic!

    This post gets to the heart of my models. Thus I could set up shop here and we could go on and on and on as you know. But I’ll try to hold back for a while at least. I’d like to hear from your readers in general, and with the hope that mutual interests can be realized.


    1. Thanks Eric. I’m not expecting supernatural objections, although I know a few who might moot them, if they’re still reading.

      On having a respected community of professionals, I tend to think we have that with cognitive neuroscience. It’s just that most philosophers of mind don’t seem interested in reading much of it. That’s unfortunate, because it seems like some of the things discussed as intractable mysteries in philosophy have answers sitting in plain sight in neuroscience books. The answers are often given in different terminology, and require work to understand, but a lot of it is there.

      Certainly neuroscience doesn’t have a sharp picture yet. Much remains blurry and indistinct. There are legions of details to be unearthed. But just as we’re not likely to discover that quantum mechanics is utterly wrong, I don’t see the broad picture neuroscience has of the brain changing. We’re not, for instance, going to suddenly discover that the cerebellum isn’t heavily involved in fine motor coordination.

      I thought you might have special interest in this post. Hope it helps.


    2. If (and that’s an actual ‘if’) the actual facts of the matter are being talked about, then how is there room for mutual interests to be realized? If someone wants to believe their house is fine and another person wants to say it’s burning to the ground, in Venn diagram terms they are two circles that do not overlap. And if it does happen to be burning to the ground, then one of those circles doesn’t even overlap with the circle that is reality. That’s not anyone just walking over other people’s beliefs – that’s reality walking over other people’s beliefs.


        1. I should have noted that it was a reply to Philosopher Eric – the WordPress thread structure gets a bit confusing at times, I may have hit the wrong reply or something. Otherwise how does it seem out of place? It’s possible I might be misunderstanding what Eric means by ‘mutual interests’ or something like that? But as it is, sometimes mutual interests can’t be realised – not because someone wants to stop the other’s interests from being so, but because it’s just the reality of the matter.


      1. Thanks Callan, I didn’t quite realize that you left one for me either. It’s good to hear from you. I don’t recall us speaking since way back at Conscious Entities.

        I don’t believe that mutual interests are always in the cards either. Unfortunately I sometimes learn that some of my best friends online happen to be invested in ideas which are quite opposed to my own. This is one of the reasons that I’m always looking for more friends!

        And in truth I do discuss burning houses that many established interests would rather believe are just fine. For example I believe that our mental and behavioral sciences in general have problems because they do not yet formally acknowledge a “value” dynamic from which to drive a conscious form of function. Furthermore I believe that in order for science in general to become more healthy, we’ll need a community of specialists with accepted principles of metaphysics, epistemology, and axiology from which to work. Unfortunately this position tends to irk philosophers as an expansion of science onto their own hallowed ground. (You know, scientism!)

        Look, I don’t consider it to be in my own best interest to piss people off. But as you say, Venn diagrams don’t always align. And does reality sometimes walk all over idiotic human beliefs? Oh hell yes it does! I wouldn’t have this be any different.


        1. Hello Eric,
          I think cognitive science’s advancement, as it is currently done, isn’t necessarily all that healthy in terms of what its findings do on contact with human psychology. But I can’t say I’m really agreeing in the same terms in regards to health. For myself, in the past I reflected on how to make an AI and I thought you needed ‘positive’ input. Which is basically value. I thought ‘how do you make ‘positive’?’. Then I suddenly concluded you don’t – it would be a five-volt charge on a wire whose interconnections would provide certain behaviors. Perhaps to the mechanism from the ‘inside’ it might feel like values or suchlike.

          And if that sounds stark, that’s why I think cognitive science isn’t advancing itself in a healthy way. It’s building up a tsunami of information on the human brain, when really we are only set to handle relatively small waves at a time.

          I think cognitive science will start to explain the human feeling for metaphysics, which will be for that feeling somewhat like pulling an octopus inside out. I think those things will prove to be riddled with superstitions and so anyone serving roles in those will be as discredited as astrology is discredited. I am not saying this to attack these ideas in saying this, I am just trying to outline the size of the wave I think is coming.

          I think we need a safe harbor to be built to stop the massive wave from crushing us and instead send through a series of waves of a size people can handle. But it can’t be built by people whose ideas are set to be crushed by the tsunami, IMO.

          I have to wonder how many are actually in such a position of knowing and with the social credibility to manage this – I think probably very few.

          I guess as to solutions I disagree. But in regards to a big health issue in regards to cognitive science, a mental health issue, I fully agree with you there is a big issue coming up.


      2. Callan,
        Yeah I’ve got no idea how to produce value either. I just know that it exists since I feel it. And I presume that it’s a product of causality since otherwise there’d be nothing to figure out in that regard. (That’s my own stab at metaphysics.) But in truth I’d simply like us to acknowledge value beyond our standard moral judgements. I say this even if value ultimately has supernatural origins!

        I don’t so much see a tidal wave coming, but rather a small community that’s naturally pounded by status quo interests, and nevertheless survives and grows given its sensible positions. Science is in need of effective principles of metaphysics, epistemology, and axiology from which to better function, I think. But how might that assertion be considered in the field which has overseen these domains for two and a half millennia? Not well of course. At some point there should be outside thinkers who get this job done, and I hope to help.


        1. Eric,

          I think value basically needs a wildlife park style preservation to be set up for it. Because otherwise I think cognitive science as currently practiced is basically going to kill it off. Creating poison knowledges and just pumping them out into the atmosphere willy nilly. As much as there has been unethical science practiced in the past, it’ll be another example of that. Of course cog sci as currently practiced will be creeping towards poisoning the idea of ethics as well.

          But one of the things that will really cause the downfall of value is to think value is invulnerable and eternal.


      3. Callan,
        Cognitive science is creating poisonous knowledge that’s killing off value? Hmm…. I guess to assess that I’d first need your definition for the “value” term, and then some examples of the types of knowledge that you perceive cognitive scientists to be killing it off with.


        1. Eric,
          Well you alluded to a lack of health yourself? What did you mean there? As far as I can tell you mean something like you feel science is walking away from subjects like metaphysics and that’s intellectually unhealthy.

          From my perspective cognitive science is eroding out subjects like metaphysics, making them about as credible as astrology. I mean, is astrology killed off? Not really – but it’s not exactly widely taken as credible anymore, is it? It’s been made into a ghost of itself, ironically.

          I think the easiest way to describe it is to follow the money. Imagine at a certain point your boss or some higher up having a little chat and asking to tone down the metaphysics talk. The sort of chat that while softly said, one can tell it decides careers.

          Why would a boss do that? Well the system of rationality/people that are behind how money moves around, they react to cognitive science’s findings and they start to reduce funding based on it.

          You could argue the people who run the numbers will always believe in metaphysics and I’ve shown no reason why they would change. But I think we both know what bean counters are like. And they control food and shelter. All they have to do is be convinced metaphysics is pretty much the same as astrology. I don’t so much have to show you how value is being undermined – you don’t have to believe it for it to happen around you. And as much as it happens around you, happens to you.

          I’m not saying that’s a good thing, I’m just being a canary falling off its perch.


      4. I think I get it Callan. You’re saying that some scientists are extremely disrespectful of philosophy, and so try to kill off this sort of exploration. I hadn’t heard that cognitive scientists lean this way particularly, and indeed, given how fragile their “glass house” currently seems, wouldn’t expect them to cast many stones! Do business people really listen to cognitive scientists? Which of them hold sway?

        Professor Massimo Pigliucci, however, claims that a number of physicists tend to be quite disrespectful of philosophy, and play up the hardness of their science to substantiate such criticism. This includes Neil deGrasse Tyson, the popular science popularizer who used to be his friend.

        Regarding what I believe, it’s that science needs effective principles of metaphysics, epistemology, and value in order to better function. The problem is that we don’t yet have a community of associated professionals (philosophers) which is able to agree upon any such principles, even after two and a half millennia of trying. In order to deflect criticism regarding this circumstance, some philosophers (like Massimo) decide that the field of philosophy should not be judged harshly on this score, asserting that philosophy should be held to a different standard because it’s different stuff. (He isn’t pleased when I summarize his position as “epistemic dualism”.)

        The solution, I think, would be to develop a smaller and stronger community of respected professionals that is able to reach some agreements regarding these matters. And what would give such a community legitimacy? That should accrue to the extent that scientists come to find its principles useful for their work. Furthermore to potentially help build such a community, I’ve developed one principle of metaphysics, two principles of epistemology, and one principle of value, for general assessment.


  2. Yours is a very interesting thesis Mike, and your description of it is quite compelling. Nice work as always. My guess is that those who find feelings mysterious (notwithstanding your explanation) are expressing mystification with David Chalmers’ “Hard Problem”. That is, the mystification applies to ALL sensations, whether they arise from internally-sourced or externally-sourced data sensors. Chalmers’ mystery asks “Why do we have any sensation at all? Why can’t ALL processing – even processing involving analysis of internal states – occur “in the dark” as does reflexive (e.g. “knee jerk”) processing? Why are some cognitive processes accompanied by consciousness?”


    1. Thanks Jim. I suspect you’re right about at least a portion of it. Anil Seth recently did an article for Science Nordic where he described the hard problem as the question of how consciousness arises from physical systems. When reading it, I realized that the hard problem is actually the problem of accepting that dualism isn’t true, which many people struggle with.

      I actually tried hard to stay away from consciousness per se in this post. Many people equate it with the ability to feel, and I do think that’s part of it, but human consciousness requires introspective self-awareness, the capability which gives us a simplified model of our own mental life, a model that leads us to the impression of dualism since, due to its simplified nature, it seems utterly disconnected from the physics of the brain.


      1. Regarding your second paragraph, agreed, provided there is more to it than the “simplified model of our own mental life”. The model is like the screen in Dennett’s Cartesian Theater – it begs the presence of a viewer. As Stanislas Dehaene said in Consciousness and the Brain (sorry I don’t have an accurate reference handy), that’s not a problem so long as the viewer is a process and not simply another agent-within-the-agent. The model provides internally-sourced data for an interpretive process, just as photons provide externally-sourced data which is transduced and interpreted.


        1. I actually think the audience is the action planner described above, but this time it’s the audience of a model built from selected information flowing into it. That’s why the action planner’s activity is more likely to be in consciousness than that of other areas; it’s the one closest to the introspection mechanism.


  3. If I may interject my own view, I’m beginning to believe sensations simply ARE the processing of information from internally sourced or externally sourced data sensors. Colors are not inherent to the photons being sensed; colors are instead an interpretation of the frequency data that is transduced by retinal sensors. The process of interpretation gives rise to the sensation of color. The complexity of such processing can vary widely from agent to agent, and even from time to time within the same agent. That is, it seems to me there is a spectrum of sensation, with agents at one end of the spectrum experiencing no sensation at all (although they may exhibit intelligent behaviors), and acutely conscious agents (e.g. human beings) at the other end of the spectrum, with many levels (as perhaps measured by the number of information bits required to describe the complexity of sensory processing) in between.

    I would, for example, consider the behavior of a photo-electric sensor as being an extremely primitive act of cognition, not in the sense meant by panpsychism, but rather in line with your stimulus-response example. Such an agent resides at the extreme end of the spectrum of cognition, but nevertheless deserves to be placed on that spectrum.


    1. Also sorry that I don’t have a better explanation than “sensations simply ARE the processing of information from internally sourced or externally sourced data sensors”, and “The process of interpretation gives rise to the sensation of color.” Dennett, Dehaene, and Andy Clark have all said words to the effect that future research will flesh out this process and Chalmers’ Hard Problem will dissolve into No Problem. I’m inclined to agree, although I still think the ultimate answer will involve a substantial readjustment in the way sensations are viewed, in much the same way that scientists had to readjust their views of time, space, and gravity in light of Relativity theory.


  4. On the distinctions between thoughts, emotions, and feelings I recall Cordelia Fine having an equation in one of her early popular books:

    emotion = arousal + emotional thoughts.

    Trying to map archaic epistemic terms onto modern ontologies is difficult. Partly because we often don’t acknowledge the different modes and levels we are working with.

    Pre-modern India had no separate category for “emotion” or “feeling”. Sanskrit has names for emotions and often several synonyms or fine distinctions in intensity, but emotions were lumped together with thoughts as “mental” (cetasika) whereas sensations were “physical” (kāyasika). So our European way of dividing up experience is not “natural”.

    So it just occurs to me that we could think more systematically about this.

    The ontological distinctions are whether the stimulus comes from the peripheral nervous system or originates in the central nervous system; and whether it is accompanied by physiological arousal or not (i.e. whether it also involves the autonomic nervous system).

    This gives us

    central – arousal
    central + arousal
    peripheral – arousal
    peripheral + arousal

    If you want to map the archaic Eurocentric epistemological terms onto this then I suggest: thought, emotion, sensation, feeling.
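    As a minimal sketch (purely illustrative), the mapping amounts to a lookup on those two distinctions:

```python
# The suggested mapping as a lookup on (origin of stimulus, autonomic arousal).
# Purely illustrative of the grid above.

TERM_FOR = {
    ("central", False): "thought",
    ("central", True): "emotion",
    ("peripheral", False): "sensation",
    ("peripheral", True): "feeling",
}

def classify(origin, arousal):
    return TERM_FOR[(origin, arousal)]

print(classify("peripheral", True))  # feeling
```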

    I haven’t factored in the parasympathetic side of the autonomic system, i.e. ± relaxation. I suspect we don’t see these as separate categories, but as positive experiences fitting into categories. Calm, contentment, etc.

    The next step would be an axis for anticipation/reward.

    That would give us a coarse grained model that would allow us to map most of the archaic legacy epistemic terms onto a modern empirical ontology.


    1. Good point that many of these terms are culturally specific. And I do think the division that many make between cognition and emotion (in its fully elaborated sense) is a false one. Emotions, feelings, are cognition, just of a more primal sort than the second order thinking we often engage in.

      I did leave off the entire interoceptive resonance aspect of this. And it certainly is a major factor in felt emotions. But I don’t buy the full James-Lange theory that it’s the whole package, and your combinations seem to indicate that you don’t either. People with severed spinal cords just below the neck have reduced emotions, but don’t lose them entirely. And all the wiring between the relevant regions makes the overall proposition unlikely to me.

      I’ll have to study the model you lay out here more carefully when I have time. Certainly the embodied aspect is a major factor.


    2. Just wanted to give my gut response here. I think the pairings might be more like:

      Thought: central, no change in arousal
      Emotion: central +/- arousal
      Sensation: peripheral
      Feeling: peripheral +/- arousal

      [As for myself I think emotion is in a completely different category. ]

      *


  5. Compare with Damasio’s view.

    “…we are mentally and behaviorally far more than our neurons. We cannot have feelings arising from neurons alone. The nervous systems are in constant interaction and cooperation with the rest of the organism. The reason why nervous systems exist in the first place is to assist the rest of the organism. That fact is constantly missed.”

    http://nautil.us/issue/56/perspective/why-your-biology-runs-on-feelings
    http://nautil.us/issue/56/perspective/antonio-damasio-tells-us-why-pain-is-necessary

    Before sensory organs evolved to perceive the external world, the nervous system served the role of monitoring and controlling the internal world – basic things like eating, digestion, elimination.

    BTW, I used the same bilaterian drawing in a post about six years ago.

    https://broadspeculations.com/2012/10/21/floating/


    1. Good point. As I noted to jayarava, I left off the entire interoceptive resonance aspect of this. I do think that’s a major part of the experience, but not the entire experience ala James-Lange theory.

      But there’s no doubt that when a reflex is triggered, it often affects the entire body, all of which reverberates back and adds a visceral aspect to felt emotions. It’s probably why we call them “feelings”. On the other hand, even people with severed spinal cords still feel emotions, albeit in reduced intensity.

      On the drawing, great minds think alike 🙂


  6. Surely one of the unique things about consciousness and conscious states of feeling is that they are known in themselves, whereas everything(?) else can only be known in their aboutness? In describing any ‘mechanism’ (your word) for feelings, Mike, it seems to me that we’re immediately leapfrogging our direct knowledge of the very thing we’re seeking knowledge about, and instead focusing only on functional mechanisms. There’s a facile and clichéd phrase, ‘It is what it is’, and yet it seems to me that feelings and consciousness ‘Are what they are’. That’s unsatisfactory to the analytical mind, but in what way is it incorrect? What do you mean by ‘mysterious’, by the way, and am I going down that path in your view? Thanks Mike.


    1. Hariod,
      Don’t know if you noticed, but I actually avoided consciousness as much as possible in this post, because I think it’s not necessarily the same topic. It might be controversial to say this, but based on what I’ve read, we can have unconscious feelings.

      On being known in themselves, the limitations of language loom here. I would say that conscious content is recursive aboutness, but I could also see a case being made that, since this is all happening in the brain, known in themselves could also be a useful way of describing it.

      On “it is what it is”, I don’t deny that, but then I wouldn’t deny it for anything. Everything is what it is. But maybe I’m not grasping what you mean by that phrase?

      By “mysterious”, I just mean the sentiment that the mechanism is completely unknown. I’m not sure if I recall you ever expressing that sentiment about feelings per se. You definitely weren’t one of the people I recently (last few months) discussed this with, so I didn’t have you in mind when I mentioned it. But I do sometimes have the impression of you being comfortable with mysteries, but maybe that’s just in contrast to me; while I love mysteries, I also love trying to solve them.


      1. Thanks Mike. Not sure if I’m ready to buy into the idea of feelings which are not felt. I can accept a kind of proto-feeling which would be the pure perception of a sense contact prior to re-presentation in overt consciousness, but would suggest it’s still apprehended liminally in (what we call) consciousness. ‘Unconscious feelings’ sounds rather too conveniently theoretical to my mind, at least, couched in those words it does, though doubtless it’s one of those cases in which we have to ask at what point of reduction does the ‘thing’ spoken of no longer remain the ‘thing’, but rather some limited array of constituent parts. When does the ‘car’ being assembled become a ‘car’; when do the conditions for a feeling give rise to a phenomenon apprehended as being felt? I believe some mind theorists reject the notion of the sub-conscious on similar grounds?

        On the idea that consciousness and conscious feelings ‘Are what they are’, then by that I mean they’re precisely what they appear to be. I’m not suggesting there’s any cognitive closure about them — quite the opposite — rather, they require no elimination or reduction to be understood. They explain themselves in their own terms. I don’t believe that position makes me either a mysterian or anti-scientific, does it? Consciousness and conscious feelings (sorry to keep using the C-word) don’t require explaining to be known, aren’t problems waiting to be solved, as they are known in the very immediacy of their presentation as being none other than what they appear to be. They can, of course, be known (in a limited, indirect sense) by whatever categories the human animal has at its disposal (Kant, etc., for Tina to expand upon perhaps?).


        1. Hariod,
          I responded below on the whole sub-conscious thing.

          On accepting consciousness as is, well, I guess it depends on what we’re trying to accomplish. That may be fine if your goal is meditation or something along those lines. But I’m interested in what makes minds tick and what might be required to build an engineered version. That makes me unable to resist dissecting the various phenomena we refer to as “consciousness”.


    2. Hariod,
      You’ve gotten into why I believe the “unconscious” term needs to be retired. If one means “not conscious”, then why not say “non-conscious”? But then if one means “sort of conscious”, then why not say “quasi-conscious”?


      1. Thanks Eric, and I think (but am not certain) that I agree. Sorry to be simple, but if I’m ‘unconscious’ when asleep, then why does the alarm clock awaken me? We can walk the causation back to stimuli directing attention, exciting receptors, etc. and up through higher levels of cognition that give rise to introspective lucidity (i.e. the knowledge that ‘I am awake’), and yet were we prior to this point ‘non- (or) un-conscious’? Wasn’t some degree of consciousness monitoring stimuli and directing attention still? Are you suggesting there is a state of being alive and sentient and yet which is ‘non-conscious’, or are we (in that state of sentience) always conscious in some degree? I may be looking at it too simplistically, forgive me!


        1. The alarm clock awakens you because the auditory information is first presented to the brainstem, which creates a consciousness of the sound as the signal is passed on to the auditory cortex for enhanced processing … an instance which appears to support conscious functionality in the brainstem complex.


          1. Stephen,
            I’m using the dictionary definition of reflex, and my use of that word is deliberate. I list it in the hierarchy for two reasons. First, to point out that the standard of merely being responsive to the environment, a standard many panpsychists adopt, doesn’t really meet most people’s intuitive sense of consciousness. (I first laid out this hierarchy in a post criticizing panpsychism.)

            And second, because it is a foundational component of human cognition. Without the reflexes, we would have no affects, no feelings, no motivations, no purpose. It’s the base programming of a conscious system. Without them, we’d be purposeless reasoning engines with no sentience.

            A lot of cognitive scientists prefer the term “survival circuits” for what I’m calling reflexes. I suspect they’re making a distinction between spinal cord reflexes, which always either happen or don’t happen, and non-conscious circuits in the brain which we have some ability to override. So we can’t override the knee-jerk or withdrawal reflexes, but we can often override the fleeing, fighting, feeding, or mating type reflexes under the right conditions. Nevertheless, they identify many primal behaviors as “reflexive”, happening without conscious thought, which seems inconsistent to me.

            On feeling a burning hand, the problem with referring to your own experience is that experience in a healthy human involves both the lower level reflexive circuitry and the high level cortical feeling. When someone strikes your knee with a rubber hammer, you still feel the strike (in a somatosensory manner) even though the knee-jerk response is automatic and involuntary, a reflex from your lower spinal cord. But if you were in a coma, you wouldn’t feel it, yet the reflex would still happen. (Spinal cord reflexes don’t generate affective feelings because we can’t override them. They’re simply too far away from the cortex for such circuits to form.)


        2. Hariod, Eric, and Stephen,
          Good discussion guys.

          I think this comes down to what we mean by “consciousness”. This is the problem with this subject. We often end up talking past each other with different definitions. My response is to describe a hierarchy of consciousness:

          1. Reflexes, responses to stimuli
          2. Perceptions, predictive models based on sensory data, expanding the scope of what the reflexes are reacting to.
          3. Attention, prioritization of what the reflexes are reacting to.
          4. Imagination, action simulations assessing which reflexes to allow or inhibit. It is here where reflexes become feelings. What I described in the post.
          5. Introspection, metacognitive self awareness.

          Being woken only requires 1, reflexes.

          2-4 are often referred to as sensory or primary consciousness. But here’s the thing: we routinely refer to 1-4, when they happen in humans without 5, as being in the unconscious or sub-conscious. You can use the word quasi-conscious if you prefer, but that doesn’t change the fact that it’s happening outside of John Locke’s seminal definition of consciousness as “the perception of what passes in a man’s own mind”. Under that definition, if we can’t introspect it, or aren’t currently introspecting it, it’s not in consciousness.
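          To make the level distinctions concrete, here’s a toy rendering of the hierarchy as a lookup. It’s purely illustrative; the capability names and level assignments just restate the points above.

```python
# Toy rendering of the hierarchy: each capability is tagged with the highest
# level it requires. Purely illustrative.

HIERARCHY = {
    1: "reflexes",
    2: "perceptions",
    3: "attention",
    4: "imagination (where reflexes become feelings)",
    5: "introspection",
}

REQUIRED_LEVEL = {
    "being_woken_by_an_alarm": 1,
    "affective_feelings": 4,
    "locke_style_consciousness": 5,  # "the perception of what passes in a man's own mind"
}

def levels_involved(capability):
    return [HIERARCHY[i] for i in range(1, REQUIRED_LEVEL[capability] + 1)]

print(levels_involved("being_woken_by_an_alarm"))  # ['reflexes']
```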

          We’ve all had the experience of someone yelling that they weren’t upset. Many of us have had the experience of parts of our body aching because of tension we weren’t aware of. I personally experience traveler’s constipation every time I travel due to some kind of unconscious anxiety that I can’t introspect despite many attempts. (Sorry, I know, too much information, but hopefully I’m getting the point across.)


          1. Mike, I’ve been contemplating this “Construction of Feelings” post since you posted it, but I’ve been too busy on other fronts (like our “Arrival” block universe conversation) to have the time to comment. But I do have a few concerns …

            We probably all agree that definitions of consciousness are all over the map and frequently unspecified, so I believe it’s important to be using a common one, at least common to any given discussion. I’ve mentioned my rather simple definition of consciousness before, I believe, although perhaps on Eric Schwitzgebel’s “Splintered Mind” blog. My definition is: “If an organism’s brain creates a feeling that is felt by the organism—that’s consciousness.” My definition views all modes of consciousness as feelings created by the brain, not only physical feelings like touch, tickle, and pain, but sound, sight, thought and so on—they are all feelings. Importantly, I also maintain that feelings are the content of consciousness, not different kinds of consciousness, the different kinds of consciousness being core (or creature) consciousness and extended consciousness, as in primates, for instance. I haven’t had any feedback for my definition, but I’d be very interested to learn what you and your blog crew think of it.

            My first concern regarding your hierarchy, Mike, is about your #1: “Reflexes, responses to stimuli.” The dictionary definition of “reflex” is: an action that is performed as a response to a stimulus and without conscious thought. Defined as such, a reflex is behavior and I don’t understand your inclusion of reflexive behavior in a hierarchy of consciousness.

            Certainly the initiation and completion of a reflexive behavior is an unconscious and seemingly automatic process, but, as far as I can tell from my own experience, the stimulus and the following reflexive action are both felt and, as such, are both conscious. Stick your hand in a flame … well, not really, but imagine! 😉 I’m sure none of us will compare that situation with similar remembered situations in order to assess the potential damage and construct and initiate action plans—there simply isn’t sufficient time to do so. But we definitely feel the heat and we definitely feel our arm in motion as the reflexive movement rapidly withdraws our hand from the flame. We are undeniably conscious of the heat and our bodily movements but those perceptions aren’t the reflex which, per the definition above, is an action performed without conscious thought.

            So, Mike, are you operating with a different definition of “reflex”? If not, why is “reflex” at the initial (lowest?) position in your hierarchy of consciousness?


    3. Excellent questions Hariod! Firstly I agree with Mike that one of the hardest parts of this is that people use separate definitions and so talk past each other. That’s the very thing which “ordinary language” philosophers, such as Ludwig Wittgenstein, have sought to overcome. And Mike’s hierarchy of consciousness is very much in that vein — it essentially takes ordinary conceptions of consciousness and places them at different levels which are ranked by presumed complexity. Thus while the ideas of John Locke find a home at level 5, even some panpsychists do so at level 1.

      I believe that I have something which is more effective than the “ordinary language” potential cure however. My EP1 instead obligates the listener to accept a given theorist’s presented definitions, whether this be for “consciousness”, “life”, “time”, or any other term. The question for the listener then becomes, does the presentation seem useful? Note that Newton didn’t care how people in general thought of “force”, but rather developed the quite useful “accelerated mass” definition. (Or maybe he even nicked it, but regardless.) I believe that I’ve developed some extremely effective (and original) definitions associated with “consciousness”. Let’s see if I can tempt you!

      I consider the human brain to essentially be a computer that is not conscious. It takes input signals, processes them algorithmically, and so produces output function. But the thing that makes it far more special than the computer in front of you, I think, is that it also outputs a punishment/reward dynamic for something other than it to experience. I theorize that this “valence” stuff drives the function of a tiny second computer, or the conscious form by which I perceive existence.

      Note that I consider all input, processing, and output of consciousness to exist as output of the non-conscious brain. While the main computer functions by means of neurons, the tiny conscious computer instead functions by means of value. The thought processor here interprets inputs and constructs scenarios about what to do to promote these interests. Note that muscle operation output then becomes input for the non-conscious computer to effectively operate these muscles. So on to some specifics.

      Yes I’d like for us to stop referring to sleep under the heading of “unconscious”. In English the “un” term is generally taken as “not”, and obviously if an alarm clock can awaken you then you simply were not “not conscious”. Instead I consider sleep to be a degraded state of consciousness that seems necessary for recuperation. I put this, along with hypnosis, daydreaming, and even drug impairment, under “altered states of consciousness”. (As I recall, Mike suggested this term after he gave me some reasonable objections for my earlier “sub-conscious” title.)

      From my models consciousness stops when there is no associated functioning processor, and thus conscious inputs are not realized. Here a person might be comatose through injury or drug impairment. I call this “non-conscious” given that nothing happens on the conscious side of things. Dead people are also non-conscious, of course.

      Then finally when we wish to refer to how conscious function is subtly influenced by non-conscious dynamics, such as non-acknowledged biases, I consider “quasi-conscious” to work better than “unconscious”. Or I suppose it could go the other way, as in Mike’s travel constipation. There’s nothing strange about travel anxiety. But if one result of this is to mess up Mike’s plumbing through intricate biological effects, then that could certainly be referred to as a “quasi-conscious” result. Conversely the literal meaning of “unconscious” doesn’t seem appropriate here, and of course the term is used in too many ways as discussed. In the spirit of effective communication and thought, I believe it would be best to retire the term (and while we’re at it, to finally retire the ideas of that Austrian guy who first popularized it!).


      1. Thank you so much, Eric, and also Mike and Stephen too. Your collective knowledge and/or sophistication of theory in this matter far surpasses my own, although I find the discussion fascinating and hope you can forgive my naive engagement, and thank you for tolerating it.

        For what it’s worth, I take the term ‘consciousness’ to mean ‘with knowledge’ (more or less its etymological basis), and this implies some referent — i.e. what is it that the knowledge is together ‘with’? Nonetheless, I’m not at all convinced there actually is any referent ontologically discrete from the knowledge (certainly not any homunculan-like self, of course), and that the conscious flux is known (i.e. is ‘with knowledge’) only (with)in and as itself, and just as it appears. [As Mike points out, as per Locke’s definition.] It does not pass down (move, as it were) to other categories (of organic matter, say), though its levels attenuate and amplify. There perhaps seems to be some faint echo there of your non-conscious brain, Eric, although my own current position is that matter and consciousness boil down to looking at (that is, figuratively speaking) the same thing in two different ways. In other words, the dualistic premises are wrongly conceived initially, yet neither consciousness nor matter need eliminating. So whilst I say that consciousness does not pass down to (or come up, emerge from) matter, what I mean is that it always was (what we might also call) matter, and vice versa.

        I don’t know what to make of your ‘tiny second computer’, Eric — is it still algorithmic? are parts of it closed to cognition? in what way is it not a meta-level representation (output) of the first computer? and if it is, what medium is it outputting to and/or as? All that said, then I find the many models of mind more or less compelling in themselves . . . until I read the next. They all have their logical and/or aesthetic appeal, and then one has one’s dispositions as to what may or may not appeal, and which are pretty much impossible to escape. And yes, as you say Mike, there’s the seemingly insurmountable problem of language. That takes me back to my earlier thoughts, to the effect that consciousness and conscious feelings are what they are in their apprehending, and the apprehending is one side of a single coin faced with consciousness on one side and matter on the other. Neither requires elimination or reduction to be understood. They explain themselves in their own terms, although I entirely take your point, Mike, that you’re interested in “what makes minds tick and what might be required to build an engineered version.” That appears to be a quite different question to what conscious states (e.g. of feeling) are in themselves, and one which I can offer nothing useful on. As you can see!


      2. Hi Eric! Just wondering if you’re familiar with John Searle’s “Is the Brain a Digital Computer?” … The PDF is online, if you haven’t. Searle says not. I spent over 30 years in the computer programming business and I agree with him.


        1. Stephen,
          I think there’s an important distinction between viewing the brain as a digital computer in the sense of a commercial computer with the von Neumann architecture, including a CPU, memory unit, etc., and viewing it as a biological computational system. I think anyone familiar with the brain knows it’s not the former, but most neuroscientists see it as the latter.


          1. Mike, if you haven’t read that paper, I would go ahead and do it. (Maybe re-read it if it has been a while.) Searle isn’t so much bothered by the brain as a computer as he is by the brain being an information processor. He specifically states brains don’t process information. He makes at least one very astute observation that needs to be answered, namely that information processing requires syntax, and syntax needs an “outside observer” or homunculus. He says neural processing doesn’t need the homunculus. I think he’s right about the former but wrong about the latter. What do you think?

            *


          2. James, I can’t recall whether I’ve read that paper, but I’m familiar with Searle’s views. I don’t agree that information processing requires syntax. I know some people insist that it does, but I don’t find their arguments persuasive. This SEP article gives an overview of the other accounts: https://plato.stanford.edu/entries/computation-physicalsystems/

            And any argument about a need for an outside observer ignores the number one lesson of natural selection: that we can have design without a designer, competence without comprehension, and information without an observer.


          3. Mike, I’m one of those who say you need syntax to process information. Syntax is what says that a voltage of 3 is a zero and a voltage of 10 is a one. Syntax is what says this incoming neural signal means you should pull your arm in, and that incoming signal means you should duck your head. Reflexes require that kind of syntax. Syntax can be hard wired. That’s how natural selection works as the designer/observer. When I say syntax needs an observer, I mean that whatever creates the mechanism that will interpret the input, whatever creates the purpose of the mechanism, is creating the syntax. But the purpose is not inherent in the physics of the mechanism. The “observer” doesn’t have to be extant when the processing happens. There has to be something outside the mechanism that explains why the mechanism was created, something that has intention, or “intention” if that something is natural selection.
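            To make that concrete with a toy example (the voltages and conventions below are invented, and nothing hangs on them): the same physical measurements mean different things under different externally chosen syntaxes.

```python
# The same raw voltages, read under two different externally chosen "syntaxes".
# Nothing in the physics of the signal picks one interpretation over the other.
# The numbers and conventions are invented for illustration.

voltages = [3.0, 10.0, 10.0, 3.0]

def syntax_a(v):
    """One convention: ~3 volts is a zero, ~10 volts is a one."""
    return 1 if v > 6.0 else 0

def syntax_b(v):
    """A different convention: the same voltages read the other way around."""
    return 0 if v > 6.0 else 1

print([syntax_a(v) for v in voltages])  # [0, 1, 1, 0]
print([syntax_b(v) for v in voltages])  # [1, 0, 0, 1]
```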

            The intention can also come from a programmer, even a natural programmer, which you might call a planner.

            *
            [see how I did that? 🙂 ]

            Liked by 1 person

          4. I did, and pretty much agree ontologically if not necessarily definitionally. I think Searle’s definitions of “syntax” and “observer” are narrower than yours. But then, I think my disagreement with him amounts to a straight definitional one anyway.

            Like

          5. Mike, I’m not convinced you understand the significance of the observation that something outside the mechanism determines the syntax. Something outside the program determines the meaning of what the program running on the computer does.

            What if your pre-frontal cortical planner is a (creative) programmer? Maybe it doesn’t run the program, just sets it up.

            Or is that what you were thinking?

            *

            Liked by 1 person

          6. James,
            It might be that I’m not understanding the significance, although I did make the point in my post on information a while back that something must make use of the patterns for it to be information. DNA is only information because of the transcription proteins, ribosomes, and other cellular machinery that makes use of it.

            Now, a case could be made that information is like color: it didn’t really exist until conscious systems came along and made it. But if so, then there were things that reflected electromagnetic radiation with the wavelength our nervous systems assign to “red” long before we came along. For information, the physical systems would have existed long before we came along and interpreted them to be information. Maybe we could call the state of these phenomena before we came along “unrecognized color”, “unrecognized information”, or maybe “unreified information”.

            I could see the prefrontal cortex as a programmer of sorts since its role is to lay plans, plans which are executed by other regions, and ultimately that’s all programming really is. When do we cross the line from a programmer to an operator or orchestrator?

            Like

          7. Mike, at this point I think it would be worthwhile to have a very rigorous definition of information. Then you can start qualifying it with words like “semantic”. For example, you said “something must make use of the patterns for it to be information”. I would say that applies to semantic information, but not all information. Here’s where I would start:

            Information is a set of data. A datum is a physical measurement (or alternatively, an affordance for a physical measurement).

            And now I think it’s worthwhile to go all the way down to a base framework and say everything that happens in the world looks like this:

            Input (x1,x2,…xn) —> Mechanism —> Output (y1, y2, … yn)

            That’s information in, information out. The x’s and y’s are data. I can point out here that the Mechanism itself constitutes information because something creates the mechanism, as in Input —> Mechanism0 —> Mechanism. This is significant because you then get Input —> Mechanism —> Output = Integrated Information.

            Semantics (syntax, value) comes in when the Mechanism is created for a purpose (by intention or selection). The first such mechanisms to exist were what you call reflexes, such as chemotaxis.

            The next level/subset of semantic information is semiotic information, where the value of the output is associated not with the input but with something in the causal history of that input. The input information is being used as a reference to something else. There’s a whole lot that can be discussed about how semiotics fits here, but I’ll just skip to the point that’s pertinent to the OP.

            At some point you get to where the output is a symbol representing something in the causal history of the input. We can call it a percept. Then you can have a mechanism that combines percepts into concepts. Then you can have a mechanism which combines concepts and creates concepts which are goals/intentions. Then you can have
            goal —> Mechanism —> plan, and then
            plan —> Mechanism —> action, and
            results —> Mechanism —> new plan, and so on.

            At this point I would suggest that semiotic information may be as far as we have to go to explain human consciousness, but I’m inclined to say all of the levels are “about” consciousness, or are dimensions of consciousness.
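
            To make the chain above concrete, here is a minimal sketch in Python. Every mechanism name and value is invented purely for illustration; it just shows the shape of input->mechanism->output stages being composed into a percept/goal/plan pipeline.

            ```python
            # Toy sketch of the input->mechanism->output framework above.
            # Each "mechanism" is just a function from data to data; chaining them
            # gives the percept -> goal -> plan pipeline described in this comment.
            # All names and thresholds are illustrative only.

            def percept_mechanism(sensor_data):
                # The output stands for something in the causal history of the input (semiotic).
                return {"object": "food" if sensor_data["odor"] > 0.5 else "unknown"}

            def goal_mechanism(percept):
                return {"goal": "approach"} if percept["object"] == "food" else {"goal": "ignore"}

            def plan_mechanism(goal):
                return ["turn_toward", "move_forward"] if goal["goal"] == "approach" else ["idle"]

            def run(sensor_data):
                # Information in, information out, at every stage.
                return plan_mechanism(goal_mechanism(percept_mechanism(sensor_data)))

            print(run({"odor": 0.8}))   # ['turn_toward', 'move_forward']
            ```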

            *

            Liked by 1 person

          8. Coming up with one comprehensive rigorous definition of “information” may not be possible. It seems to be one of those terms, like “energy”, “life”, or “religion”, that defy one definition.

            My own personal definition is, patterns that have causal effects by virtue of their structure rather than size or magnitude. Admittedly, that’s very broad. I’m aware of narrower ones such as Shannon information, Fisher information, or the semantic information you mention. But it seems to me that any narrowing inevitably cuts off things we commonly refer to as “information”. The problem is that the term is polluted by its etymology of being that which inputs forms into the mind. We use the word for patterns that, now that we’re here, put forms in our mind, such as tree rings, but that existed long before we came on the scene.

            I don’t have any real issue with the idea that everything has inputs, processing, and outputs. My only concern is that explaining everything doesn’t explain consciousness itself. It’s kind of like my response to people who insist that quantum mechanics has a role in consciousness; to be relevant, QM’s role has to be different than its role in a cup of coffee. For the input->mechanism->output to be useful, it seems to me that it has to add something beyond what happens with a tornado.

            I’m not getting the distinction between semantic and semiotic information, or are those terms meant to be synonymous?

            Like

          9. What do you mean coming up with a rigorous definition of information may not be possible? I just did! I know, you mean generally accepted. But I would say when anyone, like Searle, or Tononi, talks about information, the first thing you need to do is nail down exactly what their definition is. They can define it however they want, as long as they keep it in their own court, so to speak.

            As for semantic and semiotic information being the same thing, I think you’re right. I had not intended them to be synonymous, and that post was the first time I had considered “semiotic information”. But I can’t think of an example where you have one but not the other.

            As for the usefulness of input->mechanism->output, I have found it enormously useful. It lets me explain Aristotle’s four causes, and causality in general. It lets me understand “it from bit”. It lets me explain exactly why Searle is a little right and a lot wrong. It lets me understand where “meaning” comes from. It lets me explain why Integrated Information Theory is a lot right and a little wrong. It lets me understand why Functionalists are a lot right and a little wrong. It lets me understand that some of Science is about looking for functions (inputs->outputs, “physical laws”), and some of Science is about looking for mechanisms.

            And it lets me understand that there is no such thing as “consciousness itself” (to use your phrase), in the same way there is no “life itself”. There is no monolithic thing that grants consciousness. There is no “elan mental”. Consciousness is about combinations of input->output processes, specifically, semiotic processes. This is what coffee cups don’t have, although tornados (and other homeostatic systems) may have a little. Just like viruses might be sorta alive. Human consciousness will be about the various kinds of outputs (reflexes, percepts, concepts, goals, plans, evaluations) and how these become inputs. The Hard Problem (and qualia) will be explained (I’m pretty sure) by explaining subjective references to semiotic objects.

            *

            Liked by 1 person

        2. Thanks for the interest Stephen. No, I hadn’t seen that one. I didn’t get too far with it however, since apparently I agree. I’m not sure anyone today considers brains “digital”. Then there’s the whole “software” thing. I haven’t noticed serious people proposing that position either. But surely brain science was far less advanced 28 years ago when the paper came out, so perhaps it helped.

          Apparently there are some professionals today who even oppose the general brain/computer analogy, though my sense is that they’re an extreme minority. One distinguished dissenter recently shot me the following link https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer, though I found it to actually make my case.

          There clearly are differences between brains and any machine that we humans build, of course. So from there the question becomes, can this nevertheless be a useful analogy? Are there not effective similarities? Of course there are! I actually consider virtually all human understandings to be acquired by means of analogy, so this specific issue seems extremely important. And do you know what I’m told when I ask for a more effective analogy for the brain than one of our computers? Not a single suggestion yet.

          This gives me the opportunity to present my own “four forms of computer” discussion — and I like that it’s to a computer guy as well! So give this a try:

          Before “life”, I can’t think of anything which might reasonably be said to have functioned computationally. This is to say, to accept inputs and process them algorithmically to produce output function. I’d love some suggestions if you can think of any.

          So far the first reasonable example of algorithmic function that I’ve come up with is the genetic material associated with living function. Consider how chemical substances enter the cell; these inputs algorithmically react with genetic material, or are thus processed, and this produces output function. Thus I think it’s reasonable to say that “computation” is necessary for life.

          Then getting into Mike’s post, reality’s second form of computer would have occurred when whole organisms began to accept input signals that went to more than just individual dedicated outputs. That’s where we get into central organism machines which algorithmically process input for output function, or brains.

          I theorize reality’s third form of computer to exist as an output of the second. As I described above to Hariod with a diagram, this is the conscious form of computer by which you and I perceive existence.

          Then finally there are those comparatively quite primitive computing machines that we humans design and build.

          Any thoughts?

          Like

    4. Hariod,
      So you define consciousness as “With knowledge”? I can go with that. And as a solipsist there is one thing that I do “know” exists, namely my thought itself. Unfortunately in all other regards I only “believe”, though as outlined above I have developed an extensive model of consciousness beyond what I know. It concerns three forms of input, one form of processor, and one form of output. I offer it for general scrutiny.

      Regarding whether or not I consider the tiny conscious form of computer to be algorithmic, great question! I have struggled with this a bit. It’s not conventionally algorithmic, as in crunching all the numbers produces an associated result, but for the moment I will say that it’s algorithmic in a sense. The algorithm here involves instantaneous punishing to rewarding sensations, and thus concerns something which attempts to feel as good as it can each moment.

      For example you might be in the middle of an interesting part of a good book. Thus the associated algorithm suggests that you keep reading rather than do something else. But you also notice a twinging little pain, and along with your memory of such past sensations decide that it’s relieved by going off to pee. The algorithm here does not have you jump up to go just yet however, since the pain doesn’t actually feel that bad, while continuing to read the book feels quite good. Here I theorize that the non-conscious computer is rewarding you for reading an interesting book, as well as punishing you for not relieving yourself. At some point however the instantaneous algorithms should favor putting the book down to relieve yourself. That’s the sort of computation I mean for the tiny conscious form of computer.
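
      As a rough illustration of that moment-to-moment comparison, here is a toy sketch in Python. The option names and valence numbers are made up purely to show the shape of the idea, not anything actually specified above.

      ```python
      # Toy sketch of a moment-to-moment valence comparison.
      # Numbers and option names are invented for illustration only.

      def choose_action(valences):
          """Pick whichever option feels best right now (highest net valence)."""
          return max(valences, key=valences.get)

      # Early on: the book is rewarding, the bladder discomfort is mild.
      print(choose_action({"keep_reading": 0.7, "go_pee": 0.2}))   # keep_reading

      # Later: the discomfort has grown; the instantaneous calculus flips.
      print(choose_action({"keep_reading": 0.6, "go_pee": 0.9}))   # go_pee
      ```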

      As I define the term, the conscious computer is the cognitive part, and it’s entirely an output of the non-conscious computer. Then regarding this computer’s medium, I’m not entirely sure, though hopefully this will help:

      The theory is that it’s possible for a computer that is not conscious to output a punishment/reward dynamic for something other than itself to experience, and so drive the function of a teleological form of computer. That “something else created” is theoretically you and I. I’m saying that if the +/- value happens to be eliminated, which is to say the purpose part, then we are eliminated as well.

      These are just little snippets of the whole thing, and so may not be entirely clear, though I can always go into greater detail if questioned.

      Liked by 1 person

  7. Mike, google the phrase — The “Id” Knows More than the “Ego” Admits PDF — to retrieve the 2012 PDF by Mark Solms and Jaak Panksepp that’s very relevant to the topic.

    “From our perspective, the capacity to be aware of the environment and that one is the subject of such externally triggered experiences is already a higher cognitive function, which is ultimately mediated by the ability to reflect upon one’s subjective experiences. This hierarchical parsing enables one to be conscious in different ways—e.g., to feel happy and sad, without necessarily having the mental capacity to recognize that one is happy or sad, let alone to reflect upon the objective relations that caused this happiness or sadness. Being phenomenally conscious does not, by itself, require much cognitive sophistication at all.”

    Liked by 2 people

  8. Mike, here’s the text from a figure Panksepp uses. You’ll have to get the PDF document to see the colors:

    Figure 1. A schematic showing nested hierarchies of brain functions in which primary processes (red squares) influence secondary (green circles) and tertiary (blue rectangles) processes, which in turn exert top-down regulatory control. The seven primary process emotions are noted: positively valenced emotions highlighted in red (SEEKING, LUST, CARE and PLAY), and negative ones in purple (RAGE, FEAR and PANIC/GRIEF).

    1) Tertiary Affects and Neocortical ‘Awareness’ Functions
    i) Cognitive Executive Functions: Thoughts & Planning (frontal cortex)
    ii) Emotional Ruminations & Regulations (medial frontal regions)
    iii) ‘Free Will‘ (higher working memory functions—Intention-to-Act)

    2) Secondary-Process Affective Memories (Learning via Basal Ganglia)
    i) Classical Conditioning (e.g. FEAR via basolateral & central amygdala)
    ii) Instrumental & Operant Conditioning (SEEKING via nucleus accumbens)
    iii) Behavioural & Emotional Habits (largely unconscious—dorsal striatum)

    3) Primary-Process, Basic-Primordial Affective States (Sub-Neocortical)
    i) Sensory Affects (exteroceptive-sensory triggered pleasurable and unpleasurable/disgusting feelings)
    ii) Homeostatic Affects (brain-body interoceptors: hunger, thirst, etc.)
    iii) Emotional Affects (emotion action systems—Intentions-in-Actions)

    I don’t have URLs for a number of relevant Panksepp PDFs, but I can email the PDFs to you … send a request to my ERL email address ERLTalk@outlook.com. (I will not share your email address.)

    Liked by 1 person

    1. Thanks Stephen. I downloaded the PDF, although it may be a bit before I have a chance to read the whole thing.

      I did read a good portion of Panksepp’s book, “The Archaeology of the Mind”, which gives a good overview of his ideas. In short, I classify the evidence he provides for “Primary-Process Emotional Affects” as evidence for the reflexes. My impression is that he routinely conflates the survival impulse with the feeling. While that impulse and feeling are always paired in mentally complete humans, there’s no evidence that they are when the cerebrum is absent. I’d be very interested to know if there is evidence that can’t be explained by reflexes alone.

      Last year, I compared Panksepp’s understanding of emotions with Lisa Feldman Barrett’s: https://selfawarepatterns.com/2017/08/12/the-layers-of-emotion-creation/
      Barrett, while being interviewed by Ginger Campbell, was asked about Panksepp’s views. She responded that she had visited his lab, appreciated his passion, but that his evidence was simply lacking. http://brainsciencepodcast.com/bsp/2017/135-emotions-barrett

      Like

  9. Hi Mike,

    I’m one of those people who thinks feelings are mysterious. After reading this I still think they are mysterious although you’ve given a good account of a hypothesis/theory on how the nervous system evolved and is evolving to perform certain functions.

    Here’s what I think you are missing:
    You suggest there is a system which tries to predict actions from reflexes (perhaps contradictory ones) and then, based on those predictions, allows or inhibits one or another reflex. That’s all well and good. So far you seem to be talking about different systems, starting with basic ones (e.g. reflex systems) and evolving into more complex ones over time.
    But all of a sudden you also make the claim that at this level of complexity somehow the process is ‘felt’ by the processor as a ‘feeling’.
    What you are suggesting it seems to me is that feelings just happen to occur by pure magic at this stage.
    Consider this: I make a robot which has a camera, a motor (with wheels) and a small piece of program. When it recognises an electric socket the output to the motor is to move towards the source, and when it recognises fire the output to the motor is to move away from the source. When seeing both at the same time the output is diverted to a program that tells it what to do when seeing both stimuli at the same time, e.g. move away every time except if the distance between sources is greater than X, etc. Now would you suggest such a program is also able to ‘feel’ these inputs it’s getting from the camera-motor system? And why not?
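
    For what it’s worth, a minimal sketch of that robot’s program might look something like this in Python; the threshold X, the distances, and the command names are placeholders for illustration only.

    ```python
    # Toy sketch of the socket/fire robot described above.
    # X and all command strings are arbitrary placeholders.

    X = 2.0  # placeholder threshold for "distance between sources"

    def motor_command(sees_socket, sees_fire, distance_between=None):
        if sees_socket and sees_fire:
            # Conflict rule: move away every time, unless the sources are far apart.
            if distance_between is not None and distance_between > X:
                return "approach_socket"
            return "move_away"
        if sees_fire:
            return "move_away"
        if sees_socket:
            return "approach_socket"
        return "idle"

    print(motor_command(True, True, distance_between=0.5))   # move_away
    print(motor_command(True, True, distance_between=5.0))   # approach_socket
    ```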

    Another example is AI game programs. Usually you can see that they have to resolve complicated and often contradictory scenarios and then make good output decisions. They then must also be able to imagine the consequences of each of the possible actions they can take and then choose wisely. Does this mean they can also ‘feel’ the conflicting communications as something? If not, then why not in this case?

    Liked by 3 people

    1. Hi Fizan, yes, there does appear to be a point at which purely algorithmic models either break down or eliminate (i.e. discount, explain away) phenomena. [Not at all saying Mike did that.] Unreliable though introspection (e.g. of felt states) may very often be in its indications, its existence does seem to be a fact — if only in its seeming, the qualities of which differ from (what’s regarded as) pure matter alone, given the seeming is conjured and disappears, whereas matter more-or-less remains in its appearance/presence. My own current take is that the seeming of introspection (i.e. conscious states and feelings) is an aspect of matter, and that the converse is equally the case. I’m glad Mike is tackling the matter of feelings, though, as often they get conveniently ignored, as if biology had nothing to do with animal consciousness (there may be other forms). And yet, if only in its seeming, all consciousness is accompanied by feelings in some degree.

      Like

    2. Hi Fizan,
      Well, I’m certainly not aiming to have magic steps. A lot depends here on whether you’re willing to accept that a feeling is, fundamentally, communication. I know a lot of people are adamantly opposed to that proposition. But usually those people are unwilling to ponder what else feelings might be.

      That said, it’s worth noting a few things that I didn’t cover in the post, which is the unique characteristics of biological feelings. So, the reflex is triggered, which sends a signal to higher level cortical regions. But also, the reflex triggers autonomous processes, perhaps releasing hormones such as adrenaline, increasing heart rate, blood pressure, etc. All of this resonates back to the brain as interoceptive signals, reinforcing the signal from the reflex and adding a visceral aspect to it.

      This is borne out by patients with spinal injuries. A patient whose spinal cord has been severed at their neck still experiences feelings, although they reportedly don’t experience them with the same intensity. In their case, the interoceptive loop has been cut (or at least greatly reduced), but the emotional feelings remain.

      In principle, there’s no reason why a technological system couldn’t have feelings, and yes, they would fundamentally be built on communication between components. Although it would take a lot of attention to ensure that the engineered system was architected similarly to a biological system. In most cases, I doubt that would be useful for what we’d want those systems to do.

      I actually would not call the predictive processes that game AI is doing “imagination”. The reason is that the models the AI is working with are vastly simplified ones compared to what animals deal with, and there is no sensory imagery involved. The Deepmind people are working to add actual imagination to their systems, and I did a post on that a while back ( https://selfawarepatterns.com/2017/07/31/adding-imagination-to-ai/ ), but last I heard, what they had was pretty primitive. But as I noted in that post, getting true imagination in a system would be a significant step.

      Liked by 1 person

      1. “A lot depends here on whether you’re willing to accept that a feeling is, fundamentally, communication.”
        From my perspective I’m not fundamentally opposed to anything, however I do need to be convinced that something is the case. I may even agree that a feeling is a communication of some sort, but there are so many other communications as well that aren’t feelings. In fact almost everything is a form of communication of information. So I need to be convinced why a certain way of communication is also a perception of feeling. If that explanatory connection is lacking, then rather than asserting any claim as to what I believe feelings are, I put my hands up and say I don’t know, apart from my obvious experiential knowledge of them.

        Even still I fail to see how the addition of other communications such as hormones to the mix adds anything fundamentally new to the picture.

        I think building a technological system which can have feelings may be the proof of concept that’s needed. So far it seems, at least from what you are saying, that there isn’t much that’s complicated about building feelings, but no one has come close to constructing them yet.

        What do you mean by ‘true imagination’?
        And I wonder how you think game AI works in a fundamentally different way? I’m assuming by sensory imagery you mean predictive maps of the environment?
        In this sense ‘sensory’ basically means all input information coming from outside the object of interest, and predictions would be about how that information would likely change and how it may influence the object of interest.
        I play a game called Rocket League (we have cars with rocket engines which we control to play soccer!) and I’ve played a lot against its game AI. The more you increase the difficulty the better the AI plays. From my perspective the AI has to do the same job that I’m doing with my controllers: it has to know/estimate and predict how the information is constantly changing (there can be up to 5-7 human players and 1 AI in the midst). There are limitations such as the speed of the car, the dimensions of the car, the speed of the ball, the dimensions of the ball, the dimensions of the goal post, the amount of boost you have, etc. These limitations essentially mean you should be able to constantly predict how the situation is likely to unfold in the next few seconds and start acting in advance if you want to pose a challenge. I don’t see how the game AI would be able to manage the same feat that we human players manage if it wasn’t essentially doing the same thing as us, i.e. constantly making meaningful predictions about how the information outside the car (and even inside, if you consider boost) is changing. Similar to us, it would have to resolve many complex dilemmas on the go as well.

        Liked by 1 person

        1. I didn’t mean to imply that feelings wouldn’t be complicated to build. They certainly would be, and we don’t have the ability yet to do it. Reflexes are a lot easier. (It could be argued that the device you’re using to read this essentially works through reflexes.) Feelings would require imagination, and we don’t know how to do imagination yet.

          I’m not familiar with the design of Rocket League, but I’m familiar enough with game designs to know they can be very good at providing the illusion of comprehension. When the dynamics are relatively simple (there are lots of tricks to make it look not simple), then it’s possible to brute force responses with a decision tree. Now, for all I know, Rocket League uses some kind of ANN, but it’s possible for the game’s AI player to be effective without it.
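
          For illustration only, a brute-force responder of that sort can be nothing more than a few hand-written rules over game-state features. The feature names and rules below are invented and have nothing to do with how Rocket League is actually implemented.

          ```python
          # Toy decision-tree responder: fixed rules, no learning, no imagination.
          # All feature names and rules are invented for illustration.

          def ai_action(state):
              if state["ball_near_own_goal"]:
                  return "clear_ball"
              if state["has_boost"] and state["ball_reachable"]:
                  return "drive_at_ball"
              if not state["has_boost"]:
                  return "collect_boost"
              return "rotate_back"

          print(ai_action({"ball_near_own_goal": False,
                           "has_boost": True,
                           "ball_reachable": True}))   # drive_at_ball
          ```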

          Perception and navigation of the real world is hard. Imagine how much more effective a Roomba might be if it could do it as well as a cockroach, not to mention a mouse. People who worry about how dangerous AI might be should wait until we have systems with at least the navigational intelligence of a bee.

          Liked by 1 person

          1. Perhaps what I’m failing to grasp is how your ‘imaginative subsystem’, which basically allows or inhibits reflexes based on prediction of likely consequences, is essentially different from the decision tree in AI.

            Liked by 1 person

        2. It depends on how flexible you are with the concept of a “decision tree”. Imagination enables an organism to foresee the consequence of actions and make decisions in light of those consequences. It arguably is what separates a rules based system from a more flexible intelligence. In a compatibilist sense, it’s the free will engine.

          Like

  10. Hey Mike, sorry I’m late to the party, but your fine post here triggered many responses, and I use “triggered” advisedly. I’ve been stewing over how to respond. As you know, and some of your other readers may know, we agree on (most of) the basics of consciousness, and so now you seem to be digging into the fine details. Awesome! I will try to explain how my understanding differs, but I will split it into more than one post.

    But first, my main trigger: prediction. [begin rant] I hate (Hate, HATE) when people say the brain predicts its environment. “Predict” has the wrong connotation. Prediction suggests some kind of reasoning. “Predict” suggests the predicted thing would be surprising to someone who doesn’t have the facts and the reasoning. People predict a solar eclipse. People do not predict the sunrise. People expect the sunrise. When I see a coffee cup on my table, and then turn my head, I do not predict that the coffee cup will still be there when I turn my head back. I expect it will be there. I fully appreciate that there is some brain mechanism which compares my vision of the coffee cup and tries to determine whether it is surprising, but that is not prediction. [end rant]

    *
    [and don’t you dare start saying we hallucinate reality! Anil Seth, as good as he is, has a lot to answer for for that one.]

    Liked by 1 person

    1. Hey James,
      No worries on coming late. Often the best conversations on these posts take time to ferment.

      Wow, I never considered that someone might hate the word “predict”. Though I think the reason many scientists and philosophers use that word is exactly the thing you object to. Words like “expect” are passive. They imply that the brain is just receiving the information and letting images be created. The reason these people prefer “predict” is because it implies a more action-oriented system, a proactive stance. The brain actively builds its perceptions with the sparse and gap-ridden sensory information that it receives.

      To be clear, no one is saying that we’re consciously making those predictions. Instead the proposition is that lower level machinery goes through logic that builds those predictions. From within our consciousness, we do merely receive them. They don’t feel like predictions to us, just as feelings don’t, well, feel like information.

      I think this is an important insight. You might not buy it. But I think it’s worth considering the data that Anil Seth, Andy Clark, and company put forth for that viewpoint. If you haven’t heard Clark discuss it before, I recommend this BrainScience episode: http://brainsciencepodcast.com/bsp/2016/126-andyclark

      Like

      1. Okay, I feel vindicated. About 2 minutes in Dr. Campbell states “when we say that our brain is constantly generating predictions, this is just a fancy way of saying that our expectations color our experience.”

        That’s what I’m saying. And I don’t like using “fancy” language when it introduces incorrect connotations.

        *
        (I pretty much stopped listening right there. Let me know if you think there is more I need to hear.)

        Liked by 1 person

        1. Uh, I do think there is, but totally your call whether you want to invest the time. I’m actually reading Clark’s book right now (lamentably in dead tree format since it’s not available in Kindle format in the US) and might eventually do a post on the prediction paradigm.

          Like

  11. On consciousness: I know you were trying to restrict your thoughts to “feelings”, but given that the prevailing conception of consciousness is to have something which “feels like” something, you are addressing consciousness by default, like it or not.

    I think I agree with Fizan in that, while you are trying to take the mystery out of “feeling”, you fail to explain why a “planner” responding to a reflex produces a “feeling”.

    I think maybe you are actually guilty of making the same mistake as most other philosophers (J’accuse!), namely, looking for an “elan mental” (compare: elan vital) which is “feeling” or qualia.

    I also think you have the right basic idea of starting with something fundamental, like reflexes, and building up a hierarchy by adding constraints, e.g., that the input of the “planning” response be the output of a prior (reflexive) response. But I think you might be missing the possibility that some of the capabilities associated with human consciousness are not necessarily all within a single hierarchy. That is, I think you can have branches. For example, what if you have a system (a person) who can have multiple sensory experiences, and can report them, but can not control a response to them?
    “Look, I see a yummy apple tied to a string. That string seems to be tied to a bucket. Here I go. I’m pretty sure I’m gonna grab that apple. Hope there’s nothing bad in that bucket. That apple was yummy, but I seem to be soaked. I predicted that would happen.”

    Does this reflex driven person have “feelings”? What if the same person doesn’t have memories?
    “This apple is yummy!”
    “Why am I all wet?”

    My point is that there are a lot of things associated with human consciousness. What do they all have in common? Input —> Mechanism —> Output.

    *
    [feelin a little panpsychist today]

    Liked by 1 person

    1. James,
      Don’t know if you saw my points elsewhere in this thread on unconscious feelings, such as the tension you didn’t realize you were feeling until your muscles began to ache, or the person who screams that they’re not upset, or in my case, the constipation I always get when I travel due to some anxiety that I can’t discover no matter how hard I introspect. So, while we can be conscious of our feelings, not all feelings are within consciousness.

      I’ve been honest before that the hierarchy I present is not meant to be a comprehensive statement about all aspects of consciousness or the mind. It’s just a mental crutch I use to keep aspects of this stuff straight. Definitely the reality is far more complex and variegated. Biology isn’t engineering. It’s messy, inconsistent, and utterly opportunistic. The reality is always far more complicated than the crutches.

      If you think I’m positing an “elan mental”, presumably a magic step, I’d be interested to know where you see me introducing it. That there are reflexes seems indisputable. We know we have them on the spinal cord, such as the knee-jerk reflex, the withdrawal reflex, and many others. And we know many mental states we experience, particularly many emotional ones, are involuntary.

      But maybe you’re saying that there’s something between the autonomic reaction (what I’m calling the reflex) and the mental assessment of that reaction? If so, what would you say that is? (Even if from a purely phenomenological perspective.) If the answer is some mysterious ineffable something, then I would ask you to consider how that isn’t actually a reach for an “elan mental”.

      Like

      1. [First, dude, it’s not necessarily travel anxiety. I think it may have to do with familiarity of terrain. Happens to me, but no anxiety.]

        So yes, I did see your other comments about unconscious feelings, and that was going to be a topic of a separate thread, but I’ll just put it here.

        I think you’re missing the idea of composition. A mechanism can be made up of other mechanisms, and can get input from other mechanisms. In the case of “unconscious feelings”, maybe those are just feelings associated with a different mechanism.

        So there is one big mechanism associated with each human, and I think Damasio’s description of the Autobiographical Self is perhaps best. This is the mechanism which most people are thinking about, because it gets input from (most?) all the senses and outputs memories and behaviors, at least. But there are lots of other mechanisms and sub mechanisms, and they’re all doing consciousness-type things, just not on the scale of the autobiographical self. Thus, Minsky talks about the Society of Mind, and Dennett talks about competition for fame in the brain.

        My point here is that many “unconscious”, “subconscious”, or “preconscious” processes are really just conscious processes that happen separate from the consciousness mechanism (autobiographical self) that we’re interested in.

        *
        [more in a separate reply]

        Liked by 1 person

        1. [Hmmm. I would say it’s anxiety over the lack of familiarity with the terrain, but whatever it is, it’s not something I’m conscious of or have ever found a way to control.]

          You make a good point here. There’s nothing magical about the information stream that ends up in consciousness. There are many information streams that don’t. The only thing that separates them is their location.

          An example is the phenomenon of blindsight. A patient can have a damaged or destroyed visual cortex, and is blind as far as the ability to consciously perceive sight. But if someone holds up an object in front of them, they can “guess” at far better accuracy than random chance whether that object is there or not. The theory is that their frontal lobes are receiving input from their superior colliculus since it also receives signals from the retina, but that flow of information is not one of the flows within the scope of introspection, making it “blindsight”.

          Like

      2. Re: elan mental

        I’m trying to understand what you mean by “feeling”. You say a reflex action does not have or does not generate it, but a system that receives a communication about reflexes and creatively chooses which to inhibit does have it. So I see these parts:
        For reflex:
        Input = measurement of environment
        Output = behavior (necessarily movement?)

        For “feeling”:
        Input = communication of reflexes
        Output = creative(?) inhibition of one (more?) reflex

        I’m not sure what makes one a feeling and not the other. Also, are all the words for a “feeling” necessary? What’s the bare minimum needed for a “feeling”?

        I guess what I’m looking for is the bare minimum of what’s required for a feeling, and why we would call that a feeling.

        *

        Liked by 1 person

        1. Ok, here we get to one of the difficulties in describing this stuff. I oversimplified in the post to get the basic point across. Evolutionarily, the reflexes resulted in actions. However, the brain reflexes have been co-evolving with the forebrain for a long time (500 million years), and in many cases the action part has become atrophied or unspecific, becoming more of a preparation for certain classes of actions rather than a program for a specific action.

          What makes one a feeling and not the other? Along the lines of what we both noted above, location. We feel a feeling because when the signal from the reflex arrives, the mental concept forms in the cortex, and that mental concept is within the scope of introspection. The reflexes, in and of themselves, aren’t.

          Why do we have feelings? Because they’re adaptive, useful. How else would we expect the firing of the reflex to be communicated to the reasoning aspect of our brain? Such communication is pre-language. It must happen in a primal fashion. That primal fashion is what we label a feeling.

          Like

          1. So Mike, I’m not sure you’re understanding my point, presumably because I have not sufficiently clarified it, which I’m trying to do now.

            I see two systems:
            1. Input comes from environment, output goes to behavior, more output goes to system 2.
            2. Input comes from system 1, output influences system 1 or other system

            You: if input doesn’t go to system 2, “we/I” don’t feel it so it’s not a “feeling”
            Me: input from the environment is a “feeling” for system 1.
            “we/you” is another name for system 2, the autobiographical self. So only the inputs to system 2 are “feelings” for system 2.

            So, system 1 has “feelings” (at least one, anyway). It just can’t talk about them. It can only react to them.

            Whatcha think?

            *

            Liked by 1 person

        2. “Whatcha think?”

          Ok, I said “location” above, but that answer was incomplete. The real question I think you’re asking is, what’s the difference between the signal received by system 1 and the one received by system 2? For the raw signal itself, the answer is, none, zip, nada. They’re both a cascade of electrochemical reactions.

          However, the way system 2 responds to that signal is different than the way system 1 does. System 1 receives the signal and, more or less, algorithmically responds to it. Its response is relatively simple. (Given that it happens in the midbrain, where the number of neurons is a minuscule fraction of the number in the cortex, this fits.)

          System 2 receives the signal and utilizes it to (don’t hit me) make predictions. In other words, System 2’s response to the input makes it more meaningful. It’s System 2’s interpretation of the signal that makes the feeling. This is why System 2 can become confused, thinking that certain interoceptive signals are the core reflex, such as parole board judges mistaking their feelings of hunger for an intuition about how immoral a particular inmate might be, when that inmate is simply unlucky enough to be evaluated just before lunch. (This actually has been studied and, statistically, just before lunch is a bad time to be evaluated.)

          This is undoubtedly why System 2 is so much slower than System 1, and why System 1, in an emergency, often acts without System 2. (In reality, there are more like three systems: the midbrain and surrounding circuits, which are more hard coded for certain responses; the basal ganglia, which are more modifiable by learning but habitual in the moment; and the frontal lobes, which plan and decide on actions, but are slower as a result.)
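
          As a toy illustration of that difference (everything here is invented; it’s just the shape of a fast fixed response versus a slower interpretive one, not a model of any actual circuit):

          ```python
          # Toy sketch: the same signal goes to a fast, fixed System 1 and to a
          # slower System 2 that interprets it against context before responding.
          # All names and mappings are illustrative only.

          def system1(signal):
              # Fast, fixed mapping: signal in, canned response out.
              return {"looming_shadow": "freeze", "sharp_pain": "withdraw"}.get(signal, "ignore")

          def system2(signal, context):
              # Slower: interpret the signal in light of other information (a "prediction").
              if signal == "looming_shadow" and context.get("probably_a_cloud"):
                  return "carry_on"
              return system1(signal)  # defer to the impulse when no better interpretation exists

          print(system1("looming_shadow"))                               # freeze
          print(system2("looming_shadow", {"probably_a_cloud": True}))   # carry_on
          ```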

          Hope that adequately addresses your question.

          Like

          1. “However, the way system 2 responds to that signal is different than the way system 1 does. System 1 receives the signal and, more or less, algorithmically responds to it.”

            I don’t like this answer because I think system 2 also responds more or less algorithmically. It’s just a more complicated algorithm.

            After some thinking here is my current stance:

            Maybe a “feeling” is the action or event. A system “feels” an input and produces an output. That whole thing is the “feeling”. So the whole of what you call a reflex action is a feeling. It does not produce a feeling, and it does not have a feeling. The event is a feeling.

            So the input to system 2 is not a feeling, which is an event, but it is a reference to a feeling, a reference to an event. What system 2 does with the input is a new feeling, so feeling 2, i.e., a feeling where the input was a reference to feeling 1, the Mechanism was system 2, and the output is whatever.

            I should point out that in some cases the output of system 1, the “reflex”, is nothing other than to produce something that will be used as a reference for the event, which is to say, has no extra action besides communicating with a system 2. So when I see a book on a table there is probably no reflex action which responds to the book, except for the one that communicates to a system 2 (and possibly 3, 4, 5, and 6 in parallel, or series, or some combination).

            Does that make sense?

            *

            Liked by 1 person

          2. BTW, I have (stoically) decided to live with the “prediction” terminology. That horse is just not going back in the barn. When I see “the brain (or the planner) makes predictions”, I simply translate it to “the brain checks expectations”, and all’s well.

            *
            [nevertheless, if you use that phrase in my actual presence, there will be some hitting, on principle]

            Liked by 1 person

        3. Hey James,
          Just realized the end of my last reply sounded antagonistic. It wasn’t meant to. Not sure why I worded it that way. Anyway, I do hope I got to the heart of what you were asking about.

          Like

    2. “Does that make sense?”

      On “algorithmic”, I can see your objection. Unfortunately, I’m running out of words to express the concept. I had already used “reflex” and “programmatic” (which you might have similar issues with). I could say “robotic”, but that is only in terms of modern robots; robots ten years from now might have imagination. The main idea is the response is automatic and unconsidered.

      The problem I see with your view of feeling is that it lets in too many things to club-feeling. If we use it, we have to be prepared to speak of the spinal cord “feeling” the strike against the patellar tendon when it initiates the knee jerk, even for a patient with a severed spinal cord, or for someone in a coma, or even for a recent corpse. And I think we’d have to accept that my laptop is currently “feeling” the keystrokes I’m typing, that a car feels the turn of its steering wheel or when the brake or acceleration pedals are pushed.

      In other words, I think this definition of feeling is too deflated and too broad. If we adopt it, then it seems like we have to come up with a new term for the types of feelings we actually experience as opposed to the ones my laptop has, and the difference remains just as much an issue.

      In my view, a better approach is to ask what are feelings for? Why did they evolve? I think they’re the principal motivating force for our internal imaginarium, our reasoning ability. We have them as input to add (sorry) prediction to our responses, to widen the scope of what the responses are reacting to, to increase their adaptiveness, their efficacy.

      Remove imagination from the mix, and what you have left isn’t a feeling. It’s just a naked impulse. Just as the cell machinery is what makes DNA information, imagination is what makes the impulse a feeling.

      A stark example of this is a prefrontal lobotomy. The procedure is reputed to remove the ability of patients to feel emotionally. The procedure doesn’t remove their ability to reflexively or habitually react to stimuli. But it removes the broader, richer ability to use the reaction in a broader context, to feel in the full meaning of the word.

      Like

      1. “The problem I see with your view of feeling is that it lets in too many things to club-feeling.”

        I’m afraid that opening up club-feeling is what is necessary to take the mystery out of it. Just like understanding that snow and steam are the same thing, and that the cause of photosynthesis and sunburn are the same, we need to understand that the reflex event and the planning event are essentially the same type of thing.

        “If we use it, we have to be prepared to speak of the spinal cord “feeling” the strike against the patellar tendon when it initiates the knee jerk, even for a patient with a severed spinal cord, or for someone in a coma, or even for a recent corpse. And I think we’d have to accept that my laptop is currently “feeling” the keystrokes I’m typing, that a car feels the turn of its steering wheel or when the brake or acceleration pedals are pushed.”

        Consider those bullets bitten. But you need to be careful about the difference between “feeling” something and “having a feeling”. That latter phrase usually means there is a process that has a reference to a previous “feeling” as input. So the computer is probably not recording keystrokes as references to keystrokes, nor are cars remembering turns of the wheel (yet).

        “If we adopt it, then it seems like we have to come up with a new term for the types of feelings we actually experience as opposed to the ones my laptop has, and the difference remains just as much an issue.”

        I believe this is another bullet I’m willing to bite. But someone has already started on the project of naming these different types of feelings: Charles S. Peirce. I’m talking about semiotics. You have already talked about the “communication” to the planner as a signal. Peirce would ask, what kind of signal? It seems to me you are saying symbolic signals are necessary for feelings, whereas indexical signals used by reflexes might not.

        “In my view, a better approach is to ask what are feelings for”

        While understanding what feelings are for is important, I don’t think that is the fundamental distinction that makes something a feeling. Instead of talking about “being for something” I will talk about purpose. I will point out that reflexes also have a purpose. So you seem to be suggesting that the difference between being a feeling and not being a feeling is having purpose type B (planning?) versus having purpose type A (pulling a limb away from something hot). In point of fact, I would be more amenable to saying the important distinction is between having a purpose (like a reflex) and not having a purpose (like the action of billiard balls).

        “Remove imagination from the mix, and what you have left isn’t a feeling”

        Are you saying imagination is needed for feeling the warmth of a fire?

        You obviously like to talk about reflexes, but do you consider reflexes whose sole output is the communication of a signal to the planner to be in this category of reflexes you talk about?

        Finally, I wonder if you are conflating the ability to create concepts with the cognitive ability called imagination. I think they are related but not the same. For example, I think there are different “reflexes” (to use your term) that take pixel data from the retinas and generate references (neural output) referring to things like lines, circles, etc. These are concepts. Subsequent processes can take these inputs and combine them to make further references (concepts), like faces. A lot of this happens automatically, so, without imagination. Imagination comes in when a system (the planner? The autobiographical self?) can take two separate input concepts (say, a face and a sound) and combine them to create a new concept (guy named “Bob”).

        Tag.

        *

        Liked by 1 person

        1. “Are you saying imagination is needed for feeling the warmth of a fire?”
          I am. A reflex can respond to a burning sensation and withdraw the affected body part, but for “feeling the warmth”, you need imagination. Imagination decides whether to move away or towards the fire. Without it, there is no reason for the feeling mechanism and, unless there is some other survival advantage to it, it likely wouldn’t have been selected for.

          “You obviously like to talk about reflexes, but do you consider reflexes whose sole output is the communication of a signal to the planner to be in this category of reflexes you talk about?”
          As I noted earlier, brain reflexes and the executive centers have been co-evolving for a very long time. At the beginning of the Cambrian, all of the reflexes likely had an action component. But since then, whether or not they are adaptive will often come down to what effect they have on the planner. This is probably why a lot of neuroscientists use the term “survival circuit” instead. I just use “reflex” to make a point.

          On visual imagery, I usually use the word “perception” here. But you’re right, these are autonomic processes. We can’t introspect the early sensory processing stages, no matter how hard we try. The circuitry for it just isn’t there. We can influence them to some degree by volitional movement of the eye and choosing to physically focus on certain things, but we can’t “unsee” many visual illusions. We’re also generally conscious of the initial prediction of the meaning of a perception before we are of the details, at least unless the perception is too different from anything in our prior experience. (Which is to say, we consciously perceive the bear-ness of the charging grizzly before we consciously notice details about it.)

          Interestingly, the results of the perceptual processing go to the reflexive circuitry at the same time they go to the planner. In the case of the (limited) perceptual circuits in the midbrain, the reflexes get them first. It’s why the reflexes can often act with little or no guidance from the executive, or can require intense effort from the executive to override.

          Like

          1. Okay, I think I got it. I would translate it thus:

            A process is a feeling if
            1. A necessary part of the input is a signal representing a sensation (reflex based on environmental input), and
            2. The mechanism in question has certain capabilities that we would describe as imagination.
            3. The output serves a purpose.

            Exactly what the necessary capabilities are could be teased out, but we can get there another time. Maybe that could be your next post: The requirements of imagination.
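
            Restated as a toy checklist in code, just to pin the three conditions down (the attribute names are invented and add nothing beyond the list above):

            ```python
            # Toy restatement of the three conditions as a predicate.
            # Attribute names are invented; this is only the checklist in code form.

            def is_feeling(process):
                return (process.get("input_includes_sensation_signal", False)
                        and process.get("mechanism_has_imagination", False)
                        and process.get("output_serves_purpose", False))

            print(is_feeling({"input_includes_sensation_signal": True,
                              "mechanism_has_imagination": True,
                              "output_serves_purpose": True}))   # True
            ```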

            *
            [capabilities of imagination? Limits of imagination? hmmmm]

            Liked by 1 person

          2. Thanks. I’ll take that as a request. But just to rein in expectations, if I could give a full accounting of imagination, I’d likely be keeping a spot on my shelf clear for a Nobel prize.

            Like

  12. It’s still hard for me to understand you, Mike. I read what you and others are saying, but I still don’t understand whether you think that the feelings that are a form of communication coincidentally have qualia attached to them, or whether you consider the qualia, the qualitative experience of having that feeling, to also play a role in the communication. In the latter case you would be a dualist. Which is okay by me.
    In the first case, having qualia attached to the form of communication as you describe it would be quite mysterious, since it’s not necessary, unless you are a dualist. Also, for non-dualists it’s quite weird that feelings that make life worth living are just a side effect of some form of communication in the brain. Greetings and keep up the good work.

    Liked by 1 person

    1. Hey Oscar,
      Good hearing from you.

      I think the qualia of the feelings are generated based on the reception of the communication from the reflex. But the audience for the qualia, the consumption of it in a way where we can discuss it, requires the introspection mechanism. So, there’s a brain region that generates the impulses, another that receives them and associates them with a variety of memories, creating the qualia, and then uses that information to decide which impulses to allow or inhibit. And another that provides second order representations of the qualia, enabling us to discuss them.

      So, no dualism required, at least not the substance variety. There are lots of interacting components, but they’re all physical.

      Liked by 1 person

  13. Mike,
    What makes feelings a mystery is that feelings are the surface appeal of something deeper and richer in texture, form, and content. You’ve adequately explained what feelings are for and why the evolutionary “process” brought about feelings (I hate it when people refer to evolution as a thing instead of a process). And that “why” is to enrich the experience and the novelty of the expression. But fundamentally, we still don’t know what the construct of feelings means other than feelings being a form of information or data, i.e., the mystery. Feelings are an enhanced form of information which empowers homo sapiens with the capacity to understand the “thing” with a precision that other animals do not possess, but this enhanced form of reasoning is still completely bankrupt when it comes to understanding the “thing in itself”….. go figure.

    Eric the philosopher has an interesting model which utilizes value in that construction. Nevertheless, Eric’s model is woefully inadequate and extremely limited in scope because he does not recognize the impact nor the implications of value as an Objective Reality. One must first be compelled to consider hierarchy: Is value an object, or is value a subject? His model is incapable of answering that fundamental question and as a result, places value as a derivative of the brain or his second computer. But at least Eric is open enough to postulate that value may ultimately have “supernatural origins” and that’s a start.

    I agree with Thomas Metzinger when he says that what we need is a marriage of intellectual honesty and spiritualism. Spiritualism is not religion. Religion reflects one’s passion for a belief, not a nominal belief beyond the weight of evidence, but a substantial belief based on personal experience of one degree or another. In contrast, spiritualism is not satisfied with the construct of belief as predicate, but in the unconditional empirical experience of knowing. And that knowing goes far beyond any experience of solipsism.

    Liked by 2 people

    1. Come on Lee, you know that my parents named me “Philosopher Eric”. “Eric the philosopher” would be more like something that I chose. 😉

      Mike has lots of conversations going right now on his site, and I’ve got a few as well. But yes it is Saturday as you know, and I’m sure that Mike would be only too pleased for me to take this one up as well. He’d surely like for his site to become a place where people discuss things intelligently with each other more, rather than generally just with him. Thus he could watch them develop and be proud without requiring quite as much personal effort. And unlike Hariod, what you’ve just given me seems easy!

      I actually do recognize the impact and implications of value as an Objective Reality. That’s exactly what my project is about. If I’m in pain, then this negative value cannot possibly not be an element of reality. Therefore such value must be an objective element of reality itself. But who’s saying it? Some god beyond time and space? I do not consider myself to be a god (or even a Wizard 😉 ), but rather a subject of a presumably larger realm of existence. Thus my understandings of Reality must inherently be subjective approximations from within. I do consider them “useful”, but never assert them as “truth”. Surely you don’t consider my understandings any more True than I consider yours? The only responsible question left, I think, is to consider how useful any given position happens to be. That’s what science is about. It remains entirely in accordance with solipsism.

      I’m pleased that you remembered that I consider the brain to be reality’s second form of computer, behind genetic material, though before consciousness and then the technological computer. But from there Lee, my goodness, have you gone squishy on me? I don’t consider any organized religions to be useful in my own life, but I can’t stand spiritualism! Are you aware that the root of this word happens to be “spirits”? As in the boogie man and such? I’d almost prefer it if you’d meant “hard liquorism”!

    2. Hey Lee,
      On evolution speak, yeah, the problem with talking about evolution is that it’s very easy to fall into a shorthand of talking about it as if it’s a conscious force, or about species as if they’re “innovating” when they mutate adaptations. But for people who understand how natural selection works, it’s just harmless metaphor, and Daniel Dennett does argue that we should feel okay discussing “competence without comprehension”. I think there’s something to that argument. Always avoiding teleological language makes for tedious writing and reading. We just have to do our due diligence and periodically remind the reader that we’re speaking metaphorically in those cases.

      I’m always uneasy with talk of spiritualism. I get that a lot of people use that terminology in a non-supernatural fashion, but I don’t perceive a lot of consistency among those who do, and it’s extremely easy to be misunderstood or misconstrued. As a result, I personally prefer alternate language, but I’m not a fussbudget about it if others discuss spirituality in a secular manner, as long as I don’t perceive them sneaking in the other stuff. Similar to the evolution language above, periodic clarifications are a good thing.

      1. Mike,

        I recognize that some words carry with them the baggage of different meanings for different people. I consider all of the prevailing paradigms associated with our current understanding of spiritualism to be fundamentally religious in nature. My reference to the meaning of spiritualism is a contextual one which, for example, corresponds concisely with the ideology of solipsism, for solipsism is the unconditional empirical experience of knowing only one thing: that I exist. Sorry about that, Eric, but I guess that places you in the spiritualism camp as well……

  14. Greetings Eric,

    Value is an objective reality because value is only one of two “things” that can be empirically verified beyond one’s passion for a belief. And because value is linear in nature and not discrete, it can accommodate and explain causation in all of its novelty of expression. Value comes first in the hierarchy of meaning, even before any articulation of an experience can be formed. This can easily be demonstrated by a man sitting down on a hot stove. That low value experience (not good or bad) of the damaging heat to one’s lower posterior is immanently experienced, stimulating a response of immediate action. It is only after that low value experience is remediated that the articulation of the experience is formed as “bad” within the conscious interval of your third computer.

    Biologists have discovered organic life forms in the depths of the ocean floor located next to hydrothermal vents. The environmental conditions in which life thrives at these extremes would represent “bad” for fragile biological life forms such as ourselves. Human life forms thrive in sunlit, oxygen-enriched environments at relatively mild temperatures and at pressures equivalent to or less than 14.7 psi. Life forms at these depths of the ocean experience zero sunlight, temperatures ranging from 140º F to 867º F, and pressures equal to and exceeding 3205 psi. As an intellectual construct, “good and bad” are discrete, relative terms predicated solely upon our own self-interest of survival. The environmental conditions of pressures exceeding 3205 psi with temperatures in excess of 867º F would represent that immanence of “bad” according to your model. In this illustration, the use of discrete terms such as “good and bad” does not correspond with Value; it suppresses Value and even seeks to exclude it. This use of discrete, binary vocabularies implies that the extreme environmental conditions on the bottom of the sea floor are bad. But they are not bad in themselves; they are conducive for other life forms, therefore they are good. Binary vocabularies suggest that what is good for one life form may represent a bad for another life form. Corresponding to Value, however, it can be properly stated that the extreme pressures and temperatures on the sea floor would be a low value experience for humans, not a good versus bad scenario, demonstrating once more the empirical evidence of its immanent, ubiquitous nature.

  15. Eric,

    “If I’m in pain, then this negative value cannot possibly not be an element of reality. Therefore such value must be an objective element of reality itself. But who’s saying it? Some god beyond time and space?”

    For goodness’ sake, Eric, referencing my last post: it doesn’t take a god, a wizard, or even a genius to recognize the immanence and ubiquitous nature of Value, because Value comes first in hierarchy. Value is not a relation between a subject and an object as your one principle of axiology asserts; Value “is” relationship. As an Objective Reality, Value stands alone at the center of motion and form, including mass, spin and charge, e-motions and feelings. And that Objective Reality existed a long time before the arrival of the phenomenal self model called Philosopher Eric, who now declares “that the only thing that I can prove exists is myself for no other reason than Descartes’ ‘I think, therefore I am’.” Is that all you’ve got? Give me a break! Solipsism is a faith and a belief just like any other religion which asserts anthropocentrism and reinforces the phenomenal self model through a strategy of control.

    To restate my thesis: Religion reflects one’s passion for a belief, not a nominal belief beyond the weight of evidence, but a substantial belief based on personal experience of one degree or another. In contrast, spiritualism is not satisfied with the construct of belief as predicate, but rests in the unconditional empirical experience of knowing. And the reason one can empirically know Value is an Objective Reality is because one experiences Value 24/7. My models make the audacious prediction that there is a genetic defect in the underlying form of reasoning and rationality, and unless or until one is willing to address that defect, nothing will change because nothing can change.

    With that said, I want to thank Mike for tolerating my strange and unique views, and now I will sign off this chatroom because I do not find these types of interactions useful for either myself or others. I wish everyone the best…

    1. I wish you all the best as well Lee. I apologize for my part of this. We’re talking about passionate people with opposing positions, so I guess it does make sense, though we don’t have to like it. I hope you reconsider. If you choose not to come back, you’ll be missed by far more than just me…

    2. Lee,
      I appreciate your insights, and I very much hope you continue visiting and having discussions with us. That said, we’re all here because we find this stuff interesting. I’d ask that you temper your passion to keep things on a friendly basis.

      Again, hope you decide to stay around.

  16. Eric,

    One final post to close this discussion out. Because one should not just take my word for it, here’s a quote from Zen and the Art of Motorcycle Maintenance, pp. 87–88:

    “To speak of certain government and establishment institutions as “the system” is to speak correctly, since these organizations are founded upon the same structural relationships as a motorcycle. They are sustained by structural relationships even when they have lost all other meaning and purpose. People arrive at a factory and perform a totally meaningless task from eight to five without questions because the structure demands that it be that way. There’s no villain, no “mean guy” who wants them to live meaningless lives, it’s just that the structure, the system demands it and no one is willing to take on the formidable task of changing the structure because it is meaningless. But to tear down a factory or to revolt against a government or to avoid repair of a motorcycle because it is a system is to attack the effects rather than the cause; and as long as the attack is upon the effects only, no change is possible. The true system, the real system, is our present construction of systematic thought itself, rationality itself, and if a factory is torn down but the rationality which produced it is left standing, then that rationality will simply produce another factory. If a revolution destroys a systematic government, but the systematic patterns of thought that produced that government are left intact, then those patterns will repeat themselves in the succeeding government.”

    Nothing can change unless one is “willing” to address this: the historical record, including the current affairs of the world, is overwhelmingly in favor of my hypothesis….

    Thanks……

  17. Okay Mike, it’s my turn. In my own models I go very light on engineering details as you know. It’s almost pure architecture. You conversely like to add a good bit of biology into your models. And I do appreciate that. If my models do happen to be as effective as I suspect, then it would be great if a knowledgeable friend would develop a larger interest and so naturally think about some of those missing engineering elements. Obviously this could be someone other than you, but who knows?

    What I’m not so sure about right now is whether your presentation here generally contradicts your conception of my models, generally conforms to it, or lies more in the middle. So maybe this would be an appropriate first question to ask? Furthermore, an answer should serve as a test of how well you currently understand my models. But then it could be that you aren’t sure either. In that case we could go deeper into the nature of our models to check for consistencies and/or divergences. How does that sound?

    1. Eric,
      I should note that I don’t consider myself to have my own models, at least not in any formal sense. I’m just a student, trying to learn as much as I can about this stuff, and these posts are often my distilled understanding from numerous sources. (Technically we all have our own models, but nothing in mine is original research or groundbreaking.)

      On your model, my perception is that you don’t really get into what feelings are; you just have them as the core component of consciousness. In that sense, I wouldn’t think anything here necessarily contradicts that.

      That said, while I agree that feelings are a core component of sentience, what F&M call affect consciousness, I see them as just part of the overall support structure for human consciousness. You’ve seen my hierarchy sketch numerous times now where this gets us to level 4.

      1. “I’m just a student, trying to learn as much as I can about this stuff, and these posts are often my distilled understanding from numerous sources.”

        I’m thankful for that Mike, and in this spirit permit me to offer some adjustments to your post to better square it with my own models. Then you could think about where my suggestions do and don’t seem like improvements.

        You’ve mentioned that there are many terms for “feelings”, which can be problematic since it may be difficult to grasp what a given person means. I hope to get around this issue by getting very specific about what I mean. Before all those terms (qualia, affect, happiness, and so on), I theorize a punishment/ reward element to existence, or sentience. Note that there will always be “something that it’s like” to experience positive to negative value. Furthermore I define such a state to be conscious even without any associated organism functionality. Is this stuff mysterious? Well I can’t think of anything nearly this strange. Any suggestions? Apparently this stuff is all that’s good/bad for anything anywhere.

        On nerve nets I think you should have started back a bit further, or before there was any algorithmic function here whatsoever. Thus you would have started well before the Cambrian Explosion. I suspect that the emergence of central organism computation is what caused it.

        Then yes, a central nerve cord and information organs (such as eyes to measure light and ports to assess chemicals) should have evolved. But I doubt that it’s useful to say there are any “mental images” yet, and likewise for our autonomous cars. Here I see standard algorithmic function given input information. Would I say that such organisms “predict”, as in “food is probably ahead”? Perhaps it can be useful to say this in an everyday sense, but note that these creatures should simply be following provided rules, or the same essential thing that a cheap pocket calculator does.

        To me the thought of a non-conscious dilemma is quite suspect. When we desire conflicting things we obviously have dilemmas. Thus we interpret our inputs and construct various plausible scenarios about how to get as much as we can. But we’re purpose-driven creatures. Non-conscious creatures (including our machines) do not have personal purpose and so shouldn’t want anything at all. Thus instead of “dilemmas” perhaps we could say “programming conflicts”? Regardless, a non-conscious fish that senses both food and a predator at the same location should tend to adopt a holding-pattern set of programming — or instructions from which to maintain safety while taking advantage of any openings from which to safely eat the food. So it’s not clear to me that this issue requires conscious resolution, which is to say, purposeful function.

        I suspect that consciousness evolved because in more open environments there should be too many novel ways of functioning well that simply couldn’t be programmed for in general. Thus I suspect that it produced a relatively tiny parallel conscious computer, and so gained a capacity to use such teleological figuring to help guide its non-conscious function.

        For example, consider how we generally feel better when we’re clean rather than dirty. There are obviously adaptive elements to hygiene. But surely evolution couldn’t program when it’s best for a human to get itself cleaned up, since human circumstances seem too open for useful general rules to work. So instead evolution seems to have contracted such issues out to a purposeful entity. Here it effectively said, “Some things are now going to make you feel bad, while others will make you feel good. Thus you now have personal interests. Therefore it’s your job to figure out when to get clean and so on (since associated rules don’t seem adequate for more open environment organisms).”

        If that’s why consciousness was adaptive then what effectively evolved? I get the sense that your imaginative sub-system functions in series with non-conscious function. Thus this sub-system is informed about triggered reflexes and so runs simulations to see which to activate and suppress. But if so then think about instead going massively parallel. This is to say a parallel computer which functions in an entirely different way. This tiny second computer, which exists as a product of the first, functions on the basis of one thing only — its quest to feel better. Unlike the first, this one functions teleologically.

        I perceive it to have a purely informational form of input (sight and so on), a purely value based input (like pain and such), and one that exists as a conception of past conscious states (or memory). The conscious processor then interprets such inputs and constructs scenarios about how to make it feel better going forward from moment to moment. I’ve always called this processor “thought”, though I suppose that “imagination” also does the trick.

        1. “Before all those terms (qualia, affect, happiness, and so on), I theorize a punishment/ reward element to existence, or sentience. ”
          Eric, what I think you should consider adding to your model is what terms like “punishment” or “reward” mean. What makes them a punishment or a reward and not just a neutral tingle? If what you call the conscious processor wasn’t there, would the punishment feeling still be a punishment, or would the reward feeling still be a reward?

          On going earlier than nerve nets, single celled organisms react to stimuli with adaptive responses. Of course, at this level, there’s no evidence that there is anything other than programmatic responses going on. Although some protists have been shown to “learn” through habituation. https://www.sciencedaily.com/releases/2016/04/160427081533.htm But calling anything going on here “prediction” would, I think, be too generous.

          In some ways, nervous systems were initially a way for complex organisms to get back to the adaptive responses of their single celled constituents. And at first they were just as programmatic. Indeed, I think even the early organisms with just nerve cords for a CNS were also mostly just programmatic. We only get something we might be tempted to call “conscious” with the rise of distance senses and brains to process those senses.

          “Non-conscious creatures (including our machines) do not have personal purpose and so shouldn’t want anything at all. ”
          It depends on what we mean by “purpose” and “want”. What is the difference between the purpose of a bee looking for pollen and a self-driving car attempting to reach its destination? We have a lot in common with the bee’s motivation, much more than with the car’s. But just as the car’s purpose is given to it by its designers and users, the bee’s purpose is given to it by evolution.

          “I suspect that consciousness evolved because in more open environments there should be too many novel ways of functioning well that simply couldn’t be programmed for in general. ”
          That’s pretty much what the fish food/predator example is trying to get at. The question is, what about the open environment requires more than a rule based approach? If it’s not about resolving contradictory impulses, then what aspect of the open environment is causal? Part of this comes from my dissatisfaction with high level abstractions and wanting to get closer to the nuts and bolts.

          On imagination and the tiny computer, you might want to read the next post. I’m not sure whether imagination necessarily maps to your tiny computer. As I note in the post, imagination seems to require participation from the whole brain.

      2. Explain punishment and reward Mike? Excellent suggestion! Let’s try this:

        At this moment there is something that it’s like to be me, and presumably because I take the form of a consciously functioning human. What it’s like to be me is the punishment/ rewards stuff which theoretically motivates my conscious processor to function. So if this “valence” were eliminated for a while, then theoretically I should have no conscious function for that period since there would be no associated motivation to drive the function of this computer.

        (Could punishment/reward exist without a conscious processor? I’ve gone both ways about this over the years. There isn’t a “true” answer, only more and less useful definitions. So what would be the more useful way to go? Shall I define this processor such that its function is required in order for there to be, for example, pain? Lately I’ve decided not. If it’s the non-conscious side that produces the pain for the conscious side to experience, it may not be effective for me to say that a specific conscious mechanism is also required.)

        If instead of a human I were a rock, however, I presume that there wouldn’t be anything that it would be like to be me. The same could be said for any of our machines, regardless of any purposes that we consider them to have in human service. If there’s nothing that it’s like to exist as something, then there’s nothing to drive a conscious form of function as I define the term. Another way to say this is that existence is “insignificant” to them, unlike for you and me right now.

        I’m happy that you’ve mentioned how single celled organisms seem to function algorithmically. Genetic material is of course the first in my “four forms of computer” model, as I mentioned earlier to Stephen. Sounds like we’re good there. But then regarding the second form, or central processors for multicellular organisms beyond genetic material, yes I think a “before” and “after” account would have been helpful.

        So on the autonomous car versus the bee situation, if there is something that it is like to exist as one of them, then it is conscious as I define the term. I suspect “yes” for the bee and “no” for the car. But then regarding a creator, whether humans, evolution, or even gods, I consider that to be an irrelevant issue. I presume you do as well.

        If a simple statement for your “why of consciousness” could be “resolving contradictory impulses”, mine could be “autonomy”. Let’s try this approach:

        Environments which are more closed, such as computer games, obviously provide effective programming environments. Furthermore I consider plants to exist under relatively closed environments. Notice that they get by with the genetic form of computer exclusively — no brain.

        Now imagine programming an organism to survive under the extremely open environments of a non-conscious humanoid. How and when shall you program it to take care of hygiene issues and countless things more? How might even evolution program in sufficiently effective parameters here given an endless number of issues to sort out? It’s not just that one rule should get in the way of another, but also that adding fixed instructions under open environments should be limiting. I suspect that a good bit of these instructions wouldn’t be flexible enough. So as environments opened up, the addition of the conscious form of computer should have worked better. Here evolution wouldn’t need to provide specific programming in countless ways, but rather punishments and rewards from which to force conscious entities to figure certain things out for themselves, such as when to get clean. Then the massive non-conscious computer could be based around helping it, given such decisions. Here an organism should gain some autonomy.

        As far as getting you closer to the nuts and bolts, good architecture should be critical there. I may be able to help. And what I offer is certainly not high level abstractions. I provide practical explanations regarding our nature to check against observation.

        On my “tiny conscious computer” not quite relating to your conception of imagination since imagination involves the entire brain, no, we haven’t taken diverging paths here. Note that I define this computer to exist as an output of the brain. Thus for all I know, what facilitates consciousness could indeed concern the entire brain. It’s interesting to me that you have an old post called “Consciousness is composed of non-consciousness”, and yet struggle with what I mean by this. I mean that a vast supercomputer creates human consciousness, and even requires a good bit of non-conscious resources to create, though the conscious computer itself is not that computer as I define it. Consciousness as I define it is more like the words that I think, the itches that I feel, the hopes that I have, and so on. My diagram presents an information input, a motivation input, and a memory input. This is the tiny parallel computer that does less than one thousandth of a percent as much processing as the vast supercomputer that creates it. Conversely, the massive computer which facilitates it isn’t conscious.

        Of course I did read your newest post when it came out and have monitored the discussion. Good stuff! I’ll be along when I get the time.

        1. Eric,
          I appreciate you taking a shot at punishment and rewards, but it seems to me that you clarified what you see as experiencing the punishments and rewards: persons and bees, but not rocks or cars. But what would you say punishment is?

          “Shall I define this processor such that its function is required in order for there to be, for example, pain? Lately I’ve decided not.”
          I’m surprised to see this given your previous skepticism about non-conscious pain. And it seems inconsistent with your thesis that what makes a system conscious is punishment and reward. So you’re saying that there can be pain without consciousness but no consciousness without pain? If so, then you seem to be saying that there is something more to consciousness than punishment and reward? Or am I utterly misunderstanding your position?

          “How might even evolution program in sufficiently effective parameters here given an endless number of issues to sort out? ”
          The problem, as I see it, is that imagination is a complex energetically expensive capability. It couldn’t have evolved fully formed. The question is, what was its nascent form and how was it adaptive? If your explanation for it depends on its full form (i.e. being able to figure out that hygiene enhances survival) then it doesn’t seem like an evolutionary explanation. That’s what the contradictory reflex brings to the table. I’m totally open to alternate hypotheses, but only if they have equal or more explanatory power.

          “It’s interesting to me that you have an old post called “Consciousness is composed of non-consciousness”, and yet struggle with what I mean by this.”
          I think I understand what you mean by it. The problem is that I don’t think you’re saying enough when you say it. The reason I keep asking questions is to have you think about what I see as undefined or underdefined aspects of your model. (Or set me straight if you have those aspects more defined than I understand them to be.)

          I think I’ve mentioned before that part of the issue here is that we have different goals. You seem to be advocating for a science of value. I’m interested in how minds work, in gaining insights into how one might be built, and whether it’s possible to copy one. Your goal may only require architectural level contemplation. (Although I think you’d be well served to dive down anyway, since the devil’s in the details.) Mine requires getting into the nuts and bolts. For my purposes, words like “punishment”, “reward”, or phrases such as “like something to be a” are simply too high level and ambiguous. I have to dissect and reduce these terms to their more primal constituents or I’m not making progress.

          No rush on the other post. I mentioned it because it seemed germane to the point you were making.

      3. Mike,
        I’ll modify your initial question into my own speak. So then what would I say is a useful definition for the “punishment” term? (There are no true definitions for this one or any other, as suggested by the problematic “is”.) Furthermore let me say that this all comes down to metaphor. (Actually I suspect that all conscious understandings come down to metaphor.) So yes, in the end I’m talking about stuff that I’ve experienced that I refer to as “punishment”, and so have built this understanding through countless associated experiences. I know quite well that you’ve also experienced this sort of thing. Thus here I depend upon your personal punishment associations to understand what I’m talking about.

        As for the next issue, yes, apparently you’re misunderstanding my position. Per my single principle of axiology, I theorize that it’s possible for a computer that is not conscious to produce a punishment/reward dynamic for something other than itself to experience. I define the separate thing that does the experiencing as “conscious”. Furthermore, functional consciousness harbors “thought” (which I believe you call “imagination”). Well, I could say that this conscious processing is required for pain to exist. In the past I actually have. But more recently I’ve decided that the contrary position is more useful.

        Notice that with my former convention, the entity that interprets pain (“thought”) is also being defined as something needed for it to be experienced! And while the thought processor is defined as “conscious”, the computer that produces punishment/reward for something else to experience is defined as “not conscious”. So this is inconsistent. Thus I now don’t think it’s useful to say that the conscious processor is required for pain to exist.

        Perhaps we’re square here however since my position is the exact opposite of what you thought it was, or “So you’re saying that there can be pain without consciousness but no consciousness without pain?” As I define it painful existence must inherently be conscious existence, and I certainly don’t restrict conscious existence to painful existence exclusively.

        Here’s how I theorize the evolutionary beginnings to consciousness:

        Central organism processors continued evolving in progressively more advanced ways. And contra F&M, distance senses should have been an important development for some of these creatures. (If our idiot machines can have things like sharp light-sensing tools for algorithmic processing, then why oh why couldn’t evolution have built such function biologically? Surely I’m not the only person to identify this inconsistency in their proposal?)

        Like our robots, these robots should have had difficulties dealing with novel circumstances. Given diverse demands evolution shouldn’t have been able to program them sufficiently to deal with more open environments. But actually I shouldn’t even mention this yet since I theorize that these life forms, armed with central organism processors, were dynamic enough to incite nothing less than the Cambrian Explosion! So yes, I theorize full robotic predation before the rise of consciousness.

        Now to actually get into consciousness, I theorize that at some point some of these non-conscious machines started producing “value” for an associated conscious entity to experience. Initially this addition should have been entirely functionless for a given creature. And perhaps such consciousness came and died out thousands of times. But apparently at some point this dynamic was also put in charge of deciding something. So here instead of standard algorithms there was this other sort of computation at work. The theme should have been something like “Stop because that hurts!” Or “Do more because that feels good.” (Obviously these theme statements are just for my own descriptive purposes.)

        So here we finally have nascent functional consciousness. Success would have led to additional value-based decisions being added. My theory is that because this teleological form of function was itself open, species in more open environments ended up doing somewhat better by evolving in this direction and so replacing entirely non-conscious organisms that couldn’t be sufficiently programmed in more open environments.

        If I had the time I’d love to get into those nuts and bolts which interest you so much. Perhaps someday I will. In the meantime, however, I have you for the questions and concerns I have about that sort of thing. And indeed, why waste good engineering on bad architecture? That’s my perception of how things are today in general on the nuts and bolts side. The architecture side that interests me most, however (involving things like psychology, sociology, and philosophy), is as I see it in horrible shape. I’d like to help straighten those fields out.

        1. Eric,
          These two sentences seem like they contradict each other:
          “Thus I now don’t think it’s useful to say that the conscious processor is required for pain to exist.”
          “As I define it painful existence must inherently be conscious existence, and I certainly don’t restrict conscious existence to painful existence exclusively.”

          If the conscious processor is not required for pain to exist, then how can painful existence inherently be conscious existence?

          ” And contra F&M, distance senses should have been an important development for some of these creatures.”
          I think part of the disagreement here is that F&M consider exteroception to be a type of consciousness. (They call it “exteroceptive consciousness”.) I know you consider what they call “affect consciousness”, in essence sentience, to be the only criterion for consciousness. In my mind, this is a definitional issue. But as I’ve pointed out before, if exteroception counts as consciousness, then self-driving cars are arguably conscious.

          Exteroception evolved early in the Cambrian explosion and was probably central to the rise of effective predation. F&M cite genetic analysis, looking at when key brain structures evolved, to argue that affect consciousness also evolved early, although a lot of their analysis is based on examining extant species as stand-ins for ancient ones that can’t be examined.

          So in my view, it is possible that creatures with exteroception but without imagination existed for a time. They actually might exist today among arthropods. The more I read about ant behavior, the more robotic they seem, creatures that have an awareness of the outside world but whose behavior seems utterly stimulus driven.

      4. “If the conscious processor is not required for pain to exist, then how can painful existence inherently be conscious existence?”

        I think I now understand the contradiction that you perceive here Mike. Another such question might be, “If I define pain to inherently be conscious, and yet theorize a conscious processor that needn’t exist for something to be in pain, then why call pain ‘conscious’ at all? Why have a conscious processor that needn’t be involved in the process of ‘consciousness’?” Yes, this does seem strange when asked this way. Well, let me get into those details. This has been contentious for me as well, though I think I’ve got it relatively sorted. Your questioning has helped.

        From the beginning I distinguish valuable existence (as I personally/metaphorically know +/- value) from personally inconsequential existence, and then define the value side as “conscious” and the other as “non-conscious”. Furthermore, my theory is that value exists as an output of a non-conscious computer that produces this stuff for something else (the conscious entity) to experience. So there must inherently be computer processing here initially, though it’s actually non-conscious processing rather than conscious processing. I consider the conscious processor (whether called “thought” or “imagination”) to have a different function. It doesn’t create conscious inputs, but rather (1) interprets them and (2) constructs scenarios about what to do to promote value-based interests. It’s instead the central organism computer that creates the parallel teleological computer. Furthermore, notice that this is consistent with my presented evolution scenario, since non-functional consciousness should have been a precursor to the functional consciousness which actually uses value to decide things by means of thought/imagination.

        I don’t actually mind when people like F&M define varieties of consciousness that are broader than my own. Given my EP1 I’d otherwise be a hypocrite. What I do mind, however, is when they don’t acknowledge what their definitions imply as fully as they could. Yes, a self-driving car is exteroceptively conscious as they define it. But what else? Notice that a cheap pocket calculator accepts inputs from the outside world, and then processes them algorithmically for output function. So this machine is exteroceptively conscious as they define it as well. Indeed, what functional computer is not? Thus I don’t consider their definition useful. And the funny thing here is that F&M propose that they’ve solved “the hard problem of consciousness”. Well, there isn’t anything “hard” about building something that has exteroception.

        You don’t need to wonder if “creatures with exteroception but without imagination existed for a time”. It’s virtually certain. And given the diversity of life on our planet I think it’s virtually certain that they exist today. But unfortunately science and philosophy have tremendous problems in this regard today. Can it hurt to be an ant? I’d have to be one to truly know, but I think maybe so.

        I might have mentioned this here before, but one Christmas season a local Sharper Image store had an amazing ant farm display. Here the ants could be seen working through an illuminated transparent blue substance, and these creatures had built an amazing system of tunnels and chambers. So I tried one out myself, though it turned out that my ants wouldn’t robotically dig into this weird material, even though I put in the suggested starter holes. It was like they were hopeless because they found themselves in a sealed container that they couldn’t escape from. I suspect that the ants at the store had somehow been trained to understand that this blue stuff can be dug into for tunnels.

        So the question now to ask is, did the people who developed this product train “robotic” ants for their display models (or ants without a value based parallel computer)? Well possibly, but I’d think that such ants would either dig or not dig regardless of human incitement.

        Instead I suspect that it feels good to ants to function as normal, though this stuff was just too strange for them to naturally fathom digging into. So perhaps the company did enough dinking around with ants to get one to dig into this stuff, and then with this example others began to help since remaining idle should not feel good for them. Thus the company could now farm ants that would function normally in this stuff, and so it was able to send display models off to stores across the country. But of course they couldn’t afford to send consciously trained ants to people like me once countless Christmas orders came in. Yes it was a one season product. This does advance my suspicion that modern ants can feel good/ bad however. Of course I’d like more evidence on the matter as well.

        1. Eric,
          On the contradiction, this may be a definitional issue. I see what you call the value signal as simply the output of the reflex. But it’s not a feeling state until the imagination system interprets it. You are more willing than I am to use words like “pain” to refer to the pre-interpreted signal, but we might have similar ideas on this.

          On F&M’s exteroceptive consciousness, I think there’s a misunderstanding here and it’s my fault. I’ve been using “exteroception” and “exteroceptive consciousness” synonymously, but just now reviewing definitions of exteroception, it’s obvious to me that that was a mistake. Sorry, my bad.

          So, to do justice to F&M’s concept of exteroceptive consciousness, it must include distance senses and image maps: representations and predictive models of the environment. A calculator doesn’t have that, although a self-driving car arguably does.

          Given this clarified definition, do you still see creatures as having exteroceptive consciousness without affect consciousness? You seem to see ants, with their 250,000 neurons, as having affect consciousness. What about crustaceans (lobsters, crabs, etc.) with their 100,000 neurons? Or Eric Schwitzgebel’s garden snail with its 60,000 neurons, which doesn’t have any recognizable visual exteroceptive consciousness but may have olfactory maps of the environment? Or sea slugs, pond snails, or medicinal leeches, none of which have discernible exteroceptive consciousness?

          Personally, I’m not sure any of the species just mentioned have affect consciousness, that is, feelings. But as I just responded to someone else on another thread, a scientific study seems to show that fruit flies, which also only have 250,000 neurons, do seem to possess glimmers of imagination, which I would think would give them at least proto-affects. https://www.nytimes.com/2014/05/23/science/even-fruit-flies-need-a-moment-to-think-it-over.html

          This makes me open to the possibility for ants, although their behavior leaves me unconvinced. What might get in the way for them is that they don’t appear to have a unified awareness, but a fragmented and siloed one: https://www.scientificamerican.com/article/weve-been-looking-at-ant-intelligence-the-wrong-way/

      5. Yes Mike, regarding experienced value this may just be a matter of definition between us. You can mandate the conscious instrument of imagination in order for value to exist if you like, as I once did, though I’ve found it useful to tighten up my own definition for conscious processing. Here I leave the fabrication of value to the non-conscious computer exclusively.

        On F&M’s “exteroceptive consciousness”, I still don’t consider this to be a useful definition. Self driving cars and robots in general use exterior information to do what they do. My phone can use distance senses and construct an image map of my voice, or a predictive model of its environment in that regard to produce effective output function. As I see it there’s merely a difference of “quantity” rather than “kind” preventing the exteroception of a pocket calculator from qualifying as an environmental image mapper. No such quantitative judgement calls exist regarding my own valence based consciousness definition — or “kind” all the way!

        Or consider this. If you were to lose your senses of sight, hearing, smell, taste, touch, and even receptors such as pain regarding what’s happening around you, and so lose all potential for exterior image mapping, do you believe that you’d thus become an autonomic robot that merely pumps blood, grows hair, and things like that? Couldn’t there still be a “Mike” inside that thinks? Or that remembers? Or yes that feels? I’d think you’d be in there thinking “WTF!” So I suspect that this “exteroceptive consciousness” business mainly serves as a distraction in a field that’s unfortunately full of distractions.

        You ask (though in my own speak) whether I consider it probable that some creatures today have central organism processors without a parallel value-based computer for augmentation. Well, given the diversity of life today I would certainly expect so! Surely some or many lack affect consciousness. I do have suspicions that ants harbor some level of value, though that’s anecdotal as mentioned. But regardless, according to the consciousness model which I’ve developed, they should mostly function as non-conscious robots. So pointing out a non-unified awareness doesn’t sway me. I instead ponder whether or not they harbor any personal value dynamic whatsoever.

        The fly study, where imagination is inferred because flies take some extra time to leave a merely somewhat bad environment, seems like an amazing stretch to me. Even if the flies are entirely robotic, there are all sorts of potentially productive reasons not to leave such a place immediately.

        One thing that I’d like scientists to get into is signs of suffering when something becomes highly damaged. We all know about half-smashed bugs. They writhe around as if they’re suffering extreme pain. But is there reason to believe that their engineering as non-conscious robots is set up to produce that sort of movement when thusly damaged? It’s a nice thought, though I certainly don’t notice our machines doing that sort of thing in general when highly damaged. But then again our machines are millions of times more primitive, so it may be that this is what advanced robots would do if we were able to build such things? Or perhaps half-smashed bugs commonly do display the suffering that they seem to.

        One thing I’m pretty sure about however is that without effective epistemology, science will not get far in this regard. I did read Eric Schwitzgebel’s garden snail post again, and was newly struck by how it was all about what consciousness “is”. Our soft sciences simply should not succeed given this and other failures in the field of philosophy. Science will need a respected group of professionals that have various agreed upon principles of metaphysics, epistemology, and axiology, which is to say a founding premise from which our softest sciences should finally be able to effectively build.

        1. Eric, I think you’re being hasty in your dismissal of F&M’s exteroceptive consciousness. (I actually prefer the word “perception” for this capability, simply because introducing the word “consciousness” inherently seems to make it contentious.) A calculator doesn’t take in information and build predictions about its environment. Of course, it does utilize predictions, but only ones already made by the programmer(s). A state of the art phone might do some limited prediction, but it’s minuscule compared to a fruit fly, much less a bee or mouse.

          “No such quantitative judgement calls exist regarding my own valence based consciousness definition — or “kind” all the way!”
          Forgive me, but I fear you can only say that because you insist on staying at a high level and avoid getting into the messy details. Not that you’re at all unique in this. Many people use definitions like “something it is like” and simply stop, feeling like they’ve adequately defined consciousness for their purposes. But for my purposes, it doesn’t provide useful insight.

          And I could argue that a phone has valence. It will react in certain ways if its power levels get too low. Or refuse to cooperate unless it receives proper authentication. My phone lately has been complaining that it can’t back up to iCloud. All of these are reflexive preferences. What separates its “desire” for charging or backup, from an animal’s desire to find food? In trying to answer that question, we have to be careful to avoid not just an anthropomorphic bias, but an overall bio-morphic one as well.

          “Couldn’t there still be a “Mike” inside that thinks? Or that remembers? Or yes that feels?”
          It depends on exactly what has been lost. If all the sensory connections to my brain were severed, but the brain itself was intact, then yes, I would still be in there, albeit in a highly distressed locked-in state. I would still have mental imagery, just not the ability to refine imagery on new sensory data.

          An interesting question to ask is, what would happen if a developing fetus had all of its sensory connections severed, but was somehow still kept alive and allowed to develop? Genetically its brain should still have all the capabilities. But how much of a mind would actually be there? I can’t answer that question, except to observe that if it were conscious in any meaningful sense, it would be a desolate and impoverished existence.

          But getting back to my brain, if it actually was not intact, if the perception centers of my brain were all destroyed, such that I did in fact lose the ability to form perceptual images of any kind, then it becomes a matter of definition whether it’s still me. I would argue that it would be, at best, a fragment of me, possibly a non-functional one. And if my brain were decerebrated so that the brain stem was no longer connected to the cerebrum, I arguably would no longer have affective feelings in any meaningful sense.

          On flies, ants, and garden snails, all of this gets into the classic problem of other minds. What behavior can we look for to indicate a mental capability? For other species, all we have is observed behavior, and sometimes brain scans. (I recall reading somewhere that a lab was planning to do brain imaging on flies, and wondering how that could work.) Whether these creatures have affects is an interesting question, but I think we can agree that if they do, it’s a far more primitive variety than what mammals, much less humans, possess.

  18. I think part of the problem in the general understanding of feelings is that the brain senses feelings, but it doesn’t sense its sensing. While you can see your finger touch something and so experience it twice, feelings are only felt once and without any perspective. Without perspective, feelings become this sort of thing that exists (people will adamantly say they have felt pain) but also doesn’t exist.

    Much like tragedy + time = comedy, feeling – perspective = a sense of feeling being real, yet unexplainable in physical terms.

    1. Interestingly, I think feelings are a model that forms in the cortex. The cause is usually the reflexes, what many scientists call survival circuits. But sometimes the model is a mistake, forming solely based on interoceptive signals such as an upset stomach. If you don’t realize that stark feeling in your abdomen is just a straight physical thing, it’s easy to conclude it’s your primal reaction to whatever social situation you happen to be in.

      1. Misreading oneself seems to be possible – it seems reading oneself is actually a theory-of-mind skill. Possibly, in evolutionary terms, reading others’ behavior was a useful skill to develop…then eventually that skill turned on the brain that engaged in the skill itself. And then you get a kind of consciousness.
