Layers of consciousness, September 2019 edition

A couple of years ago, when writing about panpsychism, I introduced a five-layer conception of consciousness.  The idea back then was to show a couple of things.

One was that very simple conceptions of consciousness, such as mere interaction with the environment, leave out a lot of capabilities that we intuitively think of as belonging to a conscious entity.

But the other was to show how gradual the emergence of all these capabilities was.  There isn’t a sharp objective line between conscious and non-conscious systems, just degrees of capabilities.  For this reason, it’s somewhat meaningless to ask whether species X is conscious, as though consciousness is something it either possesses or doesn’t.  That’s inherently dualistic thinking, essentially asking whether or not the system in question has a soul, a ghost in the machine.

I’ve always stipulated that this hierarchy isn’t itself any new theory of consciousness.  It’s actually meant to be theory-agnostic, at least to a degree.  (It is inherently monistic.)  It allows me to keep things straight, and can serve as a kind of pedagogical tool for getting ideas across.  And I’ve always noted that it might change as my own understanding improves.

Well, after reading Joseph LeDoux’s account of the evolution of the mind (although I disagree with him on a number of important points), going through a lot of papers in the last year, and having many of the conversations we’ve had here, it’s become clear that my hierarchy has changed.

Here’s the new version:

  1. Reflexes and fixed action patterns.  Automatic reactions to sensory stimuli and automatic actions from innate impulses.  In biology, these are survival circuits, which can be subject to local classical conditioning.
  2. Perception.  Predictive models built from distance senses such as vision, hearing, and smell.  This expands the scope of what the reflexes are reacting to.  It also includes bottom-up attention, the meta-reflexive prioritization of what the reflexes react to.
  3. Instrumental behavior / sentience.  The ability to remember past cause-and-effect interactions and make goal-driven decisions based on them.  It is here that reflexes start to become affects, dispositions to act rather than automatic actions.  Top-down attention begins here.
  4. Deliberation.  Imagination.  The ability to engage in hypothetical sensory-action scenario simulations to solve novel situations.
  5. Introspection.  Sophisticated hierarchical and recursive metacognition, enabling mental-self awareness and symbolic thought, and dramatically enhancing 3 and 4.
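
For anyone who finds a concrete representation helpful, here’s a minimal sketch of the hierarchy as a plain data structure.  It’s purely illustrative (Python, with capability strings that are just compressed labels for the descriptions above), and the cumulative reading it encodes, where crediting a layer implies the layers below it, is simply how I use the hierarchy, not a formal model.

```python
# Illustrative only: the five layers as plain data, not a theory or an implementation.
LAYERS = [
    (1, "Reflexes and fixed action patterns",
        "automatic reactions to stimuli; survival circuits; local classical conditioning"),
    (2, "Perception",
        "predictive models from distance senses; bottom-up attention"),
    (3, "Instrumental behavior / sentience",
        "remembered cause and effect; goal-driven decisions; affects; top-down attention"),
    (4, "Deliberation",
        "imagined sensory-action scenario simulations for novel situations"),
    (5, "Introspection",
        "recursive metacognition; mental-self awareness; symbolic thought"),
]

def layers_up_to(level):
    """Return every layer at or below the given level: crediting a system with
    layer N is read as implying the capabilities of the layers beneath it."""
    return [layer for layer in LAYERS if layer[0] <= level]

# Example: a system credited with layer 3 is assumed to also have layers 1 and 2.
for number, name, capabilities in layers_up_to(3):
    print(number, name, "-", capabilities)
```

The only thing the sketch makes explicit is that cumulative reading, which is also how the species attributions later in the post are meant.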

Note that attention has been demoted from a layer in and of itself to an aspect of the other layers.  It rises through them, increasing in sophistication as it does, from early bottom-up meta-reflexes to deliberative and introspective top-down control of focus.

Note also that I’ve stopped calling the fifth layer “metacognition”.  The reason is a growing sense that primal metacognition may not be as rare as I thought when I formulated the original hierarchy, although the particularly sophisticated variety used for introspection likely remains unique to humans.

Some of you who were bothered by sentience being so high in the hierarchy might be happy to see it move down a notch.  LeDoux convinced me that what I was lumping together under “Imagination” probably needed to be broken up into at least a couple of layers, and I think sentience, affective feelings, starts with the lower one, although it increases in sophistication in the higher layers.

I noted that mental-self awareness is in layer 5.  I don’t specify where body-self awareness begins in this hierarchy, because I’m not sure where to put it.  I think with layer 2, the system has to have a body-self representation in relation to the environment, so it’s tempting to put it there, but putting the word “awareness” at that layer feels misleading.  (I’m open to suggestions.)

It seems clear that all life, including plants and unicellular organisms, has 1, reflexes.

All vertebrates, arthropods, and cephalopods have 2, perception.  It’s possible some of these have a simple version of 3, instrumental behavior.  (Cephalopods in particular might have 4.)

All mammals and birds have 3.

Who has 4, deliberation, is an interesting question; LeDoux asserts only primates, but I wouldn’t be surprised if elephants, dolphins, crows, and some other species traditionally thought to be high in intelligence show signs of it.  And again, possibly cephalopods.

And only humans seem to have 5.

In terms of neural correlates, 1 seems to be in the midbrain and subcortical forebrain regions.  2 is in those regions as well as cortical ones.  LeDoux identifies 3 as being subcortical forebrain, although I suspect he’s downplaying the cortex here.  4 seems mostly a prefrontal phenomenon, and 5 seems to exist at the very anterior (front) of the prefrontal cortex.

Where in the hierarchy does consciousness begin?  For primary consciousness, my intuition is layer 3.  But the subjective experience we all have as humans requires all five layers.  In the end, there’s no fact of the matter.  It’s a matter of philosophy.  Consciousness lies in the eye of the beholder.

Unless of course, I’m missing something?  What do you think?  Is this hierarchy useful?  Or is it just muddying the picture?  Would a different breakdown work better?

77 thoughts on “Layers of consciousness, September 2019 edition”

  1. Re “I don’t specify where body-self awareness begins in this hierarchy, because I’m not sure where to put it.”  This is grounded, I would think, in proprioception, the unconscious awareness of where our bodies are in space, which seems to come from a complex feedback mechanism. An example is that when we receive a Novocaine injection from a dentist, we often seem to lose the ability to talk. Our numb lips and gums do not provide the feedback our brains need to allow for speech.
    I suspect that many of our conscious abilities have roots in our physical abilities. My question is why we would become conscious of these sub- or unconscious abilities. There must be some evolutionary advantage.

    1. On proprioception, I think that’s definitely part of it. And that seems like part of layer 2. The issue is I don’t want to use the word “aware” there. Maybe the right way to think of this is as a representation at layer 2 that the system becomes aware of in layer 3.

      On being rooted in our physical abilities, I think that’s an insightful observation. I’m struck by the fact that complex physical abilities seem linked to intelligence: primates, elephants (trunk), cephalopods, etc. Apparently, complex physicality requires a lot more intelligence.

    2. “I suspect that many of our conscious abilities have roots in our physical abilities.”

      I agree about that being insightful. It reminds me of some observations from this article:

      https://aeon.co/essays/how-cognitive-maps-help-animals-navigate-the-world

      It seems to me that cognitive maps probably began with maps of the internal body and basic things like orientation. But the method of organization has been extended to all sorts of other things, like how we organize events in time, social structure, and even the organization of complex ideas.

      1. The fact that the hippocampus and surrounding regions have turned out to be both a memory system and a map, I think, shows why memory originally evolved, and how grounded it is in its original functionality. There’s a reason that one of the ancient techniques for memorizing a speech involved associating various parts of it with parts of your house. (“In the first place…, in the second place…”, etc.)

        I know when I think about various web sites and blogs, I hold them in a kind of memory map. I think of my blog, yours, and others as existing in a physical landscape, sometimes related to the physical geography of the author (if I know it), but not always.

  2. “In the end, there’s no fact of the matter. It’s a matter of philosophy. Consciousness lies in the eye of the beholder.”

    Indeed, and I’ve never been a fan of pigeon-holing — exceptions and fuzzy boundaries so often make it hard to lock down categories categorically.

    Who knows what kind of inner life an octopus or crow might have!

    1. Good question. Level 3 would be my guess too. At a minimum under HOT, I would think there would be HORs for the reflexive or habitual action plans (affective feelings), sensory images, and the bodily self. At this level, I don’t know that they would necessarily be in the cortex.

  3. “It’s a matter of philosophy.”

    Reminds me of a quote from Robert Pirsig:

    “Philosophologists, calling themselves philosophers, are just about all there are. You can imagine the ridiculousness of an art historian taking his students to museums, having them write a thesis on some historical or technical aspect of what they see there, and after a few years of this giving them degrees that say they are accomplished artists. They’ve never held a brush or a mallet and chisel in their hands. All they know is art history.

    Yet, ridiculous as it sounds, this is exactly what happens in the philosophology that calls itself philosophy. Students aren’t expected to philosophize. Their instructors would hardly know what to say if they did. They’d probably compare the student’s writing to Mill or Kant or somebody like that, find the student’s work grossly inferior, and tell him to abandon it. As a student Phædrus [Pirsig’s alter ego in the book] had been warned that he would ‘come a cropper’ if he got too attached to any philosophical ideas of his own.

    Literature, musicology, art history, and philosophology thrive in academic institutions because they are easy to teach. You just Xerox something some philosopher has said and make the students discuss it, make them memorize it, and then flunk them at the end of the quarter if they forget it.”

    1. I’ve actually never taken a philosophy class. I might have, but I think I sensed that it might be like you describe, studying the writing of historical figures, with little actual new reasoning. It would be like making science students read Copernicus, Galileo, Newton, or Darwin directly, instead of just getting the modern synthesis of their ideas and trying to work with them.

      One of the nice things about being an amateur is that I can decide how much of a deep dive I want to do in any area, and stop once I get bored.

      1. The biggest reason philosophy has such a bad rap is the practice of philosophology. According to Whitehead, philosophy in general is nothing more than a footnote to the work Plato and Aristotle achieved.

        Philosophy and metaphysics just happen to be my thing; I’m genetically wired for it. Robert Pirsig was a major influence on my pursuit of the discipline because I could relate to his own burden of mental illness, having been afflicted by the scourge of depression myself. Mutations occur from time to time in the evolutionary process, all of which leads to the next evolutionary ontological level of experience. Once that change occurs, one can never go back. Pirsig saw that process as the natural progression of static quality as it continually discovers dynamic quality. Personally, I consider that process the answer to David Chalmers’ compelling question of “why conscious experience at all?” Everything here is a condition, and that condition is a possibility. That statement is not untenable; the science of physics clearly demonstrates its efficacy.

        If this is all a game of a superior intelligence, surely that intelligence could build an algorithm to achieve the same end game without sentience. Consciousness was an adjunct to my original research, but from what I’ve discovered, sentience is at the heart and soul of materialism. Homo sapiens have a real problem with that concept, but really it’s nothing more than a culturally instilled bias. What if René Descartes had uttered the words “I am, therefore sentience” instead of “I think, therefore I am”? “I am” would be the predicate to corroborate sentience as a fundamentally universal paradigm. The science of physics would look entirely different than it does today. Panpsychism doesn’t take away from science; what panpsychism does is enhance the understanding of ourselves in the grander scheme of a magnificent work of art.

  4. Mike, your level 1 looks like teleology to me (well, teleonomy), but in a previous thread you said you don’t buy teleology. What do you mean by that? Are you saying there’s no such thing, or whatever it is it’s unrelated, or what?

    *

    1. James,
      I don’t buy teleology because I can’t see that it’s productive to ascribe purpose to natural systems. Evolved attributes can be adaptive, or not. If they are, they will enhance the persistence of the system in question, in other words, be naturally selected. This might give the appearance of purpose in a teleonomic sense. But a trait that is adaptive today might be maladaptive if the environment changes, and vice versa.

      Of course, we always use a shorthand when discussing evolved functionality, saying that the “purpose” is Y. Along those lines, biologists talk about “innovations” as though animals were entrepreneurially trying out new things, when in reality they simply have mutations that turn out to be useful. It’s all useful metaphorical shorthand, as long as we keep in mind that’s what it is.

      I think when we start speaking of evolution making predictions, implying it has forethought of some type, we’re crossing that line, and risking confusion about what we’re talking about.

      1. Fair enough, but I suggest that you, like many philosophers, may be hung up on the familiar concept of purpose, the kind that requires “forethought”, the one typically referred to as teleology, and I suggest you are failing to take into account a real phenomenon, which some term teleonomy (Dawkins uses the term archeo-purpose). Rovelli seems to be working out the physics/math of this concept right now.

        The reason I harp on teleonomy is that it is a necessary concept to explain representation, qualia, and the hard problem.

        Another question: when you hear neuroscientists saying that the brain works by prediction, do you think they are talking about the kind of prediction that requires forethought? When you perceive a rose, what do you see as the role of prediction in that perception?

        *

        1. I don’t really have a problem with teleonomy. I like the sound of archeo-purpose. It makes clear we’re not talking about intelligent purpose. Ultimately this is all subject to the limitations of language.

          “do you think they are talking about the kind of prediction that requires forethought?”

          Just to be clear here, you switched from purpose to prediction.

          Not necessarily. At the perception level, there is no forethought involved. It actually happens non-consciously and mostly involuntarily, although it is a mixture of innate and learned associations. We just get the result.

          If we get into layer 3 or 4, then the predictions can become more explicit, and therefore more likely to involve forethought.

          “When you perceive a rose, what do you see as the role of prediction in that perception?”

          When we recognize that shape and color as a rose, that is a prediction, or perhaps more accurately, a prediction framework, which we may or may not utilize for our purposes.

          In truth, a rose probably seems vivid to us because we evolved to see ripe fruit as vivid, which tend to be red or yellow. The rose just benefits from that ripe fruit detector “misfiring.” (And yes, I’m making use of the shortcut language here. 🙂 )

          1. “At the perception level, there is no forethought involved. It actually happens non-consciously and mostly involuntarily, although it is a mixture of innate and learned associations. We just get the result.”

            So do you agree that there is prediction here, at the no-forethought perception level? Because I’m pretty sure that is what those neuroscientists are saying.

            *

          2. I need to sign off so my next reply won’t be until tomorrow, but what does no-forethought have to do with it? Are you saying that a prediction has to be planned? Why would that be?

            And remember, we were discussing forethought above in terms of purpose, not prediction. Or am I missing something?

          3. I’m trying to figure out what people mean when they say that my perception of a rose involves prediction. The (Merriam-Webster) definition of predict is “to declare or indicate in advance”. What in my perception of a rose was indicated, and in advance of what?

            To say it again, the use of the term “prediction” implies two separate time points: 1. the time of prediction of an event and 2. the time of the event.

            For myself, I can reconcile prediction with perception by saying that the prediction happens at the time of creation of the mechanism which will recognize a specific pattern. When I perceive a rose it is because there existed, prior to the perception, a mechanism which takes certain specific input and recognizes that input as “rose”. This mechanism had to have been created for the purpose (either teleological or teleonomic) of recognizing a rose. The creation of the mechanism is the prediction that, in the future, if the appropriate input happens, then a “rose” is the case.

            So if you take away the teleonomy, I don’t see how perception could work as a prediction.

            *

          4. “What in my perception of a rose was indicated, and in advance of what?”

            You have to think in terms of what’s added to a system that only has reflexive automatic reactions. Recognizing a rose predicts that it matches a mental concept, as well as not matching a host of others. In other words, it’s not food, or a predator. In the case of humans, it’s not a tool (except maybe for mate seeking and retention).

            “To say it again, the use of the term “prediction” implies two separate time points: 1. the time of prediction of an event and 2. the time of the event.”

            The perception, using distance senses (vision, smell) would be the first event. The second event would depend on what was done with the information.

            “So if you take away the teleonomy, I don’t see how perception could work as a prediction.”

            Again, I don’t have an issue with teleonomy, the appearance of purpose from evolved mechanisms, just with teleology.
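
            If it helps to see the “two time points” reading from this exchange in concrete form, here’s a minimal sketch of a recognizer whose “prediction” is baked in when the mechanism is built (by evolution or learning) and is only exercised later, at perception time. Everything in it, the function names and the feature sets, is invented purely for illustration; it isn’t anyone’s model from the thread.

```python
# Illustrative sketch: "prediction" as a mechanism built at one time point and
# merely applied at a later time point (perception). All names here are made up.

def build_recognizer(expected_features):
    """Time point 1: create a mechanism that embodies an expectation."""
    def recognize(observed_features):
        # Time point 2: perception applies the pre-built expectation to new input.
        matched = sum(1 for feature in observed_features if feature in expected_features)
        return matched / len(expected_features)
    return recognize

# A hypothetical "rose" concept, built before any particular encounter.
rose = build_recognizer({"red", "layered petals", "thorny stem", "sweet smell"})

print(rose({"red", "layered petals", "sweet smell"}))  # 0.75: the "rose" prediction mostly holds
print(rose({"grey", "fur", "whiskers"}))               # 0.0: the "rose" prediction fails
```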

  5. I keep going back to the prefrontal synthesis thing in humans that seems to have produced an exponential leap in 3 and 4, and possibly enabled 5 to happen.

    One problem with the layers is that certain neural capabilities might manifest in different degrees that span across the layers. I assume you wouldn’t disagree with that. But it seems like there may need to be more of a matrix, with capabilities in various degrees specific to each layer. For example, adding PFS to layer 3 enables goal planning that might extend across days or years, with a large number of steps involved in reaching the goal. Whereas more basic planning without PFS, as exhibited by crows, might only consist of a few steps and need to be compressed to less than an hour. What beyond PFS this would be applicable to I’m not sure. The idea just occurred to me and isn’t completely thought out.

    Some others might be:

    1- Basic either/or logic
    2- Counting ability
    3- Sophistication of communication – how many and how refined the signs used are
    4- Social/cultural orientation

    By the way, on this last one, the layer approach seems to be oriented to a single-organism approach, without consideration of the fact that organisms exist in communities (even slime molds).
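
    A minimal sketch of what that matrix might look like as data, assuming the four capabilities listed above and completely made-up degree values, just to make the “capabilities in various degrees per layer” idea concrete:

```python
# Illustrative only: layers crossed against a few cross-cutting capabilities.
# The degree values (0-3) are invented placeholders, not claims about any species.
capability_matrix = {
    "1. Reflexes":      {"logic": 0, "counting": 0, "communication": 0, "social": 0},
    "2. Perception":    {"logic": 0, "counting": 0, "communication": 1, "social": 1},
    "3. Instrumental":  {"logic": 1, "counting": 1, "communication": 1, "social": 2},
    "4. Deliberation":  {"logic": 2, "counting": 2, "communication": 2, "social": 2},
    "5. Introspection": {"logic": 3, "counting": 3, "communication": 3, "social": 3},
}

# Example query: how a single capability varies in degree as you move up the layers.
for layer, degrees in capability_matrix.items():
    print(layer, "-> communication degree:", degrees["communication"])
```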

    1. Good points. The hierarchy is definitely an oversimplification. I’m reminded of something Brian Greene said in one of his books, that he sometimes first needs to present an idea in an oversimplified manner, then clean up afterward. There’s definitely a factor of that type involved here.

      For example, I largely ignore habits in this hierarchy, because it’s a capability hierarchy, and habits strike me largely as learned reflexes. But you need at least 3 to establish them, so once 3 is there, there’s no reason to explicitly list them. But they are definitely in the mix.

      On PFS, if I’m understanding it correctly, it should move an organism from 3 to 4. The problem is that I think 4 exists in much more than humans. I’m currently watching a documentary, ‘Bird Brain’, which is showing crows, parrots, and other birds doing things that I perceive to be full 4, solving complex problems with one try (i.e. not trial and error), apparently better than chimpanzees, and in some cases solving problems in social groups, working together.
      https://www.pbs.org/wgbh/nova/video/bird-brain/

      1. Keep in mind I am not in any way disagreeing with the five layers. I’m just trying to add extra dimensions to them.

        You might be right that PFS is more associated with 4, but then where does the difference in degree come into play in the layers? A crow might be able to manage 3-4 steps, with all of the items required for a problem solution visible, but a human can manage many more steps and even retrieve items required for the problem that are not visible. Don’t get me wrong. I am really impressed with what crows can do, but clearly there is a major difference in scale of capabilities. A human 4 and a crow 4 are not the same.

        Also, when we look at this in terms of human evolution, a lot of this capability appears relatively recently, after the brain had stopped increasing in size. (Actually, Neanderthal brains apparently were larger than sapiens’.) So there seems to be a qualitative difference not explained by just more neurons.

        Perhaps there is more involved in the maturation rate delay of the PFC than just PFS.

        Side question: Do you think language is required for 5?

        1. Thanks for the clarification, but it would have been fine if you were disagreeing with it. I did ask for feedback after all. 🙂

          My suspicion has long been that the big difference between us and most other animals that can do 4, aside from sheer capacity, is symbolic thought. It enables us to work with concepts like hours, days, weeks, etc. If you think about it, planning far ahead without these frameworks would be pretty challenging. Symbolic thinking allows us to expand the scope of our planning far beyond our immediate individual sensorium.

          So, I totally agree that there’s a big difference between us and crows. But it likely takes the next level for that difference to be realized.

          I wouldn’t say that language is required for 5 so much as enabled by it. I’ve occasionally thought about adding a sixth layer for symbolic thought to explicitly chart the relationship. That’s not to say that language, and symbolic thought overall, doesn’t provide an enhancing feedback loop. Once language developed, I suspect ever increasing articulation and social skills became a selective trait.

          1. I mean it in the anthropological sense, volitional use of symbols for either perceptions, actions, or other symbols, along with using those symbols in structures and hierarchies, to extend thought beyond immediate experience. Language, art, mathematics, etc.

          2. When you talk about symbolic thought I need you to be more precise. Where are the physical symbols? Are they inside the brain? Or is symbolic thought a kind of thought necessary to create symbols like spoken words, written mathematical symbols, etc.

            *

          3. The physical symbols would be in the brain, as neural firing patterns, synapses, etc. (in addition to whatever external version we might create). And your last sentence is true too. It’s not an either/or thing.

  6. A couple of things Mike. You’ve mentioned that your hierarchy is inherently monistic, though to me that just doesn’t seem right. Couldn’t someone propose one or more levels of your hierarchy in an “otherworldly” fashion?

    Then secondly you’ve implied that any conception of consciousness which is not continuous, but rather is defined to have discrete “on” and “off” states, would inherently reflect substance dualism. This also does not seem right. Couldn’t someone define a given aspect of worldly causal dynamics as “conscious” in itself, and thus a state which will or will not exist on the basis of that criteria alone? For example we say that certain things “live” while others do not, but also presume that there are causal dynamics of this world which make this distinction useful. So I’d think that we might also develop a useful definition for “consciousness” that isn’t continuous (sorry levels 1 and 2!), and yet is just as natural.

    Beyond “dualism” however, of course I’m one of the people who considers it useful to define “sentience” as consciousness itself. Thus I’m happy that your latest hierarchy gives it a bit more emphasis. It’s your architecture rather than mine however, so obviously take it where you like. From there it’s my obligation to consider your positions on the basis of your themes in the attempt to understand.

    1. Eric,
      “Couldn’t someone propose one or more levels of your hierarchy in an “otherworldly” fashion?”

      Definitely. Someone could slide in an immaterial soul between a couple of the layers. The hierarchy as it stands doesn’t include anything like that, or posit it as necessary. But a determined dualist could find a spot for it.

      “Couldn’t someone define a given aspect of worldly causal dynamics as “conscious” in itself, and thus a state which will or will not exist on the basis of that criteria alone?”

      The question is, what would that be? When reading your point, I thought about Dehaene’s global neuronal workspace. Someone could say that’s either there or it isn’t. Except that the GNW is a complex emergent phenomenon. It wouldn’t have just popped up one day fully formed. It would have needed an incipient proto-form, and then gradually increasingly sophisticated versions. At what point would we declare that a workspace now existed?

      “For example we say that certain things “live” while others do not, but also presume that there are causal dynamics of this world which make this distinction useful.”

      Life is an excellent example, because it’s also extremely difficult to precisely define. Consider viruses, viroids, and prions. These systems are right on the boundary between life and complex chemistry. Are they life? Proto-life? We define cells as life, and proteins by themselves as non-life, but these agents seem somewhere in between.

      “It’s your architecture rather than mine however, so obviously take it where you like.”

      Again, it’s meant to be a theory-neutral crutch. It’s most definitely not an architecture. For that, I’m currently favoring HOT-like theories, although that changes over time. If you want to consider it, you might think about how your model, or any other, might fit within it. (Or invalidate it.)

      1. Mike,
        Well, your assertion that your hierarchy is monistic, while something like my own position would be dualistic, shocked me. But I know you as a reasonable guy so I tried to remain calm about it. I guess that you exaggerated there a bit. Just as we presume no ghost in the machine for “life”, my own non-hierarchy also needn’t be dualistic.

        To help square your model with mine, we wouldn’t say that 1. Reflexes or 2. Perception are initial steps for a consciousness hierarchy. Both plants and our robots accept input and provide output, though we don’t consider there to be “anything it is like” to be them. So I don’t think it’s helpful for you to include that sort of thing in this list. Let’s leave that for the panpsychists.

        James of Seattle has what I consider to be a “God’s eye” first stage, a “human eye” second stage, a “signal” third stage, such as for hormones, an input and output signal stage for individual neurons, and a fifth stage for fully integrated brains and computational devices. Still there isn’t inherently anything it is like to exist.

        I perceive that to enter at his sixth stage, which is to say “sentience” itself. Note that a computer may be said to have non-conscious goals. Well, apparently he theorizes that such a machine might be set up to “represent” them in a way that a sentience-based mode of function emerges as well. (This is as opposed to my own “hardware” based account of the creation of sentience/consciousness.)

        If I were to create a hierarchy then it wouldn’t concern a sliding scale of consciousness. Sentience defines that completely in my model. Instead this hierarchy would concern a sliding scale of conscious functionality. Note that we could theoretically have an extremely advanced non-conscious computer, which also outputs a functionally useless and epiphenomenal level of sentience. It is from this foundation that we can observe ourselves and other functional sentient beings in order to better grasp the nature of functional consciousness. I’ve reduced this down to three forms of input, one form of processor, and one form of output.

        I’m more aligned with Feinberg and Mallatt than those higher order theories, which I consider to ignore associated fundamentals. As you know, just as electricity drives the function of our computers, and neurons drive the function of brains, I consider sentience to drive the function of the conscious form of computer.

        1. Eric,
          On dualism, at this point, I don’t recall the exact context, but it might have come from the way you were describing the second computer. It seems like you went through a stage where you wouldn’t consider it a virtual machine, or describe it in any naturalistic-sounding manner. It had started sounding a bit spooky to me. Although in retrospect, it might have been a limitation of language, where “virtual” carried connotations you saw as pejorative. (Although I didn’t. The term is commonly used in IT.)

          “I’m more aligned with Feinberg and Mallatt than those higher order theories, which I consider to ignore associated fundamentals.”

          F&M definitely seem more first order in their outlook. Looking at my hierarchy, I think they’d say layer 2 has exteroceptive and interoceptive consciousness, but that affective consciousness (sentience) wouldn’t arrive until layer 3. Their first criterion for affective consciousness is global instrumental learning. Although they might object to having layers 2 and 3 separate.

          F&M are also upfront that they’re not attempting to explain human-level consciousness, but primary or sensory consciousness. My issue with their description is that they lump affective consciousness under the “sensory” label. I think that’s wrong. Affects, whether lower or higher order phenomena, form in the motorium, not the sensorium. And I think their own criteria, requiring nonreflexive behavior, imply higher order circuitry.

          1. Mike,
            Guilty as charged that I’ve been slow to warm up to the “virtual computer” term, thus giving you the impression that I was actually a dualist. I might have been associating it with the “simulation” term — simulations of weather do not produce what weather produces for example. Perhaps I didn’t want to imply that this computer, which is outputted by another computer, thus isn’t “a true computer”?

            Regardless, I now fully embrace the “virtual computer” term for consciousness. But then if everyone and their brother has a consciousness theory, it’s strange to me that my model would be so different from mainstream models. It’s widely known that we have both “non-conscious” and “conscious” forms of function. Why wouldn’t other models then propose associated computers for this distinction? Instead mainstream theorists seem to have all sorts of trouble trying to delineate the point where non-conscious function changes into conscious function. Your own mono-computer hierarchy seems to be all about that, and so spans a chasm of theory between panpsychism and HOT.

            Do you not consider affect as input (or “sense”) but rather output (or “motor”), given that you perceive affect to be produced in the motorium? Hmm…. I don’t claim to know where it is that affect is produced, though it’s surely useful to consider it as input regardless of that, or at least architecturally. Pain is something that we feel and assess, which is to say input to the system, regardless of where it’s produced.

            On nonreflexive behavior, yes it should by definition require nonreflexive output. But this should naturally have begun in a primitive way, or the opposite of “higher order circuitry”. So I guess that I support F&M there too. To understand why, consider my proposal for the emergence of the conscious form of computer:

            Perhaps around the early Cambrian we have non-conscious organisms that are naturally unable to deal with more “open” environments all that effectively, or exactly where our robots have problems today. Some of these organisms should have started producing valence, merely in the manner that evolution produces new traits in general, or through serendipity. Thus there would finally be something it is like for something to exist, even if functionless for the organism. (Of course evolution has a knack for turning functionless things into functional things.)

            I theorize that this new conscious entity (and not the automatic “robot”) was randomly given control of output function in some capacity, and so it would have tried to do what made it feel best to the extent that it could. History tells us that it did so well enough to become adaptive as a new form of organism function.

            Note that beyond its valence input and limited muscle output function, neurons that have fired in the past would still tend to fire again. So here an incipient memory input to the conscious entity should inherently be set up as well. Adding at least some sense information (like hearing or smell, which would have originally provided the non-conscious organism information) would mean that the three forms of input (valence, sense, and memory), one form of processor (thought), and one form of output (muscle operation), would provide these organisms all of the basic categories that I theorize in the modern human, and potentially quite early.

            In order to truly grasp the dynamics of my models, you’d need to put my ideas to work for yourself. Perhaps when you ask me questions you could also note to yourself what I might say. That should put you into the mindset of “active” rather than just “passive” learner in this regard. Then you could check yourself once I answer.

          2. Eric,
            “Instead mainstream theorists seem to have all sorts of trouble trying to delineate the point where non-conscious function changes into conscious function.”

            I think the difference, as we’ve discussed before, is your model doesn’t get into the messy details, whereas many do at least attempt it. That’s what GWT and HOT have in common. (IIT unfortunately, as far as I can tell, doesn’t make that attempt.)

            Have you considered that your second computer may be equivalent to the global workspace in GWT, or the association of higher order representations in HOT? (I still think HOT advocates closing the door on consilience between HOT and GWT are being hasty, since the workspace might also be equivalent to all the HORs in HOT.)

            “Do you not consider affect as input (or “sense”) but rather output (or “motor”), and given that you perceive affect to be produced in the motorium?”

            I think it’s both input and output in that it’s an intermediate mechanism between the lower level motorium and higher level motorium, but this gets back to the idea of different brain regions talking to each other that you dislike so much. 🙂

            “On nonreflexive behavior, yes it should by definition require nonreflexive output.”

            What do you mean here by “nonreflexive” output? I ask because it doesn’t seem that it’s in the output that behavioral flexibility shows itself, but in the selection of that output. There is considerable fine tuning by the cerebellum, but it doesn’t seem to be conscious.

            We agree that valence is tied to behavioral flexibility. But you have a tendency to black box valence and then work from there. I’m not satisfied with that. I want to know what valences *are*, and to me the most plausible explanation is that they’re the signals from the reflexes starting to fire to higher level circuitry that decides which reflexes to allow or inhibit.

            “In order to truly grasp the dynamics of my models, you’d need to put my ideas to work for yourself.”

            What do you think of my attempted reconciliation above?

          3. Mike,
            I’m sure that conceptually you don’t consider there to be anything wrong with a person who observes human behavior, and so develops models associated with human function, though sans neuroscientific mechanisms. That’s psychology, right? Well that’s exactly what I’ve done. First I developed a broad psychological framework, and then armed with those positions I developed what may be referred to as my “dual computers” model of brain function. Such architecture “supervenes” upon neuroscience. This is to say that evolution created our function bottom up, though our models must work this out top down.

            In a perfect world neuroscientists would be founding their models upon established psychology. It’s only because the field is so troubled that neuroscientists have had to fabricate their own psychology. For example, Lisa Feldman Barrett proposes that babies don’t feel until they are linguistically taught to feel. That’s the sort of thing which should naturally happen when we try to work things out backwards.

            I’d absolutely love for you to gain a working level understanding of my models, though my lectures alone shouldn’t be sufficient to provide them. Instead you’d need to try your conception of my ideas out so that the subtle nuances might be realized. For example, instead of asking me if the conscious form of virtual computer that I propose might be associated with GWT or HOT, you could try to answer this yourself. My answer is “No”, but rather than me telling you why, could you try to predict what I’d say? (In case you want to try, I’ve put a blockquote explanation at the end of this comment.)

            So you consider affect to exist as an intermediate between lower and higher level motoriums? Well that’s fine. Furthermore this seems contrary to my own brain architecture. In it the brain produces affect (it matters not where or how!), and this motivates a virtual computer that should do less than a thousandth of one percent as many calculations as the brain which creates it. Of course I don’t dislike that different brain regions talk with each other. It’s just that my own model doesn’t get into any of that business.

            “What do you mean here by ‘nonreflexive’ output? I ask because it doesn’t seem that it’s in the output that behavioral flexibility shows itself, but in the selection of that output. There is considerable fine tuning by the cerebellum, but it doesn’t seem to be conscious.”

            By “nonreflexive” I mean straight “conscious”. But it’s good to hear that behavior flexibility is observed in output selection rather than output itself, since my model is indeed able to account for this. When muscle operation is consciously chosen, according to my model this doesn’t directly operate muscles. The conscious computer merely makes a request, as denoted with an arrow to the non-conscious processor. It’s the non-conscious computer that actually operates muscles, and so cerebellum fine tuning should be displayed. Another example would be in paralysis — request all you like, but the brain doesn’t cooperate.

            “I want to know what valences *are*, and to me the most plausible explanation is that they’re the signals from the reflexes starting to fire to higher level circuitry that decides which reflexes to allow or inhibit.”

            Once again, okay, though this position is contrary to my own brain architecture. I do not propose one computer that changes from non-conscious to conscious somewhere, but a neuron based computer that outputs a distinctly different valence based computer.

            “What do you think of my attempted reconciliation above?”

            I always appreciate reconciliation attempts for our positions from you Mike! In truth however, it’s not your agreement that I seek in the end. What I most desire is your expertise, though I’ll only get a full measure of that to the extent that you gain a working level grasp of my models. That’s where you should be able to assess them for yourself. So why do you think I’d say that GWT and HOT do not correspond with my own conception of consciousness? To check yourself, my own explanation follows:

            In truth these positions haven’t interested me given that I perceive them to be so restrictive. Consider my last comment’s account for the Cambrian evolution of consciousness/ sentience — it displayed “conscious” function that was way, way, more primitive than the function of modern birds and mammals! Even if examples of GWT and HOT consciousness are included under my own conception of consciousness, they simply do not have my founding premise. They do not propose the brain as a computer that produces a virtual computer, and that this second computer has phenomenal inputs of valence, senses, and memory, a thought processor that interprets them and constructs scenarios from which to feel better, and a muscle operation output that the brain monitors and facilitates.

          4. Eric,
            Even within psychology, the dividing line between what is or isn’t part of consciousness is a contentious one. I highlighted the views of Ned Block the other day, and that was a largely psychology level discussion.

            “In a perfect world neuroscientists would be founding their models upon established psychology.”

            Ideally, neuroscientists would construct their models based on their evidence, and psychologists would construct theirs based on their evidence, and we’d eventually see some convergence. But when psychological models conflict with neuroscience, neuroscience wins. Sorry.

            Lisa Feldman Barrett is actually a psychologist who also works in neuroscience. (Frankly, I think the theories of any psychologist who isn’t at least well versed in neuroscience by now, should be viewed with caution.) So she seems to be working from your ideal perspective. However, when she attempted to find the neural basis for the common psychological models of emotions, she couldn’t. I don’t buy all her conclusions from that evidence, but that doesn’t mean we can reject the evidence itself.

            On your model and GWT and HOT, I honestly didn’t know, so I read your explanation. To me, it seems excessively focused on terminology rather than the actual concepts behind that terminology. But you’re not unique in that, as it’s the same sense I get with reading Hakwan Lau’s reasons why HOT and GWT can’t be reconciled.

            “In it the brain produces affect (it matters not where or how!),”

            It does to me.

            “Once again, okay, though this position is contrary with my own brain architecture. I do not propose one computer that changes from non-conscious to conscious somewhere, but a neuron based computer that outputs a distinctly different valence based computer.”

            So, my interpretation of this is that the second computer is emergent, exists as a higher level of organization or abstraction, from the first one. If so, then we’re only talking about things at different levels of abstraction. Your own model doesn’t get into the details of valences. Which is fine if that’s what you want. But if you then say that a description at a lower level is wrong, if you take a stance, then you have to actually get in and get your hands dirty and say why. At least if you want people to pay attention 🙂

            “But when psychological models conflict with neuroscience, neuroscience wins. Sorry.”

            I do agree with you about that Mike. Neuroscience is highly respected while psychology is anything but (and I’d say for good reason). For many neuroscience might even be perceived as a “hard science”. Regardless it tends to lend credibility to people on the soft side of things. For example with a popular book about ethics and religion, what did Sam Harris do to give his emerging fan base a sense that he was “the real deal”? He went out and got himself a PhD in cognitive neuroscience. Actually he seems to have made lots of good moves in order to become such a popular public intellectual.

            Feldman Barrett provides another such example. If she merely had standard psychological evidence for her theory of constructed emotion, I don’t believe it would have gone anywhere — only neuroscientific work should have rendered that particular theory “respectable”. And I certainly wouldn’t call her approach “ideal”. I’m very much opposed to her position for its psychological implications, as well as that it was built from a mere void in apparent neurological signatures.

            In order for neuroscience to be of practical use, I believe that its evidence must essentially be interpreted in terms of psychological function, if not just physiological. And here’s the thing. If we’re clueless about psychology, then we shouldn’t be able to assess better or worse ways of interpreting such neuroscientific data. So consider the following heuristic: Psychology and neuroscience are joined at the hip, with neuroscience below and psychology above.

            If you decide that this isn’t a good assessment however, there does seem to be a potential way for you to demonstrate this. What useful bit of neuroscience (beyond physiology) can you think of that is not also useful in a psychological capacity?

            If it’s true that neuroscience can teach us about psychology, though we can’t effectively assess psychological models beyond neuroscience, then how might we cross-check our neuroscientific interpretations?

            I don’t believe that I quite said GWT and HOT were “wrong”, but rather that these single computer models conflict with my own dual computers model. Indeed, it could be that one or both of them happen to be extremely effective, and in that case my own should not be. But then I’d love to see if their psychological implications are able to explain the standard human function which my own position is able to!

            “So, my interpretation of this is that the second computer is emergent, exists as a higher level of organization or abstraction, from the first one.”

            Yes I would say that it’s emergent, though not at a higher level of organization or abstraction. That, for example, is how psychology emerges from neuroscience (among others) — an epistemic distinction. For this I’m instead talking about an ontological distinction, such as how a car is produced by a car factory. It could be that the valence produced is entirely “information”. (I have a friend who is trying to create valence by means of software alone given this premise. Speak of the devil — you’re talking with Peter just below!) But I suspect that it emerges more in the manner that a computer animates a computer screen. Note that this interpretation keeps me square with John Searle’s Chinese room.

          6. Eric,
            I see Harris as a case of someone with academic credentials getting undue credibility in areas outside of his training. Although given his track record, not to mention the silliness his wife produced, I wouldn’t have much faith in whatever he wrote about neuroscience.

            At least Barrett is arguing within her field of expertise, but then I think she’s far less wrong than you do. My only real beef with her is the sharp line she draws between affects and emotions.

            “For this I’m instead talking about an ontological distinction”
            “But I suspect that it emerges more in the manner that a computer animates a computer screen.”

            Do you mean strong emergence? Or are you saying that the brain is producing some kind of field, energy, or ectoplasm? (We’re getting into the territory that led me to think of the dreaded d-word 🙂 ). Seriously, it feels like you need to flesh this part out. As it stands, it has an air of mysticism about it.

            “Note that this interpretation keeps me square with John Searle’s Chinese room.”

            Not really anything meaningful in my book. I find Searle’s thought experiment polemical, confused, and slipshod. As far as I can see, the only reason for his reputation is that he confirms popular biases.

            Okay Mike, let’s flesh this stuff out. I suspect that you wouldn’t admit that I’m more of a skeptic than you? So consider that you have personal motivations for what you believe, and I have personal motivations for what I believe. But that doesn’t quite square things. Here we display opposing positions where one should naturally be more effective than the other. If you can demonstrate that my biases have led me astray, then I’ll thank you for it. I presume it’s the same in reverse.

            The issue as I see it, and obviously check me on this, is that you believe that valences, such as what a smashed thumb causes you to feel, exist entirely as information processing itself. Thus if the computer that you’re now looking at were to properly process the information that your brain does associated with smashed thumb input, the same sensation would then be produced through it. Furthermore, if the input information associated with this were written out on sheets of paper and passed to a person who looks up the same answers that your computer did, the associated output sheets of paper would themselves create what you know of as “thumb pain”. Is that right?

            Regardless I hold the bias that information processing alone will not create such valences. If your computer were to properly process the information that your brain does to produce thumb pain, I believe that this information would then need to be fed into a valence producing machine in order for the thumb pain to actually occur. This is similar to how it’s impossible for your computer’s processed information alone to produce screen images. Just as processed information is fed into a computer screen that’s set up to be animated by such information, properly processed thumb pain information would need to be fed into a valence producing machine in order for such an experience to actually occur. Furthermore I believe it would even be possible for written notes associated with the thumb pain information that your computer accepts, to be processed by means of lookup tables into new sheets of paper that would produce thumb pain, that is if they were fed into a valence producing machine which is set up to accept such information.

            I don’t believe that my position is inherently dualistic. If screen images can only emerge from screens, I don’t see how causal dynamics must fail if valences can only emerge from machines associated with those particular properties of physics, whatever they are. Perhaps some day we’ll even figure out the physics behind valence.

            So if it exists, where might such a machine be located in the brain? The work of F&M, and I suppose Jaak Panksepp, suggest it’s pretty basic. Thus my guess would be the brain stem. That’s well outside my expertise however. I’m an “architect” rather than an “engineer”.

          8. Eric,
            Your bias is a very common one. The problem is that there’s nothing in the nervous system to substantiate it. Neurons are information processing mechanisms, and the brain is a composite information processing organ. There are support cells (glia) to deliver energy, neurotransmitters, sodium ions, and other raw essentials, as well as perform other functions, but the main action is signalling between the neurons, and more broadly between regions of those neurons. The system receives neurochemical spikes from the outside world through the peripheral nervous system, and produces spikes that are transmitted to neuromuscular junctions, as well as release chemical messengers for slower adjustments.

            We can describe this system at various levels of abstraction, but none of that will change the basic reality. The brain is an information processing organ. Everything it does is built on those signalling components.

            On the screen metaphor, it’s worth mentioning that the screen itself is an information appliance. Its job is to present information tailored to a human’s visual sensory systems. It simply converts electrical impulses into lighted pixels, generating photons that impinge on our retinal photoreceptors, which in turn convert the signal into neural electrochemical signals. People sometimes speak of computer devices as extending our mind. This is because we can choose to view the overall arrangement as an extended information system.

            A better example for non-information processing might be a robotic motor or other physical output, where the magnitude of the energy is more important than the transmitted or processed patterns. Again, the issue there is that the brain doesn’t do any of that. It completely depends on the rest of the body for that kind of activity.

            The idea of feelings happening in the brainstem is a popular one. I bought it myself a few years ago. It seems like an elegant conception. And I don’t think it’s entirely wrong. Many of the impulses that eventually produce feelings do begin there. But as I’ve learned more neuroscience, my views were forced to change. The generation of the valence may begin at lower levels, but data seem to show that the predictions about it, the modeling of it, the perception or feeling of it, happens in higher level circuitry.

            If our views don’t change on new information, and we don’t keep trying to learn new information, if we only keep trying to argue and defend our initial preconceptions because they once felt right, then we become frozen, no longer scientific, but ideologues. I was listening to an interview of Hakwan Lau, one of the neuroscientists whose work I’ve highlighted recently, and he noted one of the perils of working on theories is how easily they can blind us to disconfirming data, if we’re not very careful in the manner in which we hold them.

          9. Mike,
            Given the obvious complexity of the brain, how can you be so certain that it does nothing more than “process information” to produce valence? The brain does, for example, produce heat, which isn’t simply processed information. So isn’t it possible that it somehow also mechanically produces valence, though in a way that the human does not yet grasp? How can you be so certain that there are no valence producing mechanisms in the brain?

            Let’s say that you’re right about this however. What other dynamics of reality exist by means of information processing alone? I don’t know of anything that the computer that I’m now typing on is able to “do” beyond physics based output mechanisms. So to turn things around, it seems to me that if anyone claims that something exists by means of information processing alone, then that will be interesting! What else exists by means of information processing alone? If valence happens to be the only such example, then wouldn’t it seem a bit magical?

            My models do not depend upon the “How?” of valence, but rather the “What?” of it — valence is valence regardless of the way it’s produced. It just doesn’t yet make sense to me that it would exist by means of information processing alone.

          10. Eric,
            I’m going by the evidence we have, not the evidence we hope might someday surface. Of course, if evidence does surface that no longer substantiates my view, I’ll be obligated to change it. But we need to be very careful about holding out for cherished notions in the epistemic gaps. What comes out of those gaps, when it isn’t more of what we’ve already seen, rarely supports our preconceived notions.

            On “information processing alone”, I’m not sure what you mean by that question. Obviously, just like the central processing chips of the device you’re using to read this, brains exist as part of a physical (embodied) system, that is, as part of an overall environment. Every information processing system is a physical system, albeit one where its causal interactions with the environment depend on intermediate amplifying and moderating systems (aka, I/O systems, peripheral nervous system, etc).

            “It just doesn’t yet make sense to me that it would exist by means of information processing alone.”

            What specifically about it causes you to say that? What aspect of valence is not informative?

          11. “How can you be so certain that there are no valence producing mechanisms in the brain?”

            Pardon my interruption, but what does it mean for a brain to “produce” valence? I can understand how it might recognize or perceive it, but I don’t understand what it means to “produce” it.

          12. It’s good to have you here Wyrd, and excellent question.

            Sometimes we do not experience valences such as thumb pain, while other times we do. Thus Mike and I infer that when valences exist, they somehow get “produced” by means of the brain. But then getting to your point, once produced I also consider valences to be “recognized” or “perceived”. Valence output of the brain would exist as input to a virtual machine which the brain creates (the perceiver), or a conscious entity such as myself.

            My perception of the disagreement between Mike and me is that he believes that valences occur by means of information processing alone. That seems too simple to me. Thus I’ve proposed a new version of Searle’s Chinese room thought experiment, and he may not be backing down. Here, symbol-laden sheets of paper which are properly processed into other symbol-laden sheets of paper could in themselves create what each of us recognizes as “thumb pain”.

            Mike,
            I wonder if you know of any situations in the computer world where processed information alone, without transmission to any mechanism, achieves inherent output function? If so then maybe that could help me grasp the concept of “processing itself as output”.

            I also now wonder if there has been a mix-up. The statement which follows from you seems more associated with my position:

            “Every information processing system is a physical system, albeit one where its causal interactions with the environment depend on intermediate amplifying and moderating systems (aka, I/O systems, peripheral nervous system, etc).”

            If it’s true that valences are not produced by means of processed information alone, what would this mean? For one thing it wouldn’t be possible to upload a person into some kind of software world, or even to create programs which harbor entities in them that experience positive and negative valences. For any of that, “hard problem” mechanisms would need to be provided as well.

          13. “I wonder if you know of any situations in the computer world where processed information alone, without transmission to any mechanism, achieves inherent output function?”

            Eric, with this question, it seems like you’re implying that a mental state is an output of some type. It is an output in the sense of being a brain state induced by other brain processing, but overall that’s just an intermediate state in the system, similar to the in-memory data structures that the device you’re reading this on holds (although far more complex). But the only functional outputs we have evidence the brain overall produces are efferent nerve spikes and hormone signals, which depend on downstream systems to transform into energy magnitude actions.

            If you do say it’s an output of the brain overall, then where is it being output to? And how does it input back into subsequent brain processing? If you’re not saying it’s an output of the overall brain, then I wonder what specifically you are saying.

            I’m also still curious what about valences makes you think they can’t just be information. This seems like the core of the issue.

            Okay Mike, let’s start with the core of the issue that you’ve noted. I don’t think that valences can be produced by means of information alone, because this seems contrary to causal dynamics. If what I know of as thumb pain is produced by my brain by means of that alone (or something which may thus exist through various mediums), then what I know of as thumb pain might just as well be reproduced through my computer, or even through sheets of paper outfitted with the proper symbols. (I think therefore I am, even if I ultimately reduce back to a stack of symbol-laden sheets of paper!) To me such scenarios seem extremely spooky. Apparently here we have both “causal stuff”, as well as independent “information stuff” associated with valence/ consciousness. Naturalism, if true, mandates that information must exist as a product of causal dynamics rather than above causal dynamics.

            I submit that my brain processing animates valence producing mechanisms in it, and these are what cause me to feel things like thumb pain. You can call such mechanisms “intermediate” if you like, though they’re still functional. Furthermore my computer should suffice as well, that is if it were hooked up to valence producing mechanisms set up to accept its data. Even properly symboled sheets of paper might do the trick, that is if inputted to the proper mechanisms.

            If my friend Peter does end up demonstrating that he can produce valences through software, or anyone else does so, then I’ll admit that you were right and I was wrong about valence existing as information alone. Would this also kill my dual computers model? I don’t think so. This model concerns brain architecture rather than what we’re currently discussing, or “hard problem engineering”. It’s just that I, like Einstein, don’t like spooky stuff.

            “If you do say it’s an output of the brain overall, then where is it being output to?”

            As I model this, the first computer (or “brain”) outputs a virtual second computer (or the conscious entity). So it outputs it to… wherever the conscious entity can be said to exist. I consider my own consciousness to exist because of my brain rather than in my brain. Surely you could say the same.

            “And how does it input back into subsequent brain processing?”

            Well that should be simple, that is, if the brain which produces consciousness does so in order to monitor such function for clues about what to non-consciously do. As I model it, consciousness exists as a tiny valence driven virtual computer which is set up to do what brains do poorly, or to function under more open environments. This provides a mode of function which harbors teleology.

          15. Eric,

            “Apparently here we have both “causal stuff”, as well as independent “information stuff” associated with valence/ consciousness.”

            So how are you defining “information” here? In particular, what makes it distinct from “causal stuff”? I ask because my conception of information is very physical: physical patterns that, due to the causal history that led to their formation, have the potential to affect the efficacy of a system’s causal mechanisms. Within that understanding, information is a type of causal stuff. It is caused, and has at least causal potential.

            “As I model this, the first computer (or “brain”) outputs a virtual second computer (or the conscious entity).”

            Okay, but at the level of abstraction of neural processing, that wouldn’t be output from the brain, just more intermediate states within the brain, wouldn’t it? What you’re doing here is using the word “output” to refer to the contributions of lower level abstractions to higher level ones. Or would you disagree? If so, then I think I need a fresh precise description of what you see the second computer to be.

          16. “If my friend Peter does end up demonstrating that he can produce valences through software, or anyone else does so, then I’ll admit that you were right and I was wrong about valence existing as information alone.”

            What is your exact definition of “valences” here?

            Does a tree growing to maximize its access to sunlight pursue the “valence” of life-giving sunlight?

            I once wrote an application that monitored a web extrusion machine. It communicated with a laser scanner (input) that measured the thickness of the material being extruded. It compared its readings to a loadable preset for that type of material. Its outputs were a graphic display showing the operator what was going on, plus an alarm (sound and flashing red light) to alert if the material went out of spec.

            Was there “valence” in that case? The software knew what “good” readings were as well as “less good readings” (no alarm, but displayed on the graph) and also “bad” readings (alarm! alarm! alarm!).

            If these aren’t “valence” to you, why not? What makes your thumb pain different?
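            For concreteness, here is a minimal Python sketch of that kind of monitor. The names, units, and tolerances are invented rather than taken from the original application; only the good / less good / bad labelling scheme matters for the question.

              # Toy reconstruction of an extrusion monitor (hypothetical values).
              def classify_reading(thickness_mm, target_mm, warn_tol=0.05, alarm_tol=0.10):
                  deviation = abs(thickness_mm - target_mm)
                  if deviation <= warn_tol:
                      return "good"        # plotted quietly on the graph
                  if deviation <= alarm_tol:
                      return "less good"   # shown on the graph, no alarm
                  return "bad"             # alarm! alarm! alarm!

              for reading in (1.02, 1.07, 1.15):
                  print(reading, classify_reading(reading, target_mm=1.00))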

          17. Wyrd,
            By “valence” I’m referring to punishment and reward associated with existing. Here there will be something it is like to exist, and otherwise not. It’s generally presumed that all non-living things harbor no valence, and even machines that we build with alarms and such. Furthermore the same presumption is generally made regarding plants, fungi, and microorganisms. It’s thought that even when they’re perceived to be “in distress” or “content”, we’re instead being fooled by automatic function that carries no associated valence.

            There seems to be greater potential for there to be valence among animals with brains. My own position is that it’s possible for such a machine to produce a virtual second machine that functions by means of valence, and because it helps an organism deal with more “open” environments.

            One question here, given my model, is: if someone were to build a machine that does actually have valence, how might we verify this? Even if a machine could be developed which produces an entity that suffers, such suffering shouldn’t inherently be displayed to us. We’d somehow need to give the suffering entity (and mind you that it would still only be hypothetical to us!) a way to reduce its own suffering, though also have reason to believe that it was doing this rather than the non-conscious machine. With enough evidence that a conscious entity was choosing to relieve itself of (presumed) suffering (and I suppose because we perceive that the non-conscious part shouldn’t otherwise function that way), a case could be made that the hard “How?” of consciousness had indeed been solved!

            Mike,
            If you’re using a definition for “information” in a way that preserves full causality, as well as relegating what I’ve been calling “brain output” to “intermediate states within the brain”, then we may be able to square our positions.

            My computer screen analogy for thinking about brain production of valence isn’t a perfect one — in it there is no “tangible” second machine to animate. But in relation to my dual computers model, I do consider it productive to call “output” what you’re calling “intermediate brain function” that creates valence. Brain function may produce the conscious entity with a vast array of valences, or input to the virtual computer, just as my computer outputs signals which become input to a computer screen that lights up pixels in an associated way.

            The point of all this is to say that I believe valence can only be produced by the brain by means of associated mechanisms rather than “software” alone. As I see it, causality simply wouldn’t be preserved if any random computer, or even a stack of symbol-laden sheets of paper, could feel what I know of as “thumb pain”. This is contrary to all sorts of sci-fi dreams about getting uploaded into a computer for perpetual existence and whatnot. Conceptually I do believe in the potential for brain information to be uploaded, but on the other end of things this would need to be actualized through a machine with valence producing mechanisms (like my brain). So do you think we’re now square here, or am I missing something?

          18. Eric,
            I’m not sure you’re using “valence” correctly here. You seem to be attributing more to it than is actually there. A valence is just the attractiveness or aversiveness of a stimulus. I see no reason why it, in and of itself, requires consciousness. In that sense, a unicellular organism has valence in its reaction to noxious or nutritious chemicals. There’s also valence when my phone complains about being low on battery. This is similar to our discussions on “value”, which is closely related to valence, and also predates anything we’d be tempted to label as consciousness.

            I think what you’re really talking about here are affects, much more complex states involving the feeling of the valence along with the level of arousal and motivational disposition. I think affects are formed from the signals from reflexes or survival circuits that are beginning to fire, primarily as input into higher level circuitry that has to decide whether to allow or inhibit that reflex.

            On information, I don’t know of any version of it that doesn’t preserve causality. (Maybe in quantum physics, but I don’t think it’s relevant here.) It seems intimately related and integrated with it. If you agree with that for the nervous system, what about a computer system, or some other system of encoding, makes you think causality isn’t preserved? Usually the computer system is seen as the more causal one. On both sides, as far as I can tell, we’re talking about causal systems, indeed, systems of concentrated causality.

          19. “By ‘valence’ I’m referring to punishment and reward associated with existing.”

            So you are equating “valence” with the high-level evolved mental states of suffering and joy?

            But I thought “valence” was instrumental from your point of view in evolving consciousness in the first place. If only a high-level consciousness is capable of “valence” then it can’t very well be instrumental in creating that consciousness.

            As I’ve said to you before, your idea of “valence” requires an evolved consciousness to already exist.

            “My own position is that it’s possible for such a machine to produce a virtual second machine that functions by means of valence, and because it helps an organism deal with more ‘open’ environments.”

            I’ve already debunked all of this in our previous conversations, so I’m not going over it again. Suffice to say I find your position ungrounded and lacking in coherence.

            Look at it this way: Ultimately, what makes the brain any different from a machine? Isn’t the brain just a very complicated machine?

          20. Okay Mike, wherever I’ve used the term “valence”, substitute the term “affect”. If that’s the deal then I’ll try not to make this mistake with you again.

            Then on “information”, here it’s actually me trying to preserve causal integrity against other notions, such as that computers in general will experience affects given the right software, or even that a stack of paper which symbolically represents such a computer’s function would.

            So are we square?

          21. Eric,
            I’m not sure what you mean by “causal integrity” or how it’s not preserved in the idea of a computer system having affects. Maybe you’re implying that there’s something causal in the nervous system that can’t be captured by software? If so, that’s a pretty common sentiment. (It’s basically Searle’s sentiment.) Unfortunately, it also typically comes with no description of the specific something that would be missing.

            Okay Mike, I now take it that we’re not square. As mentioned earlier, I’m skeptical enough to believe that we’re all biased. Furthermore, quoting Hakwan Lau, you mentioned that working on theory can easily blind us to disconfirming data. Given our agreement about this, it would seem productive to try to better grasp the positions of people we disagree with. Unfortunately achieving such a grasp can be far more difficult than doing so for people we agree with!

            I believe that affect probably isn’t just produced by my brain through the “pattern” of its neuron function (if you take my meaning), but rather by that pattern in relation to associated substrates. It seems to me that even with the same essential pattern, different substrates should tend to yield different results. Chemistry should matter.

            I do believe that it would be possible for a technological computer to produce affect, that is if the physics of its substrate were properly structured in relation to the pattern of its function.

            Hopefully I’ve now been clear enough to be understood, though let me know if not. And if you think that I don’t grasp your position all that well, let me know what I’m missing. I currently perceive you to believe that affect production is relatively substrate independent.

          23. Eric,
            I do believe it’s substrate independent, or to be more precise, I can’t currently see any reason why it wouldn’t be. The problem I see with the substrate argument is that it exists in the current gap between biological and technological systems, but that gap, while still wide, is constantly narrowing. To believe that there’s something unique about the substrate implies that we’ll eventually hit some limit in AI progress. Maybe we will, but if so, I can’t see any evidence of it yet.

            That’s not to say that, as a practical matter, the current standard hardware architecture wouldn’t have serious performance issues. Biological nervous systems are far slower than technological systems, but make up for it with massive parallelism. I can’t see any reason why technological systems can’t eventually go the same way. But that’s a performance issue, not a hard block on replicating the functionality.

            I also wonder, if you think substrate independence is not possible, why you call the concepts in your model “computers”.

          24. “Chemistry should matter.”

            Why?

            I’ll ask again: Ultimately, what makes the brain any different from a machine? Isn’t the brain just a very complicated machine?

            Do you subscribe to biological chauvinism?

          25. Mike,
            I think I’ve finally grasped the issue here, and fortunately it renders neither of us as dualists. I believe that you’re defining “computation” itself to be substrate independent, and thus any computation can not only be done by a “computer”, but also by means of sheets of paper with appropriate symbols on them. Here we’re discussing algorithms: “Processes or sets of rules to be followed in calculations or problem solving operations.” And so if affect does exist as computation alone, then it must by definition be able to exist on paper as well. No dualism there!

            From your definition of computation, my own position is that affect does not exist as computation alone, though I do suspect that computation always happens to be involved (or at least when “functional”). It’s like the way algorithms in themselves can never produce a computer screen image. For a screen image to be produced, the “chemistry” of a screen is required. Here a computer will animate a separate device in order to produce screen images. So I believe that the computer in my head animates a separate device in there that animates the affects that I feel. Thus I believe that in order for the human to build a device that functions in the mode of a human brain, a computer will need to be hooked up to an affect producing device. So there’s no fundamental limit to AI progress, but rather one very hard problem to overcome!

            What I think we need to do is accept that each of us perceives affect in fundamentally different ways. It doesn’t matter how many times you ask me where in the brain an affect producing machine exists — I simply don’t know. Perhaps neuroscientists will discover this some day? I do believe that such a machine exists however, and this is because I do not believe it’s possible for any algorithm alone (perhaps written on sheets of paper) to experience an affect such as the thumb pain that I might.

            Wyrd,
            I do appreciate that you’re trying to stick up for Mike here. I consider him to be a very good friend as well. But he’s more than capable of both defending himself and attacking!

          26. “I do appreciate that you’re trying to stick up for Mike here.”

            Heh. We seem to have come a long way from, “It’s good to have you here Wyrd, and excellent question.”

            Dude, I’m not “sticking up” for anyone. I’m asking questions you don’t seem willing to answer. In particular: What makes the brain any different from a machine? Why should biological chemistry matter?

            “It’s like the way algorithms in themselves can never produce a computer screen image.”

            So what if I use the algorithm to draw dots on paper?

            What about the difference between displays consisting of: OLEDs, quantum dots, conventional LCDs, epaper, an old CRT, or millions of people packed together holding up the right color cards? All of them could use the same algorithm to produce the patterns (images).

          27. Sorry for not getting back to you earlier Wyrd. I’ve had some other things on my mind.

            Actually I do consider it effective to call brains “machines”. In fact in discussions with others I even refer to the brain as a specific kind of machine, or “a computer”. Given your vast experience with technological computers however, it does make sense to me that you might consider it better to only refer to brains as machines, though not the computer variety of machine.

            If you use an algorithm to draw dots on paper, then it seems to me that you may be said to exist as the output device of that algorithm — the algorithm in itself should not be able to put dots on paper. In fact it seems to me that each of the situations that you’ve mentioned may effectively be referred to as computational output devices.

            “Given your vast experience with technological computers however, it does make sense to me that you might consider it better to only refer to brains as machines, though not the computer variety of machine.”

            This depends on how one defines “computation.” I prefer the computer science definition, because it avoids confusion about what a “computer” is then capable of. This definition requires a CPU and algorithms to make it function.

            Mike favors a broader definition that includes signal processing. This definition includes what are called “analog computers” as well as, for example, an analog radio or amplifier. This definition requires signal processing hardware.

            But there is a crucial distinction between them with regard to your ideas about the brain “computer” creating a “virtual computer” of consciousness: Analog computers can’t create virtual computers. That is strictly the purview of digital computers using algorithms to emulate a non-native system.

            And since the brain is not a (CS) type of computer, it therefore cannot create a virtual system. It can only do what it does, like any analog system.

            “In fact it seems to me that each of the situations that you’ve mentioned may effectively be referred to as computational output devices.”

            Indeed. My point was (A) output can be realized in multiple ways, and (B) more importantly, has nothing to do with the computation (be it digital or analog) that creates the image.

            Whatever “computation” the devices themselves are involved with has strictly to do with the mechanics of implementing the display.

            One can allow the original algorithm to do all the calculations necessary to generate the output, and the output device merely takes that information and displays it. This is especially apparent if one uses a CRT.

          29. Wyrd,
            Let’s begin with where we seem to agree.

            “My point was (A) output can be realized in multiple ways, and (B) more importantly, has nothing to do with the computation (be it digital or analog) that creates the image.”

            Exactly. Output such as thumb pain or a given image will probably not exist by means of instructions in themselves, but rather by means of how instructions are implemented in a given medium, such as a computer screen, people holding up colored cards, or some kind of brain based affect producing machine. This is to say that each of us (and Searle) believe that both images and affects exist by means of something beyond algorithms alone.

            Thus in practice we should believe that it’s possible for the brain to produce affect (which, beyond dualists who put this into a supernatural realm, isn’t disputed). So if my thumb gets whacked, information from the event goes to my brain and this incites some kind of non computer device which creates the horrible pain that I thus feel. (But then it could instead be, as Mike currently seems to argue, that my pain will exist in the activation of associated algorithms themselves. This seems to violate your computational output stipulation. Here I suppose he’d say that affect such as my thumb pain does not exist as “output” but rather “processing”.)

            Regardless, it’s from this point that I consider it useful to say that the entity which feels things like pain will exist as the conscious entity itself (like you or me). Such an entity will not be referred to as “brain”, but rather as a product of what the brain does. But you also seem to be telling me that an analog machine like the brain cannot create something more than what it is. Apparently you believe that the brain would need to function with discrete digital states in order to produce virtual function. But if it’s not the analog computational parts of the brain which create affects, as you and I believe contra Mike, then why would that matter? Couldn’t this machine accept signals from my thumb, process it, and then rely upon affect producing mechanisms to cause the pain? And if not, then how exactly do you consider pain to exist?

          30. “This is to say that each of us (and Searle) believe that both images and affects exist by means of something beyond algorithms alone.”

            That is not what I’m saying. Everything pertaining to the creation of the image comes from the algorithm. The display is just output that need not involve any further processing.

            As far as some entity with eyes seeing that image, it requires some form of display or transmission, but that is a distinctly separate matter. (Consider that two computers can share image information without it ever being displayed in any way.)

            To be clear, I completely disagree with any connection between displaying an image and what you’re calling affect.

            “So if my thumb gets whacked, information from the event goes to my brain and this incites some kind of non computer device which creates the horrible pain that I thus feel.”

            This is an unfounded assumption. Within the very broad definition of “computer” that you seem to be using, there is no reason to suspect anything “non-computer” is required.

            “This seems to violate your computational output stipulation.”

            I’m not sure what you perceive my “computational output stipulation” to be nor how it violates anything here.

            “Such an entity will not be referred to as “brain”, but rather as a product of what the brain does.”

            Do you just mean the difference between brain and mind here? (The latter being precisely what the brain does.)

            “But if it’s not the analog computational parts of the brain which create affects, as you and I believe contra Mike, then why would that matter?”

            Because you’ve apparently misunderstood what I’m saying. I agree with Mike that what the brain does (as an “analog computer”) is entirely responsible for all affects.

            “Couldn’t this machine accept signals from my thumb, process it, and then rely upon affect producing mechanisms to cause the pain?”

            The problem is that “affect producing mechanisms” is a complete hand-wave on your part, and you have no grounding for it, nor any neuroscience explanation to account for it.

          31. Interesting Wyrd. You’re instead saying that affect does exist by means of analog computation alone, just as Mike is? And even though you are a strong supporter of Searle’s Chinese room thought experiment regarding the concept of “understanding Chinese”, though Mike considers it to be a complete joke? Strange. Anyway let’s explore your position by means of my own affect based version of that experiment.

            If affect happens to be substrate independent as Mike and apparently you propose, then the computer that I’m now typing on could create something which experiences what I know of as thumb pain, that is if it were to process an applicable algorithm. (And note that I said only “process” here rather than “output what’s processed to something else”, or my own position.) Furthermore it should be possible for that very algorithm to be written out on paper such that, if processed (and I suppose that a properly trained human would need to process it by reading this information), it would thus create an entity which feels thumb pain. Mike seems to be holding his ground on this so far. Will you also own up to the sorts of implications which are associated with affect existing as processed information alone?

            One thing that’s notable about this conflict is that in the end it’s not a critical platform from which to support the models that I’ve developed. This is “hard problem” speculation. Conversely my own models are set up to describe our function regardless of how it is that affect happens to be produced. But it seems to me that any neuroscientist today who decides going in that the brain harbors no affect producing mechanisms, but rather only algorithms that thus produce affect, may be missing something. I don’t understand how someone today who understands how primitive modern brain understandings happen to be could be so certain about such a position, especially given its bizarre implications.

          32. “You’re instead saying that affect does exist by means of analog computation alone, just as Mike is?”

            Yes.

            “And even though you are a strong supporter of Searle’s Chinese room thought experiment regarding the concept of ‘understanding Chinese’, though Mike considers it to be a complete joke?”

            Where did you get that idea? From my blog post The Giant File Room: “So it’s a poor analogue for consciousness, is my point.”

            Searle’s point is that a computational system (computation per the CS definition) doesn’t seem capable of phenomenology, and because I’m skeptical of computationalism, I tend to agree on that point.

            But the brain is nothing like such a system despite sharing some of its characteristics.

            “If affect happens to be substrate independent as Mike and apparently you propose, then the computer that I’m now typing on could create something which experiences what I know of as thumb pain,”

            No. You’re still confused about the difference between digital computers and analog “computers.”

            Until facts prove otherwise, I utterly reject computationalism. I’ve written several dozen posts explaining exactly why.

            When I say substrate independent, I’m referring to an isomorphic analogue of the human brain, one with the same complex parallel network.

            “Will you also own up to the sorts of implications which are associated with affect existing as processed information alone?”

            Again, you’re confused about the difference between digital systems using algorithms and analog systems using signals. They are completely different. My most recent post, Magnitudes vs Numbers, discusses this explicitly.

            “But it seems to me that any neuroscientist today who decides going in that the brain harbors no affect producing mechanisms, but rather only algorithms that thus produce affect, may be missing something.”

            You are missing something: That those aren’t the only two options. (And I don’t think anyone thinks there are algorithms in the brain.)

          33. Wyrd,
            I agree with you that numbers exist merely as abstractions associated with human conscious function. This also means that there are no truly “digital” computers in the end. You’ve said the same I think in the post that you linked to. But yes it does still seem useful to classify the ultimately analog computers that we build as “digital”, that is given that we find it useful to design them to function in relatively discrete rather than continuous ways. Though wrong in the end, this does seem to be a reasonable heuristic.

            You’ll get no protest from me for saying that our machines are utterly primitive when compared against the function of life itself, or something which was created without being saddled with the burden of “understanding”. Definitely. But in the end you can’t hide behind how much more advanced living function happens to be engineered than the function of human developed machines.

            You’re currently making a positive claim that something like “thumb pain” will exist by means of information processing alone — evolved parallel analog processing though it may be. This assertion should naturally have implications to assess. Then conversely I’m saying that such processing alone won’t get the job done, and regardless of how analog or advanced such processing happens to be. Instead I’m saying that there must be some kind of physics which causes something to feel thumb pain, or a base mechanism that brain processing has evolved to animate.

            I wonder if you can think of anything else that the brain does, not by output device as I propose, but rather by advanced analog parallel processing alone? If affect happens to be the only such example, then wouldn’t that be a bit special?

          34. “I agree with you that numbers exist merely as abstractions associated with human conscious function.”

            That’s not quite right. I lean towards Platonism and the belief that numbers are real. How we deal with them is the abstraction. For example, if my field has 127 sheep, that number of sheep is a real thing (as are their heights and weights and what not). The abstraction grounds integers in set theory where they are the cardinal sizes of sets. (This differs from magnitudes, which are a whole other can of worms.)

            Simply put, abstractions refer to something real.

            “This also means that there are no truly “digital” computers in the end. You’ve said the same I think in the post that you linked to.”

            You seem to have misunderstood the point of the post. Yes, digital computers have analog aspects, but their designs explicitly discount those aspects. They wouldn’t work otherwise. Their behavior is explicitly digital.

            “But yes it does still seem useful to classify the ultimately analog computers that we build as ‘digital’, that is given that we find it useful to design them to function in relatively discrete rather than continuous ways.”

            I think you still don’t understand the difference between analog and digital computers. Nearly all the computers we build are digital (in the full sense of the word). Certainly all laptops, desktops, and mobile devices are. Just about every device commonly called a “computer” is. Very few people deal with analog computers (technicians and scientists, mainly).

            “Instead I’m saying that there must be some kind of physics which causes something to feel thumb pain, or a base mechanism that brain processing has evolved to animate.”

            I understand, but that’s just a guess on your part, and it’s one you haven’t grounded in anything factual. Further, a lot of your analysis seems to me to ignore the facts we do know.

            The implication that the brain does what it does without magic is demonstrated by (A) having no physical account for this putative magic, and (B) an increasing understanding by neuroscience of how the brain does work. As Mike has said many times, there’s no reason to think this understanding won’t grow.

            “I wonder if you can think of anything else that the brain does […] by advanced analog parallel processing alone?”

            Just everything. There are no facts that suggest otherwise.

            That all said, phenomenal experience is a bit of a mystery (the hard problem), but we currently have no reason to believe it won’t turn out to be physically explained. It may well turn out to be simply what it is like to be a sufficiently complex analytical (analog!) machine.

          35. Dear Mike, Eric, Wyrd and other commenters,

            Hello! I have enjoyed the marathon of reading your conversations here.

            All in all, I have hyperlinked these four posts in the following order under the heading “Related Articles” in my post entitled “SoundEagle in Debating Animal Artistry and Musicality” at http://soundeagle.wordpress.com/2013/07/13/soundeagle-in-debating-animal-artistry-and-musicality/

            (1) Hierarchy of consciousness, January 2021 edition
            (2) Layers of consciousness, September 2019 edition
            (3) Dimensions of animal consciousness
            (4) The problem of animal minds

            Mike, given the number of issues being broached by all of you and their salient details, you might want to consider expanding your current post and the other three posts accordingly by collating, summarizing, elaborating and/or categorizing those issues, considering that very few people will have the patience and/or intellect to wade through and digest all of the comments.

            You are very welcome to notify and advise me anytime if or when you encounter or think of something relevant or worthwhile that you would like to be mentioned, included or hyperlinked within my said post by leaving comment(s) there.

  7. I do see valence, in the sense of a measure of goodness for the organism, as a crucial element of the architecture from which consciousness arises. It is used to determine what action to take and what to pay attention to, and in so doing it brings everything into a single ‘currency’ so that options can be compared. Valence enables a current state of the organism to be translated into action and attention sets. I wouldn’t take seriously any model that doesn’t include valence or an equivalent (such as Friston’s surprisal, in a more mathematical formulation).

    I would then see consciousness as arising when all of that stuff – current state, valence, potential and actual action and attention sets – gets run through the same mill so that the organism can respond to its own mental processing. Perhaps this corresponds to Eric’s virtual second computer concept.

    In simple terms, low level processing characterises where the organism is in its state space (like where it is on a chess board). Valence creates a 3d terrain across that state space so that the organism can move (act) to improve its valence. In this sense valence acts like the fuel in the engine of behaviour and explains many of the social phenomena that physics doesn’t get much purchase on. Consciousness lets the organism reflect on (and modify) this view, which gets it out of local minima (ie stops it getting stuck in a rut). Infinite regress is avoided because it can deploy its low level processing (originally developed for simple learned stimulus-response action) to now travel through its own mental terrain.
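    To make that picture concrete, here is a minimal Python sketch, with an invented one-dimensional state space and a made-up valence function (higher is better). Greedy movement climbs the nearest valence peak and gets stuck there; an occasional reflective jump elsewhere in state space is what gets it out of the rut.

      import random
      random.seed(1)

      # Invented 1-D valence terrain: a poor local peak near x = 2 and a better one near x = 8.5.
      def valence(x):
          return -((x - 2) ** 2) * ((x - 8) ** 2) / 50 + 0.6 * x

      def act_greedily(x, step=0.5):
          # Low-level processing: move to whichever neighbour improves valence.
          return max((x - step, x, x + step), key=valence)

      def reflect(x):
          # Stand-in for reflecting on (and modifying) the view: probe a distant
          # point in state space and adopt it only if its valence is better.
          probe = random.uniform(0, 10)
          return probe if valence(probe) > valence(x) else x

      x = 0.0
      for t in range(50):
          x = act_greedily(x)
          if t % 10 == 9:          # occasional reflective jump
              x = reflect(x)
      print(round(x, 2), round(valence(x), 2))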

    1. Peter,
      Do you have any thoughts on the difference between instrumental learning that is habit acquisition and instrumental learning that is goal-directed? It’s something I’m struggling with right now. From the literature, it seems like goal-directed learning requires representations for the valence, but habit acquisition doesn’t, that is, it can happen model free, which makes me wonder how conscious a creature needs to be to have that capability.

      LeDoux asserts that there’s no convincing evidence for goal-directed learning in pre-mammalian vertebrates, and the few studies I’ve dug up so far haven’t conclusively contradicted him. For instance, teaching a goldfish to stay away from a painful stimulus, in one study, seemed to require repetition (habit acquisition?). It’s not something they acquired on one exposure.

      1. I have developed software to do instrumental (punishment/reward) learning in a simple case. Valence provides a uniform measure of goodness, positive for good results (eg +1), negative for bad results (eg -1), and this is used to learn what action to take in response to what input state. The absolute (unsigned) value of valence is then used to learn what to pay attention to – i.e. pay attention to anything that leads to a good or bad result, rather than a neutral one. This can start with a random initial state and rapidly converge to paying attention to the right things, and taking the right actions, in quite a striking way. It uses valence (as I have defined it here), but is not conscious.

        I think habits can follow the same learning pattern. Perhaps for those habits that are rather neutral (not leading to strong positive or negative outcomes), a factor is the extent to which we favour certainty of a neutral outcome versus preferring an element of random exploration (e.g. play, risk-taking). This can be factored into the learning model by including a random element in selecting between things with neutral (zero) valence.
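        For what it is worth, here is a rough Python sketch of that sort of learning loop. The states, actions, and world below are invented for illustration (this is not the actual software): signed valence teaches which action to take in each input state, its unsigned value teaches what deserves attention, and ties between equally valued (neutral) options are broken at random, supplying the element of exploration.

          import random
          random.seed(0)

          STATES = ["food_smell", "shadow", "breeze"]      # invented input states
          ACTIONS = ["approach", "flee", "ignore"]

          def world_valence(state, action):
              # Invented consequences: +1 good, -1 bad, 0 neutral.
              outcomes = {("food_smell", "approach"): +1,
                          ("shadow", "flee"): +1,
                          ("shadow", "approach"): -1}
              return outcomes.get((state, action), 0)

          value = {(s, a): 0.0 for s in STATES for a in ACTIONS}   # learned expected valence
          salience = {s: 0.0 for s in STATES}                      # learned attention weights
          LR = 0.2

          for _ in range(500):
              state = random.choice(STATES)
              # Greedy choice on learned valence; the random tie-break gives exploration.
              best = max(value[(state, a)] for a in ACTIONS)
              action = random.choice([a for a in ACTIONS if value[(state, a)] == best])
              v = world_valence(state, action)
              value[(state, action)] += LR * (v - value[(state, action)])
              # Attention tracks unsigned valence: attend to whatever matters either way.
              salience[state] += LR * (abs(v) - salience[state])

          print(sorted(salience.items(), key=lambda kv: -kv[1]))

        After a few hundred trials the neutral “breeze” state ends up with the lowest salience, which is the convergence on paying attention to the right things described above.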

        This, for me, provides the subconscious substrate upon which consciousness can arise. This is by taking all the data items in the above (input state, valence, expected valence for different potential actions and attention sets) and treating them all as inputs.

        It also includes being able to use part of this space as a scratchpad to try out potential actions (or action sequences), and their hypothetical valences and state outcomes, to communicate these to others or to receive training. Maybe this is where goal-directed learning comes in. We can explore unusual potential sets of actions, then tee up internal states that will drive different future actions, while treating the goal and hypothesised outcomes as part of the input information, alongside real world inputs. The important distinction here is that consciousness works with a representation of the valences mentioned in the first paragraph (eg expected future valence at some time in the future, with some probability), rather than actual valence, and with representations of potential actions.

        Once novel action sequences have been painstakingly put together by conscious reasoning, or communication from a teacher, not only can the input state/action relationship be subsumed into subconscious learning, but the sequencing from one event to the next action can also be automated (e.g. in learning to drive, then this becoming automatic).

        I think there is a good argument for all learning being subconscious, but all teaching being conscious. When we revise for an exam, we can consciously drag ourselves through the necessary material and exercises, but the underlying processes that actually embed learning are not under conscious control.

        1. Thanks Peter. The subconscious bit is what I’m wondering about. If pre-mammalians only learn subconsciously, then we can’t take their learning, in and of itself, as evidence for consciousness. The valences might be there, but in a nonconscious manner.

          I’m still making my way through the various studies that Feinberg and Mallatt cite to see if LeDoux’s assertion holds up. What I’m finding is that a lot of those studies are coming from people who appear heavily predisposed to find conscious feelings in their data. The more cautious ones seem to present a pretty stimulus-response bound form of learning.

          1. Yes I agree that learning, and the valence that drives that learning, are not sufficient for consciousness, otherwise we could claim it in some really simple learning systems.

            As already outlined, I consider consciousness to arise from reading, processing and amending that low level information, so that the organism can respond differently based on its own subconscious representations and processing. (To put it another way, it knows where it is in its own neurally represented state space, and where it can move to in that space).

            I want to add that richer self awareness results from structuring that representation so as to distinguish what is attributable to self and what to external world (or internal body state). This also helps a lot with learning, because you know what you have direct control of, versus what is external (or visceral) and you just have to respond to.

            While my main motivation for understanding consciousness is because there isn’t yet an agreed account, I do think it is going to advance machine learning, as there is strong synergy between learning and consciousness.

          2. Definitely on the machine learning. Representation free habit learning is actually model free reinforcement learning, a concept in machine learning. (I wish I’d had that realization myself, but one of the papers explicitly used the “model free RL” term.)
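             To make the connection concrete, here is a minimal Python sketch of the distinction, using the standard outcome-devaluation setup with invented numbers (not taken from any of the studies under discussion). The habit side is a cached action value learned model free; the goal-directed side consults a model of outcomes together with a representation of each outcome’s current valence.

               # Habit / model-free: a cached action value built up from past reward.
               habit_value = {"press_lever": 0.0, "do_nothing": 0.0}
               LR = 0.1
               for _ in range(100):                  # training: pressing yields food (+1)
                   habit_value["press_lever"] += LR * (1.0 - habit_value["press_lever"])

               # Goal-directed / model-based: a model of what each action leads to,
               # evaluated against the current valence of that outcome.
               outcome_of = {"press_lever": "food", "do_nothing": "nothing"}
               current_valence = {"food": 1.0, "nothing": 0.0}

               def goal_directed_choice():
                   return max(outcome_of, key=lambda a: current_valence[outcome_of[a]])

               # Devalue the outcome (say the food has since been paired with illness).
               current_valence["food"] = -0.5

               print("habit:", max(habit_value, key=habit_value.get))   # still presses the lever
               print("goal-directed:", goal_directed_choice())          # switches to doing nothing

             Only the goal-directed side needs the valence to be represented anywhere, which is the asymmetry in question.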

  8. I don’t know if someone has mentioned this before, but there is a paper on the evidence of conditioned behavior in amoebae. According to your cognitive hierarchy, instrumental behavior / sentience is number 3, but it is at least plausible that even amoebae show signs of instrumental behavior, as stated in de la Fuente et al. (2019). What are your thoughts on this?
    References
    De la Fuente et al. (2019). Evidence of conditioned behavior in amoebae.

    1. The description for that layer is one of the things I’d like to change in a newer version. The language here assumes far too much and somewhat begs the question by using words like “affects” and “sentience.”

      From what I’ve read, the evidence for instrumental learning in unicellular organisms is controversial. Simona Ginsburg and Eva Jablonka, in their book “The Evolution of the Sensitive Soul”, argue that the learning at that level amounts to sensitization and habituation and is not associative. I discuss it in more detail here: https://selfawarepatterns.com/2020/04/07/unlimited-associative-learning/ (Sorry for all the linking, but you keep asking interesting questions. 🙂 )

      But for its role in this hierarchy, it shouldn’t be seen in isolation from the layers below and above it. For instance, I know of no evidence that unicellular organisms generate predictive models of their environment. The layer itself is global instrumental / operant conditioning, what Ginsburg and Jablonka call unlimited associative learning. It involves crossmodal sensory associations as well as associations across the overall sensory and motor planning systems.
