Layers of consciousness, September 2019 edition

A couple of years ago, when writing about panpsychism, I introduced a five-layer conception of consciousness.  The idea back then was to show a couple of things.

One was that very simple conceptions of consciousness, such as interactions with the environment, were missing a lot of capabilities that we intuitively think of as belonging to a conscious entity.

But the other was to show how gradual the emergence of all these capabilities was.  There isn’t a sharp objective line between conscious and non-conscious systems, just degrees of capabilities.  For this reason, it’s somewhat meaningless to ask if species X is conscious, as though consciousness is something they either possess or don’t.  That’s inherently dualistic thinking, essentially asking whether or not the system in question has a soul, a ghost in the machine.

I’ve always stipulated that this hierarchy isn’t itself any new theory of consciousness.  It’s actually meant to be theory agnostic, at least to a degree.  (It is inherently monistic.)  It allows me to keep things straight, and can serve as a kind of pedagogical tool for getting ideas across.  And I’ve always noted that it might change as my own understanding improved.

Well, after reading Joseph LeDoux’s account of the evolution of the mind (while disagreeing with him on a number of important points), going through a lot of papers in the last year, and after many of the conversations we’ve had here, it’s become clear that my hierarchy has changed.

Here’s the new version:

  1. Reflexes and fixed action patterns.  Automatic reactions to sensory stimuli and automatic actions from innate impulses.  In biology, these are survival circuits which can be subject to local classical conditioning.
  2. Perception.  Predictive models built from distance senses such as vision, hearing, and smell.  This expands the scope of what the reflexes are reacting to.  It also includes bottom-up attention, meta-reflexive prioritization of what the reflexes react to.
  3. Instrumental behavior / sentience.  The ability to remember past cause and effect interactions and make goal-driven decisions based on them.  It is here where reflexes start to become affects, dispositions to act rather than automatic action.  Top-down attention begins here.
  4. Deliberation.  Imagination.  The ability to engage in hypothetical sensory-action scenario simulations to solve novel situations.
  5. Introspection.  Sophisticated hierarchical and recursive metacognition, enabling mental-self awareness, symbolic thought, enhancing 3 and 4 dramatically.

Note that attention has been demoted from a layer in and of itself to aspects of the other layers.  It rises through them, increasing in sophistication as it does, from early bottom-up meta-reflexes to deliberative and introspective top-down control of focus.

Note also that I’ve stopped calling the fifth layer “metacognition”.  The reason is a growing sense that primal metacognition may not be as rare as I thought when I formulated the original hierarchy, although the particularly sophisticated variety used for introspection likely remains unique to humans.

Some of you who were bothered by sentience being so high in the hierarchy might be happy to see it move down a notch.  LeDoux convinced me that what I was lumping together under “Imagination” probably needed to be broken up into at least a couple of layers, and I think sentience (affective feelings) starts with the lower one, although it increases in sophistication in the higher layers.

I noted that mental-self awareness is in layer 5.  I don’t specify where body-self awareness begins in this hierarchy, because I’m not sure where to put it.  I think with layer 2, the system has to have a body-self representation in relation to the environment, so it’s tempting to put it there, but putting the word “awareness” at that layer feels misleading.  (I’m open to suggestions.)

It seems clear that all life, including plants and unicellular organisms, has 1, reflexes.

All vertebrates, arthropods, and cephalopods have 2, perception.  It’s possible some of these have a simple version of 3, instrumental behavior.  (Cephalopods in particular might have 4.)

All mammals and birds have 3.

Who has 4, deliberation, is an interesting question; LeDoux asserts only primates, but I wouldn’t be surprised if elephants, dolphins, crows, and some other species traditionally thought to be high in intelligence show signs of it.  And again, possibly cephalopods.

And only humans seem to have 5.
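
For anyone who finds code easier to scan than prose, here is a minimal sketch of the hierarchy and the attributions above, in Python.  The layer names and the taxa mapping come straight from this post; the enum, the dictionary, and the helper function are just illustrative scaffolding, not a claim about how any of this is implemented in actual brains.

from enum import IntEnum

class Layer(IntEnum):
    """The five capability layers, ordered by increasing sophistication."""
    REFLEXES = 1       # reflexes and fixed action patterns
    PERCEPTION = 2     # predictive models from distance senses, bottom-up attention
    INSTRUMENTAL = 3   # sentience: goal-driven decisions from remembered cause and effect
    DELIBERATION = 4   # imagination: hypothetical sensory-action scenario simulations
    INTROSPECTION = 5  # recursive metacognition, mental-self awareness, symbolic thought

# Rough attributions from the post; the boundaries are fuzzy and contested.
HIGHEST_LAYER = {
    "plants and unicellular life": Layer.REFLEXES,
    "vertebrates, arthropods, cephalopods": Layer.PERCEPTION,  # some may reach 3, cephalopods maybe 4
    "mammals and birds": Layer.INSTRUMENTAL,
    "primates (possibly elephants, dolphins, corvids)": Layer.DELIBERATION,
    "humans": Layer.INTROSPECTION,
}

def has_layer(group: str, layer: Layer) -> bool:
    """Assume the lower layers are present whenever a higher layer is."""
    return HIGHEST_LAYER[group] >= layer

print(has_layer("mammals and birds", Layer.INSTRUMENTAL))  # True
print(has_layer("humans", Layer.DELIBERATION))             # True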

In terms of neural correlates, 1 seems to be in the midbrain and subcortical forebrain regions.  2 is in those regions as well as cortical ones.  LeDoux identifies 3 as being subcortical forebrain, although I suspect he’s downplaying the cortex here.  4 seems mostly a prefrontal phenomenon, and 5 seems to exist at the very anterior (front) of the prefrontal cortex.

Where in the hierarchy does consciousness begin?  For primary consciousness, my intuition is layer 3.  But the subjective experience we all have as humans requires all five layers.  In the end, there’s no fact of the matter.  It’s a matter of philosophy.  Consciousness lies in the eye of the beholder.

Unless of course, I’m missing something?  What do you think?  Is this hierarchy useful?  Or is it just muddying the picture?  Would a different breakdown work better?


47 Responses to Layers of consciousness, September 2019 edition

  1. Steve Ruis says:

    Re “I don’t specify where body-self awareness begins in this hierarchy, because I’m not sure where to put it.” This is grounded, I would think, in proprioception, the unconscious awareness of where our bodies are in space, which seems to come from a complex feedback mechanism. An example is that when we receive a Novocaine injection from a dentist, we often seem to lose the ability to talk. Our numb lips and gums do not provide the feedback our brains need to allow for speech.
    I suspect that many of our conscious abilities have roots in our physical abilities. My question is why we would become conscious of these sub- or unconscious abilities. There must be some evolutionary advantage.


    • On proprioception, I think that’s definitely part of it. And that seems like part of layer 2. The issue is I don’t want to use the word “aware” there. Maybe the right way to think of this is as a representation at layer 2 that the system becomes aware of in layer 3.

      On being rooted in our physical abilities, I think that’s an insightful observation. I’m struck by the fact that complex physical abilities seem linked to intelligence: primates, elephants (trunk), cephalopods, etc. Apparently, complex physicality requires a lot more intelligence.


    • James Cross says:

      “I suspect that many of our conscious abilities have roots in our physical abilities.”

      I agree about that being insightful. It reminds me of some observations from this article:

      https://aeon.co/essays/how-cognitive-maps-help-animals-navigate-the-world

      It seems to me that cognitive maps probably began with maps of the internal body and basic things like orientation. But the method of organization has been extended to all sorts of other things, like how we organize events in time, social structure, and even the organization of complex ideas.


      • The fact that the hippocampus and surrounding regions have turned out to be both a memory system and a map, I think, shows why memory originally evolved, and how grounded it is in its original functionality. There’s a reason that one of the ancient techniques for memorizing a speech involved associating various parts of it with parts of your house. (“In the first place…, in the second place…”, etc.)

        I know when I think about various web sites and blogs, I hold them in a kind of memory map. I think of my blog, yours, and others as existing in a physical landscape, sometimes related to the physical geography of the author (if I know it), but not always.


  2. Wyrd Smythe says:

    “In the end, there’s no fact of the matter. It’s a matter of philosophy. Consciousness lies in the eye of the beholder.”

    Indeed, and I’ve never been a fan of pigeon-holing — exceptions and fuzzy boundaries so often make it hard to lock down categories categorically.

    Who knows what kind of inner life an octopus or crow might have!


  3. paultorek says:

    Where does higher-order representation come in? Level 3?


    • Good question. Level 3 would be my guess too. At a minimum under HOT, I would think there would be HORs for the reflexive or habitual action plans (affective feelings), sensory images, and the bodily self. At this level, I don’t know that they would necessarily be in the cortex.


  4. Lee Roetcisoender says:

    “It’s a matter of philosophy.”

    Reminds me of a quote from Robert Pirsig:

    “Philosophologists, calling themselves philosophers, are just about all there are. You can imagine the ridiculousness of an art historian taking his students to museums, having them write a thesis on some historical or technical aspect of what they see there, and after a few years of this giving them degrees that say they are accomplished artists. They’ve never held a brush or a mallet and chisel in their hands. All they know is art history.

    Yet, ridiculous as it sounds, this is exactly what happens in the philosophology that calls itself philosophy. Students aren’t expected to philosophize. Their instructors would hardly know what to say if they did. They’d probably compare the student’s writing to Mill or Kant or somebody like that, find the student’s work grossly inferior, and tell him to abandon it. As a student Phædrus [Pirsig’s alter ego in the book] had been warned that he would ‘come a cropper’ if he got too attached to any philosophical ideas of his own.

    Literature, musicology, art history, and philosophology thrive in academic institutions because they are easy to teach. You just Xerox something some philosopher has said and make the students discuss it, make them memorize it, and then flunk them at the end of the quarter if they forget it.”


    • I’ve actually never taken a philosophy class. I might have, but I think I sensed that it might be like you describe, studying the writing of historical figures, with little actual new reasoning. It would be like making science students read Copernicus, Galileo, Newton, or Darwin directly, instead of just getting the modern synthesis of their ideas and trying to work with them.

      One of the nice things about being an amateur is that I can decide how much of a deep dive I want to do in any area, and stop once I get bored.


      • Lee Roetcisoender says:

        The biggest reason philosophy has such a bad rap is due primarily to the practice of philosophology. According to Whitehead, philosophy in general is nothing more than a footnote to the work Plato and Aristotle achieved.

        Philosophy and metaphysics just happen to be my thing; I’m genetically wired for it. Robert Pirsig was a major influence on my pursuit of the discipline because I could relate to his own burden of mental illness, having been afflicted by the scourge of depression myself. Mutations occur from time to time in the evolutionary process, all of which leads to the next evolutionary ontological level of experience. Once that change occurs, one can never go back. Pirsig saw that process as the natural progression of static quality as it continually discovers dynamic quality. Personally, I consider that process the answer to David Chalmers’ compelling question of “why conscious experience at all?” Everything here is a condition, and that condition is a possibility. That statement is not untenable; the science of physics clearly demonstrates its efficacy.

        If this is all a game of a superior intelligence, surely that intelligence could build an algorithm to achieve the same end game without sentience. Consciousness was an adjunct to my original research, but from what I’ve discovered, sentience is at the heart and soul of materialism. Homo sapiens have a real problem with that concept, but really it’s nothing more than a culturally instilled bias. What if René Descartes had uttered the words “I am, therefore sentience” instead of “I think, therefore I am”? “I am” would be the predicate to corroborate sentience as a fundamentally universal paradigm. The science of physics would look entirely different than it does today. Panpsychism doesn’t take away from science; what panpsychism does is enhance the understanding of ourselves in the grander scheme of a magnificent work of art.


  5. Mike, your level 1 looks like teleology to me (well, teleonomy), but in a previous thread you said you don’t buy teleology. What do you mean by that? Are you saying there’s no such thing, or whatever it is it’s unrelated, or what?

    *


    • James,
      I don’t buy teleology because I can’t see that it’s productive to ascribe purpose to natural systems. Evolved attributes can be adaptive, or not. If they are, they will enhance the persistence of the system in question, in other words, be naturally selected. This might give the appearance of purpose in a teleonomic sense. But a trait that is adaptive today might be maladaptive if the environment changes, and vice versa.

      Of course, we always use a shorthand when discussing evolved functionality, saying that the “purpose” is Y. Along those lines, biologists talk about “innovations” as though animals were entrepreneurially trying out new things, when in reality they simply have mutations that turn out to be useful. It’s all useful metaphorical shorthand, as long as we keep in mind that’s what it is.

      I think when we start speaking of evolution making predictions, implying it has forethought of some type, we’re crossing that line, and risking confusion about what we’re talking about.


      • Fair enough, but I suggest that you, like many philosophers, may be hung up on the familiar concept of purpose, the kind that requires “forethought”, the one typically referred to as teleology, and I suggest you are failing to take into account a real phenomenon, which some term as teleonomy (Dawkins uses the term archeopurpose). Rovelli seems to be working out the physics/math of this concept right now.

        The reason I harp on teleonomy is that it is a necessary concept to explain representation, qualia, and the hard problem.

        Another question: when you hear neuroscientists saying that the brain works by prediction, do you think they are talking about the kind of prediction that requires forethought? When you perceive a rose, what do you see as the role of prediction in that perception?

        *


        • I don’t really have a problem with teleonomy. I like the sound of archeo-purpose. It makes clear we’re not talking about intelligent purpose. Ultimately this is all subject to the limitations of language.

          “do you think they are talking about the kind of prediction that requires forethought?”

          Just to be clear here, you switched from purpose to prediction.

          Not necessarily. At the perception level, there is no forethought involved. It actually happens non-consciously and mostly involuntarily, although it is a mixture of innate and learned associations. We just get the result.

          If we get into layer 3 or 4, then the predictions can become more explicit, and therefore more likely to involve forethought.

          “When you perceive a rose, what do you see as the role of prediction in that perception?”

          When we recognize that the shape and color is a rose, that is a prediction, or perhaps more accurately, a prediction framework, which we may or may not utilize for our purposes.

          In truth, a rose probably seems vivid to us because we evolved to see ripe fruit as vivid, which tend to be red or yellow. The rose just benefits from that ripe fruit detector “misfiring.” (And yes, I’m making use of the shortcut language here. 🙂 )


          • At the perception level, there is no forethought involved. It actually happens non-consciously and mostly involuntarily, although it is a mixture of innate and learned associations. We just get the result.

            So do you agree that there is prediction here, at the no-forethought perception level? Because I’m pretty sure that is what those neuroscientists are saying.

            *


          • Yes, that’s what I meant.


          • Wait, what? So how is the no-forethought perception a prediction? As opposed to being an affordance for subsequent prediction?

            *


          • I need to sign off so my next reply won’t be until tomorrow, but what does no-forethought have to do with it? Are you saying that a prediction has to be planned? Why would that be?

            And remember, we were discussing forethought above in terms of purpose, not prediction. Or am I missing something?


          • I’m trying to figure out what people mean when they say that my perception of a rose involves prediction. The (Merriam-Webster) definition of predict is “to declare or indicate in advance”. What in my perception of a rose was indicated, and in advance of what?

            To say it again, the use of the term “prediction” implies two separate time points: 1. the time of prediction of an event and 2. the time of the event.

            For myself, I can reconcile prediction with perception by saying that the prediction happens at the time of creation of the mechanism which will recognize a specific pattern. When I perceive a rose it is because there existed, prior to the perception, a mechanism which takes certain specific input and recognizes that input as “rose”. This mechanism had to have been created for the purpose (either teleological or teleonomic) of recognizing a rose. The creation of the Mechanism is the prediction that, in the future, if the appropriate input happens, then a “rose” is the case.
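
            In code-ish terms, the picture I have in mind is something like this (purely illustrative, and the function and feature names mean nothing in particular):

# Illustrative sketch: the "prediction" is made at time 1, when the recognizing
# mechanism is created; the predicted event happens later, at time 2.
def make_rose_recognizer(expected_features):
    # Creating this mechanism commits, in advance, to the claim:
    # "if input matching these features ever arrives, then 'rose' is the case."
    def recognize(input_features):
        return input_features == expected_features
    return recognize

recognizer = make_rose_recognizer({"shape": "rose-like", "color": "red"})  # time 1: the prediction
saw_a_rose = recognizer({"shape": "rose-like", "color": "red"})            # time 2: the event
print(saw_a_rose)  # True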

            So if you take away the teleonomy, I don’t see how perception could work as a prediction.

            *


          • “What in my perception of a rose was indicated, and in advance of what?”

            You have to think in terms of what’s added to a system that only has reflexive automatic reactions. Recognizing a rose predicts that it matches a mental concept, as well as not matching a host of others. In other words, it’s not food, or a predator. In the case of humans, it’s not a tool (except maybe for mate seeking and retention).

            “To say it again, the use of the term “prediction” implies two separate time points: 1. the time of prediction of an event and 2. the time of the event.”

            The perception, using distance senses (vision, smell) would be the first event. The second event would depend on what was done with the information.

            “So if you take away the teleonomy, I don’t see how perception could work as a prediction.”

            Again, I don’t have an issue with teleonomy, the appearance of purpose from evolved mechanisms, just with teleology.


  6. James Cross says:

    I keep going back to the prefrontal synthesis thing in humans that seemed to produce an exponential leap in 3, 4, and possibly enabled 5 to happen.

    One problem with the layers is that certain neural capabilities might have manifested in different degrees that span across the layers. I assume you wouldn’t disagree with that. But it seems like there may need to be more of a matrix, with capabilities in various degrees specific to each layer. For example, adding PFS to layer 3 enables goal planning that might extend across days or years, with a large number of steps involved in reaching the goal. Whereas more basic planning without PFS, as exhibited by crows, might only consist of a few steps and need to be compressed to less than an hour. What beyond PFS this would be applicable to I’m not sure. The idea just occurred to me and isn’t completely thought out.

    Some others might be:

    1- Basic either/or logic
    2- Counting ability
    3- Sophistication of communication – how many and how refined the signs used are
    4- Social/cultural orientation

    By the way, on this last one, the layer approach seems to be oriented to a single-organism perspective, without consideration of the fact that organisms exist in communities (even slime molds).
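
    To make the matrix idea concrete, the shape might be something like the following Python sketch. It is purely illustrative: the capability names are just the ones listed above, and the entries are left blank because the actual degrees would have to be filled in per layer and per species.

# Illustrative shape only; the degree values would need to be filled in
# from observations (e.g. a crow's layer 4 planning horizon vs a human's).
layers = ["1 reflexes", "2 perception", "3 instrumental", "4 deliberation", "5 introspection"]
capabilities = ["planning horizon (PFS)", "either/or logic", "counting",
                "communication sophistication", "social/cultural orientation"]
capability_matrix = {layer: {cap: None for cap in capabilities} for layer in layers}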


    • Good points. The hierarchy is definitely an oversimplification. I’m reminded of something Brian Greene said in one of his books, that he sometimes first needs to present an idea in an oversimplified manner, then clean up afterward. There’s definitely a factor of that type involved here.

      For example, I largely ignore habits in this hierarchy, because it’s a capability hierarchy, and habits strike me largely as learned reflexes. But you need at least 3 to establish them, so once 3 is there, there’s no reason to explicitly list them. But they are definitely in the mix.

      On PFS, if I’m understanding it correctly, it should move an organism from 3 to 4. The problem is that I think 4 exists in much more than humans. I’m currently watching a documentary, ‘Bird Brain’, which is showing crows, parrots, and other birds doing things that I perceive to be full 4, solving complex problems with one try (i.e. not trial and error), apparently better than chimpanzees, and in some cases solving problems in social groups, working together.
      https://www.pbs.org/wgbh/nova/video/bird-brain/


      • James Cross says:

        Keep in mind I am not in any way disagreeing with the five layers. I’m just trying to add extra dimensions to them.

        You might be right that PFS is more associated with 4, but then where does the difference in degree come into play in the layers? A crow might be able to manage 3-4 steps, with all of the items required for a problem solution visible, but a human can manage many more steps and even retrieve items required for the problem that are not visible. Don’t get me wrong. I am really impressed with what crows can do, but clearly there is a major difference in scale of capabilities. A human 4 and a crow 4 are not the same.

        Also, when we look at this in terms of human evolution, a lot of this capability appears relatively recently after the brain had stopped increasing in size. (Actually Neanderthal brains apparently were larger than sapiens.) So there seems to be a qualitative difference not explained by just more neurons.

        Perhaps there is more involved in the maturation rate delay of the PFC than just PFS.

        Side question: Do you think language is required for 5?


        • Thanks for the clarification, but it would have been fine if you were disagreeing with it. I did ask for feedback after all. 🙂

          My suspicion has long been that the big difference between us and most other animals that can do 4, aside from sheer capacity, is symbolic thought. It enables us to work with concepts like hours, days, weeks, etc. If you think about it, planning far ahead without these frameworks would be pretty challenging. Symbolic thinking allows us to expand the scope of our planning far beyond our immediate individual sensorium.

          So, I totally agree that there’s a big difference between us and crows. But it likely takes the next level for that difference to be realized.

          I wouldn’t say that language is required for 5 so much as enabled by it. I’ve occasionally thought about adding a sixth layer for symbolic thought to explicitly chart the relationship. That’s not to say that language, and symbolic thought overall, doesn’t provide an enhancing feedback loop. Once language developed, I suspect ever increasing articulation and social skills became a selective trait.


          • Um, what do you mean by “symbolic thought”? How would you test for it in crows?

            *


          • I mean it in the anthropological sense, volitional use of symbols for either perceptions, actions, or other symbols, along with using those symbols in structures and hierarchies, to extend thought beyond immediate experience. Language, art, mathematics, etc.


          • When you talk about symbolic thought I need you to be more precise. Where are the physical symbols? Are they inside the brain? Or is symbolic thought a kind of thought necessary to create symbols like spoken words, written mathematical symbols, etc.?

            *


          • The physical symbols would be in the brain, as neural firing patterns, synapses, etc (in addition to whatever external version we might create). And your last sentence is true too. It’s not an either/or thing.


  7. A couple of things Mike. You’ve mentioned that your hierarchy is inherently monistic, though to me that just doesn’t seem right. Couldn’t someone propose one or more levels of your hierarchy in an “otherworldly” fashion?

    Then secondly you’ve implied that any conception of consciousness which is not continuous, but rather is defined to have discrete “on” and “off” states, would inherently reflect substance dualism. This also does not seem right. Couldn’t someone define a given aspect of worldly causal dynamics as “conscious” in itself, and thus a state which will or will not exist on the basis of that criteria alone? For example we say that certain things “live” while others do not, but also presume that there are causal dynamics of this world which make this distinction useful. So I’d think that we might also develop a useful definition for “consciousness” that isn’t continuous (sorry levels 1 and 2!), and yet is just as natural.

    Beyond “dualism” however, of course I’m one of the people who considers it useful to define “sentience” as consciousness itself. Thus I’m happy that your latest hierarchy gives it a bit more emphasis. It’s your architecture rather than mine however, so obviously take it where you like. From there it’s my obligation to consider your positions on the basis of your themes in the attempt to understand.


    • Eric,
      “Couldn’t someone propose one or more levels of your hierarchy in an “otherworldly” fashion?”

      Definitely. Someone could slide in an immaterial soul between a couple of the layers. The hierarchy as it stands doesn’t include anything like that, or posit it as necessary. But a determined dualist could find a spot for it.

      “Couldn’t someone define a given aspect of worldly causal dynamics as “conscious” in itself, and thus a state which will or will not exist on the basis of that criteria alone?”

      The question is what would that be? When reading your point, I thought about Dehaene’s global neuronal workspace. Someone could say that’s either there or it isn’t. Except that the GNW is a complex emergent phenomenon. It wouldn’t have just popped up one day fully formed. It would have needed an incipient proto-form, and then gradually increasingly sophisticated versions. At what point would we declare that a workspace now existed?

      “For example we say that certain things “live” while others do not, but also presume that there are causal dynamics of this world which make this distinction useful.”

      Life is an excellent example, because it’s also extremely difficult to precisely define. Consider viruses, viroids, and prions. These systems are right on the boundary between life and complex chemistry. Are they life? Proto-life? We define cells as life, and proteins by themselves as non-life, but these agents seem somewhere in between.

      “It’s your architecture rather than mine however, so obviously take it where you like.”

      Again, it’s meant to be a theory-neutral crutch. It’s most definitely not an architecture. For that, I’m currently favoring HOT-like theories, although that changes over time. If you want to consider it, you might think about how your model, or any other, might fit within it. (Or invalidate it.)


      • Mike,
        Well your assertions that your hierarchy is monistic, while something like my own position will be dualistic, shocked me. But I know you as a reasonable guy so I tried to remain calm about it. I guess that you exaggerated there a bit. Just as we presume no ghost in the machine for “life”, my own non-hierarchy also needn’t be dualistic.

        To help square your model with mine, we wouldn’t say that 1. Reflexes or 2. Perception are initial steps for a consciousness hierarchy. Both plants and our robots accept input and provide output, though we don’t consider there to be “anything it is like” to be them. So I don’t think it’s helpful for you to include that sort of thing in this list. Let’s leave that for panpsychists.

        James of Seattle has what I consider to be a “God’s eye” first stage, a “human eye” second stage, a “signal” stage 3, such as for hormones, an input and output signal stage for individual neurons, and a 5th stage for full integrated brains and computational devices. Still there isn’t inherently anything it is like to exist.

        I perceive that to enter in his 6th stage, which is to say “sentience” itself. Note that a computer may be said to have non-conscious goals. Well apparently he theorizes that such a machine might be set up to “represent” them in a way that a sentience based mode of function emerges as well. (This is as opposed to my own “hardware” based account for the creation of sentience/ consciousness.)

        If I were to create a hierarchy then it wouldn’t concern a sliding scale of consciousness. Sentience defines that completely in my model. Instead this hierarchy would concern a sliding scale of conscious functionality. Note that we could theoretically have an extremely advanced non-conscious computer, which also outputs a functionally useless and epiphenomenal level of sentience. It is from this foundation that we can observe ourselves and other functional sentient beings in order to better grasp the nature of functional consciousness. I’ve reduced this down to three forms of input, one form of processor, and one form of output.

        I’m more aligned with Feinberg and Mallatt than those higher order theories, which I consider to ignore associated fundamentals. As you know, just as electricity drives the function of our computers, and neurons drive the function of brains, I consider sentience to drive the function of the conscious form of computer.


        • Eric,
          On dualism, at this point, I don’t recall the exact context, but it might have come from the way you were describing the second computer. It seems like you went through a stage where you wouldn’t consider it a virtual machine, or in any naturalistic sounding manner. It had started sounding a bit spooky to me. Although in retrospect, it might have been a limitation of language, where “virtual” carried connotations you saw as pejorative. (Although I didn’t. The term is commonly used in IT.)

          “I’m more aligned with Feinberg and Mallatt than those higher order theories, which I consider to ignore associated fundamentals.”

          F&M definitely seem more first-order in their outlook. Looking at my hierarchy, I think they’d say layer 2 had exteroceptive and interoceptive consciousness, but that affective consciousness (sentience) wouldn’t arrive until layer 3. Their first criterion for affective consciousness is global instrumental learning. Although they might object to having layers 2 and 3 separate.

          F&M are also upfront that they’re not attempting to explain human-level consciousness, but primary or sensory consciousness. My issue with their description is that they lump affective consciousness under the “sensory” label. I think that’s wrong. Affects, whether lower or higher order phenomena, form in the motorium, not the sensorium. And I think their own criterion, requiring nonreflexive behavior, implies higher order circuitry.


          • Mike,
            Guilty as charged that I’ve been slow to warm up to the “virtual computer” term, thus giving you the impression that I was actually a dualist. I might have been associating it with the “simulation” term — simulations of weather do not produce what weather produces for example. Perhaps I didn’t want to imply that this computer, which is outputted by another computer, thus isn’t “a true computer”?

            Regardless, I now fully embrace the “virtual computer” term for consciousness. But then, if everyone and their brother has a consciousness theory, it’s strange to me that my model would be so different from mainstream models. It’s widely known that we have both “non-conscious” and “conscious” forms of function. Why wouldn’t other models then propose associated computers for this distinction? Instead mainstream theorists seem to have all sorts of trouble trying to delineate the point where non-conscious function changes into conscious function. Your own mono-computer hierarchy seems to be all about that, and so spans a chasm of theory between panpsychism and HOT.

            Do you not consider affect as input (or “sense”) but rather output (or “motor”), given that you perceive affect to be produced in the motorium? Hmm…. I don’t claim to know where it is that affect is produced, though it’s surely useful to consider it as input regardless of that, or at least architecturally. Pain is something that we feel and assess, which is to say input to the system, regardless of where it’s produced.

            On nonreflexive behavior, yes it should by definition require nonreflexive output. But this should naturally have begun in a primitive way, or the opposite of “higher order circuitry”. So I guess that I support F&M there too. To understand why, consider my proposal for the emergence of the conscious form of computer:

            Perhaps around the early Cambrian we have non-conscious organisms that are naturally unable to deal with more “open” environments all that effectively, or exactly where our robots have problems today. Some of these organisms should have started producing valence, merely in the manner that evolution produces new traits in general, or through serendipity. Thus there would finally be something it is like for something to exist, even if functionless for the organism. (Of course evolution has a knack for turning functionless things into functional things.)

            I theorize that this new conscious entity (and not the automatic “robot”) was randomly given control of output function in some capacity, and so it would have tried to do what made it feel best to the extent that it could. History tells us that it did so well enough to become adaptive as a new form of organism function.

            Note that beyond its valence input and limited muscle output function, neurons that have fired in the past would still tend to fire again. So here an incipient memory input to the conscious entity should inherently be set up as well. Adding at least some sense information (like hearing or smell, which would have originally provided the non-conscious organism information) would mean that the three forms of input (valence, sense, and memory), one form of processor (thought), and one form of output (muscle operation), would provide these organisms all of the basic categories that I theorize in the modern human, and potentially quite early.
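
            To make those categories concrete, here is a rough Python sketch of the conceptual architecture only (the class, the names, and the trivial rule inside it are placeholders, not an implementation claim):

from dataclasses import dataclass, field
from typing import List

# Conceptual sketch: three forms of input (valence, sense, memory),
# one form of processor (thought), and one form of output (muscle requests).
@dataclass
class ConsciousEntity:
    valence: float = 0.0                              # input 1: how good or bad things feel
    senses: List[str] = field(default_factory=list)   # input 2: e.g. smell, hearing
    memory: List[str] = field(default_factory=list)   # input 3: echoes of past neural firings

    def think(self) -> str:
        # the processor: interpret inputs and favor whatever should feel better
        return "try something different" if self.valence < 0 else "keep doing this"

    def request_muscle_operation(self) -> str:
        # the output: a request that the non-conscious brain actually carries out
        return f"request: {self.think()}"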

            In order to truly grasp the dynamics of my models, you’d need to put my ideas to work for yourself. Perhaps when you ask me questions you could also note to yourself what I might say. That should put you into the mindset of “active” rather than just “passive” learner in this regard. Then you could check yourself once I answer.


          • Eric,
            “Instead mainstream theorists seem to have all sorts of trouble trying to delineate the point where non-conscious function changes into conscious function.”

            I think the difference, as we’ve discussed before, is your model doesn’t get into the messy details, whereas many do at least attempt it. That’s what GWT and HOT have in common. (IIT unfortunately, as far as I can tell, doesn’t make that attempt.)

            Have you considered that your second computer may be equivalent to the global workspace in GWT, or the association of higher order representations in HOT? (I still think HOT advocates closing the door on consilience between HOT and GWT are being hasty, since the workspace might also be equivalent to all the HORs in HOT.)

            “Do you not consider affect as input (or “sense”) but rather output (or “motor”), and given that you perceive affect to be produced in the motorium?”

            I think it’s both input and output in that it’s an intermediate mechanism between the lower level motorium and higher level motorium, but this gets back to the idea of different brain regions talking to each other that you dislike so much. 🙂

            “On nonreflexive behavior, yes it should by definition require nonreflexive output.”

            What do you mean here by “nonreflexive” output? I ask because it doesn’t seem that it’s in the output that behavioral flexibility shows itself, but in the selection of that output. There is considerable fine tuning by the cerebellum, but it doesn’t seem to be conscious.

            We agree that valence is tied to behavioral flexibility. But you have a tendency to black box valence and then work from there. I’m not satisfied with that. I want to know what valences *are*, and to me the most plausible explanation is that they’re the signals from the reflexes starting to fire to higher level circuitry that decides which reflexes to allow or inhibit.
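
            A toy way to picture what I mean (everything here is a placeholder name or number, not an actual neural model; the thresholding is just to make the allow-or-inhibit idea concrete):

# Toy picture only: reflexes "start to fire" and signal upward; higher level
# circuitry weighs those signals and allows or inhibits each reflex.
# On this view, the valence just is that upward signalling.
def higher_level_arbiter(reflex_signals, learned_weights):
    decisions = {}
    for reflex, urgency in reflex_signals.items():
        # allow the reflex if its learned value times its urgency clears a threshold
        decisions[reflex] = (urgency * learned_weights.get(reflex, 0.0)) > 0.5
    return decisions

signals = {"withdraw": 0.9, "approach": 0.4}   # placeholder urgencies
weights = {"withdraw": 0.8, "approach": 0.3}   # placeholder learned values
print(higher_level_arbiter(signals, weights))  # {'withdraw': True, 'approach': False}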

            “In order to truly grasp the dynamics of my models, you’d need to put my ideas to work for yourself.”

            What do you think of my attempted reconciliation above?


          • Mike,
            I’m sure that conceptually you don’t consider there to be anything wrong with a person who observes human behavior, and so develops models associated with human function, though sans neuroscientific mechanisms. That’s psychology, right? Well that’s exactly what I’ve done. First I developed a broad psychological framework, and then armed with those positions I developed what may be referred to as my “dual computers” model of brain function. Such architecture “supervenes” upon neuroscience. This is to say that evolution created our function bottom up, though our models must work this out top down.

            In a perfect world neuroscientists would be founding their models upon established psychology. It’s only because the field is so troubled that neuroscientists have had to fabricate their own psychology. For example Lisa Feldman Barrett proposes that babies don’t feel until they are linguistically taught to feel. That’s the sort of thing which should naturally happen when we try to work things out backwards.

            I’d absolutely love for you to gain a working level understanding of my models, though my lectures alone shouldn’t be sufficient to provide them. Instead you’d need to try your conception of my ideas out so that the subtle nuances might be realized. For example, instead of asking me if the conscious form of virtual computer that I propose might be associated with GWT or HOT, you could try to answer this yourself. My answer is “No”, but rather than me telling you why, could you try to predict what I’d say? (In case you want to try, I’ve put a blockquote explanation at the end of this comment.)

            So you consider affect to exist as an intermediate between lower and higher level motoriums? Well that’s fine. Furthermore this seems contrary with my own brain architecture. In it the brain produces affect (it matters not where or how!), and this motivates a virtual computer that should do less than 1000th of one percent as many calculations as the brain which creates it. Of course I don’t dislike that different brain regions talk with each other. It’s just that my own model doesn’t get into any of that business.

            What do you mean here by “nonreflexive” output? I ask because it doesn’t seem that it’s in the output that behavioral flexibility shows itself, but in the selection of that output. There is considerable fine tuning by the cerebellum, but it doesn’t seem to be conscious.

            By “nonreflexive” I mean straight “conscious”. But it’s good to hear that behavioral flexibility is observed in output selection rather than output itself, since my model is indeed able to account for this. When muscle operation is consciously chosen, according to my model this doesn’t directly operate muscles. The conscious computer merely makes a request, as denoted with an arrow to the non-conscious processor. It’s the non-conscious computer that actually operates muscles, and so cerebellum fine tuning should be displayed. Another example would be in paralysis — request all you like, but the brain doesn’t cooperate.

            I want to know what valences *are*, and to me the most plausible explanation is that they’re the signals from the reflexes starting to fire to higher level circuitry that decides which reflexes to allow or inhibit.

            Once again, okay, though this position is contrary with my own brain architecture. I do not propose one computer that changes from non-conscious to conscious somewhere, but a neuron based computer that outputs a distinctly different valence based computer.

            What do you think of my attempted reconciliation above?

            I always appreciate reconciliation attempts for our positions from you Mike! In truth however, it’s not your agreement that I seek in the end. What I most desire is your expertise, though I’ll only get a full measure of that to the extent that you gain a working level grasp of my models. That’s where you should be able to assess them for yourself. So why do you think I’d say that GWT and HOT do not correspond with my own conception of consciousness? To check yourself, my own explanation follows:

            In truth these positions haven’t interested me given that I perceive them to be so restrictive. Consider my last comment’s account for the Cambrian evolution of consciousness/ sentience — it displayed “conscious” function that was way, way, more primitive than the function of modern birds and mammals! Even if examples of GWT and HOT consciousness are included under my own conception of consciousness, they simply do not have my founding premise. They do not propose the brain as a computer that produces a virtual computer, and that this second computer has phenomenal inputs of valence, senses, and memory, a thought processor that interprets them and constructs scenarios from which to feel better, and a muscle operation output that the brain monitors and facilitates.


          • Eric,
            Even within psychology, the dividing line between what is or isn’t part of consciousness is a contentious one. I highlighted the views of Ned Block the other day, and that was a largely psychology level discussion.

            “In a perfect world neuroscientists would be founding their models upon established psychology.”

            Ideally, neuroscientists would construct their models based on their evidence, and psychologists would construct theirs based on their evidence, and we’d eventually see some convergence. But when psychological models conflict with neuroscience, neuroscience wins. Sorry.

            Lisa Feldman Barrett is actually a psychologist who also works in neuroscience. (Frankly, I think the theories of any psychologist who isn’t at least well versed in neuroscience by now, should be viewed with caution.) So she seems to be working from your ideal perspective. However, when she attempted to find the neural basis for the common psychological models of emotions, she couldn’t. I don’t buy all her conclusions from that evidence, but that doesn’t mean we can reject the evidence itself.

            On your model and GWT and HOT, I honestly didn’t know, so I read your explanation. To me, it seems excessively focused on terminology rather than the actual concepts behind that terminology. But you’re not unique in that, as it’s the same sense I get with reading Hakwan Lau’s reasons why HOT and GWT can’t be reconciled.

            “In it the brain produces affect (it matters not where or how!),”

            It does to me.

            “Once again, okay, though this position is contrary with my own brain architecture. I do not propose one computer that changes from non-conscious to conscious somewhere, but a neuron based computer that outputs a distinctly different valence based computer.”

            So, my interpretation of this is that the second computer is emergent, exists as a higher level of organization or abstraction, from the first one. If so, then we’re only talking about things at different levels of abstraction. Your own model doesn’t get into the details of valences. Which is fine if that’s what you want. But if you then say that a description at a lower level is wrong, if you take a stance, then you have to actually get in and get your hands dirty and say why. At least if you want people to pay attention 🙂


          • But when psychological models conflict with neuroscience, neuroscience wins. Sorry.

            I do agree with you about that Mike. Neuroscience is highly respected while psychology is anything but (and I’d say for good reason). For many neuroscience might even be perceived as a “hard science”. Regardless it tends to lend credibility to people on the soft side of things. For example with a popular book about ethics and religion, what did Sam Harris do to give his emerging fan base a sense that he was “the real deal”? He went out and got himself a PhD in cognitive neuroscience. Actually he seems to have made lots of good moves in order to become such a popular public intellectual.

            Feldman Barrett provides another such example. If she merely had standard psychological evidence for her theory of constructed emotion, I don’t believe it would have gone anywhere — only neuroscientific work should have rendered that particular theory “respectable”. And I certainly wouldn’t call her approach “ideal”. I’m very much opposed to her position for its psychological implications, as well as that it was built from a mere void in apparent neurological signatures.

            In order for neuroscience to be of practical use, I believe that its evidence must essentially be interpreted in terms of psychological function, if not just physiological. And here’s the thing. If we’re clueless about psychology, then we shouldn’t be able to assess better or worse ways of interpreting such neuroscientific data. So consider the following heuristic: Psychology and neuroscience are joined at the hip, with neuroscience below and psychology above.

            If you decide that this isn’t a good assessment however, there does seem to be a potential way for you to demonstrate this. What useful bit of neuroscience (beyond physiology) can you think of that is not also useful in a psychological capacity?

            If it’s true that neuroscience can teach us about psychology, though we can’t effectively assess psychological models beyond neuroscience, then how might we cross-check our neuroscientific interpretations?

            I don’t believe that I quite said GWT and HOT were “wrong”, but rather that these single computer models conflict with my own dual computers model. Indeed, it could be that one or both of them happen to be extremely effective, and in that case my own should not be. But then I’d love to see if their psychological implications are able to explain the standard human function which my own position is able to!

            So, my interpretation of this is that the second computer is emergent, exists as a higher level of organization or abstraction, from the first one.

            Yes I would say that it’s emergent, though not at a higher level of organization or abstraction. That, for example, is how psychology emerges from neuroscience (among others) — an epistemic distinction. For this I’m instead talking about an ontological distinction, such as how a car is produced by a car factory. It could be that the valence produced is entirely “information”. (I have a friend who is trying to create valence by means of software alone given this premise. Speak of the devil — you’re talking with Peter just below!) But I suspect that it emerges more in the manner that a computer animates a computer screen. Note that this interpretation keeps me square with John Searle’s Chinese room.


          • Eric,
            I see Harris as a case of someone with academic credentials getting undue credibility in areas outside of his training. Although given his track record, not to mention the silliness his wife produced, I wouldn’t have much faith in whatever he wrote about neuroscience.

            At least Barrett is arguing within her field of expertise, but then I think she’s far less wrong than you do. My only real beef with her is the sharp line she draws between affects and emotions.

            “For this I’m instead talking about an ontological distinction”
            “But I suspect that it emerges more in the manner that a computer animates a computer screen.”

            Do you mean strong emergence? Or are you saying that the brain is producing some kind of field, energy, or ectoplasm? (We’re getting into the territory that led me to think of the dreaded d-word 🙂 ). Seriously, it feels like you need to flesh this part out. As it stands, it has an air of mysticism about it.

            “Note that this interpretation keeps me square with John Searle’s Chinese room.”

            Not really anything meaningful in my book. I find Searle’s thought experiment polemical, confused, and slipshod. As far as I can see, the only reason for his reputation is that he confirms popular biases.


            Okay Mike, let’s flesh this stuff out. I suspect that you wouldn’t admit that I’m more of a skeptic than you? So consider that you have personal motivations for what you believe, and I have personal motivations for what I believe. But that doesn’t quite square things. Here we display opposing positions where one should naturally be more effective than the other. If you can demonstrate that my biases have led me astray, then I’ll thank you for it. I presume it’s the same in reverse.

            The issue as I see it, and obviously check me on this, is that you believe that valences such as what a smashed thumb causes you to feel, exist entirely as information processing itself. Thus if the computer that you’re now looking at were to properly process the information that your brain does associated with smashed thumb input, the same output sensation would then be produced through it. Furthermore if the input information associated with this were written out on sheets of paper and passed to a person who looks up the same answers that your computer did, associated output sheets of paper would itself create what you know of as “thumb pain”. Is that right?

            Regardless I hold the bias that information processing alone will not create such valences. If your computer were to properly process the information that your brain does to produce thumb pain, I believe that this information would then need to be fed into a valence producing machine in order for the thumb pain to actually occur. This is similar to how it’s impossible for your computer’s processed information alone to produce screen images. Just as processed information is fed into a computer screen that’s set up to be animated by such information, properly processed thumb pain information would need to be fed into a valence producing machine in order for such an experience to actually occur. Furthermore I believe it would even be possible for written notes associated with the thumb pain information that your computer accepts, to be processed by means of lookup tables into new sheets of paper that would produce thumb pain, that is if they were fed into a valence producing machine which is set up to accept such information.

            I don’t believe that my position is inherently dualistic. If screen images can only emerge from screens, I don’t see how causal dynamics must fail if valences can only emerge from machines associated with those particular properties of physics, whatever they are. Perhaps some day we’ll even figure out the physics behind valence.

            So if it exists, where might such a machine be located in the brain? The work of F&M, and I suppose Jaak Panksepp, suggest it’s pretty basic. Thus my guess would be the brain stem. That’s well outside my expertise however. I’m an “architect” rather than an “engineer”.


          • Eric,
            Your bias is a very common one. The problem is that there’s nothing in the nervous system to substantiate it. Neurons are information processing mechanisms, and the brain is a composite information processing organ. There are support cells (glia) to deliver energy, neurotransmitters, sodium ions, and other raw essentials, as well as perform other functions, but the main action is signalling between the neurons, and more broadly between regions of those neurons. The system receives electrochemical spikes from the outside world through the peripheral nervous system, and produces spikes that are transmitted to neuromuscular junctions, as well as releasing chemical messengers for slower adjustments.

            We can describe this system at various levels of abstraction, but none of that will change the basic reality. The brain is an information processing organ. Everything it does is built on those signalling components.

            On the screen metaphor, it’s worth mentioning that the screen itself is an information appliance. Its job is to present information tailored to a human’s visual sensory systems. It simply converts electrical impulses into lighted pixels, generating photons that impinge on our retinal photoreceptors, which in turn convert the signal into neural electrochemical signals. People sometimes speak of computer devices as extending our mind. This is because we can choose to view the overall arrangement as an extended information system.

            A better example for non-information processing might be a robotic motor or other physical output, where the magnitude of the energy is more important than the transmitted or processed patterns. Again, the issue there is that the brain doesn’t do any of that. It completely depends on the rest of the body for that kind of activity.

            The idea of feelings happening in the brainstem is a popular one. I bought it myself a few years ago. It seems like an elegant conception. And I don’t think it’s entirely wrong. Many of the impulses that eventually produce feelings do begin there. But as I’ve learned more neuroscience, my views were forced to change. The generation of the valence may begin at lower levels, but data seem to show that the predictions about it, the modeling of it, the perception or feeling of it, happens in higher level circuitry.

            If our views don’t change on new information, and we don’t keep trying to learn new information, if we only keep trying to argue and defend our initial preconceptions because they once felt right, then we become frozen, no longer scientific, but ideologues. I was listening to an interview of Hakwan Lau, one of the neuroscientists whose work I’ve highlighted recently, and he noted one of the perils of working on theories is how easily they can blind us to disconfirming data, if we’re not very careful in the manner in which we hold them.


  8. PJMartin says:

    I do see valence, in the sense of a measure of goodness for the organism, as a crucial element of the architecture from which consciousness arises. It is used to determine what action to take and what to pay attention to, and in so doing it brings everything into a single ‘currency’ so that options can be compared. Valence enables a current state of the organism to be translated into action and attention sets. I wouldn’t take seriously any model that doesn’t include valence or an equivalent (such as Friston’s surprisal, in a more mathematical formulation).

    I would then see consciousness as arising when all of that stuff – current state, valence, potential and actual action and attention sets – gets run through the same mill so that the organism can respond to its own mental processing. Perhaps this corresponds to Eric’s virtual second computer concept.

    In simple terms, low level processing characterises where the organism is in its state space (like where it is on a chess board). Valence creates a 3d terrain across that state space so that the organism can move (act) to improve its valence. In this sense valence acts like the fuel in the engine of behaviour and explains many of the social phenomena that physics doesn’t get much purchase on. Consciousness lets the organism reflect on (and modify) this view, which gets it out of local minima (ie stops it getting stuck in a rut). Infinite regress is avoided because it can deploy its low level processing (originally developed for simple learned stimulus-response action) to now travel through its own mental terrain.
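    A rough illustration of the “terrain” picture (just a toy sketch; the landscape values, the neighbour function and the “reflect” probability are made-up assumptions, not anything from Peter’s actual model):

    ```python
    # Toy hill-climbing on a valence "terrain": the agent greedily moves to the
    # neighbouring state with the best valence (the single comparison currency),
    # and an occasional "reflective" jump keeps it from staying stuck in a rut.
    import random

    # hypothetical valence landscape over states 0..9 (higher = better for the organism)
    valence = [0.1, 0.3, 0.9, 0.4, 0.2, 0.5, 0.6, 1.0, 0.7, 0.0]

    def neighbours(state):
        return [s for s in (state - 1, state + 1) if 0 <= s < len(valence)]

    def step(state, reflect_prob=0.1):
        # ordinary behaviour: pick whichever reachable state has the best valence
        best = max(neighbours(state) + [state], key=lambda s: valence[s])
        if best == state and random.random() < reflect_prob:
            # "reflection": re-evaluate from somewhere else when stuck at a local optimum
            return random.randrange(len(valence))
        return best

    state = 0
    for _ in range(50):
        state = step(state)
    print(state, valence[state])  # where the agent ended up after 50 steps
    ```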

    • Peter,
      Do you have any thoughts on the difference between instrumental learning as habit acquisition vs goal-directed learning? It’s something I’m struggling with right now. From the literature, it seems like goal-directed learning requires representations of the valence, but habit acquisition doesn’t, that is, it can happen model free, which makes me wonder how conscious a creature needs to be to have that capability.

      LeDoux asserts that there’s no convincing evidence for goal-directed learning in pre-mammalian vertebrates, and the few studies I’ve dug up so far haven’t conclusively contradicted him. For instance, teaching a goldfish to stay away from a painful stimulus, in one study, seemed to require repetition (habit acquisition?). It’s not something they acquired on one exposure.
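      To make the contrast concrete for myself, here’s a toy sketch of the distinction as I understand it, using the classic outcome-devaluation idea (all names, numbers and the “lever/food” setup are illustrative assumptions, not from any of the studies mentioned):

      ```python
      # "Habit" learning caches stimulus-response values with no representation of
      # the outcome; "goal-directed" control consults a represented outcome whose
      # valence can change without any new stimulus-response experience.

      habit_value = {"lever": 0.0}          # cached S-R strength (model free)
      outcome_of = {"lever": "food"}        # goal-directed: action -> expected outcome
      outcome_valence = {"food": 1.0}       # current represented valence of that outcome

      def habit_update(action, reward, lr=0.1):
          habit_value[action] += lr * (reward - habit_value[action])

      def goal_directed_value(action):
          return outcome_valence[outcome_of[action]]

      for _ in range(20):                   # training: pressing the lever yields food
          habit_update("lever", reward=1.0)

      outcome_valence["food"] = -1.0        # devalue the outcome (e.g. food now paired with illness)

      print(habit_value["lever"])           # still high: the habit persists until retrained
      print(goal_directed_value("lever"))   # immediately negative: tracks the represented valence
      ```

      On this toy reading, behaviour that only shifts with further repetition looks habitual, which is part of why results like the goldfish study are hard to interpret as goal-directed.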

      • PJMartin says:

        I have developed software to do instrumental (punishment/reward) learning in a simple case. Valence provides a uniform measure of goodness, positive for good results (eg +1), negative for bad results (eg -1), and this is used to learn what action to take in response to what input state. The absolute (unsigned) value of valence is then used to learn what to pay attention to – i.e. pay attention to anything that leads to a good or bad result, rather than a neutral one. This can start with a random initial state and rapidly converge to paying attention to the right things, and taking the right actions, in quite a striking way. It uses valence (as I have defined it here), but is not conscious.

        I think habits can follow the same learning pattern. Perhaps for those habits that are rather neutral (not leading to strong positive or negative outcomes), a factor is the extent to which we favour certainty of a neutral outcome versus preferring an element of random exploration (e.g. play, risk-taking). This can be factored into the learning model by including a random element in selecting between things with neutral (zero) valence.
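        In rough code terms, a minimal sketch of that kind of learner might look like the following (an illustration of the idea only; the state names, valence values and learning rate are invented, not the actual software):

        ```python
        # Signed valence teaches what to do for a given input state; its unsigned
        # magnitude teaches what to attend to; ties between still-neutral options
        # are broken at random, which supplies the exploration element.
        import random

        actions = ["approach", "avoid", "ignore"]
        action_value = {}   # (state, action) -> learned signed valence
        attention = {}      # state -> how much that state is worth attending to

        def choose(state):
            values = {a: action_value.get((state, a), 0.0) for a in actions}
            best = max(values.values())
            return random.choice([a for a, v in values.items() if v == best])

        def learn(state, action, valence, lr=0.2):
            old = action_value.get((state, action), 0.0)
            action_value[(state, action)] = old + lr * (valence - old)
            # attend to states where some action has strongly good or bad consequences
            attention[state] = max(abs(action_value.get((state, a), 0.0)) for a in actions)

        # toy environment: approaching "green" is rewarded, approaching "red" is punished
        for _ in range(50):
            s = random.choice(["red", "green", "grey"])
            a = choose(s)
            v = {"red": -1.0, "green": 1.0, "grey": 0.0}[s] if a == "approach" else 0.0
            learn(s, a, v)

        print(choose("green"), choose("red"))    # typically: approach green, avoid or ignore red
        print(max(attention, key=attention.get))  # states with consequential actions outrank the neutral "grey"
        ```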

        This, for me, provides the subconscious substrate upon which consciousness can arise. It does so by taking all the data items above (input state, valence, expected valence for different potential actions and attention sets) and treating them all as inputs.

        It also includes being able to use part of this space as a scratchpad to try out potential actions (or action sequences), and their hypothetical valences and state outcomes, to communicate these to others or to receive training. Maybe this is where goal-directed learning comes in. We can explore unusual potential sets of actions, then tee up internal states that will drive different future actions, while treating the goal and hypothesised outcomes as part of the input information, alongside real world inputs. The important distinction here is that consciousness works with a representation of the valences mentioned in the first paragraph (eg expected future valence at some time in the future, with some probability), rather than actual valence, and with representations of potential actions.
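        A tiny sketch of the scratchpad idea (again purely illustrative; the internal model, the states and the valence numbers are assumptions made up for the example):

        ```python
        # Roll candidate action sequences forward against an internal model and
        # compare their hypothetical valences, instead of acting on real valence.
        from itertools import product

        # assumed internal model: (state, action) -> (next state, expected valence)
        model = {
            ("hungry", "forage"):  ("has_food", 0.0),
            ("hungry", "rest"):    ("hungry",  -0.2),
            ("has_food", "eat"):   ("sated",    1.0),
            ("has_food", "rest"):  ("has_food", 0.0),
            ("sated", "rest"):     ("sated",    0.2),
            ("sated", "forage"):   ("sated",   -0.1),
        }
        actions = ["forage", "eat", "rest"]

        def rollout(state, plan):
            """Simulate a plan on the scratchpad; return its total hypothetical valence."""
            total = 0.0
            for a in plan:
                if (state, a) not in model:   # action not applicable in this state
                    return float("-inf")
                state, v = model[(state, a)]
                total += v
            return total

        def deliberate(state, horizon=3):
            return max(product(actions, repeat=horizon), key=lambda p: rollout(state, p))

        print(deliberate("hungry"))   # ('forage', 'eat', 'rest')
        ```

        Once a sequence like that proves reliable, it can be handed off to the habitual machinery, which is the automation described in the next paragraph.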

        Once novel action sequences have been painstakingly put together by conscious reasoning, or communication from a teacher, not only can the input state/action relationship be subsumed into subconscious learning, but the sequencing from one event to the next action can also be automated (e.g. learning to drive, which then becomes automatic).

        I think there is a good argument for all learning being subconscious, but all teaching being conscious. When we revise for an exam, we can consciously drag ourselves through the necessary material and exercises, but the underlying processes that actually embed learning are not under conscious control.

        • Thanks Peter. The subconscious bit is what I’m wondering about. If pre-mammalians only learn subconsciously, then we can’t take their learning, in and of itself, as evidence for consciousness. The valences might be there, but in a nonconscious manner.

          I’m still making my way through the various studies that Feinberg and Mallatt cite to see if LeDoux’s assertion holds up. What I’m finding is that a lot of those studies are coming from people who appear heavily predisposed to find conscious feelings in their data. The more cautious ones seem to present a pretty stimulus-response bound form of learning.

          • PJMartin says:

            Yes I agree that learning, and the valence that drives that learning, are not sufficient for consciousness, otherwise we could claim it in some really simple learning systems.

            As already outlined, I consider consciousness to arise from reading, processing and amending that low level information, so that the organism can respond differently based on its own subconscious representations and processing. (To put it another way, it knows where it is in its own neurally represented state space, and where it can move to in that space).

            I want to add that richer self awareness results from structuring that representation so as to distinguish what is attributable to the self and what to the external world (or internal body state). This also helps a lot with learning, because you know what you have direct control of, versus what is external (or visceral) and can only be responded to.

            While my main motivation for understanding consciousness is that there isn’t yet an agreed account, I do think it is going to advance machine learning, as there is strong synergy between learning and consciousness.

            Definitely on the machine learning. Representation-free habit learning is actually model-free reinforcement learning, a concept in machine learning. (I wish I’d had that realization myself, but one of the papers explicitly used the “model free RL” term.)
