Higher order theories of consciousness and metacognition

Some of you know, from various conversations, that over the last year or so I’ve flirted with the idea that consciousness is metacognition, although I’ve gradually backed away from it.  In humans, we typically define mental activity that we can introspect to be conscious and anything else to be unconscious.  But I’m swayed by the argument that mental activity accessible to introspection, but that we never get around to actually introspecting, is nevertheless conscious activity.

I had thought the idea of consciousness being metacognition was essentially what HOT (higher order theories) of consciousness were all about, and so my backing away from the metacognition idea seemed to entail backing away from HOT.  However, a recent paper written to clear up common misconceptions about HOT points out that this is mistaken (page 5).

Just to review: metacognition is cognition about cognition, thinking about thinking, awareness of our own awareness, etc.  It’s essentially introspection and is necessary for introspective self-awareness.

HOT, on the other hand, posits that there are two types of mental representations. There are simple representations about the external world, such as the neural pattern that forms in the visual cortex based on signals from the retina.  This would be a first order representation.  First order representations are often associated with early sensory processing regions and are not themselves sufficient for us to be conscious of them.

Then there are representations about these first order representations.  These are second order, or higher order representations, and are often associated with the prefrontal cortex, the executive center of the brain.

Under HOT, the contents of consciousness are these higher order representations.  The higher order representation is us being aware, conscious, of the first order representation.  (To be conscious of the higher order representation itself requires another higher order representation.)  Our sense of inner awareness comes from these representations of the representations.
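
For what it’s worth, here’s a toy sketch of that layered structure in code. It isn’t drawn from any specific HOT model; the class names and fields are purely illustrative.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class FirstOrderRep:
    """A representation of something in the world, e.g. a pattern in the
    visual cortex driven by signals from the retina."""
    content: str

@dataclass
class HigherOrderRep:
    """A representation about another representation. Under HOT, these
    are the contents of consciousness."""
    target: Union["FirstOrderRep", "HigherOrderRep"]

apple = FirstOrderRep("red apple in the left visual field")
aware_of_apple = HigherOrderRep(target=apple)                # conscious of the apple
aware_of_awareness = HigherOrderRep(target=aware_of_apple)   # introspecting that awareness
```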

Given that I’ve often pondered that qualia are the raw stuff of the communication from the perceiving parts of the brain (where the first order representations are) to the planning parts of the brain (where the second order representations are), HOT strikes me as very plausible.

Crucially, higher order representations, despite their name, are much more primal in nature than metacognition.  The paper does admit that they likely share some common functional areas, but metacognition is a more sophisticated and comprehensive activity.  It strikes me as very likely that metacognition is built with recursive higher order representations.

My only reservation with HOT, and I’m sure various specific versions handle this, is that not just any higher order representation is going to be conscious.  It will depend on where in the information flows that representation is formed.  That, and the higher order representation shouldn’t be thought of as a simple echo of the first order one, but as an informational structure with its own unique functionality.

Ultimately HOT will have to be judged by how well it matches observations, but a nice implication of it is that inner experiences aren’t ruled out for species that show little or no evidence for metacognition.

53 thoughts on “Higher order theories of consciousness and metacognition”

  1. Excellent discourse on HOT and metacognition, but it strikes me that MC is a subset of HOT (i.e., MC might well be an nth-order representation of sensory perceptions) unless you could give some examples of MC that are independent of our sensory perceptions. I’d like to read more about the MC vs HOT differences.


    1. Thanks! I think you’re right that metacognition is a subset of higher order representations (HOR). The point wasn’t that they aren’t related, they are, but that they’re not equivalent. I think of it as MC is an application built on top of many layers of HOR.

      I haven’t looked at them, but the paper cites a few other papers when it states that most HOT proponents don’t consider them equivalent.


      1. And the second order, or higher order representation of a headache is? … [drum roll] …

        Or do HOT theorists believe that we don’t experience headaches?


  2. I’m a big fan of your hypothesis that qualia are the raw stuff of the communication from the perceiving parts of the brain to the planning parts. And that makes HOT appropriate to qualia, but not necessarily to other aspects of consciousness. For example, I think we are conscious both of temperature-as-objective-property AND of subjective hot/cold qualia, and that these can diverge in one single overall experience.


    1. Thanks Paul.

      Have you ever considered the qualia of abstract concepts? For example, when you mentioned temperature-as-objective-property, I imagined in rapid succession a thermostat, the temperature display on wunderground.com, and then temperatures along a number line. Every abstract concept in my mind seems to have a corresponding image, generally a metaphorical one, but still an image.


  3. If there are two types of mental representations: first order, qualia and perceptual processing stuff; second order, being aware, conscious, of the first order;

    Where do the following fit:

    1- A relived memory of a pleasant/unpleasant experience
    2- An anticipated pleasant/unpleasant experience
    3- Working on a crossword puzzle
    4- My perception of a vehicle in front of me while I am driving and paying attention to the music on the radio
    5- Adding a column of numbers
    6- A false memory of being abducted by aliens
    7- A dream of flying


    1. James,
      Here are my shots at them, but don’t take them as anything authoritative since I haven’t read extensively on HOT.

      1 and 2 are imaginative simulations coordinated by the pfc (prefrontal cortex). The overall framework that links the series of images is probably a hierarchy of higher order representations, each linking to a retroactivated perceptual image in the sensory cortices.

      3 is similar to 1-2, but in this case the simulation is in a tight feedback loop with ongoing sensory input. The appreciation of the image of the puzzle is a HOR (higher order representation) which references the perceptual image in the visual cortex. But the effort to find words involves retroactivating a lot of word images. The language centers are also heavily involved.

      In 4, the HOR is accessing retroactivated images related to the music. There is a first order image of the car ahead in your visual cortex, but your pfc is probably not accessing it. If the car hits its brakes, your basal ganglia might jolt the system into attending to what’s happening.

      5 seems similar to 3.

      6 is similar to 1 and 2, albeit with an inaccurate model of reality.

      On 7, I have no idea. I’ve read that in dreaming, the pfc is often not very active. The problem is that it is active when we remember the dream. How can we know whether we’re remembering experiences from the dream or confabulating them after we wake up?


      1. The statement was that there are two types of mental representations, so I expected an answer saying that each of my examples would be either the first or the second.

        Maybe you are saying they are all second order?

        Saying that doing a crossword involves “retroactivating a lot of word images” seems to be really straining to discover an “us being aware, conscious, of the first order representation”.


        1. James, I’m pretty sure that the suggestion is that, like you said, all conscious representations are second order. The first order representations are somewhere in the system, but they do not generate consciousness.

          *


        2. I think James caught the main point. I’d say that a conscious percept requires both the first order representation and the second or higher order one. In some ways I’m a little nervous even calling the higher order representation a representation. To me they seem more like cognitive linkage systems.

          On straining, despite what some philosophers insist, this stuff isn’t simple. We should expect a lot of the mappings to be complex.


          1. I am stuck on the question of the first order representation in the context of finding the answers in a crossword puzzle.

            Certainly the physical grid of the crossword is a first order visual perception, but I am talking about the mental activity of discovering a word.

            Take the word “cat”. There is the sight of the word and the sound of the word. At some point early in my life, I began to associate the sound with the animal. Later I associated the printed word with the sound and the animal. Over the years I have gathered various associations of actual cats with the word.

            However, the word “cat” is a symbol represented as a pattern of ink on a page or screen or a particular sequence of sounds. It is abstract in that it does not refer to any particular cat but to the abstract concept of a cat. We could use “gato” to refer to the same concept/animal with a different pattern of ink and different sequence of sounds.

            So when I work on the clue “Tabby”, I don’t think I am “retroactivating word images”. I think I am associating either the word “tabby” with “cat” (two abstract symbols associated because commonly said together) or perhaps thinking of a particular pattern of stripes (retroactivating maybe an image of the fur of a cat) and associating it to the physical animal and then to the symbolic representation in the word “cat”. The last phase of adding the answer to the puzzle consists of three abstract letter symbols that comprise the word itself, which does not refer to a particular cat or to a particular viewing of the word in my first grade reader. The end result may be a sequence of patterns written into the grid, but the process along the way mainly involved symbol recall and association.

            To me it is a stretch to argue that symbols, symbol manipulation, various problem solving activities, and logical/mathematical thinking involve first order representations. Certainly if at some point first order representations were involved, the associations have long been disconnected from them. There is nothing first order that I can see that associates the symbol “cat” with the concept of the animal.


          2. So, introspect carefully on this. Can you really think about the cat concept without visualizing a cat, imagining the sound of the word, or visualizing the word and letters in your mind? I can’t.

            It seems like, once learned, the associations between objects, words, and letters happen unconsciously. We’re only conscious of the results. Granted, establishing those associations originally did take conscious effort, but even in that initial period, were you ever able to think of the abstract concept of a cat without some sensory imagining?


          3. James Cross, I would suggest to you that first order representations are symbols (in a proto-semiotics sense). Second order representations are also symbols. So the question is, what makes one “first order” and another “second order”?

            It may be that first-order representations are simply unitary, that is, symbolize a single concept. So for example, suppose you are given three unitary symbols: cat, red, small. And from these you create a second order symbol which combines them, namely, a drawing of a small red cat. You could then, if you wish, give a name to this particular concept, say, Bilmo, the small red cat. In the future, you could use “Bilmo” as a unitary first-order representation. If you wish, you could break down “Bilmo” into its parts, and then break down those parts, etc. I think Mike might say that if you keep breaking them down, eventually you get to primary sensory parts. I’m not sure this is true, but it seems worth exploring.
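
            A toy sketch of that composition idea (everything here, including the lexicon format, is invented purely for illustration):

            ```python
            # Unitary first-order symbols.
            lexicon = {
                "cat":   {"kind": "unitary"},
                "red":   {"kind": "unitary"},
                "small": {"kind": "unitary"},
            }

            def compose(name, parts):
                """Combine symbols into a higher order symbol, which can then
                itself be reused as if it were unitary."""
                lexicon[name] = {"kind": "composite", "parts": parts}

            def expand(symbol):
                """Break a symbol down into its parts, recursively, bottoming
                out at unitary (perhaps ultimately sensory) symbols."""
                entry = lexicon[symbol]
                if entry["kind"] == "unitary":
                    return [symbol]
                return [p for part in entry["parts"] for p in expand(part)]

            compose("Bilmo", ["small", "red", "cat"])
            print(expand("Bilmo"))  # ['small', 'red', 'cat']
            ```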

            *


  4. “[O]ver the last year or so I’ve flirted with the idea that consciousness is metacognition, although I’ve gradually backed away from it.”

    I have a question: what is the difference here between consciousness and cognition (meta or otherwise)?


      1. Sure, but what I wanted to know is how cognition can occur in something that isn’t a conscious mind to begin with.

        If consciousness is defined in terms of cognition, or meta-cognition, isn’t that circular?


        1. Hmmm. Well, since cognition can be unconscious, it seems like it doesn’t necessarily require consciousness. Of course, this depends on what we mean by “cognition”. For example, most people probably wouldn’t consider the medulla’s autonomic functions to be cognition.

          I would define consciousness as cognition that is within the scope of introspection, in other words, cognition that can be accessed metacognitively (at least in humans).

          Although I have to admit that if you scan the archives of this blog, you’ll probably find me using “consciousness” and “cognition” somewhat synonymously, so I’m not sure how precise either term really is. (Or maybe my usage was just sloppy.)


          1. “For example, most people probably wouldn’t consider the medulla’s autonomic functions to be cognition.”

            That’s getting to what I’m asking about.

            We’re a little confounded by two meanings of “conscious mind.” One means the conscious state, as opposed to the unconscious state; the other means a sapient mind (in any conscious state). Here I meant the latter.

            To me, cognition (in a conscious, unconscious, or sub-conscious state) is a property only of a conscious (sapient) mind. Defining consciousness in terms of cognition seems like putting the cart before the horse.

            I think they are synonymous, given that neither is well-defined.


        2. It’s possible that there is more than one “conscious” entity within the brain. Damasio refers to different selves, the big one being the autobiographical self. Marvin Minsky wrote a book called The Society of Mind with basically the theme of multiple agents working within the brain. So some events might be subconscious or unconscious relative to, say, the autobiographical self.

          *


          1. One of the things that I find interesting is that the superior colliculus in the upper brainstem seems to form low resolution colorless visual images for its processing. But we have no conscious access to those images, only our reflexive reactions to them. Our conscious access to visual imagery is limited to the higher resolution color ones in the cortex.


          2. Wyrd, what I’m getting at is that if there are different agents doing conscious-type things, the consciousness of one is invisible/subconscious/unconscious to the others. If there is an agent we can call the autobiographical self which gets inputs from other agents, the autobiographical self doesn’t (have to) know anything about the other agents. All it knows is the inputs it gets.

            *


    1. The real thing as in the external reality? Well, we only know an external object through a theory, a model, a representation. Our only measure of these representations is in how accurate the predictions they enable turn out to be. The more accurate, the more it seems we can trust that the representation is an accurate echo of reality. Of course, that’s my theory of how this works, which can only be judged by how accurate its predictions are….


        1. Donald Hoffman, in his theory, is conflating accuracy with completeness. He’s basing those comments on running competing artificial life forms. To some he gave only enough information to recognize a pattern; to the others he gave all the information available. The ones using less information did better.

          I compare it to giving one person some information (really big animal, stripes, two eyes looking at you intently) and giving another person all the information, every whisker, every hair, the length and weight of every limb, every tooth, every muscle, etc. Which one will finish processing and start running first?

          *


          1. “conflating accuracy with completeness”

            Not exactly. Completeness is part of it, but actually he is saying that perceptions generally will not resemble what they are representing. So the big animal may not really have stripes or two eyes. He compares perceptions to a user interface:

            “A successful user interface does not, in general, resemble what it represents. Instead it dumbs down and reformats in a manner useful to the user. Because it simplifies, rather than resembles, a user interface usefully and swiftly informs the actions of the user. The features in an interface usually differ from those in the represented domain, with no loss of effectiveness. A perceptual user interface, simplifying and reformatting for the niche of an organism, gives that organism an adaptive advantage over one encumbered with constructing a complex approximation to the objective world. The race is to the swift; a user interface makes one swift by not resembling the world.”


        2. There is accuracy and there is accuracy. Our representations are accurate to a certain level of adaptiveness. Obviously they don’t tell us about all the electromagnetic fields in the environment or other things that historically weren’t adaptively relevant for us. They increase the accuracy of predictions of future conscious perceptions. When I use the word “accurate” in this context, that’s what I mean.

          Hoffman is selling idealism, in the sense of there being no objective reality, and I find a lot of the way he describes things oriented toward that advocacy.


      1. Accuracy is a side issue; I’ll get back around to it :).
        What I’m going to maintain is:
        1) that our “sense impressions” are primary, and it is not helpful to speak of sense impressions and internal vs. external reality in the first place.
        2) Despite being primary – the grist of our theories – our experiences are not properly basic or brute facts or whatever other term you want to use to indicate lack of composition.
        3) There is only one brute fact, which is a sort of orientation, which is consciousness, and that is all we have license to say about consciousness.
        So, when we theorize about the composition of consciousness, we will inevitably be off a little bit.
        But in the case of consciousness, being off the mark a little bit is the same as being completely mistaken.


        1. “But in the case of consciousness, being off the mark a little bit is the same as being completely mistaken.”
          What brings you to this conclusion?

          Myself, I think consciousness is a complex suite of information processing capabilities. So being mostly right strikes me as being only a little wrong. But I say this as someone who thinks the hard problem is an illusion and that there are only “easy” problems. (“Easy” in quotes because they remain pretty difficult, albeit scientifically tractable problems.)


          1. In terms of descriptive accuracy, Jonathan Edwards, Bishop Berkeley, and Daniel Dennett can all give a workable account of what constitutes consciousness.
            But if one is even a little bit right the others must be totally wrong.
            How would you prove it?


          2. Dennett’s version is at least somewhat falsifiable. There’s no way to test idealism, so I find it best to ignore it. If reality is an illusion, it exacts painful consequences for not taking it seriously, and it’s easier to just play the game.


          3. They are all just-so stories. Eliminative theories operate on a more sophisticated level, but I don’t see how they are any more falsifiable.
            They must finally rely on some kind of self-report, which is what they claim is unnecessary in the first place.


  5. Mike, you said:
    “Given that I’ve often pondered that qualia are the raw stuff of the communication from the perceiving parts of the brain (where the first order representations are) to the planning parts of the brain (where the second order representations are), HOT strikes me as very plausible.”

    I’m wondering if you have any specific ideas about how this might work. In what sense are qualia “raw stuff”, and in what sense are they communicated?

    *


    1. By “raw stuff”, I just mean that it’s the communication, the electrochemical nerve impulses between the sensory processing regions of the brain and the movement planning regions.

      I do have to be careful here. I don’t want to imply that the communication is one way, like a cable TV signal. It very much isn’t. The frontal lobes are constantly communicating with the parietal, occipital, and temporal lobes (through the thalamus 🙂 ). The frontal regions receive a signal from the posterior association cortex in the parietal lobe of, say, the view of a house. The frontal lobes then retrieve details of this house view, some higher level ones from the posterior association cortex, but more lower level details from the earlier sensory regions. All in an ongoing loop, one that happens so fast that our metacognitive model of it is of a Cartesian theater.


  6. This is a general request for help. I think I’m coming around to understanding how Global Workspace Theory might work, and it seems compatible with HOT. The referenced paper suggests there are differences. Does anyone feel comfortable comparing them? Here’s how I see them working:

    Sensory signals, separately and in combinations, become available to a mechanism. These are the “first order” representations. An attention mechanism chooses among a small number of these and then creates a neural firing pattern in a workspace that represents that subset of representations. This new firing pattern is the second order representation. These workspace neurons “broadcast” that pattern, but not necessarily “globally”. Instead they could just broadcast to a number of other mechanisms which care about what shows up in the workspace.
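
    In rough pseudo-Python, here’s the flow I have in mind (every name below is a placeholder for one of the mechanisms above, not a claim about any actual GWT implementation):

    ```python
    def attend(first_order_reps, k=2):
        """Attention mechanism: select a small subset of the available
        first order representations (here, simply the k most salient)."""
        return sorted(first_order_reps, key=lambda r: r["salience"], reverse=True)[:k]

    def broadcast(workspace_pattern, subscribers):
        """The workspace broadcasts its pattern, not necessarily globally,
        just to the mechanisms that care about what shows up there."""
        for mechanism in subscribers:
            mechanism(workspace_pattern)

    sensory = [
        {"content": "spider on arm", "salience": 0.9},
        {"content": "radio music",   "salience": 0.4},
        {"content": "car ahead",     "salience": 0.2},
    ]

    workspace = attend(sensory)                 # the second order pattern in the workspace
    broadcast(workspace, subscribers=[print])   # stand-ins for memory, planning, report, etc.
    ```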

    So I guess Global Workspace Theory would say only those things showing up in the workspace become conscious, whereas HOT would say there could be other second order representations that become conscious without showing up in the workspace.

    Do I have that right?

    *


    1. I have to admit that my own grasp on these isn’t strong enough to answer, although your account sounds plausible to me. Sounds like we both could do some more reading on this.

      I would just add that attention can be either bottom up (such as realizing that a spider is on your arm) or top down (such as the decision to focus on reading this comment). The bottom up aspects are more reflexive and sub-cortical in nature, while the top down ones seem driven by the executive planning regions.


      1. I’m not so sure I like the ideas of top and bottom here. I see what you are calling the bottom, namely reflexive influences on attention, as more (perhaps evolutionarily) primal, but I don’t think they are sub-cortical. In my proposed model just above, the sub-cortical parts would be mechanisms that are watching what comes up in the workspace and then reacting accordingly, possibly influencing future attention. Actually, that might be a good neurology question: does the effect on attention happen before or after perception?

        In any case, I see the reflexive mechanisms as simply competing with the executive influences, which presumably evolved later.

        *
        [And you can guess where I think the Global Workspace is. Actually, my current guess is that the first-order representations are inputs from cortical columns to the thalamus, and the thalamus builds the second-order structure in some other part. Hypothalamus?]


        1. Ack. I just read what I wrote. Yes, sub-cortical competes with pre-frontal for attention. Still don’t like Top v. Bottom. Would like to know how much sub-cortical happens pre-workspace.

          *


        2. Again, my knowledge of GWT is limited to what I’ve read in a few papers and the wikipedia article. It seems roughly equivalent to working memory (although all sources stipulate that this isn’t necessarily true).

          I said “sub-cortical” above for reflexive bottom up attention. When I said that, I was referring to impulses from the mid-brain region (notably the superior colliculus) that determine reflexive eye movement, head turning, and other forms of overt attention. That’s what I meant by bottom up. I probably could have been more precise if I had said “sub-cerebral”.

          I think most neuroscientists see top down attentional impulses as coming from the frontal lobes, notably the prefrontal cortex, but some may also come from the premotor cortex.

          You’ll like this. The focal point for attention is the thalamus, particularly the pulvinar nuclei. But I think it’d be wrong to consider the pulvinar to be “in charge” of attention. It seems like the focal point, where all the impulses coalesce and then project out again. (The name “pulvinar” is Latin for a cushioned seat.) It’s one of many focal points in the brain.

          But as to where the global workspace is, I suspect most GWT proponents will see it existing either in the frontal lobes or the posterior association cortex (or both). For working memory in particular, I tend to think a pointer table of sorts for it is maintained somewhere in the frontal lobes, possibly the prefrontal or premotor cortex. Most higher order theories posit these areas for the higher order representations, but it sounds like it isn’t unanimous. (Wherever they are, there needs to be enough computational substrate there to handle them, which I think means somewhere in the cortex.)
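
          To be concrete about what I mean by a pointer table, here’s a cartoon version (the keys and values are made up; none of this is meant as an actual neural claim):

          ```python
          # Perceptual representations held in sensory regions.
          sensory_cortex = {
              "V4_house": "higher level image of the house",
              "A1_radio": "sound of the music on the radio",
          }

          # Working memory as a small table of pointers into sensory regions,
          # maintained (on this guess) somewhere in the frontal lobes.
          working_memory = ["V4_house", "A1_radio"]

          def recall(slot):
              """Dereference a working-memory pointer by retroactivating the
              perceptual representation it refers to."""
              return sensory_cortex[working_memory[slot]]

          print(recall(0))  # 'higher level image of the house'
          ```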


          1. Cool. I officially predict those who think the “global workspace” (possibly short term memory) is located in the cortex are wrong. Now I need to look at Chris Eliasmith’s stuff again to see exactly where he puts the equivalent. Basal ganglia, maybe.

            *


          2. To be sure, this is my radical new theory under construction. I’m just using Eliasmith’s work as proof of concept. He builds neural network models based on biologically plausible mechanisms. I trust his neuroscience over me plus a book.

            *


  7. I appreciate you’ve given me an opportunity to think on some new issues. I’ve had a chance to read some of the linked article and have some questions and comments.

    Your article stated that HOT has two types of representation:

    1- First order – simple representations about the external world, such as the neural pattern that forms in the visual cortex based on signals from the retina.
    2- Second order – representations about these first order representations.

    The linked article suggests two features of HOT:

    1- First order representations, such as the representation of something in the environment, are not sufficient for conscious experiences to arise.
    2- If an organism is in a mental state, such as a perceptual state, but is in no way aware of itself as being in that state, then the organism is not in a phenomenally conscious state.

    The types in your article and the features in the linked article are related but not quite the same. For example, from the linked article, I would say that HOT assumes higher order representation is required for consciousness but not necessarily that “the higher order representation is us being aware, conscious, of the first order representation”. In other words, conscious activity could consist of something other than “being aware, conscious, of the first order representation”, for example abstract thought. That may be an incorrect statement about HOT but, going on the basis of the two features, there is nothing that requires that second order representations always be about first order ones. Hence, the root of some earlier objections. Maybe your interpretation is correct. I haven’t read enough on the topic to say.

    To address the features of HOT more directly:

    First, we always have the practical problem of how to know the mind of another. How would I know if my cat is aware of itself being in a mental state? How would I know if a snail is? How would I know for sure if my wife is? Does HOT try to address this?

    Second, what is the evolutionary role of any second order representation? We would presume that since it exists as a biological fact it must be something other than an evolutionary accident. So it must have provided some evolutionary advantage. Perhaps if we understood that we would also have a general idea where in the evolution of the brain and nervous system it appeared.

    Third, are there gradations of consciousness or is it all or nothing? If I am hypnotized, am I conscious? If I am having a non-lucid dream, am I conscious? If I am engrossed in any kind of complex problem-solving, am I conscious? It seems frequently we have mental activity without specific awareness of being conscious. In fact, it may be that most mental activity has little self-awareness associated with it. True, if I am engrossed in solving a problem, I might be asked a question or hear a comment that might trigger self-awareness. Does that mean I was conscious even while engrossed in the problem-solving? Or did I suddenly jump from unconsciousness to consciousness based on the trigger?

    Items 2 and 3 seem to be indirectly addressed in the linked article as misconceptions (particularly misconception 2) about HOT, but unfortunately the debunking of the misconception doesn’t seem to provide any answers. For example, it states:

    “With the leaner sense of introspection and self, higher-order theorists are free to speculate that non-human animals, infants, and even non-biological agents, could have the necessary kind of thoughts to have simple conscious experiences, such as conscious perceptions.”

    HOT requires something more than first order representation for consciousness but allows for second order representational capability in maybe snails or computer circuits. If this is the case, there potentially would be hardly any species meeting the criteria of feature 1 and virtually all organisms would meet the criteria of feature 2. So potentially the two key features of HOT stated in the article would be making a distinction that almost doesn’t exist in Nature. That is, almost every organism with a nervous system capable of qualia could be conscious.

    But that is mostly what I would expect anyway.


    1. Obviously when I give a summation of a paper, it’s from my perspective and reflects my own understanding. That’s why I link to the source, so you can get your own perspective. I went back and scanned the relevant sections of the article, and I still think my way of describing it is accurate based on the totality of the paper. But again, it’s good for anyone interested to make their own pass through it.

      I have to admit I’d forgotten about the “leaner sense of introspection” language. It makes me wonder about the distinction they’re drawing between HOT and metacognition. A leaner sense of introspection strikes me as a leaner sense of metacognition. This is starting to seem similar to the IAU’s “a dwarf planet is not a planet” assertion.

      Testing for metacognition, as I’ve discussed in other posts, is extremely difficult. I have no idea how anyone could go about testing for this lean inner sense in animals. I agree that the key is thinking about what adaptive benefit it might provide. If the lean inner sense is so lean it provides no detectable adaptive benefit, maybe it isn’t actually there.


      1. Yes, that particular statement does seem to undermine their best arguments for the leaner versions at least.

        If you go with the fattest versions of HOT, probably only Gurdjieff masters are reasonably conscious most of the time. With the lean versions, most creatures are conscious most of the time they are awake.

        Ultimately, without some external test for second order representation, HOT seems adaptable to a wide range of theories of consciousness, and some version of it could probably accommodate almost any empirical observation.


        1. The more I think about HOT, the more simplistic these theories sound to me. I might feel differently if I got into the specific theories.

          There’s no question that there are information structures other than first order ones in the brain, but I’m not sure it makes sense to refer to them as “higher order representations”. It might make more sense to just call them associations, action plans, and other descriptions, many of which will reference the sensory representations.

