Layers of self awareness and animal cognition

In the last consciousness post, which discussed issues with panpsychism and simple definitions of consciousness, I laid out five functional layers of cognition which I find helpful when trying to think about systems that are more or less conscious.  Just to recap, those layers are:

  1. Reflexes, primal reactions to stimuli.
  2. Perception, sensory models of the environment that increase the scope of what the reflexes can react to.
  3. Attention, prioritizing which perceptions the reflexes are reacting to.
  4. Imagination, action planning, scenario simulations, deciding which reflexes to allow or inhibit.
  5. Metacognition, introspective access to portions of the processing happening in the above layers.
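
To make the layering concrete, here's a toy sketch of layers 1 through 4 as a processing pipeline. This isn't from any actual model; all of the stimulus names, priority weights, and reflex mappings are invented purely for illustration:

```python
# Toy sketch of cognition layers 1-4 as a processing pipeline.
# Stimuli, priorities, and reflex mappings are invented for illustration.

def reflexes(stimulus):
    """Layer 1: fixed stimulus -> action mappings."""
    table = {"looming shadow": "flee", "food smell": "approach"}
    return table.get(stimulus)

def perception(raw_senses):
    """Layer 2: turn raw input into normalized percepts."""
    return [s.strip().lower() for s in raw_senses]

def attention(percepts, priority):
    """Layer 3: prioritize a single percept for the reflexes to react to."""
    return max(percepts, key=lambda p: priority.get(p, 0))

def imagination(action, inhibited):
    """Layer 4: allow or inhibit the reflexive action after simulation."""
    return None if action in inhibited else action

def agent_step(raw_senses, priority, inhibited):
    percepts = perception(raw_senses)
    focus = attention(percepts, priority)
    candidate = reflexes(focus)
    return imagination(candidate, inhibited)

# A looming shadow outranks a food smell, so the flee reflex wins.
agent_step(["Food Smell ", "looming shadow"],
           {"looming shadow": 2, "food smell": 1},
           inhibited=set())  # → "flee"
```

Layer 5, metacognition, would amount to the system inspecting functions like these while they run, rather than just running them.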

In the discussion thread on that post, self awareness came up a few times, particularly in relation to this framework.  As you might imagine, as someone who’s been posting under the name “SelfAwarePatterns” for several years, I have some thoughts on this.

Just like consciousness overall, I don’t think self awareness is a simple concept.  It can mean different things in different contexts.  For purposes of this post, I’m going to divide it up into four concepts and try to relate them to the layers above.

At consciousness layer 2, perception, I think we get the simplest form of self awareness, body awareness.  In essence, this is having a sense that there is something different about your body from the rest of the environment.  I think body awareness is phylogenetically ancient, dating back to the Cambrian explosion, and is pervasive in the animal kingdom, including any animal with distance senses (sight, hearing, smell).  As I’ve said before, distance senses seem pointless unless they enable modeling of the environment, and those models are themselves of limited use if they don’t include your body and its relation to that environment.

The next type is attention awareness, which models the brain’s attentional state.  I think of this as layer 4 modeling what’s happening in layer 3.  (These layers appear to be handled by different regions of the brain.)  This type of awareness is explored in Michael Graziano’s attention schema theory.  It provides what we typically think of as top down attention, as opposed to bottom up attention driven from the perceptions in layer 2.

The third type, affect awareness, is integral to the scenario simulations that happen in layer 4.  Affects can be thought of as roughly synonymous with emotions or feelings, although at a broader and more primal level.  Affects include states like fear, pleasure, anger, but also more primal ones like hunger.

Each action scenario needs to be assessed on its desirability, on whether it should be the action attempted, and those assessments happen in terms of the affects each scenario triggers.  The result of the simulations is that some reflexes are inhibited and others allowed.  Arguably, it's this change from automatic action to possible action that turns the reflexes into affects, so in a sense, affect awareness could be considered reflex awareness that enables the creation of affects.
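
As a loose illustration of this kind of evaluation, each simulated scenario can be scored by the affects it triggers, with only the winning scenario's reflex allowed through. The affect names and weights below are invented for the example:

```python
# Toy sketch of layer-4 scenario evaluation: each simulated action is
# scored by the weighted affects it triggers, and only the most
# desirable scenario's reflex is allowed through. Names and weights
# are invented for illustration.

AFFECT_WEIGHTS = {"fear": -2.0, "hunger": 1.5, "pleasure": 1.0}

def desirability(scenario):
    """Score a simulated scenario by the weighted affects it triggers."""
    return sum(AFFECT_WEIGHTS.get(affect, 0.0) * strength
               for affect, strength in scenario["affects"].items())

def choose_action(scenarios):
    """Allow the reflex behind the best scenario; the rest stay inhibited."""
    return max(scenarios, key=desirability)["action"]

scenarios = [
    {"action": "approach food", "affects": {"hunger": 1.0, "fear": 0.2}},
    {"action": "stay hidden",   "affects": {"fear": 0.1}},
]
choose_action(scenarios)  # → "approach food" (score 1.1 vs -0.2)
```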

The types of self awareness discussed so far are essentially a system modeling the function of something else.  Body awareness is the brain modeling the body, attention awareness is the planning regions of the brain modeling the attention regions, and affect awareness is the planning regions modeling the sub-cortical reflex circuits.  But the final type, metacognitive awareness, recursive self reflection, is different.  It’s the planning regions modeling their own processing.

Metacognitive awareness lives in layer 5, metacognition.  This is self awareness in its most profound sense.  It’s being aware of your own awareness, experiencing your own experience, thinking about your own thoughts, being conscious of your own consciousness.  But it’s more than that, because if you understand this paragraph, it shows you have the ability to be aware of the awareness of your awareness.  And if you understood the last sentence, it means you have the ability to do so to an arbitrary level of recursion.

This type of awareness is far rarer in the animal kingdom than the other kinds.  It requires a metacognitive capability, an ability to build models not just of the environment, your own body, your attention, or your affective states, but to build models of the models, to reason about your own reasoning.  This capability appears to be limited to only a few species.  But scientifically determining exactly which species is difficult.

Mirror test with a baboon
Image credit: Moshe Blank via Wikipedia

One test that's been around for a few decades is the mirror test.  You sneak a mark or sticker onto the animal where it can't see it, then put it in front of a mirror.  If the animal sees its reflection, notices the mark or sticker, and tries to remove it, then, the advocates of this test propose, it is aware of itself.  But this test seems to conflate the different types of self awareness noted above, so it's not clear what's being demonstrated.  It could be only body awareness, although I can also see a case that it might demonstrate attention awareness too.

Regardless, most species fail the mirror test.  Mammals that pass include elephants, chimpanzees, bonobos, orangutans, dolphins, and killer whales.  The only non-mammal that passes is the Eurasian magpie.  Gorillas, monkeys, dogs, cats, octopuses, and other tested species all fail.

But testing for the higher form of self awareness, metacognitive awareness, means testing for metacognition itself, which more recent tests try to get at directly.

One test looks at how animals behave when they've been given ambiguous information about how to get a reward (usually a piece of food).  If the ambiguity causes them to display uncertainty, the reasoning goes, then they must understand how limited their knowledge is.  Dolphins and monkeys seem to pass this test, but not birds.  However, this test has been criticized because it's not clear whether the displayed behavior comes from knowledge of the uncertainty, or just from the uncertainty itself.  It could be argued that fruit flies display uncertainty.  Does that prove they have metacognition?

A more rigorous experiment starts by showing an animal information, then hides that information.  The animal then has to decide whether to take a test on what they remember seeing.  If they decide not to take the test, they get a moderately tasty treat.  If they do take the test and fail, they get nothing.  But if they take it and succeed, they get a much tastier treat.  The idea is that their decision on whether or not to take the test depends on their evaluation of how well they remember the information.  The goal of the overall experiment is to measure how accurately the animal can assess its own memory.
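
The decision logic of this opt-out paradigm can be sketched in a few lines. The rewards, threshold, and confidence values here are illustrative stand-ins, not numbers from the actual studies:

```python
# Illustrative sketch of one trial of the opt-out memory test.
# All reward values and the confidence threshold are invented.

import random

def opt_out_trial(memory_confidence, small_reward=1.0, big_reward=3.0,
                  threshold=0.5, rng=random.random):
    """A metacognitive subject takes the test only when its confidence
    in the remembered information is high enough; otherwise it opts
    out for the sure, smaller treat."""
    if memory_confidence < threshold:
        return small_reward  # decline the test, take the moderately tasty treat
    # Take the test; succeed with probability equal to actual memory accuracy.
    return big_reward if rng() < memory_confidence else 0.0

opt_out_trial(0.2)  # low confidence: opts out and takes the small reward
```

The experimental logic is that a subject whose opt-out decisions track its actual memory accuracy will earn more on average than one that always takes the test, and that correlation is what the researchers measure.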

Some primates pass this more rigorous test, but nothing else seems to.  Dolphins and birds reportedly fail it.  This type of self reflective ability appears to be restricted to only primates.  (There was a study that seemed to show rats passing a similar test, but the specific test reportedly had a flaw where the rats might simply have learned an optimized sequence without any metacognition.)

What do all these tests mean?  Well, failure to pass them is not necessarily conclusive.  There may be confounding variables.  For example, all of these tests seem to require relatively high intelligence.  I think this is a particularly serious issue for the mirror test.  What it’s testing for is a fairly straightforward type of body or attention awareness, but the intelligence required to figure out who the reflection is seems bound to generate false negatives.

This seems like less of an issue for the metacognition tests.  Metacognition could itself be considered a type of intelligence.  And its functionality might not be a useful adaptation unless it’s paired with a certain level of intelligence.  Still, as I noted in the panpsychism post, any time a test shows that only primates have a certain ability, we need to be mindful of the possibility of an anthropocentric bias.

Again, my own sense is that body awareness is pervasive among animals.  I think attention and affect awareness are also relatively pervasive, although as this NY Times article that amanimal shared with me discusses, humans are able to imagine and plan far more deeply and much further into the future than other animals.  Most animals can only think ahead by a few minutes, whereas humans can do it days, months, years, or even decades into the future.

This seems to indicate that the level 4 capabilities of most animals, along with the associated attention and affect awareness, are far more limited than in humans.  And metacognitive awareness, the highest form of self awareness, only appears to exist in humans and, to a lesser extent, in a few other species.

Considering that our sense of inner experience likely comes from a combination of attention, affect, and metacognitive awareness, it seems like the results of these tests are a stark reminder that we should be careful to not project our own cognitive scope on animals, even when our intuitions are powerfully urging us to do so.

Unless of course there are aspects of this I’m missing?

40 thoughts on “Layers of self awareness and animal cognition”

  1. Maybe not so much. It is a guide for archers and archery coaches on the mental side of the sport. ;o) (An archery shot is a guided attention sequence in which awareness needs to be limited to those things that can affect the shot (shifting wind, etc.) while excluding those things that can act as distractions.)


    1. I'm sure it's something that requires lots of practice, although I think learning through practice is a lot more effective if you know good principles ahead of time. I wish I'd understood when I was younger that the conscious difficulty I have when starting to learn something new inevitably fades with practice as things become ingrained habit.

      Good luck with the guide!


  2. I’m surprised octopi failed the mirror test. I was under the impression they were very intelligent and keen observers. Although now that I’m thinking about it, the experiments I’ve read about showed octopi observing and imitating behavior, which I guess is a different thing than what the mirror test is trying to evaluate.


    1. I’ve seen some discussion that the environment they live in might be a factor. When would they ever naturally encounter a reflection of themselves? That and the architecture of their nervous system is so radically different from ours, that asking them to pass mammalian cognition tests may not be fair.


  3. Nice post, as always.

    I’m wondering what you (and others) mean when they say the brain builds models. At the very top, you refer to “sensory models of the environment”. I’m wondering because Doug Hofstadter’s (Goedel, Escher, Bach) most recent “thing” is that all thought is done by analogy. My perhaps naive take on this is that situations and events we encounter trigger, more or less, memories of similar situations as analogies. So I’m wondering if such memories count as a “model”, or if you think a model requires more.



    1. Thanks James.

      The models I’m talking about are, for the most part, memories, although some models appear to be easier to form than others. For example, humans appear to come prewired to remember and recognize other human faces. A model could be thought of as a running sensory prediction gradually fine tuned by new sensory information streaming in.

      Going down a bit further, models can be thought of as sensory image maps, associations of neural firing patterns. Take the wolf concept. Wolves have certain shapes, colors, textures, move in certain ways, have particular smells, and make certain sounds. Each of these sensory impressions, as they work their way through the sensory pathway layers, trigger certain hierarchies of neural firing patterns.

      The more these patterns fire concurrently, the stronger the chance of connecting synapses forming between them. If they do, one pattern getting triggered may in turn trigger the others. So if you hear a wolf howl in the distance, the neural hierarchy triggered by that sensation triggers the network of wolf neural hierarchies, although not all the way to the early sensory layers since only the one auditory sensation is currently coming in. Which is why visual memories of a wolf aren’t as vivid as currently seeing one.
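      That strengthening-with-co-firing idea is essentially Hebbian learning, and the wolf example can be sketched as a toy model. The link strengths, learning rate, and threshold below are invented for illustration:

```python
# Toy Hebbian sketch: patterns that fire together form links, and a
# strong enough link lets one pattern trigger the others.
# The learning rate and recall threshold are invented for illustration.

import itertools
from collections import defaultdict

weights = defaultdict(float)  # (pattern_a, pattern_b) -> link strength
RATE = 0.1

def co_fire(active_patterns):
    """Strengthen the link between every pair of concurrently active patterns."""
    for a, b in itertools.combinations(sorted(active_patterns), 2):
        weights[(a, b)] += RATE

def recall(cue, threshold=0.25):
    """A cue re-activates every pattern whose link to it clears the threshold."""
    linked = set()
    for (a, b), strength in weights.items():
        if strength >= threshold:
            if a == cue:
                linked.add(b)
            elif b == cue:
                linked.add(a)
    return linked

# Repeated wolf encounters bind howl, shape, and smell together.
for _ in range(3):
    co_fire({"wolf-howl", "wolf-shape", "wolf-smell"})

recall("wolf-howl")  # → {"wolf-shape", "wolf-smell"}
```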

      It’s worth noting that memories are stored in a distributed fashion throughout the brain. Auditory memories are stored in the auditory processing regions, visual memories in the visual processing regions, action oriented memories in the action regions, etc. There are hierarchies of association cortices that end up integrating the information from these regions.

      Hope that answers your question.

      If Hofstadter is saying all thought is analogy, I’m not sure I buy that. I do think all symbolic thought is ultimately done by analogy, since that’s what symbolic thought is, mapping symbols to sensory experience. We can only understand things like electrons or galactic super clusters by using symbols, analogies or metaphors, such as calling an electron a “particle” even though it doesn’t behave like any particle that we’ve ever actually observed (such as a dust mote). Even the mathematics of it, when you think about it, is essentially an exercise in analogies, albeit quantitatively precise ones.

      But when I’m trying to find a spot in the woods to take a pee, I have a hard time thinking there are any analogies at play, unless we want to call all sensory models analogies.


      1. I want to add here (and it may not be exactly relevant) a Lacanian psychoanalytic perspective. Everything is a metaphor; this includes electrons and galactic superclusters, but also everything else from a tree to a television. It is difficult to see this because of how used we are to dealing with these things. But looking at a tree, for example, the word ‘tree’ is an abstract idea or a metaphor we use to describe the real thing. Without labeling it a tree we are faced with what it really is. You could try to describe it like, ‘a tall structure with many branches that are made of wood and that are covered with many leaves’, but even still this is a metaphor using other metaphors like tall, structure, branches, wood, leaves (the best metaphor is tree instead). Plus we haven’t even described it in detail yet; once we try to do that we may as well write a book describing one specific tree, but all the descriptions we give will be using other abstract (and useful) concepts/words.
        When we think about things we normally use abstractions, which are removed from the actual thing; these abstractions are more culturally defined than we believe them to be. The sensory experience of the real thing cannot be put into thought or words without it being removed a step into the abstract (symbolic). The sensory is just experience without thought to it; as soon as thought enters we have left the real for the symbolic representation of it. This symbolic representation is shared between us culturally and is the basis for language.


        1. Thanks. I’m not familiar with Lacanianism. Sounds like it was an elaboration, or perhaps a reaction to the Freudians.

          It seems like there are multiple concepts here. We have the sensory experience of one tree. Then we have the mental imagery of trees in general, which in the reality of how our brains appear to work, is inescapably tangled up with the sensory experience of each tree. We actually see what we expect to see, with follow up corrections from the senses. We don’t experience the tree, but our internal model of the tree, updated by sensory input.

          You could call that mental imagery, what I usually refer to as models, an analogy or metaphor, but it seems like a stretch on the ideas of analogies and metaphors. But I have to admit you could regard the mental image as a representation, which could be considered a kind of symbol.

          Then we have the language of describing the tree. There’s definitely symbolism here. The word “tree” is a symbol for a mental image of a tree, and words like “leaf” are symbols for the associated mental imagery. But given that each symbol here links to a sensory perception, I’m not sure if we’ve reached analogies yet.

          Then we have something like the evolutionary tree of life. Here we’ve definitely reached an analogy, describing something very complicated, like the history of speciation, by relating it to something more primal.

          But maybe I’m missing some gradations in these concepts.

          Totally agreed that symbolism is a cultural thing.


          1. I completely agree with you that our brains work in an associative way. And what we see in our inner mental model is a combination of what the sensory experience is triggering within our brains.
            I’m using metaphor in a special sense here, not in the way we normally use it, where you are right to say it is a stretch to call things like trees and leaves metaphors. The way I’m using it is in a more crude sense. I use the word metaphor for lack of a better metaphor to explain this sense.
            If we look at perception, do we perceive one tree? Before cognition comes into play, what do we perceive? We are getting visual data (or maybe auditory, olfactory data as well) (all these words are again metaphors). But that’s all; this is just data with no meaning. It is not a tree yet. It is only when this data starts to be categorised and associated that we start formulating what we are perceiving. So the words tree and leaf, are they referring to a true external sensory perception, or are they referring to our mental abstraction of certain data signals which we have learned to categorize in a certain way?
            So the mental image of a tree is itself a symbol which represents something in the real. And language is more than a means of communication; it is also what we do inside our heads.
            The evolutionary tree of life is also a metaphor, as it represents something in the real. In this way they are both similar. But I agree that the latter is our normal use of the term metaphor.


          2. Hi Fizan, and thanks for your interesting contribution. You say, “So the mental image of a tree is itself a symbol which represents something in the real.” Is it not also arising along with, and arising in part as, ‘something in the real’? In other words, is it not inseparable from what we regard as the external (to brain processing)? I’m inviting you to answer whether (what we call) consciousness is or is not purely an internalised state. See: Enactivism.


          3. Hi Hariod, I’m not familiar with Enactivism but doing a cursory reading it appears to be in flow with my opinion in general. I agree with you that it is inseparable from what we regard as external as it is our representation of something external. This representation takes the place of this ‘something’ in the real and in this sense it is symbolic. However what the ‘real’ is, is unknowable (not in a sense that we don’t know it yet but in the sense that it is ultimately unknowable).
            Taking it a step further, from a psychoanalytic point of view: we are constantly living in our ‘symbolic’ bubble (and imaginary bubble as well), faced with the ‘real’. All we know is how we have symbolized the interference from the real. Our symbolic system in a way protects us from ever having to deal with the real; it is in a way a defense mechanism. If our symbolic system were to break down or become weak (as happens in true psychosis) we are faced with the real (to some extent), as now we can’t make sense of anything; there is no sense. We then cling more strongly to our imaginary system (and defense), hence forming delusions. As there is a breakdown of all symbolic identities, the distinction between internal and external disappears and we perceive our own thoughts as if they came externally (hallucinations).
            I won’t want to go into much further detail here, as it will get too lengthy. But I am using Lacanian psychoanalytic concepts if you are interested.


          4. Thank you Fizan. I think you’re still making a distinction (maybe a perfectly valid one, and I suspect Mike would agree with it) between a consciousness ‘here’ and a reality ‘there’; in other words, consciousness is ‘all in the head’ (and body), not just functionally as physical processes, but as its own extant state of being. I wonder if this kind of Representationalist theory — the representation being spatially isolated and discrete — might be supposing too much. Indeed, why consider it (consciousness) a spatially defined quality at all?

            Is the real (as you call it) necessarily unknowable, and if it is, how do we know it is out there? A facile argument, one might say, yet needing an answer if we are not to fall into Solipsism. Is inference alone sufficient for it? I agree it is not knowable as the thing-in-itself, but it is still known, surely, as something-in-itself? Were it not to be, it would not be ‘real’. I wonder if this knowing (of the real ‘out there’) rather negates the idea of consciousness as a spatially isolated and discrete quality?

            I don’t know anything about psychoanalysis, but what you are saying reminds me of Jung’s suggestion (which I broadly accord with) that we ‘slip imperceptibly into a world of concepts’. Nonetheless, if one accepts the fact that we have, as toddlers, slipped into this conceptual world (of language, imagery, memory, etc.), then by implication one must be able to slip out of it, or at least acknowledge that the human animal can in its early developmental stages exist outside of it, or better, without it — in a non-conceptual, non-symbolic state.


          5. Hariod,
            Maybe one way to think about this is what it means to “know” something outside of our mind? I think it means constructing a sensory model of that thing that is predictive at some level of effectiveness. So knowing a tree, perceiving it, involves building a sensory model of it, relating it to the overall concept of trees, to the point where we can predict what will happen if we walk under it, touch it, etc. Is there any other way to know the tree?


          6. Thanks for the reply. I first want to make it clear that I consider myself to be ultimately agnostic to all these ideas, although after recent discussions with a colleague I have become inclined towards Lacanian views (I may not be doing this position justice). My take on these concepts is that it doesn’t say too much about consciousness, rather more about the way we are, one might equate that to consciousness.
            This may be my own take on it: there is no ultimate external or internal, as external and internal are only words to represent something we already know about. I think we can agree that we humans have a symbolic system in place. The imaginary system is another one, which is basically what gives us a grounding, through fixed beliefs, a priori assumptions, etc., which are rigid and unshakable.
            So what is the evidence for the real? The real is more elusive to describe, but we can glimpse it from the interference it causes in our established symbolic (even perhaps imaginary) systems. For example, the pre-existing and established symbolic system of classical physics was shaken by the interference pattern observed in Young’s double-slit experiment, or by Einstein trying to make sense of relative motion. Or irregularities in the orbit of Uranus that led to a mathematical prediction and then the discovery of Neptune. There are many such examples.
            What happens in these instances is that the pre-existing system is not adequate enough, there are some cracks and loose edges. When we come across such interference we immediately start to symbolize it and incorporate it into our system so that it becomes more consistent. And then something else happens and we have to repeat this process.
            In this sense, we exist within our symbolic and imaginary systems and the real is all there actually is. So consciousness at least how we normally perceive it is within these systems that we exist in (at least shaped and molded by them). By losing these we lose our ability to distinguish anything, everything is one and infinite, identities disintegrate. We perhaps function at a more automated level of being i.e. of actions and reactions, but this is difficult to speculate about as any speculation done is from within these systems. It is debatable whether that is all there is to consciousness or whether there is some basic form of consciousness that exists even outside these systems for example for animals, newborns, early pre-language humans or of course everything else.


          7. Thank you Mike. Re: Means of knowing. Yes, a sort of index of loose percepts retained as memories; if enough boxes get ticked, it gets identified with the ‘Tree’ percept category. You ask, Is there any other way to know a tree? I’m suggesting that us locking it (knowledge) all down to something in the head, describing processes and functions, as if consciousness itself were a spatial entity, may be insufficient to describe what it (now as conscious knowledge) is in totality. I do feel sympathetic to a kind of externalism that says the tree ‘out there’ remains intrinsic to what we think of as consciousness. The senses cognise its (external) shape, form, scent, sound, touch, and our percept index (your/Graziano’s internal ‘models’) re-cognise it; the two run together, arising conascently. It’s just a question of whether it’s valid to say consciousness per se is ‘all in the head’.


          8. Hariod,
            I think there’s something to the externalism concept. The brain’s models / representations / image maps are, in actuality, pretty sparse things. In truth, drawing a line around the brain and saying it all only happens here is somewhat artificial.

            Still, all indications are, destroy the brain and you destroy the consciousness. If you isolate an adult brain from its environment, it remains conscious, albeit with only its memories as content. The question is what would happen if a brain were to be isolated all through its development. Would it be conscious? I’m not sure. If it would, it would be a poor and desolate consciousness by our standards.

            All of which is to say, I think consciousness as we experience it is both an internal and external framework. Brains don’t exist in isolation, they are nexuses of information and action.


          9. Thanks Fizan, that’s interesting. In saying, ‘There is no ultimate external or internal’ you appear to be approaching the kind of externalism (sorry!) that I’m suggesting (to Mike) as regards the nature of consciousness. But we are somewhat hamstrung by living in a world of spatially defined things and events, whereas conscious knowledge may not be a matter of spatial referencing at all, at least, insofar as it’s not something that can be located. What does Lacan say, if anything, about a pre-conceptual state of awareness, which might also be thought of as an objectless awareness?


          10. Yes but I don’t know if it equates to consciousness is external. Because consciousness as we understand it and feel it is what it is in our human condition. What it is outside of us or if we don’t exist does not make too much sense. Is there something external to us – Yes (because from time to time, it shakes up our existent existence). But is that consciousness or anything else we can describe – No (because it is external to all we know and are).


          11. Thanks Fizan. To be clear, I’m not suggesting consciousness may be external, as you put it, rather that it is not extractable (cannot be described completely, nor exists independently) from what is external to the body. Internal and external, as I believe you remarked yourself previously, are perhaps not ultimately helpful spatial delineations for a quality that simultaneously engages in both as they arise conascently in the spatially extant world, and which same quality makes no sense (literally and logically) if either is absent.

            One form of this is called Radical Externalism (Ted Honderich), which again, although using the designation ‘external’, does not imply that consciousness is something outside of us, nor is it something exclusively inside of us. Mike has addressed this nicely himself at 8:03 a.m. so I’ll not reframe it in another way.


          12. Oh ok, I get it now. Although I still seem to have some issue with it (I don’t know if it’s justified): what if we don’t exist at all?
            My guess is, in that case, there is no consciousness. So then consciousness must be tied in with our own existence. I agree that it cannot be described completely or independently of what is external. But I don’t completely agree with the second part; I believe these illusory boundaries are exactly what is needed to describe consciousness, because consciousness perhaps is that boundary.


          13. “. . . what if we don’t exist at all?” — We’d probably agree there is no permanently instantiated self-entity (and certainly not any putative soul) within or about us. So the question would require that you clarify what the ‘we’ is, and also what constitutes ‘existence’.


          14. P.S. “. . . because consciousness perhaps is that boundary.” — then where is consciousness, where is it located? For it to be a boundary it must reside in or across some location(s), and I think we’ve just agreed it can’t rightly be said to reside exclusively in the brain.


          15. Hariod, I would suggest that the part of consciousness which is not located in the brain is the causal history of the information being interpreted. So for example, the physical light waves entering the eye contain information. That information is “about” the causal history of those light waves. Given an agent with the proper knowledge, those light waves can be interpreted as being “about”, say, a ripe apple. The “aboutness”, the semantics, is not in the brain, but all of the events involved in the interpretation are in the brain.


        2. Ah, okay. I think what you mean by “metaphor”, I mean by “model”, many philosophers mean by “representation”, and neurobiologists mean by “image map”. I think these are all the same things. I don’t doubt that animals do this as much as humans.

          But when I use “symbol”, I mean a stand-in for one of these representations / models / image maps. This is symbolic thought. It includes language, and appears to be an almost completely unique capability of humans.

          And then we get to the more normal use of the word metaphor, to mean a symbol that represents a complex idea, usually itself a collection of symbols. An interesting question might be, when does a symbol become a metaphor (in the normal sense)?

          All of which is to say, I think we’re on the same page, just with different terminology.


  4. I agree that it is easy for us to be very anthropocentric in our testing. The way we normally try to understand the subjective experiences of others is through projection and introspection. It is very difficult to imagine how the tests we put animals through are actually being processed by them. Our normal fallback is to equate some human attribute (or part of it) to the animal based on the results we see. So in the more rigorous test example above, the idea is that, depending on whether the animal takes the test, we can infer how well it reflects on its own memory ability. The biggest problem I can see here is the inference from our side. Perhaps the animal has a good reflection of its own memory ability (if that even means anything in their world) but still decides to take or not take the test, and vice versa. The other possible reasons are numerous: it is hungrier, greedier, a risk taker, very risk averse, doesn’t care either way, just feels like it, etc., or even some that we humans aren’t able to grasp.

    1. Good points. In truth, before I started searching for it, I wondered if there was any experiment someone could do to test introspection in animals. I have to hand it to the scientists who came up with that experiment. They found a way to get at something that is incredibly difficult to measure. (It’s much easier with human subjects since they can talk about their mental states.)

      I left out a lot of detail on the experimental methodology. I know at least some of those details were designed to control for the variables you mentioned, such as making sure the animals would want the offered treats. The experiment was run numerous times with each subject, with the information sometimes being clear, and sometimes being purposefully ambiguous, to create situations where the animal wouldn’t really know the right answer, and to see if they were able in most cases to accurately ascertain their own knowledge.

    2. Fizan,
      I see that comments aren’t quite going through on your new “Meta Scientist” blog. I get a message along the lines of “Error: That action isn’t allowed”. Let us know when you get that straightened out, since I think you’re doing some great stuff over there!

      1. Thank you, Philosopher Eric, for letting me know. My site is pretty new so I’m still getting the hang of setting it up properly. However, I have enabled the comments now and done a test comment which seemed to be working (I think it is blocking very short comments). Please feel free to leave a comment and I would be grateful for further feedback. Looking forward to it.

  5. Excellent piece Mike, clear and well-structured, as always. One thought I had in regard to your levels (hierarchy) is the presupposition that the human capacity to juggle concepts ‘offline’, meaning whilst not reactively attending to physical sensory input, is somehow a higher level function as regards its value — to its own species and others. Perhaps I’m reading too much into the implications of your levels?

    1. Thanks Hariod! I should emphasize that these levels are really just my current way of making sense out of what is an enormously complex system. They might be different next month or next year as I learn more.

      One thing I haven’t really covered is the scope of these layers. Layer 2, the perception layer, is vast, taking up most of the back half and bottom side of the cerebral cortex. So a lot is always going on in that layer that we don’t attend to. Not sure I’d call it “higher”, but it’s definitely happening. I would suspect that species with a large cerebral cortex have a major advantage here. A species with a small forebrain probably doesn’t have the substrate for much unattended perception.

      It’s worth noting that we can also attend to things without being aware that we’re attending to them, as counter-intuitive as that seems. Stage magicians make use of this fact all the time.

      One thing I’m not sure of is to what extent we can run imaginative simulations without being aware of it. Obviously we’re not aware of the lower level mechanics of those simulations, but it seems like the things we’re most aware of are those simulations. Still, the simulation engine in the prefrontal cortex actually farms out the mental imagery work to the perception regions mentioned above. So it seems possible that we’re not always metacognitively aware of all the simulations that are running.

      1. Thanks Mike. “One thing I’m not sure of is to what extent we can run imaginative simulations without being aware of it.” Might I suggest that depends on the activation (i.e. the running status) of memory? What we think of as our being mindfully aware of (i.e. consciously attending to) something, is very largely a memory (a real-time fabricated re-presentation) of what happened ‘just then’ outside of our conscious apprehending. If we take that common experience of driving whilst daydreaming, and not noticing (i.e. being able to recall) anything of the last five miles, then surely the mind was, during that five miles, running ‘imaginative simulations’ as regards what that truck to our left might be about to do, and if there may be a speed radar up on that bridge ahead. Our mind simulated these hypotheses and accordingly we drove safely, yet memory was being applied elsewhere in our daydreaming — memory being a conscious process as known in its occurrence, and therefore a serial, not parallel-processing, one.

        1. Hariod,
          That reminds me of something I read or heard once: that consciousness is “remembering the present.” That said, based on the neuroscience, there appear to be two regions in the prefrontal cortex involved in introspection, one focused on introspecting current experience, and a second for introspecting past ones.

          But “memory” is a pretty broad concept. It includes just about any change of state in the brain. So we have perceptual memories, muscle memories, semantic memories, episodic memories, etc.

          And I think we can have conscious and unconscious memories. Sometimes the unconscious ones can bubble up into consciousness, but it seems like a lot of them would continue being unconscious unless something called our attention to them.

          Ultimately it seems to come down to what the metacognitive regions can inspect. The simulations seem to be the main thing in their purview. Which makes sense since both are in the prefrontal cortex. When our mind is wandering, we’re simulating something other than the current sensory stream, although the lower habitual processes may still be responding to that stream. When a situation becomes novel, we start simulating it in real time. In other words, we become conscious of it. But it may be that we can do some minor simulations without metacognitive notice.

    2. “What is, ‘introspecting current experience’, if not re-member-ing it?”

      You’re assuming that introspection is pulling the content of the experience from some stored location, that it’s reviewing a recording of the experience. Maybe it is. But based on what I’ve read, it’s more likely receiving signals from the processing of the simulation. Which might mean that what we refer to as “the experience” is constructed from the signals the introspective regions receive, and may be a distorted, limited, and summarized version of the actual processing, which the conscious part of us doesn’t have any other access to. If you think that through, the implications are profound.

      1. Thanks Mike. I don’t see memory as a filing or storage system as such, but think of it more as sympathetic resonances. I don’t imagine it matters a great deal in a discussion such as this, though my re-member-ing means simulating the member parts of any given experience, not as verbatim facsimiles, but as resonances with similar characteristics. There are a great many ways to play a Gmaj7 chord, but the effect is always very similar.

        On your second point, I currently consider that what we take to be experience is in part a meta-level post hoc mental construct — i.e. an abstraction, a pseudo-explanatory schema along with its ubiquitous hallucinations, confabulations, time-shifts and all that — and in part unmediated and (relatively) direct sense impressions that arise reflexively with the environment. It’s not all the fanciful dreaming that some say, but there’s some sense in which we dream our world into existence, taking it, the dream, erroneously as the world itself.
