Dogs have metacognition, maybe

Last year in a post on panpsychism, I introduced a hierarchy I use to conceptualize the capabilities of systems that we intuitively see as conscious.  This isn’t a new theory of consciousness or anything, just my own way of making sense of what is an enormously complicated subject.

That hierarchy of consciousness was as follows:

  1. Reflexive survival circuits, programmatic reactions to stimuli adaptive toward an organism’s survival.
  2. Perception, mental imagery, image maps, predictive models of the environment which expand the scope of what the reflexes are reacting to.
  3. Attention, prioritization of what the reflexes are reacting to.  Attention can be either bottom up, driven reflexively, or top down, driven by the following layers.
  4. Imagination, brokering of contradictory reactions from layers 1-3 by running action-sensory simulations of possible courses of action, each of which is in turn reacted to by layer 1.  It is here that the reflexes become decoupled, changing an automatic reaction into a propensity for action, turning (some) reflexes into affects, emotional feelings.
  5. Metacognition, introspective self awareness, in essence the ability to assess the performance of the system in the above layers and adjust accordingly.  It is this layer, if sophisticated enough, that enables symbolic thought: language, mathematics, art, etc.

In that post, I pointed out how crucial metacognition (layer 5) is for human level consciousness and that, despite my own intuition that it was more widespread (in varying degrees of sophistication), the evidence only showed that humans, and to a lesser extent other primates, had it.  Well, it looks like there may be evidence of metacognition in dogs.

Dogs know when they don’t know

When they don’t have enough information to make an accurate decision, dogs will search for more – similarly to chimpanzees and humans.

Researchers at the DogStudies lab at the Max Planck Institute for the Science of Human History have shown that dogs possess some “metacognitive” abilities – specifically, they are aware of when they do not have enough information to solve a problem and will actively seek more information, similarly to primates. To investigate this, the researchers created a test in which dogs had to find a reward – a toy or food – behind one of two fences. They found that the dogs looked for additional information significantly more often when they had not seen where the reward was hidden.

I was initially skeptical when I read the press release, but after going through the actual paper, I’m more convinced.

The dogs were faced with a choice that, if they chose wrong, meant they didn’t get a reward.  A treat or a toy was hidden behind one of two V-shaped fences.  The dogs made their choice by going around the fence to reach the desired item, if it was there.  Each fence had a slit that the dogs could approach prior to their choice to see or smell whether the item was present.  Sometimes they were able to watch while the treat or toy was placed, and other times they were prevented from watching the placement.

Image from the paper:

When they couldn’t see where it was placed, they were much more likely to approach the slit and gather more information.  In other words, they knew when they didn’t know where the treat or toy was and adjusted their actions accordingly.  In addition, they adjusted their strategy based on the desirability of the treat or whether the item was their favorite toy, indicating that they weren’t just reflexively following an instinctive sequence.

Image from the paper:

My initial skepticism was whether this amounted to actual evidence for metacognition.  Couldn’t the dogs have simply been acting on whatever knowledge they had or didn’t have, without accessing that knowledge introspectively?  Honestly, I’m still a little unsure on this, but I can see the argument that the act of stopping to gather more information is significant.  An animal without metacognition might simply guess more accurately when they have the information than when they don’t.
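To make that distinction concrete, here’s a minimal toy simulation (my own illustration, not the researchers’ model, with made-up trial logic) contrasting an agent that detects its own ignorance with one that just guesses when uninformed.  The telltale signature isn’t accuracy alone but the gap in checking behavior between seen and unseen trials:

```python
import random

random.seed(42)  # reproducible toy run

def trial(saw_placement, metacognitive):
    """One toy V-fence trial.

    Returns (checked, correct). A metacognitive agent checks the slit
    when it did not see the placement; a non-metacognitive agent never
    checks and just guesses when uninformed.
    """
    reward_side = random.choice(("left", "right"))
    if saw_placement:
        return False, True                  # knows the side, goes straight there
    if metacognitive:
        return True, True                   # detects its own ignorance, checks first
    return False, random.choice(("left", "right")) == reward_side  # blind guess

def checking_rates(metacognitive, n=1000):
    """Fraction of trials in which the agent checks, for seen vs unseen placements."""
    seen = sum(trial(True, metacognitive)[0] for _ in range(n)) / n
    unseen = sum(trial(False, metacognitive)[0] for _ in range(n)) / n
    return seen, unseen

print(checking_rates(metacognitive=True))   # (0.0, 1.0): checks only when uninformed
print(checking_rates(metacognitive=False))  # (0.0, 0.0): never checks
```

Real data is of course noisier than this caricature: the dogs didn’t check on every unseen trial, they just checked significantly more often than on seen trials, which is the pattern the metacognitive agent here exaggerates.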

This gets into why metacognition is adaptive.  It allows an animal to deal with uncertainty in a more effective manner, to know when they themselves are uncertain about something and decide whether they should act or first try to gather additional information.  It’s a more obvious benefit for a primate that needs to decide whether they can successfully leap to the next tree, but it can be a benefit for just about any species.

That said, the paper does acknowledge that this evidence isn’t completely unequivocal and that more research is required.  It’s possible to conceive of non-metacognitive explanations for the observed behavior.  And it’s worth noting that if this is in fact metacognition, it operates in a more limited fashion than what is observed in non-human primates, which in turn appears far more limited than what happens in humans.

It seems to me that whether dogs have metacognition has broader implications than what’s going on with our pets.  If it is there, then it means that metacognition, albeit in a limited fashion, exists in most mammals.  That gives them a “higher order” version of consciousness than the primary or sensory version (layers 1-4 above), and I see that as a very significant thing.

Unless of course I’m missing something?

h/t ScienceDaily

Layers of self awareness and animal cognition

In the last consciousness post, which discussed issues with panpsychism and simple definitions of consciousness, I laid out five functional layers of cognition which I find helpful when trying to think about systems that are more or less conscious.  Just to recap, those layers are:

  1. Reflexes, primal reactions to stimuli.
  2. Perception, sensory models of the environment that increase the scope of what the reflexes can react to.
  3. Attention, prioritizing which perceptions the reflexes are reacting to.
  4. Imagination, action planning, scenario simulations, deciding which reflexes to allow or inhibit.
  5. Metacognition, introspective access to portions of the processing happening in the above layers.

In the discussion thread on that post, self awareness came up a few times, particularly in relation to this framework.  As you might imagine, as someone who’s been posting under the name “SelfAwarePatterns” for several years, I have some thoughts on this.

Just like consciousness overall, I don’t think self awareness is a simple concept.  It can mean different things in different contexts.  For purposes of this post, I’m going to divide it up into four concepts and try to relate them to the layers above.

At consciousness layer 2, perception, I think we get the simplest form of self awareness, body awareness.  In essence, this is having a sense that there is something different about your body from the rest of the environment.  I think body awareness is phylogenetically ancient, dating back to the Cambrian explosion, and is pervasive in the animal kingdom, including any animal with distance senses (sight, hearing, smell).  As I’ve said before, distance senses seem pointless unless they enable modeling of the environment, and those models are themselves of limited use if they don’t include your body and its relation to that environment.

The next type is attention awareness, which models the brain’s attentional state.  I think of this as layer 4 modeling what’s happening in layer 3.  (These layers appear to be handled by different regions of the brain.)  This type of awareness is explored in Michael Graziano’s attention schema theory.  It provides what we typically think of as top down attention, as opposed to bottom up attention driven from the perceptions in layer 2.

The third type, affect awareness, is integral to the scenario simulations that happen in layer 4.  Affects can be thought of as roughly synonymous with emotions or feelings, although at a broader and more primal level.  Affects include states like fear, pleasure, anger, but also more primal ones like hunger.

Each action scenario needs to be assessed on its desirability, whether it should be the action attempted, and those assessments happen in terms of the affects each scenario triggers.  The results of the simulations are that some reflexes are inhibited and some allowed.  Arguably, it’s this change from automatic action to possible action that turns the reflexes into affects, so in a sense, affect awareness could be considered reflex awareness that enables the creation of affects.

The types of self awareness discussed so far are essentially a system modeling the function of something else.  Body awareness is the brain modeling the body, attention awareness is the planning regions of the brain modeling the attention regions, and affect awareness is the planning regions modeling the sub-cortical reflex circuits.  But the final type, metacognitive awareness, recursive self reflection, is different.  It’s the planning regions modeling their own processing.

Metacognitive awareness lives in layer 5, metacognition.  This is self awareness in its most profound sense.  It’s being aware of your own awareness, experiencing your own experience, thinking about your own thoughts, being conscious of your own consciousness.  But it’s more than that, because if you understand this paragraph, it shows you have the ability to be aware of the awareness of your awareness.  And if you understood the last sentence, it means you have the ability to do so to an arbitrary level of recursion.

This type of awareness is far rarer in the animal kingdom than the other kinds.  It requires a metacognitive capability, an ability to build models not just of the environment, your own body, your attention, or your affective states, but to build models of the models, to reason about your own reasoning.  This capability appears to be limited to only a few species.  But scientifically determining exactly which species is difficult.

Mirror test with a baboon
Image credit: Moshe Blank via Wikipedia

One test that’s been around for a few decades is the mirror test.  You sneak a mark or sticker onto the animal where it can’t see it, then put it in front of a mirror.  If the animal sees its reflection, notices the mark or sticker, and tries to remove it, then, the advocates of this test propose, it is aware of itself.  But this test seems to conflate the different types of self awareness noted above, so it’s not clear what’s being demonstrated.  It could be only body awareness, although I can also see a case that it might demonstrate attention awareness too.

Regardless, most species fail the mirror test.  Mammals that pass include elephants, chimpanzees, bonobos, orangutans, dolphins, and killer whales.  The only non-mammal that passes is the Eurasian magpie.  Gorillas, monkeys, dogs, cats, octopuses, and other tested species all fail.

But testing for the higher form of self awareness, metacognitive awareness, means testing for metacognition itself, which more recent tests try to get at directly.

One test looks at how animals behave when they’ve been given ambiguous information about how to get a reward (usually a piece of food).  If the ambiguity causes them to display uncertainty, the reasoning goes, then they must understand how limited their knowledge is.  Dolphins and monkeys seem to pass this test, but not birds.  However, this test has been criticized because it’s not clear whether the displayed behavior comes from knowledge of their uncertainty or just from the uncertainty itself.  It could be argued that fruit flies display uncertainty.  Does that prove they have metacognition?

A more rigorous experiment starts by showing an animal information, then hides that information.  The animal then has to decide whether to take a test on what they remember seeing.  If they decide not to take the test, they get a moderately tasty treat.  If they do take the test and fail, they get nothing.  But if they take it and succeed, they get a much tastier treat.  The idea is that their decision on whether or not to take the test depends on their evaluation of how well they remember the information.  The goal of the overall experiment is to measure how accurately the animal can assess its own memory.
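To see why this design probes self-assessment, consider a toy payoff model (illustrative numbers of my own, not from any actual study): only an agent that can read its own memory strength can decide, trial by trial, whether the gamble of taking the test beats the sure small treat.

```python
import random

random.seed(1)

SMALL, BIG = 1.0, 3.0   # assumed payoff values: sure treat vs. test-success treat

def earnings(can_read_memory, n=10000):
    """Average reward per trial in a toy opt-out memory test.

    Each trial, the animal's memory strength m (its chance of answering
    correctly) is drawn at random. A metacognitive agent reads m and
    opts in only when the expected gamble (m * BIG) beats the sure
    small treat; an agent without that access opts in at a fixed
    50% rate, since it has no basis for a trial-by-trial decision.
    """
    total = 0.0
    for _ in range(n):
        m = random.random()
        take_test = (m * BIG > SMALL) if can_read_memory else (random.random() < 0.5)
        if take_test:
            total += BIG if random.random() < m else 0.0  # pass or get nothing
        else:
            total += SMALL                                # sure small treat
    return total / n

print(earnings(can_read_memory=True))   # reliably higher than...
print(earnings(can_read_memory=False))  # ...the fixed-policy agent
```

Under these made-up payoffs, the self-assessing agent averages about 1.67 per trial against 1.25 for the fixed-policy one, which is the kind of earnings gap the experimental design is built to detect.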

Some primates pass this more rigorous test, but nothing else seems to.  Dolphins and birds reportedly fail it, leaving this type of self reflective ability apparently restricted to primates.  (There was a study that seemed to show rats passing a similar test, but that test reportedly had a flaw: the rats might simply have learned an optimized sequence without any metacognition.)

What do all these tests mean?  Well, failure to pass them is not necessarily conclusive.  There may be confounding variables.  For example, all of these tests seem to require relatively high intelligence.  I think this is a particularly serious issue for the mirror test.  What it’s testing for is a fairly straightforward type of body or attention awareness, but the intelligence required to figure out who the reflection is seems bound to generate false negatives.

This seems like less of an issue for the metacognition tests.  Metacognition could itself be considered a type of intelligence.  And its functionality might not be a useful adaptation unless it’s paired with a certain level of intelligence.  Still, as I noted in the panpsychism post, any time a test shows that only primates have a certain ability, we need to be mindful of the possibility of an anthropocentric bias.

Again, my own sense is that body awareness is pervasive among animals.  I think attention and affect awareness are also relatively pervasive, although as this NY Times article that amanimal shared with me discusses, humans are able to imagine and plan far more deeply and much further into the future than other animals.  Most animals can only think ahead by a few minutes, whereas humans can do it days, months, years, or even decades into the future.

This seems to indicate that the layer 4 capabilities of most animals, along with the associated attention and affect awareness, are far more limited than in humans.  And metacognitive awareness, the highest form of self awareness, only appears to exist in humans and, to a lesser extent, in a few other species.

Considering that our sense of inner experience likely comes from a combination of attention, affect, and metacognitive awareness, it seems like the results of these tests are a stark reminder that we should be careful to not project our own cognitive scope on animals, even when our intuitions are powerfully urging us to do so.

Unless of course there are aspects of this I’m missing?

Panpsychism and layers of consciousness

The Neoplatonic “world soul”
Source: Wikipedia

I’ve written before about panpsychism, the outlook that everything is conscious and that consciousness permeates the universe.  However, that previous post was within the context of replying to a TEDx talk, and I’m not entirely satisfied with the remarks I made back then, so this is a revisit of that topic.

I’ve noted many times that I don’t think panpsychism is a productive outlook, but I’ve never said outright that it’s wrong.  The reason is that with a sufficiently loose definition of consciousness, it is true.  The question is how useful those loose definitions are.

But first I think a clarification is needed.  Panpsychism actually seems to refer to a range of outlooks, which I’m going to simplify (perhaps overly so) into two broad positions.

The first is one I’ll call pandualism.  Pandualism takes substance dualism as a starting point.

Substance dualism assumes that physics, or at least currently known physics, is insufficient to explain consciousness and the mind.  Dualism ranges from the traditional religious versions to ones that posit that perhaps a new physics, often involving the quantum wave function, is necessary to explain the mind.  This latter group includes people like Roger Penrose, Stuart Hameroff, and many new age spiritualists.

Pandualists solve the mind-body problem by positing that consciousness is something beyond normal physics, but that it permeates the universe, making it something like a new fundamental property of nature, similar to electric charge or the other fundamental forces.  This group seems to include people like David Chalmers and Christof Koch.

I do think pandualism is wrong for the same reasons I think substance dualism overall is wrong.  There’s no evidence for it, no observations that require it as an explanation, or even any that leave it as the best explanation.  The only thing I can see going for it is that it seems to match a deep human intuition, but the history of science is one long lesson in not trusting our intuitions when they clash with observations.  It’s always possible new evidence for it will emerge in the future, but until then, dualism strikes me as an epistemic dead end.

The second panpsychist position is one I’m going to call naturalistic panpsychism.  This is the one that basically redefines consciousness in such a way that any system that interacts with the environment (or some other similarly basic definition) is conscious.  Using such a definition, everything is conscious, including rocks, protons, storms, and robots, with the differences being the level of that consciousness.

Interestingly, naturalistic panpsychism is ontologically similar to another position I’m going to call apsychism.  Apsychists don’t see consciousness as actually existing.  In their view it’s an illusion, an obsolete concept similar to vitalism.  We can talk in terms of intelligence, behavior, or brain functions, they might say, but introducing the word “consciousness” adds nothing to the understanding.

The difference between naturalistic panpsychism and apsychism seems to amount to language.  (In this way, it seems similar to the relationship between naturalistic pantheism and atheism.)  Naturalistic panpsychists prefer a more traditional language to describe cognition, while apsychists generally prefer to go more with computational or biological language.  But both largely give up on finding the distinctions between conscious and non-conscious systems (aside from emergence), one by saying everything is conscious, the other that nothing is.

I personally don’t see myself as either a naturalistic panpsychist or an apsychist, although I have to admit that the apsychist outlook occasionally appeals to me.  But ultimately, I think both approaches are problematic.  Again, I won’t say that they’re wrong necessarily, just not productive.  But their unproductiveness seems to arise from an overly broad definition of consciousness.  As Peter Hankins pointed out in an Aeon thread on Philip Goff’s article on panpsychism, a definition of consciousness that leaves you seeing a dead brain as conscious is not a particularly useful one.

Good definitions, ideally, include most examples of what we intuitively think belong to a concept while excluding those we don’t.  The problem is many pre-scientific concepts don’t map well to our current scientific understanding of things, and so make this a challenge.  Religion, biological life, and consciousness are all concepts that seem to fall into this category.

Of course, there are seemingly simple definitions of consciousness out there, such as “subjective experience” or “something it is like”.  But that apparent simplicity masks a lot of complex underpinnings.  Both of these definitions imply a system’s metacognitive ability to sense its own thoughts and experiences and the capacity to hold knowledge of them.  Without this ability, what makes experience “subjective” or “like” anything?

Thomas Nagel famously pointed out that we can’t know what it’s like to be a bat, but we have to be careful about assuming that a bat knows what it’s like to be a bat.  If they don’t have a metacognitive capability, bats themselves might be as clueless as we are about their inner experience, if they can even be said to have an inner experience without the ability to know they’re having it.

So, metacognition seems to factor into our intuition of consciousness.  But for metacognition, also known as introspection, to exist, it needs to rest on a multilayered framework of functionality.  My current view, based on the neuroscience I’ve read, is that this can be grouped into five broad layers.

The first layer, and the most basic, is reflexes.  The oldest nervous systems were little more than stimulus response systems, and instinctive emotions are the current manifestation of those reflexes.  This could be considered the base programming of the system.  A system with only this layer meets the standard of interacting with the environment, but then so does the still working knee jerk reflex of a brain dead patient’s body.

Perception is the second layer.  It includes the ability of a system to take in sensory information from distance senses (sight, hearing, smell), and build representations, image maps, predictive models of the environment and its body, and the relationship between them.  This layer dramatically increases the scope of what the reflexes can react to, increasing it from only things that touch the organism to things happening in the environment.

Attention, selective focusing of resources based on perception and reflex, is the third layer.  It is an inherently action oriented capability, so it shouldn’t be surprising that it seems to be heavily influenced by the movement oriented parts of the brain.  This layer is a system to prioritize what the reflexes will react to.

Note that with the second and third layers, perception and attention, we’ve moved well past simply interacting with the environment.  Autonomous robots, such as Mars rovers and self driving cars, are beginning to have these layers, but aren’t quite there yet.  Still, if we considered these first three layers alone sufficient for consciousness, then we’d have to consider such devices conscious at least part of the time.

Imagination is the fourth layer.  It includes simulations of various sensory and action scenarios, including past or future ones.  Imagination seems necessary for operant learning and behavioral trade-off reasoning, both of which appear to be pervasive in the animal kingdom, with just about any vertebrate with distance senses demonstrating them to at least some extent.

Imagination, the simulation engine, is arguably what distinguishes a flexible general intelligence from a robotic rules based one.  It’s at this layer, I think, that the reflexes become emotions, dispositions to act rather than automatic action, subject to being allowed or inhibited depending on the results of the simulations.

Only with all these layers in place does the fifth layer, introspection, metacognition, the ability of a system to perceive its own thoughts, become useful.  And introspection is the defining characteristic of human consciousness.  Consider that we categorize processing from any of the above layers that we can’t introspect to be in the unconscious or subconscious realm, and anything that we can to be within consciousness.

How widespread is metacognition in the animal kingdom?  No one really knows.  Animal psychologists have performed complex tests, in which the animal must make decisions based on what it knows about its own memory, to demonstrate that introspection exists to some degree in apes and some monkeys, but haven’t been able to do so with any other animals.  A looser and more controversial standard, involving testing for behavioral uncertainty, may also show it in dolphins, and possibly even rats (although the rat study has been widely challenged on methodology).

But these tests are complex, and the animal’s overall intelligence may be a confounding variable.  And anytime a test shows that only primates have a certain capability, we should be on guard against anthropocentric bias.  For myself, the fact that the first four layers appear to be pervasive in the animal kingdom, albeit with extreme variance in sophistication, makes me suspect the same might be true for metacognition, but that’s admittedly very speculative.  It may be that only humans and, to a lesser extent, other primates have it.

So, which layers are necessary for consciousness?  If you answer one, the reflex one, then you may effectively be a panpsychist.  If you say layer two, perception, then you might consider some artificial neural networks conscious.  As I mentioned above, some autonomous robots are approaching layer three with attention.  But if you require layer four, imagination, then only biological animals with distance senses currently seem to qualify.

And if you require layer five, metacognition, then you can only be sure that humans and, to a lesser extent, some other primates qualify.  But before you reject layer five as too stringent, remember that it’s how we separate the conscious from the unconscious within human cognition.

What about the common criteria of an ability to suffer?  Consider that our version of suffering is inescapably tangled up with our metacognition.  Remove that metacognition, to where we wouldn’t know about our own suffering, and is it still suffering in the way we experience it?

So what do you think?  Does panpsychism remain a useful outlook?  Are the layers I describe here hopelessly wrong?  If so, what’s another way to look at it?