Dogs have metacognition, maybe

Last year in a post on panpsychism, I introduced a hierarchy I use to conceptualize the capabilities of systems that we intuitively see as conscious.  This isn’t a new theory of consciousness or anything, just my own way of making sense of what is an enormously complicated subject.

That hierarchy of consciousness was as follows:

  1. Reflexive survival circuits, programmatic reactions to stimuli adaptive toward an organism’s survival.
  2. Perception, mental imagery, image maps, predictive models of the environment which expand the scope of what the reflexes are reacting to.
  3. Attention, prioritization of what the reflexes are reacting to.  Attention can be either bottom-up, driven reflexively, or top-down, driven by the following layers.
  4. Imagination, brokering of contradictory reactions from 1-3, running action-sensory simulations of possible courses of action, each of which is in turn reacted to by 1.  It is here that the reflexes in 1 become decoupled, changing an automatic reaction into a propensity for action, turning (some) reflexes into affects, emotional feelings.
  5. Metacognition, introspective self-awareness, in essence the ability to assess the performance of the system in the above layers and adjust accordingly.  It is this layer, if sophisticated enough, that enables symbolic thought: language, mathematics, art, etc.

In that post, I pointed out how crucial metacognition (layer 5) is for human level consciousness and that, despite my own intuition that it was more widespread (in varying degrees of sophistication), the evidence only showed that humans, and to a lesser extent other primates, had it.  Well, it looks like there may be evidence of metacognition in dogs.

Dogs know when they don’t know

When they don’t have enough information to make an accurate decision, dogs will search for more – similarly to chimpanzees and humans.

Researchers at the DogStudies lab at the Max Planck Institute for the Science of Human History have shown that dogs possess some “metacognitive” abilities – specifically, they are aware of when they do not have enough information to solve a problem and will actively seek more information, similarly to primates. To investigate this, the researchers created a test in which dogs had to find a reward – a toy or food – behind one of two fences. They found that the dogs looked for additional information significantly more often when they had not seen where the reward was hidden.

I was initially skeptical when I read the press release, but after going through the actual paper, I’m more convinced.

The dogs were faced with a choice that, if they chose wrong, meant they didn’t get to have a reward.  A treat or a toy was hidden behind one of two V-shaped fences.  The dogs made their choice by going around the fence to reach the desired item, if it was there.  Each fence had a slit that the dogs could approach prior to their choice to see or smell if the item was present.  Sometimes they were able to watch while the treat or toy was placed, and other times they were prevented from watching the placement.

When they couldn’t see where it was placed, they were much more likely to approach the slit and gather more information.  In other words, they knew when they didn’t know where the treat or toy was and adjusted their actions accordingly.  In addition, they adjusted their strategy based on the desirability of the treat or whether the item was their favorite toy, indicating that they weren’t just reflexively following an instinctive sequence.

My initial skepticism was whether this amounted to actual evidence for metacognition.  Couldn’t the dogs have simply been acting on whatever knowledge they had or didn’t have, without accessing that knowledge introspectively?  Honestly, I’m still a little unsure on this, but I can see the argument that the actual act of stopping to gather more information is significant.  An animal without metacognition might just guess more accurately when they have the information than when they don’t.

This gets into why metacognition is adaptive.  It allows an animal to deal with uncertainty in a more effective manner, to know when they themselves are uncertain about something and decide whether they should act or first try to gather additional information.  It’s a more obvious benefit for a primate that needs to decide whether they can successfully leap to the next tree, but it can be a benefit for just about any species.

That said, the paper does acknowledge that this evidence isn’t completely unequivocal and that more research is required.  It’s possible to conceive of non-metacognitive explanations for the observed behavior.  And it’s worth noting that the metacognitive ability of the dogs, if that’s what it is, appears more limited than that of non-human primates, which in turn appears far more limited than what happens in humans.

It seems to me that whether dogs have metacognition has broader implications than what’s going on with our pets.  If it is there, then it means that metacognition, albeit in a limited fashion, exists in most mammals.  That gives them a “higher order” version of consciousness than the primary or sensory version (layers 1-4 above), and I see that as a very significant thing.

Unless of course I’m missing something?

h/t ScienceDaily

This entry was posted in Mind and AI. Bookmark the permalink.

66 Responses to Dogs have metacognition, maybe

  1. Hey Mike, you find good stuff to think about, and tweet about, which I really appreciate.

    I think you can get to this new data on dog behavior without putting them at level 5, depending on how you think goals and plans work. I think it could work like this:

    As soon as the treat situation is perceived, a goal is generated to get the treat. This is a level 2 response. Existence of this goal will drive attention (level 3). Attention to the goal can generate activation of memories of how the goal was achieved in the past. These memories constitute a set of competing plans. Examples would be “go left”, “go right”, “seek information”. Every time a plan was successful, that plan would be activated stronger next time. I expect the “time to reward” would also influence that value such that a more direct route (“go left”) would gain greater influence if successful as opposed to a more circuitous “seek information” route.
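    A toy sketch of this competing-plans account (my own illustration; the plan names, starting activations, and the time-discounted update rule are all assumptions, not anything measured in the study):

```python
# Minimal competing-plans model: each plan carries an activation value,
# the strongest plan is executed, and a success reinforces that plan,
# with a longer time-to-reward producing a weaker update.

def choose_plan(activations):
    """Execute whichever plan currently has the highest activation."""
    return max(activations, key=activations.get)

def update(activations, plan, succeeded, time_to_reward, lr=0.5):
    """Move a plan's activation toward its (time-discounted) payoff."""
    payoff = (1.0 / time_to_reward) if succeeded else 0.0
    activations[plan] += lr * (payoff - activations[plan])
    return activations

# Hypothetical starting state: no plan has much history yet.
plans = {"go left": 0.1, "go right": 0.1, "seek information": 0.1}

# A quick, direct success strengthens "go left" more than a slower,
# circuitous success strengthens "seek information".
update(plans, "go left", succeeded=True, time_to_reward=1.0)
update(plans, "seek information", succeeded=True, time_to_reward=4.0)
print(choose_plan(plans))  # -> go left
```

    On this account, “seek information” only wins when recent failures have knocked the direct plans’ activations down, so no introspection about knowing or not knowing is needed.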

    So the ability to choose a circuitous route seems like level 4: imagination.

    While the plan is “seek”, the goal remains and the other plans are still competing. When new information comes in, new memories are activated and new plans may gain the edge over “seek”, one possibly becoming the new plan.

    I think this can happen without symbolic thought or metacognition. I don’t think the dog has to be aware of the plans as “plans”. It just executes the current plan, whatever that is.

    Whatcha think?


    • Thanks James! We seem to find the same stuff interesting.

      Just to be clear, no one is saying that dogs have symbolic thought. That seems to be humanity’s special sauce. I think it definitely requires metacognition, but it also seems to require specialized circuitry we don’t see anywhere else. Attempts to teach chimpanzees (our closest relatives) language have floundered, with the brightest chimps never getting past what a two-year-old can do. The chimps can’t seem to graduate to sentence structures or learn more than a few words.

      I appreciate you thinking this through in terms of the hierarchy I lay out. I would just note that formulating the original goal likely requires 4 (imagination), but the rest works for me. The crucial detail seems to be whether the dog decides to stop and gather information or not. If it was just operantly learning the best strategy, it seems like it would gather information at the slit every time, but it only seems to do so when it doesn’t know the location of its goal. Or more accurately, statistically it’s far more likely to seek information when it doesn’t know the location.

      Something I didn’t mention in the post though. Checking for additional information didn’t help the dogs much. When they had seen the original placement, they succeeded more than 90% of the time. But when they hadn’t seen, and had to check, they only succeeded in the 52-57% range, which seems only slightly better than random chance. The study authors hypothesized that maybe the dogs had an issue inhibiting their initial rush, even when they knew they didn’t yet know the answer. They note that similarly tested apes didn’t have that problem. I’m not sure what to make of that.
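      A quick back-of-envelope on that “slightly better than random chance” point (the trial count here is hypothetical, purely for illustration — the paper’s actual Ns differ): against a 50% guessing baseline, a 55% hit rate needs a lot of trials before it is statistically distinguishable from chance.

```python
# Exact binomial tail: the probability of doing at least this well
# by pure 50/50 guessing. Trial count is hypothetical.
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, hits = 100, 55  # hypothetical: 55% success over 100 unseen-placement trials
print(f"P(>= {hits}/{n} by chance) = {binom_tail(n, hits):.3f}")
```

      Even 55 hits out of 100 has a better-than-one-in-six probability of arising from guessing alone, which is why the 52-57% range reads as only marginally above chance.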


  2. Wyrd Smythe says:

    I’m with you and the authors of the paper, only maybe even more skeptical that this equates to meta-cognition, which I take to mean some form of reflective thought: “I don’t have enough information; I should investigate.”

    As opposed to a more basic need for information to solve a problem. If you’ve ever watched a dog navigate an obstacle course, something like that. They need to get from point A to point B, and they seek a path.

    I’m not even sure how much meta-cognition it requires in humans to go seeking. Cognition, certainly, but meta? Feels like seeking is a more primitive cognition. The test might just be finding that dogs have really good seeking skills.

    That said, dogs are surprising sometimes. This post had me remembering a game Sam (my black lab) and I used to play.

    I’d tell her “Go hide!” and she’d go to my bedroom walk-in closet and wait. I’d hide several treats in out-of-sight places in the living room. Then I’d call her, she’d come, and I’d tell her, “Find it!” and she would start seeking for those treats (using scent). It was really fun to watch her zeroing in on them.

    What applies here is that she learned that game in a townhouse I had before I got married. After a brief disastrous marriage, and moving out to a rental condo, I bought this condo. We never played the game in the (marriage) house, nor in the rental condo (no walk-in closet). But I have one here, so one time I just gave her the “Go hide!” command to see what she’d do.

    I was astonished that she did go into the bedroom closet (from the living room). The two places had significantly different layouts (including rambler versus split-level), and we hadn’t played the game in a long time.

    But somehow she had an understanding of the bedroom and the walk-in closet that transcended the layout differences, and that seems like abstraction to me.

    She also seemed to have some crude classification abstraction. Telling her to find a “toy” versus a “ball” always worked (toys were anything that wasn’t a ball). I’ve read accounts of dogs able to distinguish hundreds of specific objects (and therefore specific words for those objects).

    OTOH, each ball was a distinct object. I’d often try to secretly swap tennis balls during a game, and she’d reject the swap every time the moment it came in range. I’m talking tennis balls that came from the same can and had (as best as I could manage) equal play time. I sure couldn’t tell them apart!

    Dogs are amazing, but I’m definitely skeptical they have much in the way of meta-cognition.

    Crows, on the other hand,… 😮


    • I can definitely see that skepticism. And I think if they do have metacognition, it’s far more limited than what humans have. In humans, metacognition seems to be a mostly frontal lobe event (although other regions are also involved), and primate frontal lobes are larger than those of other mammals, with humans’ being exceptionally large. But even if dog metacognition is only glimmers, that’s glimmers of an inner life I wasn’t sure they possessed. (Technically, I guess I’m still not sure, although perhaps I’m less unsure now.)

      Sam’s ability to preserve those physical relationship maps is interesting. It makes me wonder how she’s actually doing the mapping. I wonder if it’s in relation to where she knows you sleep. Does she actually sleep there too? If so, maybe it’s in relation to where she sleeps. Or maybe some other structural relationship(s) that persisted across the different homes.

      The abstraction thing is complicated. You could argue that any mental concept, even the non-language ones that animals hold, is a symbol of sorts. It’s a representation that symbolizes some external phenomena. And if they have any degree of metacognition, they have the ability to have concepts about their concepts. But I think the difference is that non-human animals, even apes, don’t seem to have the ability to mentally manipulate and map the symbols in recursive hierarchies. That seems to be unique to humans.


      • Wyrd Smythe says:

        “And I think if they do have metacognition, it’s far more limited than what humans have.”

        Yeah, and it raises the question: is higher cognition a continuum, or is there some gap that must be leaped? Why have no other animals come even close to human cognition?

        “But even if dog metacognition is only glimmers, that’s glimmers of an inner life I wasn’t sure they possessed.”

        One of my favorite questions about dogs involves what their inner lives are like. As with Nagel’s “What is it like to be a bat?”, there is definitely something it is like to be a dog. It’s probably in my Top Five Questions I’d like answers to!

        The way my dog would remember meeting a dog on a new street and then look for that dog when we passed that house on later walks said something. So did her clear desire to go one way versus another on walks (although I often could never figure out why).

        There is something it is like to be a dog, but I’m not sure it’s necessarily on a continuum with humans. Nagel’s point, in part, was that we can’t possibly imagine what it’s like to be a bat, and I suspect that’s true with dogs, too.

        I just saw a headline about a dog “guarding” its house that burned down in the California fires. Didn’t read the article, so no details, but I wondered if the writer was interpreting the dog, in an unknown scary situation, staying close to home as “guarding.”

        Part of the trick behind the dog’s evolution is becoming an inkblot humans see themselves in. It’s worked out really well for them. 😀

        “It makes me wonder how [Sam is] actually doing the mapping.”

        As you suspect, she slept on my bed, so definite connection. And both bedrooms had a walk-in closet; a place where my clothes hung. Shoes in particular, so it may even be scent based.

        What impressed me was how different the physical architecture was.

        A possible flip side: she rarely got that many treats at one time, so she really liked the “Find it!” game. Strong incentive towards cooperative behavior.

        “It’s a representation that symbolizes some external phenomena.”

        Totally. I see symbols and classification as different levels of abstraction. A symbol that stands for something is an abstraction of that something; the mapping is one-to-one. Classification is an abstraction that groups multiple otherwise separate objects under one symbol that stands for the class.

        The example that always struck me was my abiding certainty that Sam saw every tree as an utterly distinct entity. She had no class of “trees.” As I mentioned, she did seem to have a class of “ball” and “toy,” which I found fascinating, because to me that’s a higher abstraction.

        As you say, it appears non-humans lack the ability to process symbols at higher levels. They can handle one-to-one mappings — like the dog that memorized hundreds of separate toys — but they don’t seem able to do much with those symbols.

        But are they just really far behind us on some spectrum, or must some gap be jumped?


          • When pondering the difference between us and dogs, I think it helps to remember just how different they are in terms of substrate. Our cerebral cortex has an average of 16 billion neurons, while theirs only has about 500 million, about 1/32 as many. Of course, the number of neurons in and of itself isn’t everything, but when the difference is that large, particularly for the cortex, it matters. (80% of the neurons in our brain are actually in the cerebellum, which scales more directly with body size, but only seems to have modest effects on cognition.)

          Great apes, with 6-9 billion neurons in their cortex, are closer. And it seems meaningful that chimps are the only ones who’ve been able to pick up even a smattering of language (sign language), although not to the extent of being able to form sentences.

            There was a recent meta-study on the neural correlates of metacognition. One area that shows up prominently for it in humans is Broca’s Area, a region crucial for speech production. It seems like our capability for symbolic thought and metacognition have formed some kind of synergy that propels us into a category all our own.

            So what’s it like to be a dog or a bat? We have to remember that their sensory resolutions are lower than ours (except for smell or echolocation). But their mental depth, their ability to make meaning from their sensory input, is also much shallower. Their capability for it to be like anything “to be them” is far more limited than ours. All of which is to say, I think we’d find their perspective, if we could ever somehow inhabit it, extremely limited, although experiencing their olfactory maps, or in the case of bats their echolocation maps, would be fascinating.


          • Wyrd Smythe says:

            A strong correlation between language and intelligence seems necessary, although there’s the usual chicken-and-egg thing. Did growing intellectual capability find expression in language, or did growing language facility lead to greater intelligence?

            I once read an evolutionary psychology theory that language grew from our desire to tell jokes. 🙂

            It would be amazing to be in the mind of a dog or bat or crow for a while. In Terry Pratchett’s Discworld novels, there’s a character, Granny Weatherwax, a witch, who can project her mind into animals, but coming back to human awareness is something of a trick that takes a pause. If she stays too long, it’s possible to be unable to return. Too much is lost.

            As you say, an animal’s mind would be a much simpler place, but likely very interesting in terms of smells or sounds.


          • On the language and intelligence thing, I’ve actually come to the conclusion that the underlying mechanisms that make symbolic thought, including language, possible are the special sauce of humanity. No other species shows anything like it. It seems like symbolic thought is what allows us to expand our imagination to years in the future (or years in the past), to ponder continents, planets, solar systems, economies, empires, and many other things chimpanzees never dream of.

            I read something the other day that we may have sung before we talked. Not sure I buy that, but it’s an interesting theory. Language is usually controlled by the left side of the brain, but apparently the same regions on the right side control singing.

            The biggest problem with being in the mind of a dog or bat is that we would lose access to all the cognition that makes us who we are, meaning we wouldn’t be able to appreciate it in the moment. If we somehow had memories of it afterward that we could use our full intellect to assess, it seems like it might manifest to us as a confused dream. Actually, a dream seems like a good analogy, since when we’re dreaming, only parts of the brain are reportedly functioning.


          • Wyrd Smythe says:

            “It seems like symbolic thought is what allows us to expand our imagination…”

            Well, you know I’m certainly big on the idea of imagination being part of the secret sauce, so, yeah, absolutely. Symbol manipulating without imagination is possible, but imagination without symbols just isn’t.

            In my view, mathematics comes from symbolic thought; in particular, the classification abstraction I mentioned earlier as a higher form. Dogs can’t count, because they don’t classify objects into sets, and math is founded on the cardinality of sets.

            “I read something the other day that we may have sung before we talked.”

            I can easily believe that. Vocalization predating speech makes a lot of sense.

            Very true about the disconnect between speech and singing. Speakers’ accents famously vanish when they sing, and I’ve heard of brain injuries that damaged speech but left singing intact.

            “If we somehow had memories of it afterward, that we could use our full intellect to assess, it seems like it might manifest to us as a confused dream.”

            Yeah, or a hallucinogenic drug experience!

            You definitely couldn’t take your human cognition with you, so it would have to be either an imported or remembered experience.


          • “Dogs can’t count, because they don’t classify objects into sets, and math is founded on the cardinality of sets.”
            That’s interesting. I didn’t know that about dogs.

            I’ve never had a hallucinogenic drug experience. As someone who avoids mind altering drugs (except for caffeine), I’m unlikely to ever have one, although at times I wonder what I’m missing.


          • Wyrd Smythe says:

            “That’s interesting. I didn’t know that about dogs.”

            Mind you, that’s only my analysis of the situation. The logic being that counting requires the understanding that different members belong to a group. A family with five kids; each is a distinct person. Counting only occurs thru categorizing them under some common attribute.

            My observation (FWIW) is that dogs hardly categorize at all. (Hence why Sam distinguishing “toy” and “ball” impressed me. Those are categories!)

            “I’ve never had a hallucinogenic drug experience.”

            I might compare it to skydiving in that one can certainly go through life without ever trying it, there are some risks associated (but knowledge is a good shield), and if you do try it, it will almost certainly change your life in some way.

            What I find interesting (and very consistent with my own experiences) is how hallucinogenic experiences frequently offer a spiritual sense that abides long after — sometimes permanently. The phrase “mind expanding” in part refers to the sense of connectedness often felt.

            At the same time, there’s the very famous story about an early LSD researcher who decided to experiment on himself. He experienced profound feelings, and at one point was moved to write down a key insight he’d had.

            The next day he saw his note, which read, “The room smells funny.”

            So a grain of NaCl is a good idea. That said, the positive sides are pretty interesting.

            And a lot of fun if you’re not discommoded by altered perceptions. First time I took mescaline I was laughing my ass off for hours because the sensations were so interesting. But I crave new experiences (which is why I disdain reboots), and not everyone is like that.


          • The “room smells funny” quote is one I came across decades ago. It was in the context of an author weighing in on the value of taking mind altering drugs (alcohol, marijuana, LSD, etc) before attempting to write, questioning the notion that it might provide artistic benefits. It largely caused me to be skeptical of the value of drug filled contemplation.

            I might be too narrow in my thinking, but it seems like the main value is entertainment, which I don’t begrudge anyone having (provided they’ve taken reasonable precautions). Although I could see that being a benefit for some artists. Whatever brings inspiration.


          • Wyrd Smythe says:

            Depends on the person, perhaps. Given the way they work — altering synapse function — the mind is clearly in a place it couldn’t quite reach otherwise, so there is whatever value a different perspective might provide.


          • James Cross says:

            “We have to remember that their sensory resolutions are lower than ours (except for smell or echolocation).”

            Dogs apparently have weaker sight than humans but are much better at smell and hearing. They are worse in some senses, better in others, so it’s not totally clear that their sensory resolutions are lower.

            BTW dragonflies probably have a much greater resolution of sight than humans. They can see a 360-degree range and see more colors, including ultraviolet.

            https://scienceblogs.com/grrlscientist/2009/07/08/30000-facets-give-dragonflies

            Cats, BTW, can also see into the ultraviolet spectrum and see in light 7 times dimmer than humans.

            Regarding metacognition and cats, a personal observation.

            My wife and I have two cats. Our daughter moved back in with us for a while and brought her cat. Our daughter doesn’t want her cat to go outside. Our two cats go outside regularly when the weather is good. Our daughter’s cat escapes to the outdoors on occasion.

            Our daughter’s cat is male, and my wife and I have a male cat, and they get along surprisingly well. My theory has been that the cats carry our human scents and, hence, recognize each other as belonging to the same household and not threatening.

            After the buildup, here’s the story.

            Our male cat was outside. My daughter’s cat escaped and I was in the process of running him down to bring him back inside when our male cat spotted my daughter’s male cat. I’m certain that the sight of my daughter’s cat outside struck our cat as anomalous. He probably wasn’t certain whether it was my daughter’s cat or some other interloper. So what did he do? He went to my daughter’s cat to sniff him and verify his identity. In the face of uncertainty, he sought more information.


          • “Resolution” refers to the total amount of information coming in at a time. As you note, dog visual acuity is lower, and they only have two types of color receptors, which limits their perception to the green-blue range. That makes their overall visual resolution far lower than ours. The resolution of their olfactory and auditory senses is definitely higher, but their sensory organs are smaller than ours, so the overall sensory information is lower.

            Dragonflies might be able to see more colors, and they may have a broader visual field, but insect eyes are tiny. They don’t have the number of photoreceptors of larger animals. That makes their visual acuity very low, although given their tiny brains and limited computational capacity, it’s generally more than adequate. For an idea of their acuity, check out this ScienceDaily article on fly vision: https://www.sciencedaily.com/releases/2018/10/181025142010.htm

            On seeking more information, it matters why they’re seeking that additional information. Simply seeking more information could be a reflexive response to a lack of information. There needs to be some indication that they’re actually accessing their current knowledge and changing plans accordingly, and not in an automatic manner. That said, I’m still not sure the experiment in the study necessarily showed that for dogs, although I think it’s closer. Neither case seems as compelling as monkeys deciding whether or not to risk a test based on what they think they remember.


  3. Callan says:

    I’m not sure – seems the bottom floor to me. Like the dog is not B: thinking about how it does not know; it is simply A: responding in a search pattern to having very little information. ‘A’ is basically the ground-floor response to a lack of information, while I’d say B is more a metacognition.

    Really the whole thing is probably complicated because of humans likely using multiple nested recursions. Like a human can B: think about how they don’t know and then C: think about how they came to not know. There can be D and E and…so on and so forth. So at what point is any of those letters metacognition? Some philosophers might get up to D and E, some regular folk only get to B (and sometimes the reverse…).

    To me metacognition is probably a bit of a mental blindspot thing – it seems meta/outside, because when you go through one recursion you are looking at the previous state (like ‘A’ above). But you can’t see yourself seeing, so it seems ‘outside’ cognition. Yet talking about that itself is to see that from the outside…so are we metacognising now? Or is it every time we take a mental step back we just lose track of the perspective we are looking at ourselves from?

    So I’m not sure metacognition as it’s known now really has earned the whole ‘meta’ part.


    • To add further complications, there are people who think that metacognition is an illusion, that we don’t actually have access to our own thoughts, only models we build about those thoughts. Similar to how we can never be sure we’re not a brain in a vat, we can never be sure we actually know our own thoughts in and of themselves.

      My own sense is that we do have some privileged access. For example, as I write this, I hear my own inner voice as I’m composing this comment. But it may be that that level of access is specific only to language production. Psychological research does seem to show that our knowledge of our own mental states is far more limited than we intuitively feel it is.

      On the other hand, the amount of computational substrate necessary to create all those fake mental states seems wasteful. It’s a lot more efficient if the meta-versions are actually echoes of the original ones. Wasteful illusions don’t seem like something that would be naturally selected for.


      • Callan says:

        I’m not sure why it’d be fake mental states – why couldn’t it be like using a radio? Sometimes the transmitting station is just very faint and the radio has trouble picking it up and it’s got a high noise to signal ratio. The radio isn’t computing fake signals, it’s just working with what it’s got to work with.

        So as usual to complicate things more in an unsatisfying manner, there’s probably a spectrum of inner access. At one end of the spectrum it is fairly direct and at the other end of the spectrum it is just an attempt at self modeling – some of which may be pure confabulation.


        • I tend to think your radio analogy is closer to the truth than straight confabulation. I think the idea of pure confabulation comes from the theory that self reflection is our theory of mind turned inward. But a recent meta-study of metacognition (lots of metas) showed which brain regions lit up for each, mentalizing and metacognition. While there was some overlap, they weren’t identical, and large portions seemed independent of each other.

          All of which strengthens my feeling that pure confabulation wouldn’t be a very adaptive capability.


          • Callan says:

            Yeah, I’ve been leery of the idea that we know ourselves only in the exact same way (through theory of mind) as we know others – like a really hollow understanding. I’m sure MRIs will show that when someone is thinking a sentence you get certain parts of the brain lighting up – and indeed if someone reflects on their feelings about something and reflects deeply, you may get the areas for that feeling lighting up sans any actual outside stimulus.

            But I also think that there is indeed an amount of theory of mind element to knowing ourselves – I think it’s partially true. Just not the entire case. Consider – would it be all that adaptively useful for an animal to really 100% understand itself, or would a ‘close enough is good enough’ be enough for evolution? I think it was Scott Bakker who pitched that idea…evolution wouldn’t really care about giving us really good inner access. Just whatever is enough to get by.

            Liked by 1 person

          • Just in case you’re interested, here’s the scan showing mentalizing (theory of mind) and metacognition.

            The overall paper is at: https://journals.sagepub.com/doi/full/10.1177/2398212818810591

            Liked by 1 person

  4. Is it not possible that dogs or some other non-human animals may have mental faculties which humans cannot even conceive of?

    Liked by 1 person

    • Sure, it’s possible. It was discovered a few years ago that the position dogs choose when defecating aligns with Earth’s magnetic field, which I’ll admit is something I never conceived of until then. And many birds can apparently sense magnetic fields in their navigation. And, of course, most animals have far stronger smelling abilities than humans (or primates in general).

      Like

      • ” which I’ll admit is something I never conceived of until then.”
        But now you can conceive of it. What about faculties which humans cannot conceive of because of the limitations of the human mind? Or do you think that the human mind is capable of conceiving of all that exists and all that is happening?

        Liked by 1 person

        • I don’t know if the human mind is capable of conceiving of all that exists and all that is happening. I suspect it can’t. I often wonder if we can ever truly understand things like wave/particle duality in quantum mechanics. We never get to know reality in and of itself, only what our senses reveal.

          But for any phenomenon, it doesn’t seem productive to just assume we can’t understand it. All we can do is make observations and attempt to build theories (models) that predict future observations, and then see how accurate those predictions are. In the end, that’s all we ever get, but that has provided far more than I think anyone before the scientific age could have imagined.

          Liked by 1 person

  5. James Cross says:

    I’m not really buying the idea that other animals’ “sensory resolutions” are lower than ours. I’m also not seeing how that affects your overall argument.

    I think the one thing that is most obvious about humans is that in sensory and physical capabilities we are probably below average compared to most other animals. Where we excel is in symbol manipulation, social interaction, and our ability to transmit learning from one generation to another.

    Regarding language and humans, you might find this interesting.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3223784/

    Liked by 1 person

    • Wyrd Smythe says:

      Given the amazing skills bats and dolphins show with echo-location, perhaps “resolution” is the wrong word. Likewise with dogs and smell, which is a very powerful sense for them.

      Overall data processing ability seems to be what’s much reduced in animals. And in some cases, larger parts of their brains are devoted to, for instance, echo signal processing in real time.

      Liked by 2 people

      • By “resolution”, I’m really just referring to the number of sensory neurons involved. Think of it like pixels on a computer monitor. Higher resolution provides more information, although we still had fun gaming experiences with a CGA monitor. Resolution changes the experience, but many capabilities work regardless. Think of the difference between Castle Wolfenstein and Quake; similar capabilities but different resolutions.

        In that sense, a dog generally will have fewer sensory neurons than we do, and an insect will have a very tiny fraction. But I have no reason to suspect that dolphin resolutions aren’t comparable to ours, or that elephant or whale resolutions aren’t higher.

        Liked by 1 person

        • Wyrd Smythe says:

          We must be talking about different things, because (per Wikipedia):

          “Dogs have roughly forty times more smell-sensitive receptors than humans, ranging from about 125 million to nearly 300 million in some dog breeds, such as bloodhounds. This is thought to make its sense of smell up to 40 times more sensitive than human’s. These receptors are spread over an area about the size of a pocket handkerchief (compared to 5 million over an area the size of a postage stamp for humans).”

          They also have better hearing, so I have the image of a better monitor (but an inferior operating system). Are you talking more about resolution in the OS?

          Liked by 1 person

          • I think you’re right that we’re talking about slightly different things. The difference is between focusing on specific modalities such as smell, where dogs do have higher resolution, or looking at the full suite, including sight, taste, and touch, where theirs are lower than ours. (Check out the discussion a little further down on that Wiki page comparing numbers of taste buds.)

            On OS, I’m leery of the word “inferior”. It’s really more a matter of being calibrated for different things. I would say that the depth of their nervous system, the number of neural layers, the amount of substrate between their sensory processing regions and motor regions, is far less than ours. That limits the amount of meaning they can extract out of their sensory input, or the planning they can do prior to initiating movement. (Not that it’s only about capacities.) But that depth still allows them to build sophisticated olfactory image maps that we can’t really imagine.

            But the capacities of their nervous system are orders of magnitude larger than what insects have. A fly’s vision is extremely blurry by our standards. (Of course, it isn’t blurry by the fly’s standards, which has nothing to compare it to, and its brain wouldn’t have the capacity to deal with higher resolution images anyway.)

            Liked by 1 person

  6. Regarding the above discussions on the symbolic capabilities of dogs and the “special sauce” that humans have, I expect that the difference is not simply the use of symbols. Clearly dogs can associate symbols like spoken words with specific actions. (Sit!). I also expect (without evidence) that dogs can recognize numerosity, by which I mean tell the difference between one and two, possibly three or even four things? I think that the “special sauce” humans have is to arbitrarily combine symbols into new concepts, and also to perform operations on those symbols, such as actual counting, i.e., keeping track of the current count and adding one. It is the combination of these abilities which affords language.

    Liked by 1 person

    • James, sorry just realized I never responded to this comment. I think you’re right that what makes humans special is our ability to build recursive hierarchies of our symbols.

      On dogs and numbers, I’m agnostic. That seems like something that would be difficult to test. I do think dogs recognize differences in quantity, such as a bigger piece of meat compared to a smaller one, but I don’t know of behavior that gives clear indication specifically on whether they count.

      It seems worth noting that a dog’s ability to associate a word with something seems to require repetition. If I point to an object and say, “This is an ankh,” you have a reasonable chance of recognizing that object as an ankh in the future. I’m not sure a dog ever does.

      Of course, they can associate something like, “Get the ankh,” with retrieving that object. But that seems less about them accessing their sensory experience and associating the sound with it than about operant learning of a task triggered by the sounds.

      That said, I have to admit I haven’t done a whole lot of reading about the science on dogs, so I’ll defer to anyone who has.

      Liked by 1 person

  7. I’ll start some true commentary for this post next, though first I’d like to offer a potential community service announcement. There’s a tool that I began widely using five months ago, and I think it has really upped my game, both for blogging and for being informed in general. It could be that many of you are already doing this sort of thing, though since I haven’t heard it mentioned, I’ll presume not, or at least that some may benefit.

    We must do a great deal of reading to get the information that we need, but reading demands concentration for that specific task alone. So if this material could instead be read to us while we’re doing other mundane things (and yes, even driving!), rather than requiring us to sit down in a quiet place with the articles that help educate us, then we should naturally be able to lead more interesting and educated lives.

    Well as it turns out, they’ve got an app for that! I use a free one on my phone called “Speechify”. Here I copy text, open the program, and then in seconds I’m listening to what I might instead have needed to actively read. I personally prefer the posh British voice option.

    For example, when I select this post (905 words), it’s a 3 minute listen. Then the 36 comments above mine (6250 words) go for another 21 minutes. (All depending upon the speed that you’re comfortable with.)
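    (As an aside for anyone curious about the arithmetic: those listening times work out to roughly 300 words per minute. Here’s a minimal sketch, where the 300 wpm figure is back-calculated from the numbers above rather than anything Speechify documents:)

```python
# Rough listening-time estimate for text-to-speech playback.
# The ~300 words-per-minute default is back-calculated from the
# figures above (905 words ~ 3 min, 6250 words ~ 21 min); real
# apps let you adjust the playback rate.
def listening_minutes(word_count, wpm=300):
    return word_count / wpm

print(round(listening_minutes(905)))   # post: 3 minutes
print(round(listening_minutes(6250)))  # comments: 21 minutes
```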

    Then beyond input there’s also the issue of output, which is to say effectively building our thoughts. I suppose that most everyone uses a laptop computer for this, whether sitting at a desk, on a sofa, or in bed. Perhaps iPads are used here and there. Does anyone use the on-screen keyboard of their phone? Or at least with speech-to-text? Surely only in a pinch!

    My first smart phone in 2007 came with a slide out keyboard. So addicted did I become to this format that when they stopped making them I started carrying two phones — a modern one as well as one for writing. Then I realized that I could glue a metal plate to the back of a new phone and slide on a Bluetooth keyboard with a similarly fabricated metal plate that protrudes off its back. So now I’m all set — except apparently I’ve found the last good keyboard, which was a long retired machine called “Grandmax”. I’ve purchased several others, mostly designed for television input, but none work nearly as well. So I’m quite worried about what I’ll do when my good keyboard dies!

    If anyone has thoughts about the text to speech tool, or effective machines from which to write, well I’d love to hear!

    Liked by 1 person

    • Can’t say I use any text to speech software. I used to listen to a lot of Audible books, but too much of what I’m interested in reading isn’t available, and I’ve reached the point where I rarely check anymore.

      I do a good amount of online reading on my phone (an iPhone 7) or iPad, and many of the tweets you see come from them. But I rarely attempt to comment with them and have never tried to do a blog post from them. For that I’ve always used a laptop, currently a Microsoft Surface Book. I did try using a bluetooth keyboard with the iPad several years ago, but found it very limiting.

      I’ve historically listened to a lot of podcasts on my phone, although not as much in the last year or so. I also read a lot of Kindle books on my phone or occasionally on the iPad, and even do some book reading (mostly for work) on my laptop. Physical book reading is rare these days.

      I occasionally think about switching my phone to Android, but I’m nervous about whether there are decent podcast apps available. The cross platform ones I’ve tried haven’t been very good.

      Liked by 1 person

      • Well Mike, I think you’re going to thank me for this one. Whatever you can select and copy can now become an audiobook for you. It doesn’t seem to matter how big. The most recent iPhone update made copying better for us. (Yes, I did go Apple, and specifically because they have one phone that isn’t obnoxiously big. And apparently phones only got obnoxiously big because people don’t have real keyboards and so must somehow both type on and look at the same thing!)

        Of course you also read lots of stuff on the web, some of which you might now listen to. Even your own comment section. I sometimes enjoy listening to our lengthy conversations from way back. Tools exist to be used as such, so accept them for their positives even if they do carry some negatives. A leaf blower must be used differently than a leaf rake, so don’t judge one by the standards of the other. Judge each by how well it can serve your purposes as such.

        Actually my phone word processing program (MS Word) does text to speech under “Review”, but Apple’s most recent OS update turned a normal American woman’s voice into a computer generated sort of thing, so unfortunately about like Stephen Hawking!

        Liked by 1 person

        • Thanks Eric. I didn’t realize that Kindle now allows us to copy text, although doing large sections at a time seems like it would be painful. And it seems like you’d have to manually keep track of what you’ve copied and listened to so far. I don’t guess there’s a way to do it from a PC and then use it on the phone? I know Calibre and similar tools can convert Kindle books to PDF, but that only seems possible if they’re not using DRM.

          I guess the ability to change the voice requires the Pro version? The free one seems hard wired to the woman’s voice. Not that it’s a bad one.

          I remember when the early hardware Kindles had a text to speech feature. Amazon largely abandoned it after they purchased Audible since it would threaten that revenue stream. We’ve always had the option of turning on the iPhone’s accessibility features to use its built in text to speech, but it requires that you turn the page at the end of every screen, which isn’t exactly practical when you’re driving.

          Liked by 1 person

      • Mike,
        You’re right that it’s painful to manually copy on Kindle. I go one chapter at a time and then title what I’ve copied that way to keep track. I’ve noticed that some PDFs don’t allow any text selection. Apparently they don’t want their work copied at all.

        I’m pretty sure that I’ve still got the free version of Speechify. When you’re in a book you should be able to hit the three dots at the top right that gives seven voice choices and lots of different languages. And don’t forget to adjust the speed at the bottom. My only true complaint is that it only goes in portrait mode though my phone keyboard always has me in landscape. The last software update seemed to fix most half cutoff words, which was irritating.

        I’ve now turned on the “speak screen” setting, where a reader comes up with a two-finger pull-down. It seems to do far more than just the screen, though it doesn’t follow along with the text of an email or web page. I may find this useful however.

        Liked by 1 person

        • Some PDFs are scanned images of paper documents, particularly for a lot of old stuff. To do anything with that, you’d need OCR software. But yeah, there are PDFs that have embedded DRM or other security features to lock down the content. I had to read PDF books last year that required access to a content licensing server, which was very slow to respond and made the books painful to work with.

          Playing around with Calibre this morning, Amazon appears to be monkeying with their format, making even DRM free books impossible to convert. Not good, particularly since this is content I’ve paid for and own.

          Found the voice selection. Thanks!

          Liked by 1 person

  8. Mike,
    I’d like to pair this dog metacognition study along with the study that you’ve recently linked to on twitter about how apparently they’ve found a fish that can recognize itself in a mirror.
    https://www.quantamagazine.org/a-self-aware-fish-raises-doubts-about-a-cognitive-test-20181212/

    I’m wondering if you’d take a moment to think about the sorts of things that I’d say about these studies given my own “dual computers” model? This is the model by which a vast computer that is not conscious outputs a second computer that is conscious (or essentially the sorts of things that we’re aware of). Here I identify an input that drives this tiny form of computer, or “valence”, an input which provides pure information, or “senses”, and an input by which past conscious experience may also be taken into account somewhat, or “memory”. These three forms of input are then interpreted and scenarios are constructed about how to promote valence welfare by means of “thought”, with the only non-thought output being “muscle operation”.

    So what might I say about evidence of metacognition in dogs, as well as fish that pass the mirror test? I realize that this is a tough one, though I’d be very interested if some ideas nevertheless come to you. I’ll of course provide my own such interpretation regardless.

    Liked by 1 person

    • Eric,
      Your model seems primarily focused on sentience which I think everyone agrees both dogs and cleaner wrasses have. And I personally think they both have body self awareness. (Otherwise they might try to eat themselves, which no animal I know of with any kind of brain does.) The question is to what degree they might have introspective self awareness.

      It’s worth noting that the cleaner wrasse results remain very controversial, with the originator of the mirror test not buying them; even the study author thinks his study actually means that the mirror test doesn’t show what many think it shows. (I actually agree with this.) And I myself am still not sure about the dog study results, although they strike me as more plausible than the cleaner wrasse ones, though far less plausible than the more rigorous metacognition experiments on primates.

      But based on our prior conversations, I think you see metacognition and the associated metacognitive self awareness as something any sentient creature can do if the concept of a mind is introduced to them. If I’ve got that right, you probably would accept the results of these studies at face value.

      But I’m interested to see your own thoughts on this.

      Liked by 1 person

      • Mike,
        Well as a true solipsist I don’t accept any results just at face value, though I will say that these particular results don’t seem to conflict with my own dual computers model. Thus if corroborated they wouldn’t force any revisions to my theory. Apparently many others can’t say the same. Furthermore for the most part I’m very impressed with your answer! Let’s disregard the “if the concept of a mind is introduced to them” part however, which to me seems like something that’s extremely advanced. (That is unless you can correct my perception and so render this quite basic?) But then yes, introspective or metacognitive self awareness is something that I consider any functional sentient creature to have in at least some capacity (pending definition). So let’s begin breaking this down.

        I’m also in agreement that the mirror test has been steeped in anthropocentric biases. How could it not be given that we’re all merely human, as well as that no functional brain architecture has yet gained any general acceptance? So I applaud the author for checking to see if only certain big brained mammals have the capacity to grasp reflected visual information. Whether or not these specific results stand the test of time, it’s clear to me that science needs correction in this regard. I’m happy that you and the author feel similarly.

        My main issue for this one is indeed this metacognition business. Let’s first get our definitions straight however. In its grandest sense, “thinking about thought” would seem to mandate something such as a relatively educated human. This is because in order to think about the concept of thought, one must at minimum have a term for the “thought” concept to ponder. This is in the vein of “I think therefore I am”. Of course this dog study wasn’t testing for anything nearly that advanced.

        For this study, apparently “metacognition” is evident whenever a creature isn’t entirely sure about something, but then has the ability to question its suspicions and so look for more evidence. As I understand it we’re also calling this “reflection” and “introspection”.

        I’ll now stop briefly to see if that’s your understanding of what is meant by “metacognition” for this study? Otherwise apparently I’ll instead need to account for a somewhat more complex idea.

        Liked by 1 person

        • Eric,
          My understanding of metacognition is that it’s cognition about cognition, awareness of our own awareness, knowledge of our own knowledge.

          The problem is that testing for this capability is extremely difficult, at least in a manner that unequivocally demonstrates its existence. We can accomplish it in humans by interviewing them about their mental state. Obviously that isn’t an option with animals. (And I suspect language, and symbolic thought overall, requires a particularly sophisticated form of metacognition.)

          The test I described last year for primates involved having them decide whether they knew enough to succeed at a test. Success at that pretty clearly showed a form of metamemory. But it’s a complex test and its complexity means failure for less intelligent species doesn’t necessarily indicate absence of metacognition.

          There have been experiments that simply observed whether the animal acted uncertain. The idea was that if they showed uncertainty, that meant they knew they didn’t have good information. The problem is that a show of uncertainty can simply be indecision arising from a lack of knowledge. I can’t see that it necessarily requires metacognition.

          The dog test is in between these. The key thing is that the dogs seemed to seek information when they didn’t know something, but not when they did, and their tendency to check varied according to what kind of prize they were expecting. The question is whether that specific behavior could happen without some form of metacognition.

          My own suspicion is that, like imagination, metacognition is not an all or nothing thing. There can probably be simple primitive forms of it, and more sophisticated versions: for example, knowing whether I know where a piece of food is well enough to warrant the energy of retrieving it, compared to assessing my performance in a social situation, to pondering my place in the universe. Indeed, I think affect consciousness could itself be considered a very primitive version of metacognition. In this lineup, social species have major advantages over non-social ones, and species that have to navigate complex environments (such as primates in trees) have even more.

          Liked by 1 person

      • Mike,
        Before we start baking in the presumption that testing for metacognition is extremely difficult, let’s first establish the basics for this project itself. It remains to be seen how difficult such testing happens to be. Instead I’m asking for you to provide me with a single definition for the term, as well as to realize that your definition shall be true by definition and so could just as easily be defined in any number of ways. “Metacognition” does not exist out there to discover. This is an arbitrary term which exists for you to define for me so that I can continue illustrating my perception of how things work.

        I believe that science today fails largely given that epistemology today fails. Apparently one way in which epistemology fails is by permitting scientists to go off looking for what “time”, “space”, “life”, “consciousness”, and so on “truly mean”. But you and I, we’re not going to fail just because they fail. We’re instead going to use my EP1 to start anew. So now define “metacognition” however you like (but specifically), knowing full well that you could just as easily go a different way that would force me to thusly revise. Then I’ll use your definition to the extent that I understand what you mean by it. (I did propose a definition above, but just give me anything reasonably specific and we’ll take it from there.)

        Liked by 1 person

        • Eric,
          I gave you a definition of metacognition at the top of my last comment. Or we can use the dictionary version: awareness and understanding of one’s own thought processes.

          But you seem to be using the word “definition” to mean a standard by which metacognition’s presence or absence can be tested. But that’s precisely the hard part. If we can all agree on how to test for it, the actual logistics of the experiment can be worked out.

          The standard that the primates passed was successfully choosing whether to bet on making a potentially costly decision based on their assessment of the quality of their own knowledge. Dogs haven’t been able to pass this standard.

          The standard the dogs did pass was deciding to seek additional information if they didn’t know something prior to making a potentially costly decision. The question is whether they had to have knowledge of their own knowledge to do this, or were just doing what they’d do without the knowledge.

          Liked by 1 person

      • Okay Mike, let’s take it from this angle. We could define metacognition as “Awareness and understanding of one’s own thought processes such that one could potentially decide not to decide”. Or “…decide to seek more information”. Or maybe just “…decide”. Or whatever else seems useful. Note that this makes no difference to me given that I do not use the term in my own models. But consider how helpful it should be for anyone who does use the term in their models, such as yourself, to get specific about what they mean. You could even label different varieties of metacognition if you like. The only thing that actually troubles me here is when science goes looking for something without bothering to specifically define what’s being looked for.

        With that said however, I’ll get into the brain architecture that I’ve developed. The model states that all sentient life (whether a human, or even a fly if it’s sentient) interprets conscious inputs (which come in flavors of valence, senses, and memory), and then builds associated scenarios about what to do to promote its present valence-based interests. The theory is that we all seek to make ourselves feel better each moment (and for something as advanced as the human, this should be heavily influenced by its hope and worry about the future).

        Thus I’m not surprised that dogs seek more information when they’re less certain. I’ve actually noticed this while playing fetch with them. At times I’ll be deceptive and motion like I’m throwing the ball, though actually hold on to it. Then the dog may start off to get it, but hesitantly since it’s still searching for the ball. Theoretically here the dog is interpreting its valence (desire to get the ball), its senses (sight), and its memory (wasn’t the ball just thrown?) in order to construct scenarios about how it might feel better. I suspect that most fetch playing dogs have memory of false throws and so keep this scenario handy for such play. Like the human, dogs seek information. In fact all functional conscious life should do so in at least some capacity.

        Do dogs ever settle on any scenarios where they decide that it’s best not to decide? Do they ever run through enough scenarios and so decide (like those chimps) that they aren’t going to play? Hmm…. I suppose that wouldn’t surprise me too much either.

        So if I place all conscious life in the same essential category, with sliding differentiation regarding more advanced “thought” (or interpreting conscious inputs and constructing scenarios), is there anything that I consider truly special about the human? Yes there is one thing. It’s that we evolved to speak natural languages. I consider this tool to constitute a powerful second variety of thought.

        Liked by 1 person

      • Good one Mike! I enjoyed Corey Mohler’s provided explanation:
        “Frege was an early philosopher of language, who formulated a theory of semantics that largely had to do with how we form truth propositions about the world. His theories were enormously influential for people like Russell, Carnap, and even Wittgenstein early in his career. They all recognized that the languages we use are ambiguous, so making exact determinations was always difficult. Most of them were logicians and mathematicians, and wanted to render ordinary language as exact and precise as mathematical language, so we could go about doing empirical science with perfect clarity. Russell, Carnap, and others even vowed to create an exact scientific language (narrator: “they didn’t create an exact scientific language”).
        Later on, Wittgenstein and other philosophers such as J.L. Austin came to believe that a fundamental mistake was made about the nature of language itself. Language, they thought, doesn’t pick out truth propositions about the world at all. Speech acts were fundamentally no different than other actions, and were merely used in social situations to bring about certain effects. For example, in asking for a sandwich to be passed across the table, we do not pick out a certain set of facts about the world, we only utter the words with the expectations that it will cause certain behavior in others. Learning what is and isn’t a sandwich is more like learning the rules of a game than making declarations about what exists in the world, so for Wittgenstein, what is or isn’t a sandwich depends only on the success or failure of the word “sandwich” in a social context, regardless of what actual physical properties a sandwich has in common with, say, a hotdog.”

        And indeed, my EP1 lies squarely between these two positions. The “make English exact” side obviously couldn’t, as this hotdog scenario so humorously demonstrates. But then I’d say that these “ordinary language” philosophers have failed just as miserably by rendering the most fundamental tool of science, its terms, into instruments of popular conception. That’s trouble, and certainly on the soft side.

        What “is” life? What “is” time? I consider this standard mode of query (today just as common in academia as the street I think) to be tremendously problematic. It’s as if our terms harbor truth behind them for us to potentially discover. So instead I ask “What’s a useful definition for….” time, life, thought, and all the rest. I believe that the theorist must be permitted to define a given term in the exact way that his or her work requires. Here a theorist will not be burdened by existing definitions which aren’t set up right. A given position would then succeed or fail based upon apparent usefulness. Of my four principles of philosophy I believe that this one would have the widest impact.

        For this strip I hope to some day see a middle position sequel!

        Liked by 1 person

        • Thought you might enjoy it. As always, when I share these, it isn’t to make any particular point, just for amusement.

          I’ve been thinking about the standard for testing metacognition, and it occurs to me that a case can be made that it probably should focus on what capability metacognition brings, in other words why it evolved. At a primal level, that capability seems to be the ability to assess our own performance on some task, or potential performance if we attempt it.

          The primate test seems pretty solid on this. The dog test in this study seems far shakier. I’m now leaning more towards thinking that this study didn’t demonstrate that dogs have this capability. Of course, that doesn’t necessarily mean they don’t have it, just that if they do, a way hasn’t been found to demonstrate it yet.

      • Mike,
        Yes let’s consider your provided definition for metacognition to try to get a feel for it. Apparently anything would have this capacity if at a primary level it’s capable of assessing its own performance on some task, or potential performance if it’s attempted. From here I can see how you wouldn’t find the provided dog test as conclusive as the primate test, since the dog test simply had them seeking more information while the primate test led some subjects to decide that they were better off abstaining. But if the dog test wasn’t structured to test for potential abstentions, perhaps we can design some tests that are?

        It seems to me that many or most dogs love to run and jump, and that regardless, they all love respect from others for their abilities. So let’s say that we put together a couple of platforms with a padded moat in between, for them to jump at adjusted distances. The punishment for coming up short would not be injury, but rather a blow to the dog’s pride in relation to its master and a cheering or jeering audience.

        Would you say that if some of these dogs were to decide not to jump at longer distances, this would likely be because they could foresee that they probably wouldn’t make it and so would be humiliated, whereas not jumping wouldn’t be nearly as stigmatic? Or if not, what might better test the potential for dogs to assess their own performances, or potential performances if attempted?

        I’ve been going through Wikipedia’s cognition and metacognition articles in order to hopefully get a better sense of the things that people mean by these terms. (Way confusing!) Something occurred to me here. If you’re going to designate metacognition as a fifth layer of consciousness, for coherence shouldn’t your hierarchy also identify cognition somewhere previously? Either way could you fit the term in there for me?

        • Eric,
          On your proposed test, I think we would have difficulty showing that the dog was motivated in its decision by complex emotions like respect or shame, particularly in eliminating alternate explanations that are more primal. Even then, I’m not sure if that necessarily would demonstrate metacognition, at least in the sense of being more than mere affect awareness.

          The test would have to be something where success required that the dog accurately assessed its own knowledge or (even harder) regulated its own cognitive processes. The trick is devising a test that the dog doesn’t fail from lack of intelligence, and where non-metacognitive capabilities aren’t a plausible alternate explanation. The intelligence issue is why the primate test I’ve described before doesn’t work for dogs. Of course, metacognition is actually a type of intelligence, so the trick is isolating the specific type of intelligence we’re after. As I noted above, if it was easy, it would have been done decades ago.

          On the layers, I’ve actually alternatively referred to them before as “layers of cognition.” You could equate cognition with layers 2-4. Although I actually think of metacognition as an advanced form of cognition (hence my alternate name for the layers).

          I think the difference between cognition and metacognition is that in cognition, the concepts are about outside entities, the environment, external objects, one’s body, etc. Even affective feelings could be considered concepts about a different subsystem in the brain. But metacognition involves concepts about other concepts, or “second order” concepts. And it can involve concepts of the concepts of the concepts, to arbitrary levels of recursion, at least in humans. (Which is how we can have this conversation.)

      • Mike,
        The only reason that I made the punishment for that test something like “humiliation” was so that this might be an experiment that wouldn’t horrify people in general about what scientists would be doing to dogs. It slipped my mind that many soft scientists today consider very little regarding human mental function to exist in the social animals that the human evolved from. I take the opposite position for a couple of reasons. The main one, however, is that when standard anecdotal evidence strongly supports them as having rich emotional lives, with virtually all animal professionals using this perspective in order to effectively do their jobs, strong evidence should be demanded to believe otherwise. Strong contrary evidence is the last thing we have today. As you’ve said, just because these scientists haven’t conclusively found through their tests that dogs do feel higher order emotions, this doesn’t mean that such emotions aren’t felt. I’ll go with the professionals on that one. I suspect that science in general will come around to this view as it improves.

        So to remove this concern, the experiment can be altered into a form that hopefully would never actually be attempted. We could encourage dogs to jump across an adjustable pit that would cause them tremendous pain and injury for falling short. Howls of pain from failed attempts would make the consequences of failure clear to all. The question now becomes, is the dog capable of assessing its ability to jump a dangerous ravine? Is it capable of pondering the sorts of things that it might feel if it comes up short?

        Yes of course it can. This would seem to satisfy the definition which you’ve presented earlier, or a capability to assess its own performance regarding some task that it has done, or potential performance for a task that it’s considering doing.

        Your last reply seemed to present a higher standard for metacognition however, where a subject must accurately assess its own knowledge. (I’ll take it that knowledge of how far it can jump, or how fast it can run against a known competitor, doesn’t suffice.)

        Let’s say that there is some game that the dog can play with a human that yields a clear winner and loser between them. Perhaps the dog gets a treat for winning and is given a mild shock for losing. Then let’s say that there is a person whom the dog commonly beats, as well as a person whom the dog commonly loses to. If the dog chooses to no longer play this game with the person that it generally loses to, this would suggest that it understands that it’s not quite as talented. Would this satisfy you that it has an ability to assess its own ability? Or an ability to assess the ability of another? Would this be a concept (the ability to effectively play) about a concept (the game)?

        I went back to that Metcalfe paper to consider the Rhesus monkey test. These monkeys could choose to take the test when they were more confident that they remembered rightly for a good reward, or could choose to not take the test when they were less confident for a lesser reward.

        If the tests of science do not yet demonstrate that dogs can do this sort of thing, then I’m more inclined to question science’s ability to develop such tests effectively, not to question whether or not dogs can be uncertain enough about their memories to show restraint in that regard. To me this seems essential for effective conscious function itself. What use would memory be to a dog if it can’t use such understandings to consciously modify its behavior?

        On cognition, I was wondering if you were going to define this as your 2, 3, and 4 layers of consciousness. But notice that this would once again leave your layer 1 as the odd man out. Perhaps it would be useful to not classify 1 as a layer of consciousness at all? You’d then have four layers of cognition, with that final one as the meta form.

        • Eric,
          I did leave off the word “mental” above; I should have stipulated “mental performance”. Sorry. My bad. I took that as given and led you down a dead end. As a result, as you noted, the jump test, even the potentially painful one, doesn’t get us there. It’s really just testing for (first order) imagination and affects.

          I’m afraid the game test doesn’t either. The dog can operantly learn which human is easier and which is harder without metacognition. Don’t feel bad. Ruling out the possibility that it’s solely operant learning appears to be one of the hardest things to accomplish in constructing these tests. As I noted above, this is hard.

          I understand your strong intuition about what dogs can do. But if we’re going to be scientific about this, you can’t use only those intuitions in your reasoning. Rigorous, repeatable, verifiable evidence is necessary. It’s far too easy to fall into the trap of projecting our own mental scope on animals. Remember, dogs have 1/32 the amount of cortical substrate we do. There will be differences in that scope. It’s just a question of what is different. They may be our cherished friends, but they’re not little humans.

          I keep the reflexes as layer 1 for two reasons. First, a lot of people assert that anything that responds to the environment is conscious, or at least anything that responds with the motivations of living things. I think both standards are too broad for an effective definition of consciousness, but many panpsychists use them, and that hierarchy was originally presented as part of a post criticizing panpsychism, and to show just how much distance there is between those definitions and human consciousness.

          But the second reason I keep it is that reflexes are a necessary foundation. Without them, we’d be just empty prediction and reasoning mechanisms. You can’t have sentience without them. When layer 4 decouples them, changing automatic action to a propensity to action, they become the foundation of affective feelings.

      • Mike,
        Ah… good clarification. By deciding to continue on with a computer game because it knows it’s able to remember the correct answer, or not to because it’s less certain, the Rhesus monkey demonstrates that it can assess its own mental performance. Conversely a dog attempting or not attempting to jump a dangerous pit does not seem mental. (Well maybe a bit, since here it must remember how far it can jump, though I certainly wouldn’t use a fancy term like “metacognition” for that.) And even if a dog were to commonly beat one person in a game that just happens to concern memory, though not another person, we might explain the dog’s preferred opponent in other ways. In order to demonstrate metacognition it might instead be given information, and then decide to continue or not based upon the confidence it has that it’s retained this well enough to answer an associated question.

        I do still suspect that the dog can do this however. This sort of thing should address why memory exists at all. And then once scientists are able to demonstrate such behavior in dogs, I predict that this criterion for “thinking about thought” will be taken away from not only them, but the Rhesus monkey as well. And I’m fine with such a restriction of the term. Science must define metacognition however it’s considered most useful to define, and it seems to me that many scientists today would like to reserve this term to describe human function exclusively.

        Note that none of these tests out there place a limit on what’s actually felt or thought. Even that 2009 Horowitz dog/guilt experiment, which I consider quite biased, presented such a disclaimer. These studies simply indicate that perhaps what seems to be the case, isn’t quite true. Technically the evidence is for agnosticism here, though apparently many scientists presume more.

        My own perspective does not emerge from a need for animals in my life. I have no use for what pets provide in an emotional sense, given that my wife and son provide me with this. Or it might be that I’m just not a pet person. Looking after any animal would be virtually all burden without reward. But then if that’s the case, why do I feel so strongly about this particular issue? Because the dynamics of how life functions, and certainly sentient life, is what interests me more than anything. This stuff is my passion.

        Notice that if you want to effectively work with dogs, cats, horses, or intelligent social animals in general, whether professionally or casually, your ability to comprehend their function will depend upon your ability to accept the view that these are essentially strange and non-lingual forms of “human”. Given their behavior it only makes sense that they can feel all sorts of “higher level” things. This might be jealousy when a new member comes along that seems favored. Or a cat might pridefully display a kill to a horrified vegan owner. It also seems quite clear to me that they have some ability to grasp what others feel, and so use their theory of mind skills to shape their decisions. I’m saying that the wolf must have needed such skills, as I’m sure any wolf documentary will suggest. The human has altered the dog extensively in its outside appearance, though perhaps has tinkered with its mind far less. Both empathy and sympathy are displayed strongly in the wolf.

        In order to not make so many mistakes with our animals we must naturally understand something about how they function. I suspect that any person who takes the first-order emotional line on advanced social animals will be scratching their head again and again about how to explain what they witness. There should be far more potential for such a job to be done effectively by people who model such creatures as strange forms of non-lingual “human”. This person might predict things like “If I add this animal to the group, then that one should tend to display jealousy”, and so on. I suspect that if so inclined, a professional animal handler could make a mockery of the position that dogs lack theory of mind skills, as well as set up some effective scientific experiments to demonstrate that sort of thing. But then where are these experiments? I don’t know. Perhaps they’re coming. Science doesn’t always work effectively.

        On your first layer of consciousness, I cannot accept that it needs to be there because some people talk about how everything that lives and moves is useful to call “conscious”, or certainly not because panpsychists exist. You’ve mentioned this model as how you personally consider it useful to think about this stuff. But since you do not endorse such views, why honor them with a seat at your table? In the comments I’m sure that I’ve noticed others perplexed by what you’re calling your first layer of consciousness.

        Then for your second explanation, well, yes, I agree. You can’t have consciousness exist without any supporting structure at all. But you don’t need to call the supporting structure “conscious”. Instead you could have one layer of non-consciousness that supports four layers of consciousness. Then as you ponder this sort of thing further, perhaps you’ll reduce your model down to a parsimonious “force = mass x acceleration” type of model that our soft sciences so desperately need today. In that case apparently you and I would become competitors! But perhaps our models would actually become one and the same. Here we’d be collaborators. Regardless of our fates, there’s work to be done!

        • Eric,
          I should note that I’m very much a dog person. I’ve had several in my lifetime and thought of each of them as a cherished friend. When the last one died, I grieved intensely and buried her in my back yard (actually her back yard more than mine) with her favorite toys. Just letting you know that I don’t come at this from a disinterested clinical perspective.

          On the layers of consciousness, I think you might be looking at it as which layer actually is conscious. I actually don’t look at it that way. I don’t think there’s a fact of the matter on which one is conscious, just where we decide to draw the line.

          And we tend to be inconsistent in how we do it. If we see an awake animal displaying behavior that can be explained solely by 1-3, many of us will say that it’s conscious. We do the same thing with humans who are brain compromised, such as the hydranencephalic children who are missing most of their cerebrum. But when talking about ourselves, we tend to relegate anything outside of 5 to the subconscious or unconscious. (Sorry, I know you don’t like that last word, but it’s the one commonly used.)

          In my view, it’s meaningless to talk about consciousness as though it’s a thing in and of itself that’s either entirely present or entirely absent. I see that as dualistic thinking. It’s a collection and hierarchy of capabilities. When enough of those capabilities are present, we say the system is conscious. We generally don’t require all of them. I realize that’s not a popular view, but it’s where my studies have brought me.

          Don’t worry. We won’t be competitors. I view my layers as solely pedagogical, not any kind of new scientific theory. I’ll use them to the extent they help me keep things straight, and that they help convey complex concepts, but I have no intention of trying to sell them as anything but my own repackaging of mainstream neuroscience.

      • Mike,
        I suspected that you were a dog guy. It’s amazing how perfectly these animals appeal to us. And then given your understandable bias to want to believe that these animals felt something like what you felt towards them, I can see how you’d fight those biases when science suggested that you had actually been duped about their apparent affections and theory of mind capabilities. So you’d now try to believe that you were simply a meal ticket for them, and regardless of how convincing their displays seemed to be. Well, I think that your dogs had rich emotional lives, with sympathy as well as empathy, and so felt about you something similar to what you felt about them. And even given one thirty-second of the cortical neurons. That’s certainly what the professional would tell you. And the professionals prove their understandings every day by being able to do the practical jobs that they do.

        Then as far as there being no true definition for consciousness, it’s good to hear that we’re on the same page with that, or my first principle of epistemology. But just like it was necessary in the days of Newton to develop a useful definition for “force” (so useful that accelerated mass would become widely accepted as such today), science is in need of a definition for consciousness that’s useful enough to become widely accepted. Soft science will need to graduate from its “sort of like this…” definition.

        I have a sentience based consciousness definition that I’m quite proud of, but helping others practically test out its implications has been challenging. In many respects I consider the physics community back in the days of Newton far more healthy than soft science is today. Indeed, it may be that our soft sciences will need effective principles of metaphysics, epistemology, and axiology in order to become healthy enough to do what’s needed. So I propose theory to potentially help on that front as well.

        The key to my consciousness model is not functionality, but rather the potential to feel bad or good. If you were instead the computer that you’re looking at, could existence be bad or good for you? Probably not. But if you were a hydranencephalic child, could existence be bad or good for you? Probably. Thus here the functional machine would not be conscious while the incomplete machine would be conscious. I theorize the human brain as a vast non-conscious computer that outputs the conscious form of computer by which you experience existence. It contains inputs of “valence” such as pain, “senses” such as sight, as well as a proclivity for degraded past conscious experiences to be accessed, or “memory”. For functional consciousness such inputs are interpreted and scenarios are constructed with the purpose of promoting instant valence. I call this conscious processor “thought”. The only non-thought thing that this form of computer can effectively do is operate certain muscles.

        What I need is for people like yourself to give this model a try in various applications to see where improvements are needed, and whether useful understandings result. And indeed, I’ve developed an entire suite of supporting models as well. Without a solid foundation from which to build, as in Newton’s contribution to physics, our mental and behavioral sciences should continue to suffer.

        • Eric,
          “So you’d now try to believe that you were simply a meal ticket for them, and regardless of how convincing their displays seemed to be. ”
          I actually don’t see any reason to go that far. Wolves are social animals, so it stands to reason they can bond with others. We’ve selectively bred a clade that extends their social circle to us. That leaves plenty of room for genuine affection.

          The scope of their theory of mind is limited by our standards, but it is there. They can have genuine sympathy for us when they perceive we’re suffering. They often don’t understand the complex reasons we suffer, but they usually understand the primal state well enough to offer comfort.

          But the data indicates that a theory of mind doesn’t necessarily equal metacognition. I shared this image from a meta-study on metacognition somewhere else on this thread. It shows some overlap in brain regions between the two, but very distinct networks.

      • Mike,
        It’s good to hear that you see both care and theory of mind in the dog and perhaps social creatures in general. But how about what I call “theory of mind sensations”? This is not just theory of mind, such as I think that you think that I’m awesome. It’s that because I think that you think that I’m awesome, I feel good about this perception. Or it could be the opposite. In that case I’d feel bad about what I think that you think about me. I believe that this sort of thing evolved into social forms of animal long before the human came along, and did so for clearly adaptive reasons. If evidence suggests that the pack reveres a given member, then it may be adaptive for the perhaps revered member to feel good about its perceptions of the pack’s perceptions of it. Thus here we have “higher order” sensations in some capacity, such as respect, humiliation, and so on. While I don’t think that a person can effectively work with social animals while denying that these animals feel such things somewhat, apparently the very same person could be welcomed into an Ivory Tower which supports the position that only the human can feel such higher order theory of mind sensations.

        I went through that paper from the image above where they found only some neural correlation between theory of mind and metacognition. Though it’s quite clinical, I’m not entirely sure what to think about it. It’s been my understanding that finding broad neural signatures for things as basic as feeling sad or even feeling pain has been extremely elusive for neuroscientists. Can they really differentiate when I’m thinking about England (cognition) and thinking that I think therefore I am (metacognition)? I wouldn’t think so. And this would supposedly be a very strong example of metacognition that’s clearly reserved for a lingual creature. What about when a person is deciding whether or not he/she remembers something well enough to gamble about that memory, or the sort of thing that even a Rhesus monkey seems able to do? I’d be surprised if they can tell whether he/she is doing something like this rather than perhaps eating lunch. But then if they can, well that’s wonderful!

        Note that regardless of how broadly or narrowly it’s defined, my own consciousness model addresses brain function at a more fundamental level than metacognition. Therefore the model does not reference the term whatsoever. Whether the metacognitive proclamations of René Descartes, or the figuring of a Rhesus monkey deciding if its memory is sufficient to accept a bet, each are first theorized as the interpretation of conscious inputs (valence, senses, memory) and construction of scenarios about what will cause it to feel better from moment to moment. So I’m providing a quite reduced model, or the kind which tends to be most valuable in science, or at least when validated by enough evidence.

  9. The topic of dog knowledge always piques my interest! I am, of course, fairly biased about it, given Geordie’s influence on me. 🙂

    Also, I haven’t read all the comments, so I apologize if I’m repeating someone’s point.

    My first reaction was this: How is it possible to conduct an experiment to determine metacognition in animals? Or I could phrase it this way—How can behavior reveal the difference between simple knowing or not knowing vs. knowing that/what you know or knowing that/what you don’t know?

    Anyway, a recent example from the chronicles of Geordie: leading up to Christmas, Geordie lurked around the presents under the tree as if sizing them up, circling the area multiple times a day. His presents always have a treat wrapped inside each one, and this operates as a sniffable gift tag so he knows which ones are his. One day while I was watching him, he carefully pulled out one of his presents, dropped it maybe an inch or so from where it was originally, then looked up at me with what I’ll call a questioning look. In any case, it was obvious to me that he was asking for permission to open it. Why was it obvious to me? Well, if he wanted to be a bad boy, he could open his presents while everyone was away, but he’d never do that—a treat is a really big deal for him, but not worth getting into trouble for. He was also extremely careful about pulling out the package so as not to tear it—really in stark contrast to his behavior on Christmas day which certainly didn’t involve waiting his turn or anything civilized like that—and he only did this while I was standing beside him and watching him. This was his way of making it clear he wasn’t trying to pull a fast one on me, but merely communicating his desire to open the present. (Alas, I made him wait until Christmas. I know. Mean Mommy!) So, was he exhibiting metacognition?

    Another recent example: “Daddy” dropped a pretty sizable chunk of a chocolate cookie on the floor, and Geordie was the first to notice it. Instead of simply scarfing it down—as he would if it were a piece of cheese or something he’s been given in the past—he asked me for it. Of course, I had to say no and take it away, but the fact that he didn’t just eat it shows some sort of insight, I think, maybe. Then again, I’ve often told him when some particular food is “not for doggies,” and he might’ve learned from me that he’s not supposed to eat chocolate, which I imagine is pretty easy for dogs to identify. If this is the case, it might not be metacognition he was exhibiting, but just a solid understanding of the rules.

    What do you make of this?:
    https://link.springer.com/article/10.1007/s10071-017-1082-x

  10. “How can behavior reveal the difference between simple knowing or not knowing vs. knowing that/what you know or knowing that/what you don’t know?”
    The answer is that it’s really difficult. It comes down to what adaptive benefit we think that capability brings. In humans, this is easy: get them to talk about their mental states. The ability to associate word sounds with perceptions, actions, or feelings is a major benefit and requires detailed metacognition. But in species without language? We need to identify behavior that can only happen if the animal can assess its own mental knowledge.

    One pretty rigorous experiment starts by showing an animal information, then hides that information. The animal then has to decide whether to take a test on what they remember seeing. If they decide not to take the test, they get a moderately tasty treat (think peanut). If they do take the test and fail, they get nothing. But if they take it and succeed, they get a much tastier treat (such as a grape). The idea is that their decision on whether or not to take the test depends on their evaluation of how well they remember the information. The goal of the overall experiment is to measure how accurately the animal can assess its own memory. Only primates pass this test. But the fact that only they pass is a bit suspicious. Maybe the test is anthropocentric?
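    The contingencies of that opt-out experiment can be sketched as a toy simulation. This is purely illustrative: the payoff values, the noisy-confidence model, and the condition names are my own assumptions for the sketch, not parameters from any actual study. The point is just that an agent whose confidence tracks its actual memory state produces the telltale pattern (opting in often with good information, rarely with bad), while an agent with flat confidence does not.

```python
import random

# Illustrative payoffs for the opt-out paradigm (assumed values).
OPT_OUT_REWARD = 1.0  # guaranteed modest treat (the "peanut")
PASS_REWARD = 3.0     # better treat for taking the test and passing (the "grape")

def opt_in_rates(confidence_by_memory, n=10_000, noise_sd=0.2, seed=0):
    """Simulate an agent that opts into the memory test whenever its
    noisy felt confidence makes the gamble look better than the sure treat.

    confidence_by_memory maps a memory condition ('good', 'ambiguous',
    'bad') to the agent's average confidence that it would pass.
    Returns the fraction of trials per condition where the agent opted in.
    """
    rng = random.Random(seed)
    took = {cond: 0 for cond in confidence_by_memory}
    seen = {cond: 0 for cond in confidence_by_memory}
    conditions = list(confidence_by_memory)
    for _ in range(n):
        cond = rng.choice(conditions)
        seen[cond] += 1
        felt = confidence_by_memory[cond] + rng.gauss(0.0, noise_sd)
        # Opt in when the expected value of testing beats the sure reward.
        if felt * PASS_REWARD > OPT_OUT_REWARD:
            took[cond] += 1
    return {cond: took[cond] / seen[cond] for cond in conditions}

# A "metacognitive" agent: confidence tracks how well it actually remembers.
meta = opt_in_rates({"good": 0.9, "ambiguous": 0.4, "bad": 0.1})
# A non-metacognitive agent: confidence is flat regardless of memory state.
flat = opt_in_rates({"good": 0.5, "ambiguous": 0.5, "bad": 0.5})

# The metacognitive agent opts in far more often with good information
# than with bad; the flat agent shows no spread across conditions.
print(meta)
print(flat)
```

    Of course, this only restates the measurement logic; it says nothing about the hard part the thread keeps circling, which is whether a real animal's opt-out behavior requires reading its own memory or can be produced by simpler, non-metacognitive mechanisms.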

    The question is whether the dogs in the study in the post could have done what they did without evaluating their own knowledge. On the one hand, the dogs stopped to gather more information more often when they didn’t know. But did this require them to think about what they knew? Or were they simply reacting to what knowledge they had? If this is metacognition, it’s a weaker form than what the primates demonstrated.

    Geordie is an impressive dog! I’m pretty sure none of my dogs would have abstained from accessing any treat they knew how to get when I wasn’t around. One of them once went so fast for spilled Oreo crumbs that we didn’t have a chance to stop him, or the heart to pull him away.

    The problem with evaluating behavior of a dog in front of its owner is that we give dogs unconscious cues all the time. They are remarkably attuned to our reactions. The question is, did Geordie really stop to ask permission? Or did he see your facial reaction and body language and hold off because he might get into trouble? Scientists who put controls in place to see if dogs actually have guilt or similarly complex cognition weren’t able to confirm it.

    That study on theory of mind is interesting. Thanks! The dogs are making decisions based on what they think each person knows. It’s hard to imagine they can do that without having at least some kind of similar insight for themselves. The question is how much are mentalising (theory of mind) and metacognition the same thing. I linked to a meta-study elsewhere in the thread that seemed to show they have overlap, but also a lot of distinct activation. So, maybe?

    The overall paper is at: https://journals.sagepub.com/doi/full/10.1177/2398212818810591

    • That peanut-grape test (or so I’m calling it) sounds really hard! I’m not sure I’d pass it. How do the animals know they’ll get a better treat (a grape) if they take and pass the test? I might prefer the guaranteed peanut!

      “The problem with evaluating behavior of a dog in front of its owner is that we give dogs unconscious cues all the time. They are remarkably attuned to our reactions. The question is, did Geordie really stop to ask permission?”

      It’s definitely true that he reads me like a book. I don’t think he was feeling guilty so much as just wondering if he could eat it. As in, “Is this edible?”

      I understand the skepticism, though. Most dogs would go for it without hesitation. He seems awfully good at holding back from new types of food he finds, especially for a terrier who loves food as much as he does. I sometimes wonder whether the rattlesnake incident had something to do with his reluctance to trust himself the way most terriers would.

      The truth is, I’m embarrassingly lax with him. He’s allowed on all furniture, he sleeps with us every night, he usually gets a bite or two of whatever I’m eating, so long as it won’t make him sick, of course. It’s a bit egalitarian around here. So maybe he understands that I’m not arbitrarily forbidding him from having things? He takes me seriously and backs off when I say “be careful”, or “watch out” or “look out for the pokie”, as though he knows it’s for his own benefit.

      “The question is how much are mentalising (theory of mind) and metacognition the same thing.”

      Yeah, I’m not sure what metacognition really is, at least not when it comes to animals. But yeah, knowing others does make it seem one must know oneself, right?


      • “How do the animals know they’ll get a better treat (a grape) if they take and pass the test? I might prefer the guaranteed peanut!”
        As I understand it, there’s a training period where the animal learns how the test works. And the treats are adjusted for whatever the particular species likes. Monkeys prefer grapes over peanuts, but a cat wouldn’t be interested in either. And there are variations in performance per individual, even within the same species, so it has to be measured statistically.

        The thing being measured is how likely they are to take the test when they have good information vs how likely they are when they have bad information vs how likely they are when they have ambiguous information. Passing means taking it more often when they have good info and less often when they have bad. The idea is that you can only do that if you can examine your own knowledge.

        As I discussed with Eric, designing a test where success is only possible with metacognition is actually very hard. Most who attempt it, like the researchers in the study discussed in the post, have trouble ruling out simpler alternative explanations.

        “But yeah, knowing others does make it seem one must know oneself, right?”
        You would think so. It seems like a theory of mind is difficult if one doesn’t have at least some access to one’s own mind. But that may just be the way it is in humans. Our theory of mind may be more powerful, allowing much larger social groups, because we have that kind of access. Projecting that relationship to other species may be a mistake. Maybe one of the reasons metacognition evolved was to enhance a theory of mind that existed before it.

