Frans de Waal on animal consciousness

Frans de Waal is a well-known proponent of the view that animals are much more like us than many people are comfortable admitting.  In this short two-minute video, he gives his reason for concluding that at least some non-human animals are conscious.  (Note: there’s also a transcript.)

de Waal largely equates imagination and planning with consciousness, something I’ve done myself on numerous occasions.  It’s a valid viewpoint, although some people will quibble with it since it doesn’t necessarily include metacognitive self awareness.  In other words, it doesn’t have the full human package.  Still, the insight that many non-human animals have imagination, whether or not we want to include it in consciousness, is an important one.

As I’ve noted many times before, I think the right way to look at this is as a hierarchy or progression of capabilities.  In my mind, this usually has five layers:

  1. Survival circuit reflexes
  2. Perception: predictive sensory models of the environment, expanding the scope of what the reflexes can react to
  3. Attention: prioritizing what the reflexes react to
  4. Imagination / sentience: action scenario simulations to decide which reflexes to allow or inhibit, decoupling the reflexes into feelings, expanding the scope of what the reflexes can react to in time as well as space
  5. Metacognition: theory-of-mind self awareness, symbolic thought

There’s nothing crucial about this exact grouping.  Imagination in particular could probably be split into numerous capabilities.  And I’m generally ignoring habitual decisions in this sketch.  The main point is that our feelings of consciousness come from layered capabilities, and sharp distinctions between what is or isn’t conscious probably aren’t meaningful.

It’s also worth noting that there are many layers to self awareness in particular.  A creature with only 1-3 will still have some form of body self awareness.  One with 4 may also have attention and affect awareness, each arguably another layer of self awareness.  Only with 5 do we get full-bore mental self awareness.
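
For anyone who thinks better in code, here’s a minimal Python sketch of the layering idea.  The layer names and the rough mapping to forms of self awareness come straight from the grouping above; everything else (the enum, the function, the example call) is illustrative scaffolding I’ve made up, not a claim about how any actual nervous system is organized.

    from enum import IntEnum

    class Layer(IntEnum):
        """The five layers from the list above, ordered so that
        comparisons like `>= Layer.IMAGINATION` make sense."""
        REFLEXES = 1       # survival circuit reflexes
        PERCEPTION = 2     # predictive sensory models of the environment
        ATTENTION = 3      # prioritizing what the reflexes react to
        IMAGINATION = 4    # action scenario simulations / sentience
        METACOGNITION = 5  # theory-of-mind self awareness, symbolic thought

    def self_awareness_forms(highest_layer):
        """Roughly which forms of self awareness come with which layers,
        per the paragraph above.  Purely illustrative."""
        forms = ["body self awareness"]               # layers 1-3
        if highest_layer >= Layer.IMAGINATION:        # layer 4
            forms += ["attention awareness", "affect awareness"]
        if highest_layer >= Layer.METACOGNITION:      # layer 5
            forms.append("mental self awareness")
        return forms

    print(self_awareness_forms(Layer.IMAGINATION))
    # ['body self awareness', 'attention awareness', 'affect awareness']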

It seems like de Waal’s point about observing the capabilities in animal behavior to determine whether they’re conscious will also eventually apply to machines.  But while machines will have their own reflexes (their programming), those reflexes won’t necessarily be oriented toward their survival, which may prevent us from intuitively seeing them as conscious.  Lately I’ve been wondering if “agency” might be a better word for these types of systems, ones that might have models of themselves and their environment but don’t have animal sentience.

Of course, the notion that comes up in opposition to this type of assessment is the philosophical zombie, specifically the behavioral variety, a system that can mimic consciousness but has no inner experience.  But if consciousness evolved, for it to have been naturally selected, it would have had to produce beneficial effects, to be part of the causal structure that produces behavior.  The idea that we can have its outputs without some version of it strikes me as very unlikely.

So in general, I think de Waal is right.  Our best evidence for animal consciousness lies in the capabilities they display.  This views consciousness as a type of intelligence, which I personally think is accurate, although I know that’s far from a universal sentiment.

But is this view accurate?  Is consciousness something above and beyond the functionality of the system?  If it is, then what is its role?  And how widespread would it be in the animal kingdom?  And could a machine ever have it?

41 thoughts on “Frans de Waal on animal consciousness”

  1. Although I agree with Frans that it is clear animals are conscious, I disagree with him in that “planning” is neither necessary nor sufficient for consciousness; my computer can plan (e.g., a route around a maze) but certainly isn’t conscious. Conversely, my computer cannot see “the ineffable red of a rose”; whereas I am pretty sure many animals can …

    1. Hi Mark,
      Actually, you may be saying the same thing but from a different angle. The redness of a rose seems to involve more than just the information of an object with a certain shape reflecting light of a certain wavelength that strikes the retina. It feels like something to have that particular sensory experience. But what adds the feeling? Why is the feeling added?

      The answer, I think, is imagination. (Which is why I grouped imagination and sentience together.) We feel something about the redness of the rose because that feeling is information, information we can use in our planning. The plan may be nothing but to stoop down and smell it, or to pick it, or to simply ignore it, but it’s still information.

      Of course, it is ineffable. But shouldn’t we expect any mechanism we share with other animals to predate language and symbolic thought?

      1. “The redness of a rose seems to involve more than just the information of an object …” Trivially, if you mean more than mere Shannon information, then yes; not least because there is more to an object than its mere pixels, as Shannon [1948] specifically foregrounded in the introductory paragraph to his seminal paper, “A Mathematical Theory of Communication”, (The Bell System Technical Journal 27, pp. 379–423 & pp. 623–656), “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem”. Shannon information relates to the realms of [mere computational] planning but not, as Shannon highlighted, to the realms of meaning and understanding as realised in humans and animals.

        Indeed, as Jackson so neatly illustrated with Mary, there is specifically the [phenomenal] visual, scented sensation of the rose … I have no doubt my cats have been conscious of sight and smell in this way; a way my computer can never be, even if it can successfully plan a path to winning at Go or driving a robot around a room; such planning is irrelevant to the fundamental question of animal consciousness; or so it appears to me …

        1. I think the difference between us is I’m not satisfied to draw a line between the information and the phenomenal aspects of consciousness. To me, those phenomenal aspects are just more information. When a robot is able to extract associations from visual information similar to the ones we extract, I think they’ll be having a similar phenomenal experience.

          It will never be exactly like ours, because their base programming won’t match our reflexive survival circuits (at least unless we go out of our way to make them match), but the mechanisms will be similar.

  2. “There is something it is like to be a bat.” Yes, absolutely animals have some form of consciousness. Mammal brains have so much in common with ours that the question may be more why they wouldn’t be conscious in some form. I’ve been around dogs most of my life; they definitely have an inner life, there’s no question in my mind about that.

    Not going to re-open the machine thinking question here. I do wish I’d seen this post before I wrote the Mandelbrot comment. It would work equally well here and would be a fresh start. 😀

  3. Thinking about this in terms of your levels, my guess with dogs is they are on the foothills (so to speak) of your level 4. They certainly are sentient, but I don’t know how much imagination they have (some, but not much). Primates probably get further up the level 4 hill, I’d guess, but don’t reach its heights.

    The planning criterion is an interesting one! It lets us consider the distance between planning to use a stick and planning a party. (I’d say there was a fair distance.) I’d be really interested to know if chimps collect sticks for use the next day or after a time delay doing something else. Or is it more direct: Go get a stick; go use the stick.

    1. From what I understand, the ability to plan ahead varies tremendously in the animal kingdom. Fish can typically only do it a few seconds into the future. Most land animals can do it a bit further, out to the next few minutes.

      The extent to which animals can use sticks with forethought is astounding. Chimps have been seen to break a stick off, rip off any side branches, and “sharpen” the edge, before using it to do things like spear bushbabies hiding in trees and then eat them off the stick. Several animals reportedly will take burning sticks and use them to start fires in new places, then wait for animals to run away from the fire and catch and eat them. (No animals have been observed starting fire from scratch, but many seem comfortable using an existing one.)

      But I’m not sure how far in advance any of this is. If you think about it, the ability to plan days, weeks, or months in advance requires having some kind of structured understanding of time. That seems like something that requires symbolic thought (in the anthropological sense), which no non-human animals seem to have. (Although we keep finding evidence for things in animals thought to be unique to humans, so who knows.)

      1. Good point, a structured conception of time is an important one. (In the past, we’ve touched on the question of how structured their local geography is.)

        Another limiting factor that strikes me is the lack of improvement in the tools they use. They learn tool use from each other, although it had to originate somewhere. A genius critter or an accident, who knows.

        Our imagination lets us conceive of improving a tool. It’s that “standing on the shoulders of giants” thing.

          1. Keep in mind another element. The more complex the tool, the greater the requirement to be able to teach the skill of making it and pass it on to the next generation. That is certainly part of the big change from Homo erectus to humans.

            There is a good chance the mastery of fire goes back quite a ways, perhaps a million or more years, although it is almost impossible to prove directly. The main evidence is the progressive shrinking of the digestive tract, which probably would only have been possible if humans were cooking food (tubers) in addition to eating meat. Chimps and other apes eat mostly foods that require a large and long digestive tract to assimilate the calories from them.

        1. Interestingly, some of the Aborigines in Australia think their people may have long ago learned to use fire by watching firehawks, birds that actually pick up burning sticks and spread fire in ways that flush prey out. So learning can actually be cross-species.

          And remember the chimps stripping and sharpening the stick to skewer the bushbabies.

          But definitely none take it as far as humans do. Shaping a stone into a spearhead takes a lot of forethought and possibly hours or days of effort. As does initiating fire, cooking food, and many other things humans have been doing for a very long time, long enough that it’s affected our evolution, such that we aren’t really equipped anymore to live on raw food.

          1. “The problem is that not even the zombie would know.”

            Right. (I was being a little tongue-in-cheek. If they were zombies, they wouldn’t know, but if they weren’t, they’d know they weren’t. 😀 )

            “Shaping a stone into a spearhead takes a lot of forethought and possibly hours or days of effort.”

            Right, which I see as more on the planning side. I’m also impressed by the imagination side, which improves on tools, adorns them (and bodies) with ornamentation, and does cave paintings.

            (I love the fanciful idea that those cave paintings we see as so significant were actually teenage graffiti that pissed their parents off. “No dinner for you until you wash that crap off the walls!”)

  4. In my opinion, whether you care to debate the exact level of animal consciousness or not, you ought to act like Pascal and make a wager. Whether they are conscious or not, they ought to be accorded the protection we with higher sentience mostly enjoy. Mostly, because of course we too inhabit a shitty world where all is not as it should be. The day will come when animals will cease to be eaten, kept as pets, experimented on, or maltreated. Sooner rather than later, I hope.

    1. I agree it’s best to assume they have feelings and treat them accordingly, but the Pascal’s wager take on this is interesting, particularly when we think about their use in scientific research. A lot of animal research is used to help find treatments for human diseases and conditions. If we completely forgo that kind of research, it would cut off avenues for scientific progress. Some people are willing to bite that bullet, but we should be honest about what it would entail.

      In any case, they definitely should be treated as humanely as possible.

      And it may not be long before you can buy lab grown meat. Indeed, I suspect once it’s widely available, the cost of the natural variety will quickly climb until it mostly dies out. It will mean a population crash for many domesticated animals, similar to the one for horses in the early 20th century, but arguably a lot less suffering.

  5. “…there are certain things that we humans cannot do without consciousness. If we find these kind of actions also in other species, we must assume that they also involve consciousness. ”

    Could this same approach be used but substitute “metacognition” for “consciousness”?

    There is some evidence western scrub jays have a theory of mind, since they will retrieve and re-hide food that others spotted them hiding. The crows (one of them at least) I feed in my backyard on paper plates will flip the food off the plate (all of it at once) onto the ground. It might be because the crow just wants it on the ground, but my thought has been that the crow reasons that other birds and animals will have a more difficult time spotting the food (brown dry cat food) on the ground, and hence it is more likely the crow will be able to eat it. I certainly doubt it is instinctive to flip cat food off a paper plate. At any rate, if my theory is even partially correct, the crow would need to realize something about the cognitive capacities of other birds and animals.

    I am not sure if de Waal is “equating imagination and planning with consciousness”.

    It seems he is just saying we can observe behavior in animals that requires imagination and planning. We have something we can objectively observe that we know, as humans, requires consciousness. Behavior indicating imagination and planning is more of an indicator of consciousness than a definition of it.

    1. From what I’ve read, establishing metacognition is devilishly hard. When you construct an experiment that removes other possible explanations for the behavior, it only shows up in a few limited species. As I understand it, the most rigorous tests only show it in some primate species.

      Looser tests have shown it more broadly, but these tests can’t eliminate confounding explanations. In the case of your crows, you’ve identified one explanation, but you’d have to structure things so that others weren’t possible, such as the crow just wanting the food on the ground for access, as you mentioned.

      It’s also not completely established that a theory of mind and metacognition are one and the same. I linked them in the post, but that was a slip-up on my part, forgetting that the link between them seemed weakened in a brain scan study on metacognition.

      You might be right about de Waal’s attitude on imagination and consciousness, but accepting it as evidence for consciousness does seem to entail that he doesn’t see metacognition as crucial. I generally agree with that.

      1. The article I linked above probably should have gone here.

        The other link was paywalled; however, it looks like this one can be downloaded.

        https://www.researchgate.net/publication/240237830_Avian_Theory_of_Mind_and_counter_espionage_by_food-caching_western_scrub-jays_Aphelocoma_californica

        “Instead, the finding that tactics are employed flexibly, as a consequence of the cacher’s social relationship with the competitor, the likelihood of cache theft, and the specific caching episodes the observer has witnessed, suggests that the birds’ behaviour is grounded in social cognition. Moreover, the finding that jays appear capable of experience projection, using their own past experience of being a thief to predict how another individual might behave in the same situation, provides evidence for a form of Theory of Mind yet to be demonstrated in any of the great apes, other than humans.”

    2. It’s also possible there’s no connection between determining that your environment has certain characteristics — that other “things” can steal your food — and associating those other “things” as having minds like yours.

      It only requires recognizing the consistent behavior of those other things to devise strategies to oppose them. There’s no requirement to assign consciousness to them.

  6. Mike,

    You made this comment on the previous post: “But I see consciousness as something that only exists subjectively.”

    I need to understand what the term “subjectively” means to you and others. I find the use of the term, and the term itself, specifically in the context of your statement, to be confusing. For example: in the context of your statement, my understanding of subjectivity is that consciousness is subordinate to whatever I, as a solipsistic self-model, say it is. Is that what you mean when you use the term? Or am I missing something?

    Thanks…

    1. Lee,
      I just gave a response on that previous thread to a similar question, but rather than link to it, and shunt the discussion to an already huge thread, I’ll just paste it here:

      The problem is that consciousness is a pre-scientific term that doesn’t map well to scientific understandings of the brain. In that way, it’s similar to love, beauty, or life. In a deflated sense, we can say that it maps to some combination of objective cognitive capabilities. The problem is no one seems able to agree on which ones, which is why I usually describe a hierarchy.

      Then there is the more inflated conception of something above and apart from the functional capabilities.

      Michael Graziano reported a story of a patient with a delusion, that he had a squirrel living in his head. The patient’s doctor told him they needed to figure out why he thought there was a squirrel in his head. The patient disagreed. What needed to be figured out, according to the patient, is how the squirrel got there.

      For the patient, the experience of the squirrel exists subjectively, but the squirrel itself doesn’t exist objectively. When it’s said that consciousness is an illusion, the reply is often that if so, the illusion is the experience. I have sympathy with this view. It’s why I don’t say consciousness is an illusion. But our experience is built on a simplified model, one that implies there is something above and beyond the functionality, a model of something that isn’t there.

      1. Thanks Mike, I appreciate the response. Correct me if I’m missing it, but from what I can ascertain from the use of the term “subjectively”, when it comes to the internal experience of consciousness itself, all roads begin and end with solipsism.

        So in conclusion, the internal experience of consciousness is whatever we, as solipsistic self-models, say it is, regardless of what is actually taking place in terms of an underlying objective reality that we are not aware of. Is that a fair assessment of your usage of the term “subjectively”, that the experience revolves around and is contingent upon solipsism?

        1. It depends on what you mean here by “solipsism”. My understanding of that word is concluding that there is only one mind, that all others are an illusion or aspects of our own mind.

          To me “subjective” simply means content inside a mind, content that may or may not correlate with the world outside of that mind. “Objective” just means facts independent of any mind. That distinction, as far as I can tell, is separate and apart from the solipsistic proposition. Unless I’m missing something?

          But definitely it entails that the contents of our mind can simply be wrong about what exists outside of it. Ultimately the only measure we have about the objective world is our ability to predict future conscious experiences. Although if you can think of another, I’d be very interested to see your reasoning!

          1. Wikipedia: “As an epistemological position, solipsism holds that knowledge of anything outside one’s own mind is unsure; the external world and other minds cannot be known and might not exist outside the mind. As a metaphysical position, solipsism goes further to the conclusion that the world and other minds do not exist. This extreme position is claimed to be irrefutable, as the solipsist believes himself to be the only true authority, all others being creations of their own mind.” Hence my hypothesis for a delusional reality that is predicated upon a circle of mutual definition and agreement.

            (To me “subjective” simply means content inside a mind, content that may or may not correlate with the world outside of that mind.)

            Fair enough, I can relate to that statement. So in this context, subjective would be an all inclusive condition, regardless of whether that “subjective” content correlates with the world outside of that mind or not. That definition still leaves one with an inexplicable conundrum that you attempted to address.

            (Ultimately the only measure we have about the objective world is our ability to predict future conscious experiences.)

            That’s a tough one Mike. It doesn’t really give us much because all discrete systems possess that ability to one degree or another. In conclusion: if what you stated is indeed a fact, and I believe it to be our condition, I do not see how that explanation is in any way divorced from the model of solipsism because the ability to predict future conscious experiences is predicated solely upon “ability”. So as far as I can determine, for any discrete system, all roads begin and end with solipsism.

            Thanks…

          2. (Although if you can think of another, I’d be very interested to see your reasoning!)

            Felt I owed you a response on this one Mike…. Personally, I reject everything. I do not trust any “thing” that my senses perceive. Conscious experience is real enough within its own context, but that real-ness is strictly contextual. All of the patterns and constructs that make up our phenomenal world are just background noise, one giant distraction; and neither my interpretation nor anyone else’s interpretation of those distractions can be trusted. I choose to listen to the “still heart of persuasive reality” and then be persuaded by that reality. Because for its own reasons, that underlying reality is eerily quiet and still… And I’m not talking about meditation and things of that nature. The experience itself is very similar to spooky action at a distance and is a continual state of “being”.

            I guess my approach is driven by the question David Chalmers raised well over twenty years ago, a question that doesn’t really get much attention in the forum of consciousness. He asked the compelling question: “Why conscious experience in the first place?” It’s a compelling question, one that most people are not really interested in. And I get that, so everything is cool….

            Thanks…

          3. Thanks for the response.

            I’m interested in the why myself, although unlike Chalmers, I don’t see it as a hopelessly intractable question. I think there are multiple possible answers rooted in evolutionary theory, although which one depends on the particular version of consciousness we’re discussing, or more precisely, the specific capability we’re discussing.

      2. Mike, I’m going to go with Lee here. At this point the question is not what do you mean by consciousness. The question is what do you mean by subjective. What is a subject?

        Here’s my answer. Something can be said to “physically” exist if it interacts with its environment. All such interactions can be modeled as Input -> [mechanism] -> Output. In this case the subject is the [mechanism]. So to say something is subjective is to say that you need to take into account the [mechanism] in question.

        A subjective “feeling” is an interaction (I->[M]->O) in which the subject (the mechanism) can be said to “interpret” the input. So what does “interpret” mean? That depends on the teleological/teleonomic purpose of the mechanism. And note: this purpose is not a statement as to some physical property of the mechanism but instead is an explanation of how the mechanism came to be. Bottom line: the objective explanation of the subjective feeling will come down to the best explanation of how that mechanism came to exist.

        *
        [feeling much better]

        1. James,
          In summary, “subjective” to me just means the contents of a mind, whether or not that content correlates to things outside of that mind. “Objective” simply means facts independent of any mind. We only ever have theories (models) about what exists outside of our mind, so every objective fact is ultimately provisional.

          “the objective explanation of the subjective feeling will come down to the best explanation of how that mechanism came to exist.”

          I don’t necessarily disagree, but how do you measure which explanation is best? I think the answer, ultimately, comes down to which one more accurately predicts future subjective experiences. If you can see another standard, I’m interested to see it.

  7. Mike, I need you to be as rigorous as possible. What exactly is a mind? What exactly is the content of a mind? I’m trying to see how your answers match with mine. As stated so far, they could match up, depending on what you mean by those terms. (Except I’m pretty sure “subjectivity != contents”).

    *

    1. James,
      In terms of the subjective / objective divide, from a phenomenological perspective, it’s hard to define a mind as more than the mechanisms that enable us to think, feel, perceive, remember, will, and engage in reasoning. The contents of that would be all the perceived targets of that thinking, perceiving, feeling, etc.

      From an objective perspective, I think the mind is what the brain does, particularly the midbrain and forebrain regions. The brain builds models of the world and aspects of its own processing. These models build our subjective understanding of reality, but the efficacy of the models is related to how well they predict future experiences in everyday life. They may or may not accurately match up with actual reality, including the reality of the brain’s own processing.

      I can definitely see that lining up with your views since these systems have inputs and outputs and definitely count as mechanisms. But the main thing to understand is that just because we have a model that works, it doesn’t mean the model is accurate outside of the narrow parameters in which it works.

      1. Mike, here’s what I mean by rigorous. In the first paragraph, you say a mind is “mechanisms that enable us to think, …”. In the second, you say “mind is what the brain does”. Mechanisms are not what the brain does. Mechanisms are physical things. “What something does” is an abstract concept, so, not a mechanism. So I’m going to assume you would agree with this: A mind is a collection of particular potential processes, capabilities. Those processes include thinking, feeling, perceiving, etc., however you want to define those particular capabilities. A mind is “realized” by a mechanism, such as a brain.

        So then you say the brain builds models of the world. What is a (rigorous) definition of a model? How does a brain build one? Or are you just saying that “whatever the brain does to predict future experiences, we’ll call that making a model”?

        *

        1. James,
          That’s a narrower conception of “mechanism” than I was using. The word can mean that, but it can mean other things too. (Ex “The mechanisms of democracy”) And ultimately every physical thing is a process anyway. But for purposes of discussion, I’ll equate your use of it with “physical mechanism”.

          Anyway, when I use “model”, I mean representations (neural image maps), which I take to be prediction frameworks: data, patterns that are in some way isomorphic with reality, in a way that enhances a system’s ability to make accurate predictions of potential future sensations.

          So when an organism perceives that something is food, it’s a prediction about what will happen if the organism attempts to eat it. If the organism perceives a predator, that’s a prediction about what might happen if it doesn’t flee.

          One way to view the models is as an enhancement of somatic survival reflexes, enhancements that widen the scope and efficacy of those reflexes.
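
          If it helps, here’s a toy sketch of what I mean by perception as prediction.  Everything in it (the patterns, the outcomes, the decision rule) is invented for illustration; the point is just that the “model” is a mapping from recognized patterns to predicted consequences for the reflexes.

              # A toy sketch: a "model" as a mapping from recognized patterns
              # to predicted outcomes of candidate actions.  All values here
              # are invented for illustration.
              PREDICTIONS = {
                  "small, ripe, sweet-smelling": {"eat": "energy gained", "flee": "energy wasted"},
                  "large, fast, toothy":         {"eat": "injury",        "flee": "survival"},
              }

              def perceive_and_act(pattern):
                  """Choose the action whose predicted outcome best serves the
                  survival reflexes (crudely: gain energy, avoid injury)."""
                  outcomes = PREDICTIONS.get(pattern, {})
                  if outcomes.get("eat") == "energy gained":
                      return "eat"
                  if outcomes.get("flee") == "survival":
                      return "flee"
                  return "ignore"

              print(perceive_and_act("large, fast, toothy"))  # flee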

          Ok, I’ll stop and hand it back over to you.

          1. Mike, I think we need to get to the narrower concepts to make progress.

            So now I want to narrow “model”. What’s the bare minimum to count as a model? Does a thermostat set to 72 degrees have a model? When the thermostat perceives the temperature has reached 73 degrees, does it predict the temperature will be too high if it doesn’t turn on the AC?

            Before you answer you might want to read this: Robotics: Philosophy of Mind Using a Screwdriver. You should read that regardless. It talks about how you don’t “necessarily” need representation for a mind to work.

            *

          2. James,
            I don’t think a (traditional) thermostat has a model on its own. It operates according to a model, one held by the designer and the user. But I think it itself simply reacts to stimuli.

            A simple model might be an organism with chemoreception noticing (predicting) that it’s on a gradient, and then predicting what direction it should head in. (It might be too much to say it’s predicting where food is, since it might just be satisfying a reflex to move in the direction that the gradient is increasing in.)

            Obviously there’s a lot of gray area here. At what point do we cross from a complex reflex to a prediction? I think the prediction requires using multiple stimuli (data points) and associations to come to a conclusion.

            It’s worth noting that this simple model is static, not one the system itself is creating, such as associating a particular smell with a predator. Creating new models is something that I think is required to trigger an intuition of consciousness for most people.
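
            To put the distinction in code (a deliberately crude sketch, with the 72-degree setpoint borrowed from your example and everything else made up): the thermostat is a single stimulus-response rule, while even a minimal prediction combines several data points into a conclusion about something not directly sensed.

                def thermostat(temp_f, setpoint=72.0):
                    """Pure stimulus-response: a threshold rule, no model of its own."""
                    return "cool" if temp_f > setpoint else "idle"

                def gradient_prediction(samples):
                    """A minimal 'prediction': combine several chemoreceptor samples
                    to conclude which way the concentration is increasing, something
                    no single sample reveals on its own."""
                    rising = sum(b > a for a, b in zip(samples, samples[1:]))
                    falling = len(samples) - 1 - rising
                    return "keep going" if rising > falling else "turn around"

                print(thermostat(73.0))                       # cool
                print(gradient_prediction([0.1, 0.2, 0.35]))  # keep going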

            I’ll add that paper to my reading queue. The models can be very sparse, far sparser than we might intuitively believe.

          3. Ah, I think this is telling:

            “I don’t think a (traditional) thermostat has a model on its own. It operates according to a model, one held by the designer and the user. But I think it itself simply reacts to stimuli.”

            But is this model extant if the designer is long gone? I’m pretty sure what you just said is this: the model is Aristotle’s final cause, i.e., the reason (best explanation why) something exists.

            In that case a prediction is simply a mechanism functioning the way it was intended. Note: “intention” can be teleonomic or teleologic.

            Have I lost you?

            *

  8. Excellent! Let’s talk about that.

    as long as we’re clear when we’re talking about teleonomic vs teleological purpose.

    So what’s the difference? First, let’s talk about what’s the same. Both are an explanation of how a mechanism comes to be. Both will reference a goal, a situation where something is trying to (or tending to) make the world a certain way and is capable of generating a mechanism that can measure the world and respond by driving the world in the direction of the goal.

    So back to … what’s the difference? I propose that a goal is teleologic when it is represented as a concept, as opposed to being “hardwired”. Things like chemotaxis and thermostats are hardwired in that the “goal” is separate from the mechanism and plays no additional part once the mechanism is created.

    [interlude] Crap, I’m rethinking this mid-post. [/interlude]

    Scratch that last part. The difference between chemotaxis and the thermostat is that the latter is teleologic. The goal for the thermostat was a concept (keep the room at a certain temperature), but that concept now plays no part in how the thermostat functions. I think I need a new word. Specifically, I need a word for when the goal is not only a concept, but that concept plays a part in the functioning of the mechanism. More specifically, the mechanism measures the world and responds appropriately.

    [interlude] more rethinking [/interlude]

    Wait. That would be a separate mechanism. The conceptual goal would be input for a mechanism that creates a mechanism. Using I->[M]->O we have:

    [goal concept] —> [mechanism 1] —> [mechanism 2]

    Data from world —> [mechanism 2: compare against goal concept] —> action
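
    In code, that two-step scheme might look something like this (a sketch; the temperature goal is just a placeholder for a goal concept, reusing the thermostat example from earlier in the thread):

        # [goal concept] --> [mechanism 1] --> [mechanism 2]
        def mechanism_1(goal_temp):
            """Takes a goal concept as input and builds mechanism 2 around it."""
            # Data from world --> [mechanism 2: compare against goal concept] --> action
            def mechanism_2(measured_temp):
                return "turn on AC" if measured_temp > goal_temp else "do nothing"
            return mechanism_2

        controller = mechanism_1(72.0)  # mechanism 2, built around the goal concept
        print(controller(73.0))         # turn on AC

    Note that once the controller exists, the goal concept only survives inside it as a captured number; whether that still counts as the concept playing a part in the functioning is exactly what I’m wrestling with.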

    Okay, now my brain hurts.

    [Looks back at beginning of this part of discussion]

    Would you say “to have a model” is to generate and use goal concepts as described above?

    *

    1. LOLS! Hmmm, well, you thought “out loud” so I’ll try to return the favor.

      A model is in service of a goal, to make predictions. The predictions themselves are in service of goals.

      But here’s where it gets tricky. Goals can be primal reflexive ones, like the desire to have sex. The reason we have the desire to have sex is because it promotes reproduction. But once we have the desire, we just have it, so it would be wrong to attribute any predictions made in service of getting sex to be in service of reproduction.

      In the case of our simple organism with chemoreception, when it encounters a pattern to cause it to predict that it’s on a gradient, we could say that prediction is in service of finding food to satisfy the desire to eat. It might be, but if the organism is simple enough, it might just be in service of a reflex to head in the direction of the gradient’s increase.

      This is similar to pondering why squirrels store nuts. At one level, they store them so they’ll have food later. But I seriously doubt they’ve thought that far ahead. The behavior is too consistent among members of the species. It seems much more likely that they store nuts in order to satisfy a desire to store nuts, a desire they evolved because it ensures they’ll have food later.

      Of course, for a sufficiently complex creature, there will be intermediate goals, goals in service of more primal goals. In the case of humans, the number of levels here can get to be so deep that we lose track of the original desire. Sometimes this may be because one of the intermediate goals just happens to satisfy a more primal desire, so it becomes a goal in and of itself.

      So it seems like goals can fall into three categories.
      1. Externally imposed goals, such as reproduction
      2. Primal internal goals, such as the desire to have sex
      3. Intermediate internal goals, such as the desire to have a cool sports car

      In the case of a thermostat, the externally imposed goal of maintaining a temperature range comes from some intelligence’s internal goal(s), making it teleological. In the case of our desire for sex, that comes from our selfish genes, but “selfish gene” is a metaphor for naturally selected molecular replicators, who aren’t intelligent and whose “purpose” is in appearance only, a teleonomic one.

      But the models are there to make predictions. The prediction may be in service of recognizing something such as food, a predator, or a potential mate, or it may be the much more sophisticated variety of predicting the outcome of various action scenarios.

      It also seems like models can exist in multiple levels, with models using models in complex hierarchies.

      Ok, I’ll stop now and see if there’s anything in the above that satisfied your question 🙂
