To perceive is to predict

Daniel Yon has an interesting piece at Aeon on how our brains predict the outcomes of our actions, shaping reality into what we expect, and why we see what we believe, rather than the other way around.

This idea is part of a growing sentiment in the cognitive science community that prediction is at the heart of what brains bring to the picture.  The thing to understand is that this isn’t about conscious prediction (although that’s part of it) but about pre-conscious predictions that enter into our awareness as perceptions.  Put another way, perceptions are predictions.

This can be a little clearer if we back up from an evolutionary perspective and consider the nervous system of an early chordate worm.  Such a creature has a dorsal nerve cord, a central nerve running along its length, where its sensory data is integrated and which serves as a central pattern generator, its main source of motor action.

This creature endogenously generates rhythmic movement and responds to stimuli with fixed action patterns, meaning that its behavior is largely set by its genetics.  Although it’s capable of some classical conditioning, its genetic programming doesn’t provide much flexibility.

In time, the creature’s descendants will develop sensory apparatus to do things like detect light, vibrations, and low concentrations of chemicals.  What will these capabilities provide to those descendants that the early creature lacks?  We can’t reference their eventual evolution into sight, hearing, and smell, because natural selection doesn’t act with foresight.  Every mutation has to be adaptive if it’s going to propagate and be enhanced.

The answer is prediction.  Initially these predictions were very simple.  Sensory data simply allowed reflexes to be triggered earlier, reflexes that previously might not have fired until the organism came into direct contact with the stimulus.  These earlier reactions provided a survival advantage.  But over time, as the sensory data increased in resolution, the predictions became more detailed, until we get a fish that can see and flee from a predator before the predator has a chance to bite into it.
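That predictive head start can be sketched in a toy model. To be clear, everything here is my own illustration, not anything from Yon's article: the function names, the 0.2 threshold, and the 1/(1+d) concentration falloff are all assumed. The point is only that a contact reflex fires on touch, while a distal chemoreceptor fires while the threat is still some distance away, which is what makes it a prediction.

```python
# Toy sketch: a reflex triggered by contact vs. one triggered earlier by a
# distal chemical gradient.  All names and numbers are illustrative.

def contact_reflex(distance: float) -> bool:
    """Fires only when the stimulus is actually touching the organism."""
    return distance <= 0.0

def distal_reflex(concentration: float, threshold: float = 0.2) -> bool:
    """Fires when the sensed concentration implies a predator is near,
    effectively a prediction of imminent contact."""
    return concentration >= threshold

def concentration_at(distance: float) -> float:
    """Assumed falloff of a chemical signal with distance (toy 1/(1+d) law)."""
    return 1.0 / (1.0 + distance)

# A predator approaching from distance 5 down to contact:
for d in [5.0, 3.0, 1.0, 0.0]:
    print(d, contact_reflex(d), distal_reflex(concentration_at(d)))
```

In this toy run the distal reflex fires while the threat is still three units away, while the contact reflex has to wait for distance zero. That head start is the survival advantage.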

As we follow evolutionary history, the predictions become progressively more sophisticated, until we arrive at us predicting not only our spatial and temporal environment, but social situations, as well as our own actions.

As Yon describes in his article, perception being prediction means that sometimes we mis-perceive, that is, we predict wrong.  Being that the prediction is pre-conscious, it often isn’t something we can guard against.  We can only be sensitive to the error signal when it arrives.  Put another way, people’s worldview has a powerful effect on what they perceive.
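One minimal way to picture this predict, compare, and update cycle is a scalar toy loop. The names and the 0.5 learning rate are my own illustrative assumptions, not anything from Yon's article, and real predictive-processing models are hierarchical and far richer:

```python
# Toy predict-compare-update loop: the percept blends the prior belief with
# the incoming evidence; the residual is the pre-conscious "error signal".

def perceive(belief: float, observation: float, learning_rate: float = 0.5) -> float:
    error = observation - belief           # error signal: reality vs. prediction
    return belief + learning_rate * error  # percept doubles as the revised belief

belief = 0.0  # prior belief about some sensory quantity
for observation in [1.0, 1.0, 1.0]:
    belief = perceive(belief, observation)
    print(belief)  # 0.5, then 0.75, then 0.875: converging, but pulled toward the prior
```

With a learning rate below 1 the percept never jumps straight to the evidence; it stays biased toward the prior belief, a crude analogue of seeing what we believe, and only the error term is available to flag a mis-perception.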

This is one of the reasons why all observation in science is theory laden.  We can attempt to make pre-theory observations, as some philosophers have urged, but ultimately our perceptions come to us embedded in our existing beliefs.  All we can do is see how accurate, or not, our predictions are, and be open to revising our beliefs when those predictions fail.

I think the prediction paradigm has a lot of power, although I do sometimes worry that maybe it’s getting overused to describe everything about the brain.  The question is, are there counter-examples out there?  Or any other data that can’t fit into it?  Is the brain basically reflexes plus predictions?

This entry was posted in Zeitgeist.

34 Responses to To perceive is to predict

  1. Steve Ruis says:

    I once read the observation that the brain was a pattern recognition machine. Patterns are of little use other than identification and prediction. I think imagination fits the same description. And all of these are survival based. The longer you survive, the greater the chance of reproductive success, etc.
    Consciousness is harder to connect to anything concrete, but we do seem to be making some progress. I am constantly astonished that we believe we can reach knowledge through cogitation. (I am a philosophy buff, a bewildered one.) It seems we have a capacity to delude ourselves that is greater than any of the other mental faculties we can define.


    • I was once listening to an interview of a psychologist (I think Robert Kurzban) who had some interesting speculation about self delusion. The thinking is that some self delusion is actually adaptive. We know people who are irrationally optimistic seem to have a higher chance of succeeding (as long as they don’t go too overboard). But sometimes a delusion actually endears us to other people. Sometimes we’re even admired for it.

      Of course, self delusions that might have been adaptive during the paleolithic can be very detrimental in the modern world.


  2. Wyrd Smythe says:

    I think there’s no question the brain uses predictive modeling in many situations. (Just think of all the prediction involved in throwing and catching a ball.)

    My sense is that consciousness also expresses itself, and I think there is a difference between prediction and expression. The desire to communicate, for example, doesn’t seem predictive, although the use of language is. Even the outcome of a conversation involves prediction, but does the desire to talk in the first place also involve it? I’m not sure.

    Bottom line, I’d agree it seems overused. As with IIT, it’s certainly an important aspect, even necessary, but doesn’t seem like the whole picture.

    “Put another way, people’s worldview has a powerful effect on what they perceive.”

    I think you know by now I’m also not a fan of overuse of this idea. There is an effect, and we all suffer from powerful sensory filtering (blind spots, optical illusions, etc.), but the whole point of rational thought, of the dialectic, is to reduce the effects of our worldviews. Bias can’t be eliminated, but one can highlight it and try to see around it.

    Case in point: I have what is probably an irrational dislike of a certain Minnesota Twins player, let’s call him Fred. It’s possible I’m picking up on subtle clues, and my dislike will prove to have a basis (rather than a bias) — it often works out that way. But at the moment there isn’t strong support in the data for disliking Fred, and things could go either way.

    So when I think about Fred, I highlight my dislike and try to focus on the data right now. In particular, I have to beware of my wanting to hang the team’s recent slump on Fred — certain things coincide making it tempting. But correlation isn’t causation!

    Synchronicity,… I used this quote, a favorite, just the other day: “An intellectual is someone whose mind watches itself.” ~Albert Camus

    The Republicans often accuse their critics of being highly biased due to their worldview (which would seem, if true, to apply equally to the Republicans, wouldn’t it?), and it’s a tactic there to avoid having to discuss the issues rationally. So I’ve come to kind of disfavor the whole idea that we all can’t see straight because of our biases.


    • “but does the desire to talk in the first place also involve it?”

      Good question. We could see it as more of a reflex, though not one of the fixed action pattern reflexes; more of a habitual impulse, which can be thought of as a learned reflex. I did forget to mention habitual actions in the post. (I often forget about those.)

      On our ability to see through biases, I think there’s definitely a lot to be said for developing the habit of trying to catch and correct our own biases. That works for biases we can detect, like you catching your dislike of the player. And I agree it’s the mark of a mature thinker.

      The problem is that the history of science has demonstrated that our ability to self-detect that sort of thing has limits. In my mind, the only way to ensure we’re not in our own closed loops is to expose our conclusions to others and see what faults they can find. Even that doesn’t always work, since often we’re inclined to subconsciously rationalize away objections, although again good habits can help against that too.

      And of course if our conversation partners share our biases, then we just end up reinforcing each other. This can be a particular problem for cultural level biases. I’m thinking of the racist 19th century anthropologists, who seemed completely unaware of how mired they were in a particular ideology. It always makes me wonder what biases we all hold and reinforce in each other, that a 22nd century historian will look on with amusement (or disdain).


      • Wyrd Smythe says:

        “In my mind, the only way to ensure we’re not in our own closed loops is to expose our conclusions to others and see what faults they can find.”

        Absolutely. It’s all part of comparing our internal model of reality to the actual thing. Part of that feedback comes from others expressing their internal model.

        (As I may have mentioned before, my definition of sanity is based on the accuracy of one’s internal model to empirical reality. A part of self-awareness is reconciling that model with experience.)

        “It always makes me wonder what biases we all hold and reinforce in each other, that a 22nd century historian will look on with amusement (or disdain).”

        No doubt! It often turns on how a culture is seen in retrospect as having taken an idea too far or as ignoring obvious problem aspects through self-serving rationalizing. I’m pretty sure we’re doing some of that in this era. (Too much political correctness, for instance.)

        Or perhaps the whole idea of politics turning on emotions and tribes rather than truth, or on the idea of a society proceeding in a way that’s best for all involved, including future generations.

        If we survive at all, I suspect the future will look at us with considerable disdain. (I already do, and I’m not at all alone in that.)


        • I think the self-serving rationalization point is a key one. For centuries, people rationalized slavery by developing racism, a concept that doesn’t appear to exist prior to about 1500 or so. (Although ancient societies usually found other rationales to deny full humanity to slaves.) And a lot of 19th century material rationalized sexism, rigid social classes, colonialism, and a lot of other viewpoints that, in retrospect, are clearly self-serving.

          The question is, what self-serving rationales are we as a society engaging in? When doing this, we have to be careful about just pointing fingers at our political opponents or falling back to our own agendas. Typically a lot of the rationales are deeply embedded in the cultural consensus, the zeitgeist, with people speaking out against them widely seen as crazy radicals.


          • Wyrd Smythe says:

            “Typically a lot of the rationales are deeply embedded in the cultural consensus, the zeitgeist, with people speaking out against them widely seen as crazy radicals.”

            Yep. Welcome to my world. 😉


    • paultorek says:

      I’ll see your desire, and raise you with action. Neither desire nor action seems like a prediction, despite the fact that with action, you get a free bonus prediction every time you make a decision: “I’m going for a walk.” “I’ll take the blue one.” Etc.


  3. I feel morally compelled to express my disdain as to the terminology. Interpretation is a much more apt term. A prediction is a statement now about the future. For most of perception, prediction only enters into the process of creating the mechanism of interpretation. By creating a mechanism that responds to a certain input with a particular valuable output, the system is “predicting” that the output is a valuable response to that particular input. But in the case where these mechanisms are generated by natural selection, the “predicting” happened long ago. What’s happening now is interpretation.

    *
    [okay, maybe i should read the article now]


    • I’d forgotten about your aversion to the p-word.

      But what you might think about is, why does interpretation happen? What adaptive benefit is it providing?


      • Not sure what you’re asking here. Interpretation mechanisms are created because they are at least potentially valuable. That value can be adaptive or associated with some higher level goal. If the mechanisms are not valuable, they tend to go away due to negative selection or influence of feedback mechanisms.

        So again, not sure what you’re asking?

        *


        • I’m trying to get you to see the viewpoint that prediction lies at the heart of that value to the goal. For example, say you’re a wolf on the hunt (goal: sustenance). You smell deer, with the smell increasing in a particular direction. Maybe you also smell deer blood.

          We can describe what happens in terms of interpretation. The deer lie in a particular direction. One may be injured and therefore vulnerable. But can you see how that interpretation is also a prediction related to the goal?


          • And I’m saying that the prediction was made at the time of the making of the mechanism, not at the time of smelling the deer. At the time of the smelling of the deer, (assuming sufficient hunger) the valuable output is a decision/plan to go find the deer. This plan will promote deer finding activities and suppress non-deer finding activities.

            So I’m saying the interpretation is not a prediction. It’s the result of a prediction that happened before.

            *


          • I think I see what you’re saying. The reaction might be an instinctive one adaptive to the situation rather than a prediction in the moment. In truth, this will always be some blurry and messy combination of reflex (instinctive or habitual) and actual prediction.

            Still, while deciding which direction to head in for the deer definitely involves a lot of fixed action processing, it also involves a definite volitional component. I think there is some prediction involved. But if you want to call it interpretation, I can live with that.


          • There is definitely prediction involved, and volitional prediction at that, but it happens in the creation of mechanisms of interpretation. Once the wolf has smelled the deer and has plans of finding the deer, that creates new, short term mechanisms. A lot of things will look more deer-like, at least at first until additional information adjusts.

            Over the last few days (4th of July holiday) we went backpacking on the Olympic Peninsula and did a day hike up to Enchanted Valley. It’s a beautiful valley with high mountain walls and waterfalls on either side, and we were told there might be mountain goats up on those walls. We looked, and finally my wife said “There’s a goat”. It was pretty much a white spot up on the wall. We had a monocular and looked, and sure enough it looked like a goat with its back to us, munching on a bush. We had our picnic lunch, and strangely, the whole time the goat never moved. Bottom line, I’m pretty sure it wasn’t a goat.

            *
            [dont tell my wife I said that]


          • I’m jealous. With the temperatures around here (south Louisiana), very few people are going hiking, although there were a lot of barbecues.

            Someone in the neighborhood reported a snake trying to get into their house. Now everyone is seeing snakes all over the place. But most of them are like your goat.


          • But if you want to call it interpretation, I can live with that.

            James,
            I think this is about as much as we’re going to get from Mike here — he’s now going to accept the “interpretation” term as well as “prediction”. You know how it goes. We is what we is, non-conscious and all.

            We were just north of your end over the week by the way. A few days in urban Vancouver, then Sechelt, then a mere day in Whistler. My own wife was desperate to see exotic wildlife, with a bear highest on the list. Fortunately nothing came up for her to even imagine about that!


  4. Thomas White says:

    Interesting idea but possibly flawed: what has not yet been perceived cannot be predicted since the sentient agent would not know ‘what’ in order to predict ‘it’. Imagine a skittish nerve cell (to simplify to the extreme) which predicts that which it has not yet perceived. If ‘it’ is never perceived, then every ‘response’ is not such but instead spontaneous and unsolicited action. Do you see what I’m trying (and failing) to get at?


    • I think I do. What you might be missing is the error-correction step. (Which I admittedly only mention in passing in the post.) If a prediction fails to be accurate, the system reacts to the discrepancy and modifies its predictive model. We probably start with evolved innate fundamental models at birth that are gradually fine tuned enough to get us through daily life.


  5. James Cross says:

    Actually prediction networks on top of prediction networks. Our view of the world is several levels removed from the world and is based more on predictions about the lower levels tuned for adaptive advantage than it is on the actual world. Which is why our view of reality may actually be very different from the way the world actually is.


    • Definitely. We perceive the world in a way that is adaptive rather than accurate. A lot of scientific methodology is geared toward hacking past that. It doesn’t always succeed, and we may not even know all the places where it fails. It’s one of the reasons that, while emotionally I’m a scientific realist, intellectually I’m an instrumentalist.


  6. It puts a lot of those optical illusions into a new light too. I think those are useful in this context because they really drive home how low-level, or pre-conscious this predictive perception is.


    • Definitely. The powerful thing about those illusions is that we can’t not see them, no matter how hard we concentrate or attempt to introspect. They are unconscious constructions, the backstage mechanics of which we never get access to. There’s no reason to think this is only true for the illusions. It applies to all our perceptions.


  7. I’m going to second JamesofSeattle’s friendly amendment for Daniel Yon to use “interpretation” rather than “prediction”. I have my own way of making the point however. My essential concern is that the “prediction” term is too narrow to represent all it is that a “brain” might do. What about postdiction? What about nowdiction? Conversely the “interpretation” term could reasonably be said to cover it all.

    Given this clerical adjustment (thanks James), now observe the status quo “mono-computer” model.

    As we follow evolutionary history, the predictions become progressively more sophisticated, until we arrive at us [interpreting] not only our spatial and temporal environment, but social situations, as well as our own actions.

    The implication here is that the more involved a brain (or technological computer) becomes, the more human-like it becomes as well. This single-computer perspective implies that it’s just a matter of degree.

    Instead I think it’s useful to say that there are two fundamentally different types of computer to distinguish in this progression. First there is the non-conscious form which encompasses the entire brain. Then secondly, one output of the brain (somewhat like heat is an output of the brain), is to produce an entity which actually experiences existence. Functional consciousness will have an informational input, a memory input, and be motivated to function by means of a sentience input. In this second computer, inputs are interpreted (not merely predicted!) and scenarios are constructed for its only form of non-thought output, or muscle operation. Whatever number of computations the non-conscious computer does, I theorize that the tiny conscious computer produced by it should do less than a thousandth of one percent of them.

    Think about it this way. If you were to build a vast supercomputer that has a minuscule conscious component providing agency (needed for autonomy), wouldn’t you have that vast non-conscious part take care of as much as possible so that the tiny conscious part (which can really only interpret one thing at a time), would be more free to do the “expensive” stuff? Of course you would. And this is exactly what we see in ourselves. Our endless pre-conscious biases and so on are essentially heuristics which the vast supercomputer takes care of automatically.


    • “What about postdiction? What about nowdiction?”

      My response here is similar to the one I gave to James. What is the purpose of postdiction or nowdiction? And consider what you mean by “nowdiction”. For example, if I’m perceiving a red apple in front of me, that could be described as a nowdiction, but it’s also a prediction about what will happen if I touch the apple, about whether it is an apple, and a number of other things.

      Remember, prediction here isn’t like predicting tomorrow’s stock market, but our nervous system predicting future sensory input.


      • I think I see what happened there Mike. You were presuming that I have a problem with the “prediction” term in this context. I don’t. I think it’s great! I just don’t consider it big enough. So maybe I part ways with James here?

        Notice that in the English language, all “predictions” will effectively reside under the domain of “interpretations”, but not all “interpretations” will reside under the domain of “predictions”. I can interpret the word “grass” without any prediction. This is simply an interpretation. I grasp the word.

        If you (or Daniel Yon) would like to argue that my friendly amendment expansion is not useful however, that would be fine. Here the task would be to show that all neural activity can only be predictive rather than part of the larger set, the interpretive.

        (And of course here I don’t mean “interpretive” in the conscious sense by which I interpret the “grass” term. I mean in the informational sense by which machines like the brain accept inputs. Is all accepted information to a brain, always predictive? I’d expect not! And yet the brain does something with such information anyway. In all cases I’d say that it “interprets”.)


        • As I indicated to James, I’m fine if you want to use the word “interpretation”. At the level of the reflexes, it seems under-determinate to me. But as I noted in the post, I’m not sure about the blanket use of “prediction” either. I think its applicability is broader than you and James do, but not as broad as many of the predictive coding people seem to think.


          • Wyrd Smythe says:

            I think, at least for human consciousness, I’d agree with the broader perspective. Our capacity for imagination and creativity is surely predictive based on our interesting ability to create mental models of things that don’t exist.


  8. Swarn Gill says:

    Well argued. As I think I said to you before, the relationship between prediction and consciousness governs a lot of my thoughts on the subject. As humans we have the best ability to predict, so it’s maybe not surprising that we have a higher level of consciousness. But could it be that this ability to predict is a continuum rather than something you have or you don’t? I guess this is part of why I don’t think it’s insane to think of plants as conscious. I mean it could simply be that prediction for such life is only a few microseconds. If prediction exists in all life, but on such short time scales, could we even measure it? It just seems at least possible that distinguishing between prediction and reaction becomes less and less feasible based on the instrumentation we’d need to make such measurements.

    In your post previous to this, I think you made an important point about how much of our standards for consciousness are based on humans. It’s not to say that there isn’t something special about our level of consciousness, and maybe it takes neurons and a central nervous system to achieve that higher level, but it certainly doesn’t mean there aren’t other systems that can produce some level of consciousness, no matter how minute.


    • Thanks Swarn!

      “But could it be that this ability to predict is a continuum rather than you have it or your don’t?”

      I think there’s a lot to be said for viewing it along a continuum. Someone (maybe in this thread) pointed out to me that some of the simpler predictions are essentially made by evolution. If a squirrel saves nuts for winter, it isn’t because the squirrel itself is predicting it will need the nuts later, but because it has a strong instinctive impulse to save nuts and remember where they are.

      In that sense, the earliest predictions were really just new reflexes reacting predictively. Eventually though, we reach the point where an organism can predict what will happen if it takes action A vs action B. A squirrel seeing a dog just past a large nut has to decide its next action based on what it predicts might happen. At that point, we’re at what most of us would accept as primary consciousness.

      I’m not aware of plants exhibiting anything like that, but I admit I haven’t studied the question closely. And I totally agree we should be open to the possibility of alternate implementations of consciousness.


      • Swarn Gill says:

        Agreed. I am not aware of plants exhibiting anything as advanced as you describe. But I like the idea of simpler predictions being part of evolution. I wonder if even the reproductive process itself might be considered a sort of prediction. I hesitate to say all life is aware of its mortality, but there are various strategies organisms take to not only ensure survival of their DNA, but also to survive adverse conditions, increasing the likelihood of them passing their DNA on. The maple in my back yard produces so many seeds (many of which, annoyingly, take root in my garden) but clearly not all of them will turn into maple trees. Their design allows them to be carried far by the wind and there have to be on the order of thousands. In the evolutionary sense as you describe, this would seem like a behavior that, in a way, is because of a prediction the organism makes about its likelihood of reproduction given variations in climate, herbivore populations that might eat the saplings, competition among other plants and trees, etc.


        • Related to that, I remember when I learned that fruit is a reproduction mechanism. The plant packs plenty of energy (sweetness) so that various animals eat it, including the seeds that are mixed in, then defecate them in some far away place, some of which (hopefully) will be fertile ground.

          That’s actually an elaboration of the flower mechanism, where a plant displays vivid colors to flag the attention of insects for nectar or pollen, so that they visit and some pollen clings to them, distributing it to other flowers, resulting in cross-pollination and sexual reproduction.

          Lots of organisms co-evolving together, often in synergy with each other.


          • Swarn Gill says:

            In that sense it’s also not hard to see how someone came up with the Gaia Hypothesis. It imagines the Earth as a gigantic self-regulating organism, and although I don’t know if Lovelock said this outright, I imagine if one were going to make that claim one would think Earth itself had a consciousness. I am not sure if I’d go that far, but it is fascinating how ecosystems are balanced and co-evolve together. I actually find that conceptually the Gaia Hypothesis is useful when talking about the weather, because that really is all weather is, the Earth trying to redistribute energy and self-regulate.


          • I don’t know about the Gaia Hypothesis, but there is something to be said for the Gaia philosophy, the view that Earth can be viewed as one large integrated organism. That’s really just another way of looking at the biosphere and its heavily integrated ecology.

            It also demonstrates the profound challenges we face with human space exploration, the biggest of which is separating humans from the Terran biosphere. People have a tendency to dramatically underestimate how difficult it would be to recreate a separate independent biosphere on, say, Mars. The reality is that colonies there would be crucially dependent on a tenuous supply line from Earth for the foreseeable future.

