Daniel Wolpert: The real reason for brains

I came across this old TED talk today and decided to share it because it’s relevant to the previous post on consciousness and simulations. Daniel Wolpert’s talk doesn’t address consciousness specifically, only the overall role of simulations, but it’s still a fascinating exploration of what we’re doing when our attention is focused on a task.

The key is understanding that the evolutionary purpose of brains is to make movement decisions, and then to execute those decisions.

Of course, my speculation in the previous post was that consciousness is the simulation mechanism Wolpert discusses.

13 thoughts on “Daniel Wolpert: The real reason for brains”

  1. I watched this video a while ago and I was impressed by the fact that an animal digests its own brain once it attaches itself to a rock and, essentially, turns into a plant. It’s amazing how machines fail at movement tasks that are trivial for humans.

  2. I did enjoy this one, especially since I’m able to agree with Daniel Wolpert that movement happens to be the end product of consciousness. Similarly I’m pleased to agree with the last post that consciousness simulates and predicts. From the view of my own model, however, there’s a much larger picture to provide. I’ll briefly go through this right now, and can provide greater detail if there are any questions.

    One significant issue that I think holds many modern researchers back is the anthropocentric notion that brains are for consciousness. Instead I’d say that brains are vast supercomputers, or perhaps, in the case of ants, less advanced computers. Regardless, I consider consciousness to exist as a second form of computer that functions through the first one, and ascribing even one percent of the whole to consciousness might be generous.

    Daniel seemed to have made a blunder when he mentioned that the one other thing beyond muscle movement that brains do is cause us to sweat. There are surely countless outputs associated with what the non-conscious computer does. It quite obviously regulates the heart, as well as body temperature, and the list should go on and on. (When you consciously close your hand, there must be a tremendous number of things happening that we have no conception of. Here it should be the non-conscious mind that’s essentially making this happen, though since the conscious mind represents all that we know of existence, it takes the credit.) Regardless, like all computers the non-conscious mind should take in inputs, process them through algorithms, and then provide associated outputs.

    I consider the minor conscious computer, which resides on top, to function through three separate varieties of input, a single form of processor, and only one pure variety of output. The first input may be termed “utility,” or “happiness,” though the defining point is that it provides a punishment/reward that effectively transforms personally irrelevant existence into personally relevant existence. In the end I call this stuff “self,” and it motivates the function of this type of computer. The second form of conscious input that I identify is “senses,” like sight or smell, though this must be confined to information only. Scent gives us information through a chemical analysis of the air, though any associated utility, such as “stinky,” goes back to the first conscious input. The final form of input that I identify is “memory,” or “past consciousness that remains.” Past consciousness seems to provide a great source of information, so it seems quite effective to somewhat remember past consciousness for later use as input.

    I call the single conscious processor “thought,” and consider there to be two forms of it. It 1) interprets inputs, like “That’s a fast car coming towards me,” and 2) constructs scenarios, like “I’ll be hurt if I stay here.” The theory is that it does 1) and 2) in order to decide what to do given the punishment/reward associated with the utility input. So 2) corresponds with the simulation engine point of Todd Feinberg and Jon Mallatt from the last post, though I obviously consider consciousness to exist as something more.

    Then finally there is the only non-thought form of output of the conscious mind that I know of, and this is “muscle operation,” the point of Daniel Wolpert’s presentation here.
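
    Since code can show such flows more plainly than prose, here is a rough Python sketch of the above. To be clear, every name and number in it is my own invention for illustration; it conveys the shape of the model, not any claimed mechanism.

    ```python
    # A toy sketch of the two-computer model described above. Every name
    # and number here is invented for illustration, not a claim about biology.

    def non_conscious(raw_stimuli):
        # The vast primary computer: among countless other outputs, it
        # produces the conscious inputs (senses = information only,
        # utility = punishment/reward that creates a "self").
        senses = {"car_speed": raw_stimuli["car_speed"]}
        utility = -raw_stimuli["car_speed"] if raw_stimuli["approaching"] else 0.0
        return senses, utility

    def thought(senses, utility, memory):
        # The single conscious processor: 1) interprets inputs,
        # 2) constructs scenarios, then decides via punishment/reward.
        interpretation = f"fast car ({senses['car_speed']} mph) coming toward me"
        scenarios = {"stay here": utility, "move away": 0.0}
        memory.append(interpretation)  # past consciousness kept for later input
        return max(scenarios, key=scenarios.get)  # least punishing option wins

    memory = []  # the third conscious input: past consciousness that remains
    senses, utility = non_conscious({"car_speed": 60, "approaching": True})
    print(thought(senses, utility, memory))  # "move away"
    ```

    Note the hand-back at the end: in this model the conscious computer never operates the muscles itself; it only passes its decision down to the non-conscious computer for execution.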

    1. Thanks Eric.

      I don’t know that I would say that brains are for consciousness. Along with Wolpert, I think they’re for making movement decisions, and consciousness is for making better decisions. But I think this remains compatible with your theory.

      I’m curious though how you arrive at consciousness being less than 1% of the system. Of course, a lot depends here on what we label “consciousness”, but it seems like most of the cerebrum participates in it or provides support for it, and the cerebrum is a pretty hefty portion of the overall brain. This fits with the idea that consciousness is a simulation engine, which seems like it would require a lot of resources.

      The non-conscious autonomic nervous system aspects seem to be handled sub-cortically. Granted, the cerebellum has most of the neurons and doesn’t appear to participate in consciousness, but the cerebrum still appears to have roughly 20 times the neurons of all the other sub-cortical regions combined, giving it roughly a fifth of all the neurons in the brain.
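
      For rough numbers, here’s the arithmetic using the widely cited Herculano-Houzel cell-count estimates (ballpark figures only; exact counts vary by study):

      ```python
      # Ballpark neuron counts in billions, from Herculano-Houzel's estimates;
      # treat these as rough figures, since exact numbers vary by study.
      cerebellum = 69.0
      cerebrum = 16.3       # overwhelmingly cerebral cortex
      rest_of_brain = 0.7   # brainstem, diencephalon, and other structures

      total = cerebellum + cerebrum + rest_of_brain   # ~86 billion
      print(cerebrum / rest_of_brain)  # ~23: cerebrum vs. other sub-cortical regions
      print(cerebrum / total)          # ~0.19: roughly a fifth of all neurons
      ```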

      Anyway, those anatomical issues aside, your theory does seem very similar to the framework Feinberg and Mallatt described, notably the part on affect consciousness. What you call “utility”, they call “affect”: the punishment and reward signalling that comes from the limbic system in the lower cerebrum and midbrain regions.

      The question I would have is this: while the limbic system probably has an innate core, it also appears that it can be conditioned (as with learned fear responses). This conditioning, it seems to me, along with innate differences from varying genetics, is what makes just about everyone’s visceral response to the same stimuli at least a little different. I’m wondering how your theory deals with that.

    2. Mike, I agree with Rosenberg that studies of some things should face pronounced “arms races,” given that publication should naturally alter what’s being studied. The stock market should be a great example of this. Of course you’ve implied that he went too far in claiming that psychology is mere “entertainment.” (This reminds me of categorizing philosophy as “art,” which I hate, but sadly agree with in practice.) I go further with the belief that psychology can become relatively “hard.” 500 years ago, physics seemed pretty “soft” of course.

      Regarding your interest in my models, thanks! I’m really just playing this by ear. I enjoy public communication, but worry about theory of mind impediments given the extra eyes. And though your site may be a very civil environment, we do all still have normal human interests to tend. My current plan is to continue informally displaying what my themes suggest about your posts, while waiting (or hoping) for good opportunities to have full sequential discussions with those who are so inclined, and preferably in private.

      Regarding your next post on the illusion of PE, that seems quite clear to me, though I suspect that most agree. But then as for the following one regarding your potential “hard problem of consciousness” answer, I might be a bit more provocative there by taking your answer further…

      1. Eric,
        I think any comparison of a modern scientific field with anything from 500 years ago has to acknowledge that science as we conceive of it really didn’t exist back then. The printing press was still groundbreaking technology, and it was being used more to print copies of the Bible and start religious controversies than scientific ones. In 1516, Copernicus’ and Vesalius’ works were still a few decades in the future, and it would take time for the “mechanistic philosophy” to take hold.

        Incidentally, the social sciences didn’t start becoming sciences until people like Comte and Durkheim pioneered positivism in the 19th century: the belief that their subject matter should be studied using the methods of the natural sciences. The adoption of positivism was controversial, a controversy that occasionally resurfaces, but most of the practitioners beat it back, not wishing to fall back into straight philosophizing. (Note: ‘positivism’ is not to be confused with its more extreme and now discredited cousin, ‘logical positivism’.)

        “But then as for the following one regarding your potential “hard problem of consciousness” answer, I might be a bit more provocative there by taking your answer further…”

        Excellent. Looking forward to it!

  3. Forgive me for fawning, Mike, but I’ve been yearning for these sorts of conversations for the past three years. It does now feel special to have them. Of course I can’t say that I would have been able to engage you back then quite as well as today, since I do believe that I’ve learned a few things.

    The “less than 1% conscious” thing was indeed just a matter of definition. While I’m “consciously” writing to you right now, I consider there to be two separate forms of computer at work, with the non-conscious part responsible for the vast majority of it. Apparently my fingers know how to do what I want them to, and I suspect to a far greater extent than I’m educated enough to appreciate in a biological sense. Thus I seem to be a facade that takes the credit, though the non-conscious computer should be responsible for the vast majority of this writing. In the model I handle this by means of what I call a “Learned Line,” whereby the conscious processor passes off tremendous amounts of tasks for the non-conscious processor to take care of. Consider my general diagram below. (BTW, the “Sensations” input should now read “Utility” or “Affect.”)

    Without the ability to have the non-conscious mind handle so much of what we do, the relatively small conscious processor should quickly become overloaded.
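
    If it helps, the Learned Line can be pictured as a simple dispatch. The repetition counter and threshold below are pure inventions for this sketch; the hand-off is the point:

    ```python
    # A toy illustration of the "Learned Line": once a task has been practiced
    # enough, the conscious processor hands it off to the non-conscious one.

    practice_counts = {}
    LEARNED_THRESHOLD = 100  # arbitrary repetitions before the hand-off

    def perform(task):
        reps = practice_counts.get(task, 0)
        practice_counts[task] = reps + 1
        if reps >= LEARNED_THRESHOLD:
            return task + ": handled non-consciously (automatic, in parallel)"
        return task + ": handled consciously (slow, effortful, serial)"

    print(perform("touch typing"))          # conscious at first...
    practice_counts["touch typing"] = 100
    print(perform("touch typing"))          # ...automatic once learned
    ```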

    Regarding those anatomical terms that you’ve mentioned, I’m happy that you have such an education, though I believe that it’s been best for me to remain relatively ignorant in that regard. In my own writings I don’t even permit myself to use the “brain” term. In effect I consider myself “architect” rather than “engineer,” and so believe that my models could be corrupted if I were trying to account for all sorts of functional biological mechanisms. But now that I have completed this human mind model, it should be interesting to see how “the engineers” account for it through biology.

    So you say that the limbic system accepts conditioning? Sounds good to me! Regarding your question about how my own model deals with conditioning, however, this seems innately handled through the “memory” input to the conscious mind. Everyone is going to have their own unique memories from which to work, and will therefore go about their business in unique associated ways.

    1. Eric,
      One of the reasons I blog is to have these types of conversations. I had them occasionally in various forums prior to starting this blog, but they were often tangled with more or less thoughtless arguing, and moderators were often heavy-handed and biased. I definitely recommend blogging for anyone who wants to discuss topics they’re interested in.

      Nice diagram. I just did a post on subjective experience being communication between subsystems of the brain, and you’re making me realize I probably should have done something similar for it. Oh well. Anyway, I tentatively agree with the flows you have there, although I think there are many more.

      “Everyone is going to have their own unique memories from which to work, and therefore go about their business in unique associated ways.”

      I think this gets to how you hope your theory will add to science. It’s all these unique factors that, as I perceive it, make the social sciences so soft. It seems like it would be a lot easier if human beings were as consistent as electromagnetism or gravity. How does your theory overcome that?

  4. Mike,
    After three years blogging I haven’t yet started writing posts at my own site. My commentary at other sites has been less than popular, though rarely challenged. Conversely your openness to engage me, as well as pleasant temper, has been quite refreshing! The implication is that I may not be “the bad guy” after all, or at least not around here. I always knew that I wasn’t going to be popular with those who are more invested in the status quo, though I didn’t know that I’d be left alone!

    I love that you’re thinking in terms of my above diagram! To be clear, on the non-conscious side I place no limit on the flows. I suspect that there are countless sources of inputs and outputs associated with this computer, and that many separate processors may be implemented as well.

    On the conscious side, however, I’m fairly certain that that’s about it. Daniel Wolpert seems to agree with me on the unique status of muscle operation output (except for his strange “sweat” comment, which suggests that he hasn’t yet considered non-conscious function). Similarly, there does seem to be only one conscious processor. We can’t hold conversations while silently reading books, for example. Yes we can drive and talk, but I think only to the extent that the non-conscious mind takes over the driving part — a hard-earned skill associated with my “Learned Line.” This can be dangerous of course.

    I suspect that you’re thinking about extra flow on the conscious input side, and would certainly like to consider any ideas that you have. I currently believe that there’s nothing beyond the phenomenal in an information sense (senses), a punishment/reward sense (utility), and a recording of past consciousness sense (memory).

    Yes I do consider the human to be just as explorable as physics is, though we simply haven’t gotten there yet. For an analogy it might be said that people are different just as planets are different. But this doesn’t mean that studying planets must be “soft,” and this is because modern theories in physics, chemistry, biology, and so on, help us make sense of what’s going on. The study of people should theoretically be no different, but why have things gone so slowly?

    The study of ourselves, unlike everything else, seems to have too many personal implications that naturally bias us. As a kid I was able to realize that reality needn’t be moral, unlike what we’re naturally encouraged to believe. Thus I went on to develop a separate “repugnant” approach that’s founded upon “value” rather than “values.” To demonstrate its merits I’ve gone on to develop what I consider to be an extremely useful model of the conscious mind.

    1. Eric,
      Actually, I think if I were doing the diagram, I wouldn’t give consciousness any independent streams separate from non-consciousness. In other words, all the sensory information and emotional reactions would go into non-consciousness first, and any output from consciousness would go out through non-consciousness.

      In other words, to me consciousness is a tool of non-consciousness which pulls information from non-conscious models. I come to that conclusion because, in addition to being able to drive without conscious control, we can also react in split-second emergency situations without much, if any, conscious thought.

      Consciousness seems to be primarily about dealing with novel or trade-off situations, in other words, with situations that require that simulations be run. The simulations pull information from the sensory and affect models that are built unconsciously. I say unconsciously for both because we can successfully walk somewhere without being aware of our surroundings, and we can be upset or happy about something and not know why.
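
      If I were to sketch my version of the diagram in code, it might look something like this (the labels are entirely my own shorthand, not terms from the neuroscience literature):

      ```python
      # A sketch of the layering I have in mind: consciousness never touches the
      # world directly; it reads from and writes to non-consciously built models.

      world_models = {}  # sensory and affect models, maintained non-consciously

      def non_conscious_update(stimulus, affect):
          world_models["scene"] = stimulus   # built whether or not we attend to it
          world_models["feeling"] = affect   # we can feel this without knowing why

      def conscious_simulation():
          # Runs only for novel or trade-off situations, pulling from the models.
          scene = world_models["scene"]
          feeling = world_models["feeling"]
          return "imagined responses to " + scene + ", weighted by " + feeling

      non_conscious_update("oncoming car", "alarm")
      print(conscious_simulation())  # even this output would route back through
                                     # non-conscious motor control to the muscles
      ```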

      On studying planets versus people, I think there is a significant difference. Planets don’t have intentionality or agency, whereas animals and people do. Animal watchers go to a lot of trouble to hide so that the animals aren’t aware that they’re being watched, because knowledge of being watched changes their behavior. People are the same way, except much worse, because they can read news stories about studies of people and the knowledge from those reports changes their future behavior. I can’t see that a planet will change its behavior, present or future, by us looking at it.

      I think I mentioned before that Alex Rosenberg refers to this as the “arms race” that will forever rob the social sciences of being “hard”. For example, economists identify a pattern in economic systems and publish their results. Those in the systems then take those results into account and it affects their behavior, altering the pattern, which requires new studies, and so on and so on.

      In Rosenberg’s opinion, that makes social sciences nothing but entertainment, a view I find extreme. But it does seem to indicate that there is a permanent limit on certainty for any science that tries to study people’s behavior. It seems like any attempt to make those sciences hard has to grapple with this issue.
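
      As a toy illustration of that feedback loop, where every number is invented purely to show the shape of the problem:

      ```python
      # A made-up illustration of the social-science "arms race": a published
      # regularity decays as the people it describes adjust their behavior.

      pattern_strength = 1.0  # how reliably the published pattern holds

      for study in range(1, 6):
          observed = pattern_strength
          pattern_strength *= 0.5  # arbitrary decay: agents adapt after publication
          print(f"study {study}: observed {observed:.2f}, "
                f"after publication {pattern_strength:.2f}")
      ```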

  5. Great observations Mike! I think you’re entirely correct that the conscious mind isn’t independent, even though this wasn’t made clear in my diagram. In my writings, however, I do state that each of the inputs to the conscious mind occurs through output of the non-conscious mind, such as “pain” for example. Then output of the conscious mind doesn’t just occur as such (as if humans understood things like the mechanical intricacies of operating countless individual muscles), but rather conscious output goes to the non-conscious mind for separate input, processing, and output function.

    Your observations here demonstrate why I’m so interested in expanding the “mind” term to be interpreted more like “computer” than “consciousness.” Sure we could still talk about “speaking our minds” and so on, but in academic discussions like this there would be “conscious mind,” “non-conscious mind,” and, for aspects of reality which do not run inputs through algorithms for output, the term “mechanical.” (I actually consider the vast majority of the human to function through mechanics rather than through a central mind.)

    I interpret the point of Alex Rosenberg to be that if you discover something about someone and then teach it to that person, he or she should become somewhat different in a way that causes your discovery to become obsolete. Infinite regressions of progressively obsolete discoveries could then be the fate of the associated sciences. But even if true (and I doubt this in practice), the perspective itself does seem flawed to me.

    Let’s say that we’re able to develop some useful models regarding the nature of consciousness. The point of them should be to educate people so that they can effectively use what we discover in their lives — behavior modification itself would be the end goal rather than an externality that needs to be accounted for once again. But would these initially useful models actually be rendered obsolete to the extent that they’re understood? If so, then observe how arbitrary the stuff that we know of as “consciousness” would need to be!

    Just because Daniel Dennett and countless other prestigious people have failed to develop useful models of consciousness and so on does not mandate that the associated dynamics exist as fluctuating conundrums destined to confound worthy scholarship. Instead the true source of their failure, I suspect, has been an inability to begin from basic enough positions — I think you’ve mentioned platforms built on fluctuating sand. (I’d say that “values” fluctuate, but “value” does not.)

    Mike, at this pace, in the months ahead you should understand quite a bit about my models. In that case you’ll be able to decide if they’re generally not useful; or if they instead have a fleeting usefulness, one which evaporates as your behavior becomes altered; or if they’re useful in a way that continues to hold even after you accept them into your life, and so helps us go on to discover a great deal more about how to harden our soft sciences.

    1. Thanks Eric.

      On the Rosenberg issues, I guess the only thing at this point is to wait until you discuss your models. As I said, I think Rosenberg does overstate the case, but it does seem like the problem is there to a greater or lesser extent depending on the actual field. But I’m open to being convinced otherwise.
