David Eagleman: Can a computer simulate a brain?

The other day, I highlighted the article by neuroscientist Kenneth Miller on the possibility of mind uploading. Miller saw it as possible, but thought it might be thousands or even millions of years before we could do it. Here’s a take from another neuroscientist, David Eagleman, who is a bit more optimistic, and who also discusses the simulation hypothesis.

It’s worth noting that both of these guys see the timeline envisioned by singularity enthusiasts (that it will happen in the next 20 years or so) as untenable, although Eagleman’s idea of when it might happen is much closer to the singularity timeline than Miller’s.

Eagleman also highlights an aspect of this discussion that I think is worth noting, the effect this capability would likely have on space exploration.  Currently, our robots are all over the solar system, and a few have even left it and entered interstellar space.  But humans, after a brief sojourn to the moon, have basically stayed just above Earth’s atmosphere.

Our society has long discussions about gathering the political will for human space exploration, but the lack of a viable economic incentive has largely kept it to just that: discussions. Robots pretty much own space, and I tend to think that’s unlikely to change. So a solar-system-spanning civilization, and later an interstellar one, is likely to be a robotic one. It might only be a human one if we can figure out a way for humanity to transcend its biology.

Remember that ‘The Martian’ is primarily about the challenges a biological human faces in surviving in a Martian environment.  While an exciting story, it highlights the chief problem for humans in space, that we’re essentially fish leaving the ocean of our biosphere, with the necessity of bringing enough of that ocean with us to survive.  Imagine if we didn’t need to bring any of it.

38 thoughts on “David Eagleman: Can a computer simulate a brain?”

      1. I just watched one on PBS the other night. I’m afraid you might find it old news, but visually it’s entertaining.

        I found it a bit incoherent, perhaps because there were a lot of arguments/ideas crammed into one episode. Plus, there were “maybe” claims that turned into firm ones, and that irked me. There are a lot of people who know nothing about these things, and they might have gotten the wrong impression.


        1. I agree. It was pretty basic stuff. Still, I did pick up a new point here or there. For instance, that the number of synapses increases until around age 2, then gradually goes down as our memories, skills, and personalities develop.


          1. I must’ve missed that point. (It was way past my bedtime, but I managed to pry my eyelids open to watch.)

            I’d never heard the ant example before…have you? I thought that was interesting. I also liked the illustration of the Chinese room thought experiment (although I’ve never really liked that example).


          2. I have heard the ant thing before. (E. O. Wilson got many of his ideas from his work with ants.) An ant colony seems very much like a super-organism, perhaps a group mind. Is it conscious? A lot depends on how we define “consciousness”.

            I think it helps to narrow a bit and ask, does an ant colony have awareness? Do the components of the colony have access to a model of its state? I tend to doubt it, but that might be me being biased toward the time scales that human awareness works at.


          3. I doubt many of us would call the entire ant colony “conscious”…or at least that would require some explaining.

            “Do the components of the colony have access to a model of its state?”

            I don’t know much about ants, but they don’t seem intelligent enough. They seem “made for the task,” but no more. Like cogs in a machine they don’t understand. But who knows.


          4. An advocate for the idea of ant colony consciousness might argue that an individual ant is far more intelligent than a neuron. But I agree that the colony overall doesn’t seem to display that much intelligence. It certainly doesn’t seem conscious to me.


  1. Interesting thoughts. Certainly our physicality is limiting. I wonder how limiting we would find our brains to be once the limitations of body are done away with. Would we find our minds as gross a prison as we do our bodies? Perhaps their lack of speed and creativity would be just as maddening as our current biological shortcomings.


    1. Interesting point, although if we’ve moved our minds to an alternate substrate, we’d be in a position to improve their functionality. The question is, how far would we be willing to go? How far from a primate brain would we want to end up? I suspect different people would answer that question in a variety of ways.


      1. Absolutely. If we were in a position to manipulate neural networks the way we currently manipulate artificial neural networks in a computer, we could theoretically move these brains as far from the primate brain as we wished. Compared to the technological advancement necessary to upload a brain into a computer, it would be decidedly trivial to, say, upload the entire internet into one’s memory for fast access, or add subroutines for instantaneous mathematical calculation. The concern at that point, I think, would be homogeneity: brains backed by the internet would function at such a high level compared to those without it that the internet would necessarily become the backbone for all memory.

        At the end of the day, I agree it would be a problem of how far we are willing to go. If perhaps not from our primate brains, then certainly from our egos.


    1. I understand the concerns. However, by the time we get to humans, we will already have done it with numerous animals. For better or worse, we’ve already begun: https://selfawarepatterns.com/2014/12/16/worm-brain-uploaded-into-robot-which-then-behaves-like-a-worm/
      The TL;DR is that the connectome of the C. elegans worm has already been modeled and loaded into a robot, which proceeded to behave in a very wormlike fashion. Of course, with its 302 neurons and roughly 7,500 synapses, C. elegans isn’t conscious, just a bundle of reflexes. As we move up in animal complexity, concern about the welfare of the uploaded mind will increasingly become an issue, but I don’t think it will stop the research.
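
      To give a feel for what “loading a connectome into a robot” means computationally, here’s a minimal toy sketch, not the actual project code: a handful of neurons represented as a weighted wiring matrix, with sensor input cascading through to motor neurons. Every weight, threshold, and update rule here is invented for illustration.

      ```python
      import numpy as np

      # Toy connectome: 5 neurons wired sensor -> interneuron -> motor.
      # (The real C. elegans connectome has 302 neurons and ~7,500 synapses;
      # all weights and thresholds here are made up.)
      N = 5
      W = np.zeros((N, N))   # W[i, j] = synaptic weight from neuron i to neuron j
      W[0, 2] = 1.0          # touch sensor excites an interneuron
      W[1, 2] = 0.5          # a second, weaker sensory synapse
      W[2, 3] = 1.0          # interneuron drives the "reverse" motor neuron
      W[2, 4] = -1.0         # and inhibits the "forward" motor neuron

      THRESHOLD = 0.9        # firing threshold (arbitrary)

      def step(state, sensor_input):
          """One tick: fire, propagate through the wiring, leakily integrate."""
          fired = (state >= THRESHOLD).astype(float)
          new_state = state * 0.5 + W.T @ fired + sensor_input
          new_state[fired.astype(bool)] = 0.0  # crude post-spike reset
          return new_state, fired

      state = np.zeros(N)
      for t in range(5):
          stim = np.array([1.0, 0, 0, 0, 0]) if t < 2 else np.zeros(N)
          state, fired = step(state, stim)
          print(t, "reverse motor:", bool(fired[3]), "forward motor:", bool(fired[4]))
      ```

      The actual project is considerably more sophisticated, but the basic architecture is the point: the behavior falls out of fixed wiring being stimulated, with no central program telling the robot what to do.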


  2. One of the big problems for robotic spacecraft is that they require a lot of supervision from mission control back on Earth. Our datalink with them suffers from a speed-of-light delay, and if a spacecraft happens to pass behind a planet or the Sun, we lose contact with it, sometimes for a few days or even a few weeks.

    If we developed robots that could take care of themselves, solving problems on their own without having to contact Earth, and doing science even when we can’t keep tabs on them… that would dramatically change the way we explore the Solar System. At that point, we might not even need to bother with human space exploration, aside from the fact that it would still be really cool.


    1. “At that point, we might not even need to bother with human space exploration, aside from the fact that it would still be really cool.”

      There are actually two levels of coolness: coolness in reality and coolness in SF stories.

      For reality coolness, the data sent back by the robotic explorers could be used to construct virtual reality settings of other worlds, which all of us would get to explore (virtually). It would democratize the space exploration experience. Instead of an elite few astronauts walking on Mars or Europa, all of us would get a chance to stroll or swim around.

      Of course, for SF coolness, that won’t suffice. It won’t be a compelling story if our characters aren’t physically there in person facing real jeopardies.


  3. Actually, Descartes didn’t stop at “I think; therefore, I am.” (I wish he had.) Instead he went on to say that he has a perfect idea of God, yadda yadda. [Insert circular argument here.] He concluded that the mind-body problem wasn’t really a problem because God wouldn’t deceive us. He did not conclude that “we can never really know.” (You could argue that Descartes really did believe “we can never really know” and that his circular argument was intentional, but that would take some work.)

    Plus, he never said, “What if I’m a brain in a vat and scientists are engaging my brain to make a simulated reality for me?” The brain would’ve been tossed into the ontological abyss as well. The brain, for him, would’ve been considered an external object, no different from a hand or foot (to be clear, here we’re talking about the Metaphysical Meditations, not his later stuff about the pineal gland). To throw everything under the wheels of the doubt bus except the brain would’ve seriously undermined his point.

    Descartes’ skepticism was much stronger than the modern “brain in a vat” recasting, and his conclusions were altogether different from what’s being put forth here. Eagleman is taking the modern version of Descartes and attributing it to him.

    I hope this point doesn’t seem too petty. It just annoys me to think that some people will watch this and think they’ve learned what Descartes’ mind-body problem was all about.


    1. Yeah, I caught the “brain in a vat” anachronism too.

      He also seems to have an overly rosy assessment of Moore’s Law, projecting that running a zettabyte-sized model won’t be a problem in 20 years. Maybe it won’t, but Moore’s Law appears to already be losing steam, so assuming it will provide that much horsepower in the future should be done with caveats.

      His zettabyte estimate also seems to be based on molecular-resolution imaging of sectional slices of brain tissue. If we have to have that for a successful emulation, it may never be practical. A feasible emulation, it seems to me, would require us to have some understanding of what’s going on so we know what is and isn’t crucial to the model.


      1. I’ve been wondering about Moore’s Law. I don’t know much about it, but the first thing that popped into my head was the disclaimer you get when you invest money in the stock market. “Past performance is not necessarily indicative of future results.”

        “A feasible emulation, it seems to me, would require us to have some understanding of what’s going on so we know what is and isn’t crucial to the model.”

        That’s a really good point. It seems that knowledge, if attainable, would bypass a lot of problems.


        1. Moore’s Law, named for Gordon Moore, one of the founders of Intel, is the observation that the number of transistors on an integrated circuit doubles roughly every two years. It’s basically about the increasing miniaturization of chip transistors. It started out as an observed trend, then was adopted as a planning timeline by the industry, and eventually people like Ray Kurzweil started treating it like a law of nature.

          What most people miss about it is that eventually that trend has to hit fundamental barriers, at the atomic level if not before. Current mass-produced transistor component width is down to 14 nm, and the width of a silicon atom is about 0.2 nm, which means a component is currently only about 70 atoms wide. From what I read, electrical leakage is becoming an increasing issue in chip designs, and quantum tunneling is predicted to be an issue under 5 nm.
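
          To make the arithmetic above concrete, here’s a quick back-of-envelope sketch. The 14 nm and 0.2 nm figures come from the paragraph above; the assumption that feature width shrinks by √2 per two-year node is my simplification of density doubling.

          ```python
          # Back-of-envelope on the Moore's Law wall described above.
          feature_nm = 14.0   # current mass-produced feature width (from the comment)
          si_atom_nm = 0.2    # approximate width of a silicon atom

          print(f"atoms across one feature: {feature_nm / si_atom_nm:.0f}")  # ~70

          # If transistor density doubles every two years, linear dimensions
          # shrink by ~sqrt(2) per node. Count nodes until the ~5 nm regime
          # where quantum tunneling is predicted to become an issue.
          width, years = feature_nm, 0
          while width > 5.0:
              width /= 2 ** 0.5   # one two-year process node
              years += 2
          print(f"~{years} years to pass 5 nm (ending at {width:.1f} nm)")
          ```

          On those assumptions, the 5 nm wall is only about three process nodes (six years or so) away, which is why extrapolating the trend out twenty years deserves caveats.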

          Many people think some other paradigm, like quantum computing, will come to the rescue, but given all the issues researchers are having keeping qubits in superposition, that’s more a statement of faith than extrapolation.

          I do think the end of Moore’s Law will force the industry to look at alternate hardware architectures, which it hasn’t really had to do for a long time.


          1. That really puts things into perspective. It sounds like we might be able to progress by leaps and bounds now, but at some point there could come a leveling off. Who knows how long that will last. And here we are talking about what it would take to store one human brain. Imagine what it would take for multiple brains!


          2. That’s why I think an understanding of how the mind arises from the brain is crucial. We have to make a distinction between information about the brain and information in the brain. Depending on our resolution, the first could be anywhere from dozens of petabytes to the zettabyte Eagleman talks about.

            But when we talk about information in the brain, estimates are much lower, ranging from as low as a few tens of terabytes to 2.5 petabytes. That’s a lot less info, but we can only work with it if we understand how the brain itself handles it, instead of just aping the brain’s physical processes.
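
            Putting the two ranges side by side makes the gap vivid. This is just order-of-magnitude arithmetic using the figures from this thread; the specific placeholder values within each range are mine.

            ```python
            # Gap between "information about the brain" (structural scans) and
            # "information in the brain" (functional content), using the ranges
            # discussed above. Placeholder values within each range are mine.
            TB, PB, ZB = 10**12, 10**15, 10**21  # bytes

            about_brain = {"coarse scan": 50 * PB, "molecular scan": 1 * ZB}
            in_brain = {"low estimate": 30 * TB, "high estimate": int(2.5 * PB)}

            for scan, scan_bytes in about_brain.items():
                for est, est_bytes in in_brain.items():
                    print(f"{scan} vs {est}: {scan_bytes / est_bytes:,.0f}x more data")
            ```

            Even in the most favorable pairing, the scan carries at least an order of magnitude more data than the mind it’s meant to capture, which is the argument for understanding over brute-force imaging.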


          3. I’ve never understood what Moore’s Law has to do with this. It applies to how much can be fit on a single chip, but has nothing to do with the size of the system you can build.

            It might mean we’ll have trouble building a brain-sized mechanical brain, but so what? So we don’t have Commander Data running around with his positronic brain.

            And as you point out, there are other technologies that can be investigated. The brain is just three pounds and a certain size, so it’s just an engineering problem!

            I did find the zettabyte estimate interesting (and it is based on the extremely fine-resolution scan used to build the section of brain shown in the PBS series — that was one of my favorite parts, in fact). You’ll recall my crude minimal estimate weighed in at around 25 petabytes, and that involved a simplistic connectome model with only 64 bits per synapse or neuron state.
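
            For anyone curious how a figure like that falls out of a simplistic connectome model, here’s one way to reconstruct that kind of estimate. The 64-bit state figure is from the comment above; the neuron and synapse counts are standard ballpark numbers, and the storage layout is my guess.

            ```python
            # One way a tens-of-petabytes connectome ballpark can arise.
            # (Layout is guessed; 64-bit state per element is from the comment.)
            neurons = 86e9           # ~86 billion neurons (standard estimate)
            syn_per_neuron = 1e4     # ~10,000 synapses each (ballpark)
            synapses = neurons * syn_per_neuron

            state_bytes = 8          # 64 bits of state per synapse
            addr_bytes = 8           # plus a 64-bit target-neuron ID for the wiring
            total = synapses * (state_bytes + addr_bytes) + neurons * state_bytes

            print(f"synapses: {synapses:.1e}")        # ~8.6e14
            print(f"total: {total / 1e15:.0f} PB")    # ~14 PB, the same order as ~25 PB
            ```

            Tweak the synapse count or the per-element layout and you move around within the same few-dozen-petabyte neighborhood. The point is that it’s the zettabyte scan, not the connectome itself, that blows up the storage requirement.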

            Ultimately it’ll depend on how far down we have to go to get a system from which consciousness emerges.


          4. I think Moore’s Law might be relevant to our ever being able to execute a zettabyte model. But as I’ve noted before, I’m doubtful it has enough life left in it to get us there. I personally don’t think we’ll need that much data if we understand neurons, synapses, glial cells, and their interactions sufficiently. But we won’t know for sure until we achieve that understanding.


          5. Eh? It reduces the size of the machine that would be necessary. Right now, processing a zettabyte model might take the whole internet running as a ginormous parallel cluster configuration. Not exactly practical. Or am I missing something in your question?


          6. “It reduces the size of the machine…”

            Exactly. But I can build the machine as big as I want.

            Large data centers now are in the dozen-exabyte range, with tens of thousands of processors. The amount of data we generate grows constantly.

            If Moore’s Law peters out, it just means we end up building bigger machines. It doesn’t stop us from amassing that kind of computing power.


          7. Seems like we’re diverging on practical versus possible in principle again. In principle, we could use all the world’s computers to emulate my brain, but I’m not hopeful of convincing the world to do that 🙂


          8. Could be. I was responding to this comment:

            “He also seems to have an overly rosy assessment of Moore’s Law, projecting that running a zettabyte-sized model won’t be a problem in 20 years. Maybe it won’t, but Moore’s Law appears to already be losing steam, so assuming it will provide that much horsepower in the future should be done with caveats.”

            To me, Moore’s Law doesn’t really “provide horsepower” so much as speak to how much horsepower you can put on a single piece of silicon. Data centers in the last two decades have gone from terabytes to petabytes, and now into the exabyte range. Is it really that much of a stretch to imagine single data centers reaching the zettabyte range in the next two decades?

            I’m saying I don’t think it is, so I think it will be possible to run a zettabyte model in 20 years or so.

            If you’re saying they won’t be rolling off the assembly line like cars, I totally agree on that! 🙂
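
            For what it’s worth, that growth claim checks out with a little arithmetic. The terabyte and exabyte endpoints are from the comment above; everything else follows from them.

            ```python
            import math

            # Implied rate of "terabytes to exabytes over ~20 years" (a 10^6 factor).
            growth_20yr = 1e18 / 1e12
            per_year = growth_20yr ** (1 / 20)
            print(f"implied growth: ~{per_year:.2f}x per year")  # ~2.00x, a doubling

            # At that pace, exabytes -> zettabytes (another 1000x) would take:
            years = math.log(1e21 / 1e18, per_year)
            print(f"EB -> ZB at that rate: ~{years:.0f} years")  # ~10 years
            ```

            By the historical trend, the two-decade zettabyte guess is, if anything, conservative; the real question is whether the trend can continue without Moore’s Law underneath it.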


  4. I’m still a bit talked out on the AI topic. I’ve seen episodes #2 and #4 – #6 of the PBS show, and it was okay to watch. As you and Tina mentioned, not much that’s new. (In fact, it touched on several points that I touched on in my series, so that was kinda cool.)

    The bit about disgust being such an accurate predictor of political affiliation was interesting! I can totally see it. 😀

    I wonder, as our robots explore space, if bio-humans will start to expand out at some pace behind them. There are certainly those who’ll want to go: the ones who climb mountains just to get to the top, or who gladly sign up for a one-way trip to Mars. The question is whether the economics will ever allow it.

    My guess: The solar system, yes, we’ll get our butts out there. Some local stars? Maybe. Especially if we find any truly Earth-like planets. (The urge to stand under an alien sun is very strong in some.)

    Sleeper ships? Generation ships? Who knows. (Thing is, the clock is running. Only five billion years or so left on the local star…)


    1. On disgust, I’m actually skeptical. Yes, people’s disgust reflex does get appropriated by sacred notions, but I think that liberals just have different sacred notions than conservatives. (I say this as a moderate liberal.) My observation of many environmentalists, for instance, is that their disgust reflex gets appropriated by violations of how they think the environment should be treated. From what I can see, most people’s ideologies have a lot more to do with peer pressure than anything else. There are a lot of conservatives where I live; most of them are conservative because they’re surrounded by conservatives.

      I agree with your remarks about bio-humans following robots. I’m not sure the economics will ever support colonization, but it may support glory missions akin to mountain climbing, along with some scientific outposts.

      On sleeper / generation ships, have you seen this article by Kim Stanley Robinson?

      The TL;DR is that, after a lot of research, KSR is pretty pessimistic on the idea of human interstellar travel.


      1. “On disgust, I’m actually skeptical.”

        I was at a party last Saturday night, so I ran this past several people there, and the reaction was universally like yours. That surprises me, since I can see why it would correlate.

        Keep in mind, this result is based on testing and apparently does show a very strong correlation. This isn’t something someone is guessing.

        “…people’s disgust reflex does get appropriated by sacred notions,…”

        As I understand it, it’s the other way around. You’ve written often about our hard-wiring, and this seems to be a case of that. People have an inborn disgust “threshold” that affects how readily they experience disgust.

        The correlation is that the stronger the disgust “threshold,” the more likely the person is to have conservative values. I think the common perception that liberals are also disgusted, just by different things, is incorrect. I think maybe what liberals feel is often more outrage or anger.

        I think homosexuality and gun use (e.g. CCW laws) might offer two indicators. The former is typically something conservatives oppose; the latter is typically something liberals oppose.

        I’ve never really understood the opposition to homosexuality or gay marriage. Those things don’t affect me negatively at all. They don’t affect the lives of anyone except those directly and willingly involved, and there’s no public safety issue. A strong opposition has to be based on highly negative perception. Disgust.

        Gun use does directly affect me. There are clear public safety issues. There’s no disgust involved. Anger, sure, but not that hard-wired stomach-turning irrational feeling.

        I think how liberal politicians speak versus how conservative ones speak does reflect this divide. (Certainly conservatives tend to appeal to our sense of fear more than liberals.)

        “…glory missions…”

        Yeah, that’s a good way to put it. Glory missions.

        Or, maybe within the Solar system, economic reasons.


        1. “You’ve written often about our hard-wiring, and this seems to be a case of that.”

          I have, but I’ve never argued that culture isn’t crucial, particularly for cultural issues like political ideology. When I first heard about this a few years ago, I expected to be easily convinced. But when I read Chris Mooney’s articles about the actual studies, the interpretation of the results seemed strained to me, like just-so stories.

          I started to read his book on the subject, but had technical difficulties with the ebook formatting, put it aside until they fixed the issues, and never got back to it. Maybe I’ll have to take another shot at it at some point.


          1. I don’t doubt the correlation exists in the subjects they studied, but correlation by itself doesn’t prove causation. Saying a natural reaction to disgusting images causes political alignment assumes that an adult’s “natural” reaction in this regard isn’t itself influenced by their political and cultural views. I’d also want to see larger sample sizes, with different age groups and with subjects outside of WEIRD (western, educated, industrialized, rich, democratic) demographics. I think to definitively demonstrate causation here, you’d have to test young kids and then check back 20-30 years later to see if their political ideology met predictions.

            I also think any theory along these lines has to account for why residents of large cities and coastal regions tend to be more liberal while rural and interior residents tend to be more conservative. Saying that the south is genetically different from the northeast is a statement for which I’d need to see extensive evidence before believing.


          2. I think you’re keying off my having said, “[T]his seems to be a case of [brain hard-wiring].” That may or may not be true. The real point is what I said at first, “The bit about disgust being such an accurate predictor of political affiliation was interesting! I can totally see it.”

            It’s the surprisingly strong correlation that I find fascinating.

            Whether it’s more hard-wired or cultural is a much harder problem and not one the study tries to address. They’re interested in the neural correlates and drawing connections between how people think on seemingly unrelated subjects.


          3. I was actually reacting to the implied notion (which it doesn’t sound like you are advocating) that genetics determines political ideology. I think the relationship is far too complicated to make that assertion. Most social psychologists, to their credit, have become very careful not to oversell the implications of their findings. Unfortunately, news reporters often aren’t so careful, and it’s the kind of thing that makes people skeptical of the whole idea of evolutionary psychology, or psychology and the social sciences in general.


          4. Sure. Genetics may turn out to be a factor, but there are obviously many others. It’s the correlation that’s fascinating.

            They reported that a person’s (fMRI) reaction to a single image can be a highly accurate predictor. Especially interesting is that the fMRI reaction and the reported reaction did not always agree. The reported reaction didn’t show the correlation, but the fMRI reaction did.

            (FTR: You can number me among those who feel some degree of skepticism is entirely appropriate with psychology and social sciences and especially with evolutionary psychology.)


