Worm ‘Brain’ uploaded into robot, which then behaves like a worm

Steve Morris clued me in to this article: Worm ‘Brain’ Uploaded Into Lego Robot | Singularity HUB.

Can a digitally simulated brain on a computer perform tasks just like the real thing?

For simple commands, the answer, it would seem, is yes it can. Researchers at the OpenWorm project recently hooked a simulated worm brain to a wheeled robot. Without being explicitly programmed to do so, the robot moved back and forth and avoided objects—driven only by the interplay of external stimuli and digital neurons.

The article includes an accompanying video of the robot in action.

Now, C. elegans has about the simplest central nervous system in nature, with only 302 neurons and roughly 7,000 synapses (compared to a human’s 86 billion neurons and 100 trillion synapses).  Still, the fact that putting that connectome (the map of a brain’s connections) into a robot produced behavior resembling what an actual C. elegans would do is intriguing.
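To get a feel for how a connectome can drive behavior without any explicit behavioral programming, here’s a minimal sketch. The wiring, weights, and neuron names below are entirely made up for illustration — this is not the real OpenWorm data, just the general idea of sensory activity propagating through fixed connections to motor outputs:

```python
# Hypothetical mini-"connectome": presynaptic neuron -> {postsynaptic: weight}.
# Negative weights are inhibitory. These names and numbers are invented.
CONNECTOME = {
    "nose_touch": {"inter_a": 1.0},                      # touch sensor -> interneuron
    "inter_a":    {"motor_left": -0.8, "motor_right": -0.8},  # inhibits both motors
    "food_smell": {"motor_left": 0.6, "motor_right": 0.6},    # drives both motors
}

THRESHOLD = 0.5  # a neuron "fires" when its activation reaches this level

def step(stimuli):
    """Feed sensory activations forward through the wiring; return motor drive."""
    activation = dict(stimuli)
    # Feedforward order: sensory neurons first, then interneurons.
    for layer in (["nose_touch", "food_smell"], ["inter_a"]):
        for pre in layer:
            level = activation.get(pre, 0.0)
            if level >= THRESHOLD:  # fire and propagate along each synapse
                for post, weight in CONNECTOME[pre].items():
                    activation[post] = activation.get(post, 0.0) + weight * level
    return activation.get("motor_left", 0.0), activation.get("motor_right", 0.0)

# Food ahead: both motor drives come out positive (move forward).
print(step({"food_smell": 1.0}))
# Nose touch added: inhibition outweighs the food signal (back away).
print(step({"nose_touch": 1.0, "food_smell": 1.0}))
```

Nothing in `step` says "avoid obstacles"; the avoidance falls out of the wiring, which is the point the article is making about the robot.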

The article ends by asking the obvious question:

In this example, we’re talking very simple behaviors. But could the result scale? That is, if you map a human brain with similarly high fidelity and supply it with stimulation in a virtual or physical environment—would some of the characteristics we associate with human brains independently emerge? Might that include creativity and consciousness?

There’s only one way to find out.

Of course, many will insist that we shouldn’t even try.  But I suspect that train will leave the station regardless.

31 thoughts on “Worm ‘Brain’ uploaded into robot, which then behaves like a worm”

  1. I’m not at all surprised that a simulated worm brain would behave exactly like a real worm. I’d be interested in viewing a video of a real worm for comparison.

    I have a feeling that simulations and experiments like this will shortly start to have a large impact on the ethics of how we treat both AIs and animals.


    1. It’s a pity they can’t make a robot worm yet so we could see them side by side and see if they could pass as worms.

      You may be right on ethics. But I wonder if it will amount to more than the current animal rights activism, which doesn’t really stop most of the animal testing that goes on.


        The robot worm would definitely be tricky, but what if they could build some sort of sensory deprivation chamber for a real worm dosed with a paralytic? Electrodes could feed it sensory information from the robot, and electrodes or something EEG-like could track the worm’s actual motor neural responses and output those to the robot. Sure, it’s only 1 mm long, but from what I read in the article, you’d only need two motor outputs (left and right) and a handful of sensory inputs (three were listed) to give it a connection with the physical world analogous to the sim’s. Then you could compare emergent behaviors. You could also render the sim’s neural activity into a simulated brain scan, compare it to brain scans of real worms, and see which clusters fire in response to which stimuli.

        This is a really cool article, though. Thanks for sharing!


  2. Toot! Toot! <— (train leaving station) 😀

    Doesn’t surprise me at all. There are earlier experiments that didn’t use the connectome, but fairly simple algorithms intended to mimic basic insect behavior. The resulting “roach robot” had some eerie similarities to real roaches. Lower animals are almost entirely pre-programmed instinct, so the more we replicate their wiring, the more our robots will resemble them.

    Of course, I don’t have any problem at all killing roaches… 😮


    1. I don’t really have any problem with killing roaches either, although I did find the roach robot experiments mildly disturbing.

      I think the main point on this one is that this is the first time that someone put a connectome into a machine and then let it run that machine without any programming. (Well, aside from the simulation software itself.)


      1. Yeah, it’s a really interesting step. Combine that with the latest techniques for scanning the human brain’s pathways, and the engineering problem of building a “brain box” seems closer than ever.


    1. True. It’s even more daunting than that, because a neuron is more like a capacitor, and a transistor more like a synapse, except that a synapse can vary smoothly in strength, dramatically increasing the information it can hold. A synapse may be more like a byte, or even two or four bytes. And we have 100 trillion of them.

      Now, the brain operates far more slowly than an integrated processor chip. And hundreds of terabytes of solid state secondary storage is very achievable. And a large cluster of processors is probably more analogous to what goes on in the brain. Michael Graziano, a neuroscientist, claims that the largest clusters now rival the processing capacity of a human brain.
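The back-of-the-envelope storage estimate implied above is easy to make explicit. Assuming (as speculated) that each synapse holds the equivalent of one, two, or four bytes:

```python
SYNAPSES = 100e12  # ~100 trillion synapses in a human brain

# Total storage needed at various bytes-per-synapse assumptions.
for bytes_per_synapse in (1, 2, 4):
    total_tb = SYNAPSES * bytes_per_synapse / 1e12  # bytes -> terabytes
    print(f"{bytes_per_synapse} byte(s)/synapse -> {total_tb:.0f} TB")
```

So even at four bytes per synapse, the raw storage lands in the hundreds-of-terabytes range, which is why the comment calls it achievable with solid state storage.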

      Of course, simulating an actual brain, rather than just matching its capacities, is far more difficult, and requires far more processing power.


      1. Matter has a tendency to self-organize. The high level of organization of living matter seems deliberately designed, or the result of an almost non-existent probability. But, when I think of it, it is no more surprising than billions of randomly moving atoms forming a perfect crystal lattice. Considering the vast abundance of the universe, that almost non-existent probability becomes a virtual certainty. Consider the level of cooperation exhibited by humans. As I mentioned in my recent post, no human comprehends the complete supply chain of even the simplest product we use every day. Although each person may deal directly only with his closest neighbors, connections form networks across the globe without individuals even realizing it. It is quite likely that I could find a close friend of, say, Queen Elizabeth within about 10 links between people in my network.

        Social networks have a life of their own. I think computers have already started forming neural networks which act independently of humans. One of the signatures of intelligence is the ability to generate unexpected ideas and solutions – something it was not programmed to do. That’s when it stops being just a machine. So I expect the emergence of AI to be “unexpected” and unintended. It may sound like a paradox, but it isn’t. People will find AI somewhere they did not intend to create it.


        1. I tend to be skeptical that technology will self-organize into any kind of independent intelligence. I think we’re going to have to work painstakingly to build whatever intelligence they’ll have. It might be that, given a large enough time scale, a technological intelligence could evolve, but I doubt we’re going to be willing to wait that long.

          On the other hand, an uploaded mind might well start to evolve in ways we couldn’t predict. If it’s an uploaded human, then at least their humanity would, to at least some degree, inform their attitudes and actions. If it’s an uploaded shark mind, we might want to be very careful what kind of access we give it.


    1. They are impressive, but the point of the robot in the post is that its actions are being guided by a copy of a worm’s central nervous system, and that it’s responding more or less like a worm would to stimuli. It’s not the robot itself, but what’s guiding it.

