Steve Morris clued me in to this article: Worm ‘Brain’ Uploaded Into Lego Robot | Singularity HUB.
Can a digitally simulated brain on a computer perform tasks just like the real thing?
For simple commands, the answer, it would seem, is yes it can. Researchers at the OpenWorm project recently hooked a simulated worm brain to a wheeled robot. Without being explicitly programmed to do so, the robot moved back and forth and avoided objects—driven only by the interplay of external stimuli and digital neurons.
The article comes with this accompanying video:
Now, the C. elegans worm has about the simplest central nervous system in nature, with only 302 neurons and roughly 7,000 synapses (compared with a human's 86 billion neurons and 100 trillion synapses). Still, the fact that loading that connectome (the map of a brain's connections) into a robot produced behavior resembling what an actual C. elegans would do is intriguing.
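To get a feel for why no explicit programming is needed, here's a minimal sketch (not OpenWorm's actual code) of how a connectome can drive behavior: each neuron fires when the summed input from its incoming synapses crosses a threshold, and whatever activity reaches the motor neurons turns the wheels. The four-neuron "connectome" and its weights below are invented purely for illustration; the real C. elegans map has around 300 neurons and roughly 7,000 synapses.

```python
THRESHOLD = 1.0

# (presynaptic, postsynaptic): weight -- made-up values, not worm data
SYNAPSES = {
    ("nose_sensor", "inter_1"): 1.0,
    ("nose_sensor", "inter_2"): 1.1,
    ("inter_1", "motor_left"): 1.2,
    ("inter_2", "motor_right"): 1.1,
}

def step(active):
    """One tick: propagate activity from firing neurons across synapses."""
    incoming = {}
    for (pre, post), weight in SYNAPSES.items():
        if pre in active:
            incoming[post] = incoming.get(post, 0.0) + weight
    # A neuron fires if its total incoming signal crosses the threshold.
    return {n for n, total in incoming.items() if total >= THRESHOLD}

# An obstacle touches the nose sensor; two ticks later both motor
# neurons fire, which on the robot translates into a turn or reversal.
active = step({"nose_sensor"})   # interneurons fire
active = step(active)            # motor neurons fire
print(sorted(active))            # ['motor_left', 'motor_right']
```

The behavior (backing away from obstacles) isn't written anywhere in the code; it falls out of the wiring, which is the whole point of the experiment.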
The article ends by asking the obvious question:
In this example, we’re talking very simple behaviors. But could the result scale? That is, if you map a human brain with similarly high fidelity and supply it with stimulation in a virtual or physical environment—would some of the characteristics we associate with human brains independently emerge? Might that include creativity and consciousness?
There’s only one way to find out.
Of course, many will insist that we shouldn’t even try. But I suspect that train will leave the station regardless.
That’s pretty incredible! Maybe a little creepy, but amazing progress.
I agree that it is a little creepy. I strongly suspect that a worm has no real consciousness, but as we move up in animal complexity, ethical considerations will increasingly become a concern.
I’m not at all surprised that a simulated worm brain would behave exactly like a real worm. I’d be interested in viewing a video of a real worm for comparison.
I have a feeling that simulations and experiments like this will shortly start to have a large impact on the ethics of how we treat both AIs and animals.
It’s a pity they can’t make a robot worm yet so we could see them side by side and see if they could pass as worms.
You may be right on ethics. But I wonder if it will amount to more than the current animal rights activism, which doesn’t really stop most of the animal testing that goes on.
The robot worm would definitely be tricky, but what if they could build some sort of sensory deprivation chamber for a real worm dosed with a paralytic, use electrodes to feed it the sensory information from the robot, and then use electrodes or EEG or something like it to track the worm's actual motor neural responses and output those to the robot? Sure, it's only 1 mm long, but from what I read in the article you'd only need two motor outputs (left and right) and a handful of sensory inputs (the article lists three) to give it a connection with the physical world analogous to the sim's. Then you could compare emergent behaviors. You could also render the sim's neural activity into a simulated brain scan, compare it to brain scans of real worms, and see which clusters fire in response to which stimuli.
This is a really cool article, though. Thanks for sharing!
Thanks DS. Interesting ideas.
This is the best title any blog post about complex technology has ever had. Also, worms are a little boring – I propose they try a ferret brain next. Much more exciting.
Haha. Ferrets are actually pretty complex creatures. I think they’ll have to graduate through jellyfish and insects first.
http://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#Whole_nervous_system
Oh, realism. I was speaking strictly from an entertainment perspective. 🙂
Reblogged this on CauseScience and commented:
This is awesome!! CauseScience loves C. elegans!! …. and robots!
Toot! Toot! <— (train leaving station) 😀
Doesn’t surprise me at all. There are earlier experiments that didn’t use the connectome, but fairly simple algorithms intended to mimic basic insect behavior. The resulting “roach robot” had some eerie similarities to real roaches. Lower animals are almost entirely pre-programmed instinct, so the more we replicate their wiring, the more our robots will resemble them.
Of course, I don’t have any problem at all killing roaches… 😮
I don’t really have any problem with killing roaches either, although I did find the roach robot experiments mildly disturbing.
I think the main point on this one is that this is the first time that someone put a connectome into a machine and then let it run that machine without any programming. (Well, aside from the simulation software itself.)
Yeah, it’s a really interesting step. Combine that with the latest techniques for scanning the human brain’s pathways, and the engineering problem of building a “brain box” seems closer than ever.
Reblogged this on Confessions of a Geek Queen.
This is really cool! OpenWorm looks like something that could really change the face of science and technology as we know it.
Reblogged this on Religion erased and commented:
A very interesting (and a little scary!) advancement. I wonder what this will eventually lead to…
Also reblogged. I don’t know if this excites me or scares me. I think it is both.
Agreed, and thanks for the reblog!
Most processors these days have fewer than 10 billion transistors. The biggest FPGA system has over 20 billion transistors. Still a long way to go to 86 billion.
https://en.wikipedia.org/wiki/Transistor_count
True. It’s even more daunting than that, because a neuron is more like a capacitor, and a transistor more like a synapse, except that a synapse can vary smoothly in strength, dramatically increasing the information it can hold. A synapse may be more like a byte, or even two or four bytes. And we have 100 trillion of them.
Now, the brain operates far more slowly than an integrated processor chip. And hundreds of terabytes of solid-state secondary storage is very achievable. And a large cluster of processors is probably more analogous to what goes on in the brain. Michael Graziano, a neuroscientist, claims that the largest clusters now rival the processing capacity of a human brain.
Of course, simulating an actual brain, rather than just matching its capacities, is far more difficult, and requires far more processing power.
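The storage side of that back-of-envelope estimate is easy to make concrete. The 4-bytes-per-synapse figure below is just an assumption, taken from the "two or four bytes" guess in the comment above, not a measured number:

```python
# Rough storage estimate for a human connectome's synaptic state.
synapse_count = 100e12        # ~100 trillion synapses
bytes_per_synapse = 4         # assumed; could plausibly be 1-4 bytes
total_bytes = synapse_count * bytes_per_synapse
print(total_bytes / 1e12)     # 400.0 -- i.e. ~400 terabytes
```

Which is why "hundreds of terabytes" is the right ballpark for just holding the synaptic weights, before any simulation is done at all.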
Matter has a tendency to self-organize. The high level of organization of living matter looks either deliberately designed or the result of an almost non-existent probability. But, as I think of it, it is no more surprising than billions of randomly moving atoms forming a perfect crystal lattice. Considering the vast abundance of the universe, the almost non-existent probability becomes a virtual certainty. Consider the level of cooperation exhibited by humans. As I mentioned in my recent post, no human comprehends the complete supply chain of even the simplest product we use every day. Although each person may deal directly only with his closest neighbors, connections form networks across the globe without individuals even realizing it. It is quite likely that I could find a close friend of, say, Queen Elizabeth within about 10 links in my network.
Social networks have a life of their own. I think computers have already started forming neural networks that act independently of humans. One of the signatures of intelligence is the ability to generate unexpected ideas and solutions, things it was not programmed to do. That's when it stops being just a machine. So I expect the emergence of AI to be "unexpected" and unintended. It may sound like a paradox, but it isn't. People will find AI somewhere they did not intend to create it.
I tend to be skeptical that technology will self-organize into any kind of independent intelligence. I think we're going to have to work painstakingly to build whatever intelligence machines will have. It might be that, given a large enough time scale, a technological intelligence could evolve, but I doubt we're going to be willing to wait that long.
On the other hand, an uploaded mind might well start to evolve in ways we couldn’t predict. If it’s an uploaded human, then at least their humanity would, to at least some degree, inform their attitudes and actions. If it’s an uploaded shark mind, we might want to be very careful what kind of access we give it.
In any case, we need to control the power supply.
Agreed. Except that the military is working on robots that feed on biomass. :-\
Nice. As long as they are programmed not to feed on human flesh, it might be OK.
This video impressed me a lot more. Watch what this thing does on ice.
This is mind-blowing as well.
They are impressive, but the point of the robot in the post is that its actions are being guided by a copy of a worm’s central nervous system, and that it’s responding more or less like a worm would to stimuli. It’s not the robot itself, but what’s guiding it.
Thanks ‘SAP’ (and Steve), pretty amazing: a worm brain! Darwin would have loved it!