Michael Graziano has an article at the Atlantic looking at the plausibility of mind copying. He doesn’t beat around the bush, going all in with the title: Why You Should Believe in the Digital Afterlife, although the actual text of the article is more nuanced, and echoes what I hear from most neuroscientists.
As a neuroscientist, my interest lies mainly in a more practical question: is it even technically possible to duplicate yourself in a computer program? The short answer is: probably, but not for a while.
He proceeds to give a basic overview of how the brain processes information, which I highly recommend reading if you’re skeptical that the mind is essentially information processing. He doesn’t shy away from noting the enormous difficulties.
To copy a person’s mind, you wouldn’t need to scan anywhere near the level of individual atoms. But you would need a scanning device that can capture what kind of neuron, what kind of synapse, how large or active of a synapse, what kind of neurotransmitter, how rapidly the neurotransmitter is being synthesized and how rapidly it can be reabsorbed. Is that impossible? No. But it starts to sound like the tech is centuries in the future rather than just around the corner.
And then there’s the largest difficulty: would the resulting software mechanism be conscious? This may always be a metaphysical debate, even if or when minds start being uploaded, but Graziano makes this point.
But in every theory grounded in neuroscience, a computer-simulated brain would be conscious. In some mystical theories and theories that depend on a loose analogy to quantum mechanics, consciousness would be more difficult to create artificially. But as a neuroscientist, I am confident that if we ever could scan a person’s brain in detail and simulate that architecture on a computer, then the simulation would have a conscious experience. It would have the memories, personality, feelings, and intelligence of the original.
Graziano goes on to discuss the difficulties inherent in the fact that brains don’t exist in isolation, but are integrated in a tightly coupled manner with the rest of the body, including the peripheral nervous system, the glandular system, and other aspects of the body. Any successful simulation would have to deal with all of that complexity.
He is actually optimistic that computational capacity will continue to increase enough to handle the complexity of such a simulation, noting that he thinks quantum computing will open up possibilities. But I don’t share his certitude on this.
The main problem is that it’s not enough simply to do the information processing the brain does; a computer would also have to simulate the brain’s hardware. If you’ve ever run software engineered for a different hardware platform in a hardware emulation program, you’ll know that such emulation typically comes with a severe performance penalty. The partial emulation of biological neural processing that has happened so far has required immense processing power.
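To get a sense of why, consider one of the simplest widely used neuron models, the leaky integrate-and-fire neuron. The toy Python sketch below (an illustration, not anyone’s production simulation code) steps a single drastically simplified neuron in 0.1 millisecond increments. Even at this cartoonish level of detail, that’s about 10,000 updates per neuron per simulated second, and a human brain has something like 86 billion neurons, each far more complicated than this model.

```python
def simulate_neuron(drive_mv=20.0, duration_ms=100.0, dt_ms=0.1,
                    tau_ms=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Count spikes from one toy leaky integrate-and-fire neuron."""
    v = v_rest
    spikes = 0
    for _ in range(int(duration_ms / dt_ms)):
        # Membrane potential leaks toward rest while the input drives it up.
        v += (-(v - v_rest) + drive_mv) / tau_ms * dt_ms
        if v >= v_thresh:  # threshold crossed: spike and reset
            spikes += 1
            v = v_reset
    return spikes

print(simulate_neuron())  # ~6 spikes from 100 ms of one toy neuron
```

And this is only the update loop for an isolated cell; a real emulation would also have to track every synapse, neurotransmitter dynamic, and connection among those billions of neurons.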
Moore’s Law is usually cited as an argument that any computational capacity issue will eventually be solved. However, the actual Moore’s Law is an observation of a trend: that the number of transistors on an integrated circuit chip doubles every two years. Gordon Moore, the originator of that observation, noted early on that the trend would eventually end. Recent industry reports indicate that the trend is approaching its end, with progress now coming more slowly, and likely to peter out in the next few years.
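The trend itself is just compounding, as this quick back-of-envelope sketch shows (the starting count and time span are arbitrary illustration values, not a forecast):

```python
# Moore's Law as simple compounding: transistor counts double
# roughly every two years.
def projected_count(start_count, years, doubling_period_years=2.0):
    return start_count * 2 ** (years / doubling_period_years)

# A chip with 1 billion transistors, projected 20 years out,
# implies ten doublings -- about a 1000x increase.
print(projected_count(1e9, 20))  # -> 1024000000000.0, i.e. ~1e12
```

The question is whether the physics can keep delivering those doublings.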
Transistors can only get so small. Currently their features are scaled at 14 nanometers. It’s generally recognized that silicon will reach its limit at around 5 nanometers, as quantum tunneling becomes an issue. Graphene may extend that somewhat further, but we appear to be nearing the limits of easy capacity increases in classical computers. Some researchers have managed to scale logic gates down much further, but it’s not clear how commercially viable those implementations might ever be.
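To put rough numbers on the remaining headroom: if transistor density scales with the inverse square of feature size (a crude first-order assumption that ignores many real engineering factors), the move from 14 to 5 nanometers buys only about three more doublings:

```python
import math

# Rough headroom left in silicon, assuming density scales with the
# inverse square of feature size (a crude first-order approximation).
current_nm, limit_nm = 14, 5
density_gain = (current_nm / limit_nm) ** 2   # ~7.8x more transistors
doublings = math.log2(density_gain)           # ~3 doublings
years_left = doublings * 2                    # at 2 years per doubling
print(f"{density_gain:.1f}x density, {doublings:.1f} doublings, ~{years_left:.0f} years")
```

Which is consistent with the industry reports mentioned above: a few more years of the historical pace, and then something else has to give.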
Quantum computing may dramatically increase capacities for certain types of processing, but I’m not sure simulating a biological neural network would fall into that category, though I’ll admit I could be underestimating the possibilities. The big problem right now is that quantum processors have to operate in a near 0 Kelvin (absolute zero) environment, something that is unlikely to happen on your desktop computer.
Still, there’s room for optimism. The brain itself operates at 37 degrees Celsius and has a very modest power requirement of about 25 watts. While the processing of any one neuron is very pokey by electronic standards, the brain more than makes up for it with its massively parallel architecture.
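Some back-of-envelope arithmetic, using the 25-watt figure above and the commonly cited estimate of roughly 86 billion neurons, shows just how different this architecture is:

```python
# Per-neuron power budget, using the 25 W figure and the commonly
# cited ~86 billion neuron estimate.
brain_watts = 25.0
neurons = 86e9
print(f"{brain_watts / neurons:.1e} W per neuron")  # ~2.9e-10 W: a fraction of a nanowatt

# Each neuron is slow (generously, ~100 Hz peak firing versus
# gigahertz transistors), but 86 billion of them running in parallel
# still produce an enormous aggregate event rate.
peak_rate_hz = 100.0
print(f"{neurons * peak_rate_hz:.1e} spike events per second")  # ~8.6e12
```

A fraction of a nanowatt per processing element, with trillions of events per second in aggregate, is nothing like how we currently build computers.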
All of which indicates that we’ve likely only scratched the surface of possible information processing architectures. The end of Moore’s Law will likely force a type of innovation that simply hasn’t been necessary for several decades: looking at alternate ways of processing information. That progress will likely come in fits and starts, but there’s no reason to suspect it won’t come.
All that said, it may well eventually turn out that emulating the brain hardware (or “wetware”) will never be an effective strategy. Maybe, to have a functional copied mind, we’ll have to create a new substrate very similar to the original, in other words, a new biology-like brain. Doing this while imparting the information from the source mind might be profoundly difficult, but again, taking a very long view, there’s nothing that fundamentally prevents it.
This will require a very thorough understanding of the brain and mind. However, having that understanding may actually enable another strategy. We may find that the best way to copy a mind is to put it through a translation process, to, in effect, port it to a new platform in the same way that programmers sometimes port software from one hardware platform to another.
This is easier to understand if we consider what would happen if a part of the brain were damaged and we swapped it out for a technological replacement. If someone’s, say, V1 vision processing center were destroyed, and we replaced it with a computer unit that processed vision in its own way, but provided the same information to the rest of the brain that the original V1 center did, would we still have the same person? If we replaced other components as they failed, at what point would the person cease being the original? And what’s different if we do it all at the same time?
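The software analogy can be made concrete. Here’s a loose, hypothetical Python sketch (all the names are invented for illustration; real neural interfaces are vastly more complicated than a single method call) of the underlying idea: what matters to the rest of the system is the interface, not the implementation behind it.

```python
from abc import ABC, abstractmethod

class V1Region(ABC):
    """Anything that supplies early visual features to downstream areas."""
    @abstractmethod
    def process(self, retinal_input: list[float]) -> list[float]:
        ...

class BiologicalV1(V1Region):
    def process(self, retinal_input):
        # Stand-in for the original wetware's processing.
        return [x * 0.5 for x in retinal_input]

class SyntheticV1(V1Region):
    def process(self, retinal_input):
        # Computes differently internally, but delivers the same
        # information to the rest of the brain.
        return [x / 2 for x in retinal_input]

def downstream_vision(v1: V1Region, retinal_input):
    # Higher visual areas only see what comes across the interface.
    return sum(v1.process(retinal_input))

signal = [1.0, 2.0, 3.0]
assert downstream_vision(BiologicalV1(), signal) == downstream_vision(SyntheticV1(), signal)
```

If the downstream areas genuinely can’t tell the difference, the remaining dispute seems philosophical rather than technical.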
Of course, this will make skeptics even more convinced that we haven’t really copied the mind, only set up an imitation. But it seems to me that skepticism is going to come regardless, and that people will still be arguing about it long after the first successful copy is made.
Unless, of course, there are aspects of this I’m not seeing?