Michael Graziano has an article at The Guardian, which feels like an excerpt from his new book, exploring what might happen if we can upload minds:
Imagine that a person’s brain could be scanned in great detail and recreated in a computer simulation. The person’s mind and memories, emotions and personality would be duplicated. In effect, a new and equally valid version of that person would now exist, in a potentially immortal, digital form. This futuristic possibility is called mind uploading. The science of the brain and of consciousness increasingly suggests that mind uploading is possible – there are no laws of physics to prevent it. The technology is likely to be far in our future; it may be centuries before the details are fully worked out – and yet given how much interest and effort is already directed towards that goal, mind uploading seems inevitable. Of course we can’t be certain how it might affect our culture but as the technology of simulation and artificial neural networks shapes up, we can guess what that mind uploading future might be like.
Graziano goes on to discuss how this capability might affect society. He explores an awkward conversation between the original version of a person and their uploaded version, and posits a society that sees those living in the physical world as being in a sort of larval stage that they would all eventually graduate from into the virtual world of their uploaded elders.
Mind uploading is one of those concepts that a lot of people tend to dismiss out of hand. Responses seem to range from it being too hopelessly complicated for us to ever accomplish to it being impossible, even in principle. People who have no problem accepting the possibility of faster-than-light travel, time travel, or many other scientifically dubious propositions draw the line at mind uploading, even though the physics of mind uploading is far more plausible than that of those other options.
That’s not to say that mind uploading should be taken as a given. Something may eventually turn out to make it impossible.
For example, I’m currently reading Christof Koch’s new book, The Feeling of Life Itself, in which Koch explores the integrated information theory (IIT) of consciousness. A central claim of IIT is that the physical causal structure of the system is crucial. As far as IIT is concerned, mind uploading is pointless because, even if the information processing is reproduced, if the physical causal structure isn’t, the resulting system won’t be conscious.
I think Koch too quickly dismisses the idea that reproducing the causal structure at a particular level of organization might be sufficient. But if he’s right, mind uploading becomes far more difficult. Although even in that scenario, the possibility of neuromorphic hardware, computer hardware engineered to be physically similar to a nervous system, including physical neurons, synapses, and so on, may still eventually make it possible.
Even if neuromorphic hardware isn’t required in principle, it might turn out to be required in practice. With Moore’s Law sputtering, the computing power to simulate a human brain may never be practical with the traditional von Neumann computer architecture. A whole brain emulation might be conscious using the standard serialized architecture, but unable to run at anything like the speed of an organic brain. It might take a neuromorphic architecture, or at least a similarly massively parallel one, to make running a mind in realtime feasible.
However, all of these considerations strike me as engineering difficulties that can eventually be overcome. Brains exist in nature, and unless anyone finds something magical about them, there’s no reason in principle their operation won’t eventually be reproducible technologically.
This may, though, be several centuries in the future. I do think there are good reasons to be skeptical of singularity enthusiast / alarmist predictions that it will happen in a few years. Our knowledge of the brain and mind still has a long way to go before we’ll be able to produce a system with human level intelligence, much less reproduce a particular one.
On the awkward conversation that Graziano envisions between the original and uploaded person, with the original in despair about being the obsolete version, I think the solution would be to simply have mind backups made periodically, but not run until the original person dies. That should avoid a lot of the existential angst of that conversation.
That’s assuming there isn’t an ability to share memories between the copies, with maybe the original receiving them through a brain implant of some type. I think being able to remember being the virtual you would make being the mortal physical version a lot easier to bear. The architecture of the brain may prevent such sharing from ever being feasible; if so, then the non-executing backups seem the way to go.
I don’t know whether mind uploading will ever be possible, but in a universe ruled by general relativity, not to mention the conservation of energy, it seems like the only plausible way humans may ever be able to go to the stars in person. If it does turn out for some reason to be impossible, then humanity might be confined to this solar system, with the universe belonging to our AI progeny.
What do you think? Is mind uploading impossible? If so, why? Or is it possible, and I’m too pessimistic about it happening in our lifetimes? Are there reasons to think the singularity is near?