Massimo Pigliucci wrote a paper on his skepticism of the possibility of mind uploading, the idea that our minds are information that it might someday be possible to copy into a computer virtual reality system or some other type of technology. His paper appears to be one chapter in a broader book, ‘Intelligence Unbound: The Future of Uploaded and Machine Minds’, which I think I will have to read.
Interestingly, Massimo’s paper is in response to a paper by David Chalmers which apparently supports the idea of MU (mind uploading). I didn’t realize that Chalmers was open to something like that. Usually I think of Chalmers as the philosopher hopelessly preoccupied with the mystery of consciousness, but it looks like he doesn’t let his fascination with that mystery preclude him from considering possibilities like MU. This is causing me to reassess my views of him and wonder if I should be reading more of his material.
In his paper, I think Massimo makes some important cautionary points, but I think the conclusions he draws from them are unwarranted and overconfident. Unfortunately, the paper at the link is a scanned PDF, so I can’t paste snippets and respond to them; instead you’re going to get my own quick summation of each point. But you shouldn’t take my word on these; his paper deserves to be read in full.
Massimo’s assertions are in bold, with my responses following.
**The brain is not a digital computer.**
Yep. There are some advocates of MU who do seem to think that the brain is a digital computer, but I think anyone who has done any serious reading about the brain knows that isn’t true. The brain appears to be a massively parallel loose cluster of analog processors. Instead of transistors with discrete states, it uses synapses with smoothly varying strengths.
However, the brain takes in inputs from the senses, processes and stores information, and produces outputs in the form of movement. If we built a device that did that, regardless of its architecture, we would almost certainly call it a computer.
**MU depends on the computational theory of mind, which is flawed because “we now know a number of problems” which can’t be computed.**
All indications are that the brain is a physical system that works in this universe. If there are problems that modern computers can’t process, but that the brain can, that’s a flaw with modern computer architecture. But it’s a major leap from that observation to saying that no technology could ever solve those problems.
If Massimo means that there are problems that could never be computed with any machine architecture ever, then he’s effectively saying that the human mind has a non-physical aspect outside of this universe, which I doubt is an assertion he wants to make.
**A simulated mind is not the same as a functional mind. Simulated photosynthesis doesn’t produce sugar, no matter how closely it models the actual physical process.**
Simulated photosynthesis produces simulated sugar. A human mind is evolved to produce bodily movement. A simulated mind would produce simulated movement, which might be quite satisfactory in a simulated environment. But if the hardware running the simulated mind were connected to the right machinery, it could produce actual movements, turning the “simulation” into an arguably functional mind.
Would a simulated mind produce real consciousness? (Whatever “real consciousness” is.) It depends on how far down into the mechanisms a simulation goes, and at what layer in those mechanisms that consciousness actually resides. If consciousness resides in the quantum layer, then it’s hard to see a simulation ever capturing it. If it resides in the organization of neural and synaptic circuits, then I think it’s entirely doable, someday.
**Human consciousness may be strongly dependent on its biological substrate. In other words, human consciousness may require human biology. Thinking otherwise is dualism, and dualism “has no place in modern philosophy of mind”.**
This is entirely possible, although I think Massimo’s stand on it carries far too much certitude. But even if it is true, that doesn’t mean human technology couldn’t someday duplicate that biological substrate. It may be centuries in the future, but saying it will never happen strikes me as remarkably pessimistic about what human ingenuity might eventually be able to do.
Personally, I think a human consciousness uploaded into a silicon (or whatever) substrate will be unavoidably different. The question is, would it be so different that friends and family wouldn’t recognize their loved one? Of course, if the upload was not destructive, so that the original person was still around, the differences might be more noticeable.
I think the dualism assertion is, frankly, silly. There’s no requirement for Cartesian “ghost in the machine” dualism. The only type of dualism that would be required is the software/hardware dualism you accept by running Windows, Mac OS X, Linux, or whatever, on the device you’re using to read this.
**Massimo feels it is self-evident that Captain Kirk dies every time he steps into the Star Trek transporter. Since transporting is effectively a type of MU, this means no one should want to be uploaded, at least not if it’s destructive. Destructive uploading is high-tech suicide.**
I have to admit that I wouldn’t be eager to submit to a destructive (i.e. fatal) type of uploading.
But suppose I’m on my death bed. Regardless of what I do, my current physical manifestation is about to end. If I don’t upload, my pattern will disappear from the universe. Uploading might produce an imperfect copy, but something of me would continue after I was gone, something far more intimate than my work or even my children. That version of me would consider itself to be me. If that’s all I had left, I think I’d take it.
It’s worth noting that, due to the body’s never-ending repair and waste-removal processes, the physical me that exists today isn’t the physical me from ten years ago. Every atom in my brain has been replaced over those years. My current mind is a very imperfect copy of that me from ten years ago. Actually, it’s an imperfect copy of me from yesterday. Yet, I’m never really tempted to wonder if tomorrow’s me will be the real me.
Again, I think Massimo raises a number of important cautionary points. It might turn out that MU is impossible. For example, despite all indications, we might discover that something like Cartesian dualism actually is reality. Or human consciousness might be so fragile and so tightly tuned to the body it arose in that any attempt to copy it would render it non-functional. Or it might reside in quantum layers of reality that we may never understand. But I think these possibilities are unlikely.
My own prediction is that engineers will eventually produce something that resembles MU, that the uploaded minds will be different than the biological ones, that some people will be horrified by those differences, but that most will eventually learn to live with them, and simply come to see uploading as one of existence’s transitions.
It might be several centuries before this happens. Even singularity enthusiasts see it happening in the near future only with the help of super-intelligent AIs. But for many people, MU that is physically possible, but not achievable in our lifetimes, is the worst scenario, because it means that we might be among the last generations to disappear from the universe. For these people, far better to conclude that it will never be possible.
I can understand this impulse. But if it has any hope at all of being doable within any of our lifetimes, it’s unlikely to be accomplished by those who have already decided it is impossible.