David Chalmers, in his book Reality+: Virtual Worlds and the Problems of Philosophy, eventually gets around to addressing the 800-pound gorilla in the room for any discussion of the simulation hypothesis: can consciousness itself be simulated, and if so, would the resulting entity be conscious?
This issue, I think, is what makes many react with far sharper incredulity to this hypothesis than they do to other speculative technologies like interstellar travel, nanotechnology, or flying cars. It’s one thing to imagine biological humans wired into a simulation, as in The Matrix. It’s a whole other matter to imagine simulating the humans themselves.
Myself, I’m pretty much a stone-cold functionalist. As far as I can see, if a machine can reproduce the functionality of a conscious system, then it will have reproduced consciousness. Which isn’t to say that reproducing that functionality is in any way trivial. We’re probably decades, maybe even centuries, away from being able to do it. But I can’t see anything in principle to prevent it. (Obviously new discoveries could reveal blockers at any time.)
Chalmers, on the other hand, is the philosopher who coined the “hard problem of consciousness” and popularized the philosophical zombie thought experiment. He’s argued extensively that no physical explanation for consciousness is possible. For him, it can only be explained either by invoking non-physical forces or by expanding our concept of what physics is. In this view, a theory of consciousness would be similar to a fundamental theory of physics, composed of brute irreducible facts that we simply have to accept.
All of which seems like it would make him skeptical of machine consciousness. But Chalmers’ stance has always been nuanced. He’s a dualist, but a property dualist rather than a substance dualist. He doesn’t equate consciousness with functionality alone, but he does see it as something that “coheres” with the right functionality and organization. It’s a view he reconciles with science. So he’s long been open to the possibility that this non-physical or physics-expanding ontology can be present in a machine.
Operationally, this nuanced view seems like a combination of functionalism and identity theory. A straight functionalist can be open to the same functionality being implemented with alternative strategies. Chalmers’ extra metaphysical commitment makes him care more about the specific organizational structure of the brain, and it seems to make the question of mind uploading harder to postpone.
He notes that the philosophical problem of other minds means we can never know for sure whether a machine or uploaded entity is actually conscious. Being uploaded, he muses, might just mean the creation of a philosophical zombie, even if the process produces an entity with similar behavior, one that talks about its own consciousness.
In an attempt to work through this, Chalmers goes through a thought experiment (which some of you have already been discussing in the comments) where we replace our brain’s neurons one at a time with artificial technological ones. He asks: what happens to our consciousness during this process?
It seems implausible that consciousness disappears with the replacement of the first neuron, or the last, or any one neuron in particular. Maybe it gradually fades away, but if our behavior and capabilities are preserved, then we wouldn’t be aware of it fading; we would in fact be massively out of touch with our own experience. Chalmers also finds this implausible. In his view the most likely scenario is that our consciousness continues the entire time.
Interestingly, a functionalist might be open to a more aggressive version of this scenario. Imagine having cybernetic implants installed to reproduce functionality lost from strokes or other injuries. If someone’s visual cortex is damaged, maybe we replace it with an implant that provides similar functionality. If later their amygdala is destroyed, we replace it with an implant too. Over time, every part of the brain gets replaced with something providing the same functionality as the lost part. This might be part of an overall process happening all over the body.
At what point, if any, does the person’s consciousness end? I wonder how Chalmers’ intuitions would change with this version, since it wouldn’t preserve the fine-grained organizational structure.
In either case, rather than an abrupt copy, we evolve the mind from one substrate to the other. It’s easy to see that there would be changes along the way, so that the final resulting mind has differences from the original. But I have differences from the me of ten years ago. I regard myself as the same person because of the continuity between us, even though most of the atoms in that earlier me aren’t present anymore. It seems natural to take the same stance toward the gradual replacement.
Of course, these thought experiments can’t provide any kind of authoritative answer to the question of whether consciousness can be produced in a machine. Like any philosophical thought experiment, all they can do is exercise our intuitions about scenarios where the functionality, and possibly the organizational structure, are successfully reproduced. Many will simply reject that these scenarios are possible.
The common sentiment here is that we only have evidence for consciousness in organic brains, and that assuming it can exist anywhere else is hasty, if not hopelessly misguided. But it’s worth noting that, strictly speaking, even for a non-functionalist, we only have direct evidence for our own consciousness. Consciousness anywhere else has to be inferred from the behavior and functionality typically associated with it. And exactly which behavior and functionality is a controversial question.
For whatever functionality we decide is sufficient, the question remains whether it can be reproduced in technology. Here it’s also worth noting all the things that used to be possible only with natural brains: calculating ballistic tables, recognizing faces, playing chess, or any of a wide and ever-increasing set of capabilities. Maybe we’ll eventually hit an insurmountable obstacle on what functionality can be reproduced, but there doesn’t seem to be any current reason to assume one. For any specific functionality at least, it’s eventually going to be an empirical question.
Unless of course I’m missing something?