In any online conversation about consciousness, sooner or later someone is going to bring up philosophical zombies as an argument that consciousness, or at least some portion of it, is non-physical. The Stanford Encyclopedia of Philosophy introduces the p-zombie concept as follows:
Zombies in philosophy are imaginary creatures designed to illuminate problems about consciousness and its relation to the physical world. Unlike those in films or witchcraft, they are exactly like us in all physical respects but without conscious experiences: by definition there is ‘nothing it is like’ to be a zombie. Yet zombies behave just like us, and some even spend a lot of time discussing consciousness.
Few people, if any, think zombies actually exist. But many hold they are at least conceivable, and some that they are possible. It seems that if zombies really are possible, then physicalism is false and some kind of dualism is true. For many philosophers that is the chief importance of the zombie idea.
This is the classic version: a being identical, atom for atom, to a conscious one, but with no conscious experience.
The biggest problem with p-zombies is that the premise of the idea presupposes its purported conclusion: that some aspect of the mind is non-physical. If you remove the assumption of some form of substance dualism, the concept collapses. It becomes incoherent, a proposition similar to asserting that we can sum 2+2 and not get 4.
So, right off the bat, this classic version of the thought experiment seems like a failure, a circular argument, and for a long time that’s pretty much all the thought I gave to it. But I recently realized that classic p-zombies have a deeper problem. Even if you fully accept the dualism premise, it has another assumption, one that does more damage and ultimately makes the concept incoherent.
For the p-zombie concept to work, conscious experience must be an epiphenomenon, something that exists entirely outside the causal framework that produces behavior. If consciousness is not an epiphenomenon, then its absence would make a difference in the p-zombie’s behavior, which is exactly what is not supposed to happen with a p-zombie.
Here’s the problem. We know epiphenomenalism is false. How? Well, if it’s true, then how can we discuss conscious experience? Somehow, the language centers of our brains send signals to the motor cortex that drive our speech muscles to make sounds about it. Somehow signals are sent to my fingers so I can type this blog post, or similar signals are sent to your fingers if you decide to comment on it.
Whatever else it might be, conscious experience must be part of the causal framework that eventually leads to behavior. It has causal influence on the language centers of the brain if nowhere else, but that’s enough to have causal effects in the world. Epiphenomenalism cannot be true.
Without epiphenomenalism, it seems like the classic premise of the p-zombie collapses, even for dualists.
Now, maybe we can rescue the zombie concept somewhat if we retreat a bit from the classic conception and instead think about behavioral zombies. Unlike the classic version, b-zombies are allowed to be physically different from a conscious version of the being. It’s only in behavior that this kind of zombie is indistinguishable.
A computerized b-zombie seems trivial to build if we only need to momentarily fool an observer. However, the inability of any automated chat-bot system to legitimately pass even the most common (and weakest) form of the Turing test demonstrates that the difficulty quickly escalates. (In the most commonly pursued version of the test, success is fooling only 30% of human subjects after five minutes of conversation.) Reliably fooling reasonably sophisticated observers for days, weeks, or months is not possible with any kind of current technology.
The difficulty here is that the longer the b-zombie can keep up the charade, the higher the probability that it isn’t actually a charade, that it is in fact implementing some alternate architecture for consciousness. Of course, to a substance dualist, physically implemented consciousness isn’t real consciousness. It’s a facade that mimics the results (including the ability to discuss conscious experience) but doesn’t include the actual qualia associated with it, no matter how much the zombie might insist that it does.
So unlike classic p-zombies, b-zombies are more logically coherent. They avoid the problem with epiphenomenalism since they can replace the putative non-physical aspect of consciousness with a physical implementation. But the conceptual existence of a b-zombie doesn’t have the same implications against physicalism, since even if consciousness is fully physical, it’s possible that an alternate architecture might produce conscious-seeming behavior without conscious experience.
However, as with any conscious system, external observers could never actually access the putative b-zombie’s internal subjective experience, assuming it had one, no matter how much they knew about its internals. Which means that there would be no objective criteria that could ever be used to know whether a successful b-zombie was actually a zombie or a conscious being. (This was largely Alan Turing’s point when he first proposed the Turing test.)
This last point tends to make me view the idea of zombies overall as fairly pointless. It’s the classic problem of other minds. We can never know for sure that anyone other than ourselves is conscious. It seems reasonable to conclude that other mentally complete humans are, but everything else is up for debate. We’re forced to rely on our intuitions for babies, animals, or any other system that might act conscious-like.
Of course, caution is called for. Historically, those intuitions have often led us astray. Humans once saw consciousness in all kinds of things: rivers, volcanoes, storms, and many other phenomena whose effects often seemed arbitrary and capricious, leading us to conclude that there was some god or spirit behind them. We have to take care that our intuitions are well informed.
But consciousness, once we establish that it can’t be an epiphenomenon, that it is definitely part of the framework that produces behavior, must have evolved because it had some adaptive value. That implies that using behavior to assess its presence or absence is a sound approach, as long as that assessment is rigorous.
Unless of course I’m missing something?