I’ve written before about panpsychism, the outlook that everything is conscious and that consciousness permeates the universe. However, that previous post was within the context of replying to a TEDx talk, and I’m not entirely satisfied with the remarks I made back then, so this is a revisit of that topic.
I’ve noted many times that I don’t think panpsychism is a productive outlook, but I’ve never said outright that it’s wrong. The reason is that with a sufficiently loose definition of consciousness, it is true. The question is how useful those loose definitions are.
But first I think a clarification is needed. Panpsychism actually seems to refer to a range of outlooks, which I’m going to simplify (perhaps overly so) into two broad positions.
The first is one I’ll call pandualism. Pandualism takes substance dualism as a starting point.
Substance dualism assumes that physics, or at least currently known physics, is insufficient to explain consciousness and the mind. Dualism ranges from the traditional religious versions to ones that posit that perhaps a new physics, often involving the quantum wave function, is necessary to explain the mind. This latter group includes people like Roger Penrose, Stuart Hameroff, and many new age spiritualists.
Pandualists solve the mind-body problem by positing that consciousness is something beyond normal physics, but that it permeates the universe, making it something like a new fundamental property of nature similar to electric charge or other fundamental forces. This group seems to include people like David Chalmers and Christof Koch.
I do think pandualism is wrong for the same reasons I think substance dualism overall is wrong. There’s no evidence for it, no observations that require it as an explanation, or even any that leave it as the best explanation. The only thing I can see going for it is that it seems to match a deep human intuition, but the history of science is one long lesson in not trusting our intuitions when they clash with observations. It’s always possible new evidence for it will emerge in the future, but until then, dualism strikes me as an epistemic dead end.
The second panpsychist position is one I’m going to call naturalistic panpsychism. This is the one that basically redefines consciousness in such a way that any system that interacts with the environment (or some other similarly basic definition) is conscious. Using such a definition, everything is conscious, including rocks, protons, storms, and robots, with the differences being the level of that consciousness.
Interestingly, naturalistic panpsychism is ontologically similar to another position I’m going to call apsychism. Apsychists don’t see consciousness as actually existing. In their view it’s an illusion, an obsolete concept similar to vitalism. We can talk in terms of intelligence, behavior, or brain functions, they might say, but introducing the word “consciousness” adds nothing to the understanding.
The difference between naturalistic panpsychism and apsychism seems to amount to language. (In this way, it seems similar to the relationship between naturalistic pantheism and atheism.) Naturalistic panpsychists prefer a more traditional language to describe cognition, while apsychists generally prefer to go more with computational or biological language. But both largely give up on finding the distinctions between conscious and non-conscious systems (aside from emergence), one by saying everything is conscious, the other that nothing is.
I personally don’t see myself as either a naturalistic panpsychist or an apsychist, although I have to admit that the apsychist outlook occasionally appeals to me. But ultimately, I think both approaches are problematic. Again, I won’t say that they’re wrong necessarily, just not productive. But their unproductiveness seems to arise from an overly broad definition of consciousness. As Peter Hankins pointed out in an Aeon thread on Philip Goff’s article on panpsychism, a definition of consciousness that leaves you seeing a dead brain as conscious is not a particularly useful one.
Good definitions, ideally, include most examples of what we intuitively think belong to a concept while excluding those we don’t. The problem is many pre-scientific concepts don’t map well to our current scientific understanding of things, and so make this a challenge. Religion, biological life, and consciousness are all concepts that seem to fall into this category.
Of course, there are seemingly simple definitions of consciousness out there, such as “subjective experience” or “something it is like”. But that apparent simplicity masks a lot of complex underpinnings. Both of these definitions imply the metacognitive ability of a system to sense its own thoughts and experiences, and the capacity to hold knowledge of them. Without this ability, what makes experience “subjective” or “like” anything?
Thomas Nagel famously pointed out that we can’t know what it’s like to be a bat, but we have to be careful about assuming that a bat knows what it’s like to be a bat. If they don’t have a metacognitive capability, bats themselves might be as clueless as we are about their inner experience, if they can even be said to have an inner experience without the ability to know they’re having it.
So, metacognition seems to factor into our intuition of consciousness. But for metacognition, also known as introspection, to exist, it needs to rest on a multilayered framework of functionality. My current view, based on the neuroscience I’ve read, is that this can be grouped into five broad layers.
The first layer, and the most basic, is reflexes. The oldest nervous systems were little more than stimulus-response systems, and instinctive emotions are the current manifestation of those reflexes. This could be considered the base programming of the system. A system with only this layer meets the standard of interacting with the environment, but then so does the still-working knee-jerk reflex of a brain-dead patient’s body.
Perception is the second layer. It includes the ability of a system to take in sensory information from distance senses (sight, hearing, smell) and build representations, image maps, and predictive models of the environment, its body, and the relationship between them. This layer dramatically increases the scope of what the reflexes can react to, expanding it from only things that touch the organism to things happening in the environment.
Attention, selective focusing of resources based on perception and reflex, is the third layer. It is an inherently action-oriented capability, so it shouldn’t be surprising that it seems to be heavily influenced by the movement-oriented parts of the brain. This layer is a system to prioritize what the reflexes will react to.
Note that with the second and third layers, perception and attention, we’ve moved well past simply interacting with the environment. Autonomous robots, such as Mars rovers and self-driving cars, are beginning to have these layers, but aren’t quite there yet. Still, if we considered these first three layers alone sufficient for consciousness, then we’d have to consider such devices conscious at least part of the time.
Imagination is the fourth layer. It includes simulations of various sensory and action scenarios, including past or future ones. Imagination seems necessary for operant learning and behavioral trade-off reasoning, both of which appear to be pervasive in the animal kingdom, with just about any vertebrate with distance senses demonstrating them to at least some extent.
Imagination, the simulation engine, is arguably what distinguishes a flexible general intelligence from a robotic rules-based one. It’s at this layer, I think, that the reflexes become emotions: dispositions to act rather than automatic actions, subject to being allowed or inhibited depending on the results of the simulations.
Only with all these layers in place does the fifth layer, introspection, metacognition, the ability of a system to perceive its own thoughts, become useful. And introspection is the defining characteristic of human consciousness. Consider that we categorize processing from any of the above layers that we can’t introspect as belonging to the unconscious or subconscious realm, and anything that we can as within consciousness.
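To make the layering concrete, here is a deliberately crude sketch of the hierarchy as a processing pipeline. This is a toy illustration, not a serious cognitive model; every class, method, and parameter name here is my own invention for the sake of the example, and each "layer" is reduced to a single trivial function.

```python
# Toy sketch of the five-layer hierarchy described above.
# All names and structures are illustrative, not an established architecture.

class Agent:
    def __init__(self):
        # Layer 1: reflexes, fixed stimulus-response rules
        self.reflexes = {"sharp_poke": "withdraw"}

    def perceive(self, raw_input):
        # Layer 2: build a simple model of the environment from sensory input
        return {"objects": raw_input}

    def attend(self, percepts):
        # Layer 3: prioritize one percept for further processing
        return max(percepts["objects"], key=lambda o: o["salience"])

    def imagine(self, focus):
        # Layer 4: simulate candidate actions and pick the best predicted outcome
        options = {"approach": focus["value"], "avoid": -focus["threat"]}
        return max(options, key=options.get)

    def introspect(self, decision):
        # Layer 5: represent the system's own processing to itself
        return f"I chose to {decision}"

agent = Agent()
percepts = agent.perceive([
    {"name": "food", "salience": 0.9, "value": 1.0, "threat": 0.1},
    {"name": "shadow", "salience": 0.4, "value": 0.0, "threat": 0.8},
])
focus = agent.attend(percepts)      # attends to "food" (highest salience)
action = agent.imagine(focus)       # simulation favors "approach"
print(agent.introspect(action))     # "I chose to approach"
```

The point of the sketch is only that each layer consumes the output of the one below it, and that introspection is useless without the lower layers already in place: `introspect` has nothing to report until perception, attention, and imagination have produced a decision.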
How widespread is metacognition in the animal kingdom? No one really knows. Animal psychologists have performed complex tests, involving the animal needing to make decisions based on what it knows about its own memory, to demonstrate that introspection exists to some degree in apes and some monkeys, but haven’t been able to do so with any other animals. A looser and more controversial standard, involving testing for behavioral uncertainty, may also show it in dolphins, and possibly even rats (although the rat study has been widely challenged on methodology).
But these tests are complex, and the animal’s overall intelligence may be a confounding variable. And anytime a test shows that only primates have a certain capability, we should be on guard against anthropocentric bias. For myself, the fact that the first four layers appear to be pervasive in the animal kingdom, albeit with extreme variance in sophistication, makes me suspect the same might be true for metacognition, but that’s admittedly very speculative. It may be that only humans and, to a lesser extent, other primates have it.
So, which layers are necessary for consciousness? If you answer one, the reflex one, then you may effectively be a panpsychist. If you say layer two, perception, then you might consider some artificial neural networks conscious. As I mentioned above, some autonomous robots are approaching layer three with attention. But if you require layer four, imagination, then only biological animals with distance senses currently seem to qualify.
And if you require layer five, metacognition, then you can only be sure that humans and, to a lesser extent, some other primates qualify. But before you reject layer five as too stringent, remember that it’s how we separate the conscious from the unconscious within human cognition.
What about the common criterion of an ability to suffer? Consider that our version of suffering is inescapably tangled up with our metacognition. Remove that metacognition, to where we wouldn’t know about our own suffering, and is it still suffering in the way we experience it?
So what do you think? Does panpsychism remain a useful outlook? Are the layers I describe here hopelessly wrong? If so, what’s another way to look at it?