One of the things I get reminded of every few years is that difficult determinations often look clearer when you consider them in a wider scope. Years ago, when I was trying to figure out whether conservative or progressive political policies were better, I discovered that widening my investigation to history helped immensely, and widening even further to the history of other developed countries helped even more. Many of the typical conservative hangups looked parochial in that broader context.
The same thing happened when I was trying to decide how worried to be about artificial intelligence. Many of the people who are worried about it are familiar with the technology, and their concerns carry weight with the general population. But learning about neuroscience and evolutionary psychology put those concerns in a much broader context and, at least for me, rendered most of them moot.
Consciousness is one of those topics that people have been writing and debating about for centuries. But I’ve found that many of the philosophical ideas often kicked around wither in the light of neurological case studies and overall neuroscience. We’ve gained a lot of insight into consciousness by looking carefully at the human brain, particularly the cases where it gets damaged or malfunctions in some way. But an even broader approach is to look at consciousness in animals, particularly in terms of evolution.
This is the approach used by Todd Feinberg and Jon Mallatt in their new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’. (This is the first of what I hope will be a series of posts inspired by their fascinating, albeit technical, book.) The good thing about studying animal consciousness is that it gives a much broader array of systems to study. And animals can be studied in many more ways, ways that are often ethically unacceptable for humans. (Some of those ways I personally find unacceptable, but the knowledge gleaned from them is real.)
Of course, the biggest issue with studying animal consciousness is that we lose the primary advantage of focusing on human consciousness. We know that we ourselves are conscious, and it is uncontroversial to assume that all mentally complete humans are also conscious.
But the farther we move away from healthy adult Homo sapiens, the more tenuous this assumption becomes. We have to be careful not to project our own experience onto animals or other systems. It’s reasonable to assume that animal experience is not human experience, particularly as we move down the intelligence chain. This raises an interesting question: how much of human experience can we dispense with and still coherently use the label “consciousness”?
In their book, Feinberg and Mallatt make clear that they’re not attempting to explain human level consciousness, but that they are aiming for the hard problem of consciousness, the one that asks why there is “something it is like” to be a conscious being. They equate this with what they consider to be primary or sensory consciousness.
But it’s not clear to me that what we mean by “something it is like” is so easily divorced from higher level consciousness capabilities. It might be that without the ability to reflect on its experience, it is not necessarily “like anything” to be one of these creatures. As Thomas Nagel pointed out years ago, we can never know what it’s like to be a bat. But it’s possible that neither can an actual bat, if it doesn’t have at least some level of introspective ability. For that and other reasons, we have to be cautious in assuming that animals have an inner experience.
Still, anyone who has ever cared for a pet knows that the intuition of animal consciousness is very powerful. Whatever mental life animals possess, we sense in them fellow beings in a way that we don’t sense with plants or computer systems. This isn’t true of all animals of course. I don’t really sense any consciousness in a worm, a starfish, or an oyster, which makes sense since none of these animals have brains.
But pretty much any animal with eyes tends to trigger my intuition that there is some inner life there, something that is seeing and has some kind of intentionality, a worldview of some kind, even if it’s a limited one. This is a common intuition, which is why it’s not unusual for movies to show an opening eye to indicate that some thinking, feeling thing is present. According to Feinberg and Mallatt, this turns out to be a reasonably good indicator.
High resolution eyes with lenses, as opposed to simple light sensors, are costly constructs in terms of complexity and energy, and evolution rarely wastes such resources. But without mental images, eyes would in fact be a waste. And mental imagery, another costly feature, would itself be useless without modelling of external objects and the environment, along with the animal’s body and its interactions with that environment. And that modelling would itself be useless without being a guide to possible actions the animal might take.
None of this is to say that the modelling done by the brain of a lamprey, one of the simplest vertebrates that Feinberg and Mallatt conclude may be conscious, is anything like that done by a human brain. Without a doubt, the lamprey’s models are far less rich, but then the lamprey has no real need of human like models. All that matters is whether its models are effective in allowing it to navigate its environment, and they generally appear to be so.
But do these capabilities count as consciousness? A lamprey doesn’t have a cerebrum, where human and mammal consciousness appears to reside, and sub-cortical processes in humans are below the level of consciousness. But a mammal with its cerebrum removed or destroyed is a severely disabled creature, without the ability to navigate its world and survive on its own. A lamprey does have that ability, indicating that the necessary modelling is still taking place somewhere in its more primitive brain.
This makes sense from an evolutionary point of view. Primary consciousness must have some adaptive value. It seems reasonable (although admittedly speculative) to assume that it is consciousness which allows animals to have a wide repertoire of available actions to navigate the world, find food and mates, and avoid predators. These capabilities were likely important catalysts leading to the evolution of complex brains, and consciousness, during the Cambrian explosion.
We’re not talking here about human level consciousness; Feinberg and Mallatt use the analogy of an airliner and an ox cart. The experience of riding in an ox cart is not the experience of riding in an airliner, but they’re both transportation. Likewise, the experience of a lamprey is not like the experience of a human, except that they both have experience.
But again, does this really count as consciousness? What I’ve alluded to here is called exteroceptive consciousness, one of three types of primary consciousness described in Feinberg and Mallatt’s book. The other two are interoceptive consciousness and affective consciousness; I’ll describe all three in more detail in another post. But after some consideration, I’m inclined to accept it as part of core consciousness, although I would completely understand if someone insisted on the label “proto-consciousness”. Ultimately, the exact labeling here is a matter of convention on how to discuss a certain point on the evolutionary spectrum.
But this raises another interesting question. Is the Google self-driving car conscious? It doesn’t have eyes exactly, but it does use LIDAR to model its environment, and its own interactions with that environment. Of course, the Google car’s models are currently far less effective than a lamprey’s, at least relative to their respective environments, and the motivations of a self-driving car are very different from those of a living animal. But as Google and other technology companies improve these systems, might we eventually reach a point where it makes sense to consider them to have a sort of primal consciousness?