This is the fifth and final post in a series inspired by Todd Feinberg and Jon Mallatt’s new book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’. The previous posts were:
- What counts as consciousness?
- Predators and the rise of sensory consciousness
- Types of sensory consciousness
- The neural mechanics of sensory consciousness
In the first post of this series, I noted that F&M (Feinberg and Mallatt) were not attempting to explain human-level consciousness, and that bears repeating. When talking about animal consciousness, we have to be careful not to project the full breadth of human experience onto them.
While a zebrafish, with its millions of neurons, has sensory consciousness, its conscious experience is not in the same league as a dog’s, with 160 million neurons in its cerebrum alone, much less that of humans, with our average of 21 billion cerebral neurons.
One analogy that might illustrate the differences is to compare 1970s-era video games, running on systems with a few kilobytes of memory, to video games in the 1990s running on megabytes of memory, and then again to modern video games with gigabytes to work with. In all cases, you experience a game, but the 1970s games were blocky dots interacting with each other (think Pong or Space Invaders), the 1990s versions were cartoons (early versions of Mortal Kombat), and the modern versions are like being immersed in a live-action movie (the latest versions of Call of Duty).
Not that the zebrafish perceives its experience as low resolution; its very ability to perceive is low resolution. It perceives reality only in the way it can and isn’t aware of the detail and meaning it misses.
All that said, the zebrafish and many other species do model their environment (exteroceptive awareness), their internal body states (interoceptive awareness), and their reflexive reactions to what’s in the models (affective awareness). These models give them an inner world, which, to the degree it’s effective, enables them to survive in their outer world.
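For the programmers out there, one way to picture these three layers of modeling is as a toy agent loop. This is only an illustrative sketch in Python; every class, threshold, and action below is my own invented stand-in, not anything proposed by F&M:

```python
# A toy illustration of the three modeling layers: exteroceptive,
# interoceptive, and affective. All names and numbers are invented
# for illustration, not taken from the book.

from dataclasses import dataclass

@dataclass
class Exteroceptive:          # model of the outside environment
    predator_distance: float  # meters to nearest threat
    food_distance: float      # meters to nearest food

@dataclass
class Interoceptive:          # model of internal body states
    energy: float             # 0.0 (starving) to 1.0 (sated)

def affect(ext: Exteroceptive, intero: Interoceptive) -> str:
    """Reflexive valence assigned to the current model state."""
    if ext.predator_distance < 2.0:
        return "fear"
    if intero.energy < 0.3 and ext.food_distance < 5.0:
        return "appetite"
    return "neutral"

def choose_action(ext: Exteroceptive, intero: Interoceptive) -> str:
    """Action driven by the models, not directly by raw stimuli."""
    feeling = affect(ext, intero)
    if feeling == "fear":
        return "flee"
    if feeling == "appetite":
        return "approach_food"
    return "explore"

ext = Exteroceptive(predator_distance=10.0, food_distance=3.0)
intero = Interoceptive(energy=0.2)
print(choose_action(ext, intero))  # -> approach_food
```

The point of the sketch is purely structural: the action is selected from the models and the valence attached to them, rather than being wired directly to the senses.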
A lot of human mental processing takes place subconsciously, which might tempt us to wonder how conscious any of the zebrafish’s processing really is. But when human consciousness is compromised by injury or disease, we become severely incapacitated, unable to navigate the world or take care of ourselves in any sustained manner. The zebrafish and related species can do those things, which indicates both that consciousness is crucial and that organisms like zebrafish and lampreys have some form of it.
Considering all this has also made me realize that what we call self-awareness isn’t an either-or thing, fully present or entirely absent. Modeling the environment seems pointless if you don’t have at least a rudimentary representation of your own physical existence and its relation to that environment. Add in awareness of internal body states and emotional reactions, and at least incipient self-awareness seems like an integral aspect of consciousness, even the most primitive kind.
(When I first started this blog, I was open to the possibility that self-awareness was something only a few species had, mostly due to the results of the mirror test. But I now think the mirror test is more about intelligence than self-awareness, measuring an animal’s ability to understand that it’s seeing itself in the mirror.)
All of which seems to indicate that many of the differences in consciousness between us and species such as lampreys are matters of degree rather than sharp distinctions. Of course, the difference between the earliest conscious creatures and pre-conscious ones is also not a sharp one. There was likely never a first conscious creature, just increasingly sophisticated senses and reflexes, gradually morphing into model-driven actions, until there were creatures we’d consider to have primitive consciousness.
This lack of a sharp break bothers many people, who want consciousness to be something objectively fundamental to reality. Some solve this dilemma with panpsychism, the view that everything in the universe has consciousness, with animals simply having it to a much greater degree than plants, rocks, or protons.
Others conclude that consciousness is an illusion, a mistaken concept that needs to go the way of biological vitalism. On this view, it’s best not to mention consciousness at all, and instead to focus on the information processing necessary to produce certain behaviors. Many scientists seem to take this approach in their professional papers.
But I’m interested in the differences between systems we intuitively see as conscious and those we don’t. Concluding that they’re all conscious, or that none of them are, doesn’t seem like progress. I think the most productive approach is to regard consciousness as a suite of information processing functions. This does mean there’s an unavoidable aspect of interpretation in deciding which systems have these functions. But that type of difficulty already exists for many other categories, such as the distinction between life and non-life (see viruses).
While F&M weren’t interested in tackling human consciousness, they were interested in addressing the hard problem of consciousness. Why does it feel “like something” to be certain kinds of systems? Why is all this information processing accompanied by experience?
I think making any progress on this question requires that we be willing to ask a closely related question: what are feelings? What exactly is experience?
The most plausible answer is that experience is the process of building, updating, and accessing these models. If we accept that answer, then the hard problem question becomes: why does this modeling happen? The second post in this series discussed an evolutionary answer.
This makes sense when you consider the broader way we use words like “experience”: to have experience with a topic is to have had extensive sensory access to it, to have achieved an expert understanding of it, in other words, to have built superior internal models of it.
I can’t say I’m optimistic that those troubled by the hard problem will accept this unpacking of the word “experience”. The reason is that experience is subjectively irreducible: we can’t experience the mechanics of how we experience, just the result, so for many, the idea that this is what experience is simply won’t ring true.
The flip side of the subjective irreducibility of experience is that an observer of a system can never directly access that system’s subjective state, can never truly know its internal experience or feelings. We can never know what it’s like to be a bat, no matter how much we learn about its nervous system.
While F&M acknowledge that this subjective-objective divide can’t be closed, they express hope that it can be bridged. I fear the best that can be done with it is to clarify it, but maybe that’s what they mean by “bridged”. Those who regard the divide as a problem will likely continue to do so. Myself, I’ve always regarded the divide as a very profound fact, but not an obstacle to an objective understanding of consciousness.
In conclusion, F&M’s broader evolutionary approach has woken me from my anthropocentric slumber, changing my views on consciousness in two major ways. First, it’s not enough for a system to model itself for us to consider it conscious; it must also model its environment and the relation between the two, in essence build an inner world as a guide to its actions. Second, that modeling can be orders of magnitude less sophisticated than what humans do and still trigger our intuition of a fellow conscious being.
Which seems to lower the bar for achieving minimal consciousness in a technological system. Unless we find a compelling reason to narrow our definition of consciousness, it seems plausible that some autonomous robotic systems have a primitive form of it, albeit without biological motivations. Self-driving cars are the obvious example: systems that build models of their environment as a guide to their actions.
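As a rough illustration of the parallel, and nothing more, here is the perceive-model-act loop I have in mind, in the same schematic style as the earlier sketch. It doesn’t resemble any real autonomous-vehicle stack; every name and number is a hypothetical placeholder:

```python
# A schematic perceive -> model -> act loop, the structure shared by
# simple conscious organisms and autonomous robotic systems. All
# names and numbers are hypothetical placeholders, not a real stack.

def sense() -> dict:
    """Stand-in for cameras, lidar, radar, etc."""
    return {"obstacle_ahead_m": 12.0, "speed_mps": 8.0}

def update_world_model(model: dict, observation: dict) -> dict:
    """Fold new observations into the persistent model of the environment."""
    model.update(observation)
    return model

def act(model: dict) -> str:
    """Select an action from the model, not directly from raw sensor data."""
    # stopping distance at an assumed 4 m/s^2 of braking
    stopping_distance = model["speed_mps"] ** 2 / (2 * 4.0)
    return "brake" if model["obstacle_ahead_m"] < stopping_distance else "cruise"

world_model: dict = {}
for _ in range(3):  # a few ticks of the control loop
    world_model = update_world_model(world_model, sense())
    print(act(world_model))  # -> cruise (12 m > 8 m stopping distance)
```

The car, like the lamprey, acts on its model rather than on raw stimulation. Whether that similarity is enough to count as primitive consciousness is exactly the question.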
Unless, of course, I’m overlooking something?