Back in September (which now seems like a million years ago), I did a series of posts on consciousness inspired by Todd Feinberg and Jon Mallatt’s recent book, ‘The Ancient Origins of Consciousness: How the Brain Created Experience’. In that series, I explored consciousness as a system modeling its environment and itself as a guide to action. My enthusiastic run of five posts reflected how much F&M’s excellent book had shaken me out of my anthropocentric views.
But that series had a glaring omission, and some of the people I’ve referred it to have called me on it. F&M’s book was focused on animal consciousness and its evolution. As I noted in the first post, while that broad approach had a lot of benefits, it had one big drawback. Animals can’t describe their conscious experience, most notably which behaviors they perform consciously versus unconsciously. As a result, this particular boundary wasn’t addressed in F&M’s book, and I only briefly alluded to it in the series.
This post on that boundary is admittedly my own speculation, informed by F&M’s book, but also by an outstanding article at Aeon by Anil K. Seth. My long-time readers will know that I’ve historically put a good deal of stock in metacognitive theories such as Michael Graziano’s attention schema or Michael Gazzaniga’s interpreter, where consciousness is a model of some aspects of the internal processing of the brain. I still think these metacognitive models exist (it seems we use them anytime we have these discussions), but I’m less sure they’re the sole crucial ingredient, although they could still be one ingredient among several.
Okay, so consider an early pre-Cambrian animal. This animal doesn’t have a brain or even a spinal cord, but it does have a nerve net, with sensory neurons connecting directly to motor neurons. If the animal receives a sensory stimulus (such as touch or maybe a chemical gradient), it triggers a signal to the motor neurons resulting in movement. In this nervous system, stimulus A results in action A, stimulus B results in action B, etc. While some conditioning can modify the processing, there’s no consciousness here, just reflex actions.
Later species such as chordates developed a spinal cord. This centralized cord allowed for a combination of sensory inputs to lead to combinations of actions. So events A and B resulted in actions A and B. Again, these actions, while modifiable by conditioning (a primitive form of learning), were still basically reflex actions. Very few people (aside from panpsychists) think we’re at consciousness yet.
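To make the reflex idea concrete, here’s a toy sketch of my own (not anything from F&M, and the stimulus and action names are invented): a nerve net behaves like a fixed lookup from stimulus to action, with nothing intervening between the two.

```python
# Toy illustration of a reflex-only nervous system: each stimulus
# maps directly to one action, with no deliberation in between.
REFLEXES = {
    "touch": "withdraw",
    "chemical_gradient": "move_toward",
    "light_change": "contract",
}

def react(stimulus):
    """A pure stimulus-response mapping: no model, no choice."""
    return REFLEXES.get(stimulus)  # unrecognized stimuli produce no action

print(react("touch"))  # withdraw
```

The point of the sketch is what’s missing: there is no representation of the world and no weighing of alternatives, just a wired-in mapping that conditioning can tweak but not deliberate over.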
As animals began to develop distance senses (eyesight, hearing, smell), the amount of information available to the reflexes began to increase dramatically. This led to the spinal cord enlarging near those distance senses so the information from them could be quickly processed. The distance senses led to the creation of image maps, exteroceptive models of the environment. The mental reflexes described above now reacted to information in the models rather than directly to sensory inputs.
These exteroceptive models of the environment, along with the interoceptive models of the animal’s body state, formed an inner world. They provided the foundation of conscious experience. But I’m not sure they are what we would call consciousness. Another ingredient was necessary.
The large amount of information caused a problem. The models resulted in situations where the quantity of action reflexes triggered by a particular set of circumstances could be large, with some of those triggered actions incompatible with others. For example, an early Cambrian fish might see food off in the distance, which triggers a desire to approach and eat it, but not much farther beyond the food is a predator, which triggers a desire to flee.
Our fish can’t do both actions. It could follow the stronger impulse. If it’s eaten recently, perhaps the urge to flee is stronger and that’s what it does. But maybe it’s desperately hungry, so it does attempt to get the food and risk getting close to the predator.
But given the life and death circumstances, our fish needs a new ability. It needs to be able to simulate what might happen if it takes certain actions. The ability to be aware of its own primal reflexive desires, in other words to do affect modeling, and then to do trade-off processing on which desire to act on, would have provided a survival advantage. This trade-off processing would involve running simulations: if action A is taken, it will result in consequence A, if action B, consequence B, etc.
In other words, the fish needs the ability to do predictive modeling on various possible courses of action, courses of action that would result from following each of its triggered action impulses. The consequences revealed by each simulation are evaluated in turn by the limbic system (or fish equivalent), each resulting in its own negative or positive affect, in other words, an evaluation of whether the consequence is desirable or undesirable, “good” or “bad” for the organism.
It’s this trade-off processing, this ability to simulate different courses of action, to do predictive modeling, that I’m suspecting is at the heart of what consciousness is. This modeling would have been very simple in the earliest conscious creatures, but increased steadily over hundreds of millions of years in sophistication and capacity. But at all times, it would have been the same basic functionality, simulations of possible courses of action as a guide to movement decisions.
Some of the predictive modeling would have involved simulating past sensory experiences, in other words, episodic memory. It’s important to understand that episodic memory isn’t a recording, but a reconstruction of past sensory events, a simulation. That’s why memory is so unreliable. But it’s effective as an aid to the trade-off processing I’m talking about.
Consider what requires our own conscious awareness and what doesn’t. I can often drive to work without being conscious of what I’m doing. I’ve driven to work so many times that I can now do it in a habitual slumber. More precisely, the non-conscious aspects of my mind have been conditioned so that they supply the right movement decisions when presented with each specific stimulus of the driving-to-work experience. Most of the time, this frees my mind to think about other things, to do simulations on other situations, like maybe what I’m going to do when I get to work, or maybe to mull over that show I watched last night.
But then I suddenly run into severe traffic. Now I “wake up” and have to think about what I’m going to do. Can I get off the main highway and find an alternate route to get around the traffic? I now need to simulate various courses of action. I am “aware” and “thinking” about the drive now. I am conscious of it now.
Or perhaps the drive is going normally, but I’m doing it in a borrowed car, perhaps a type and model I’ve never driven before that handles differently than what I’m used to. Now my simulation engine is engaged in the minutia of the driving mechanics, and will be until handling the new vehicle becomes “natural”, that is, until it can be done without the need for constant simulations, without the need for conscious control.
On the other hand, I might be driving to work in my habitual slumber, and suddenly there is a wreck happening and split-second decisions are necessary. There is no time for conscious deliberation, no time for simulations; I have to just use whichever unconscious impulses are strongest. Although later simulations of the event will almost certainly be done.
If this view is right, then consciousness is a simulation engine, a prediction mechanism designed to serve as a guide to action, allowing an animal to subjectively travel backward or forward in time as it ponders movement decisions. In humans, the simulations would likely be initiated by the prefrontal cortex but heavily involve the modeling aspects of the sensory processing regions, with the limbic system providing the evaluative aspects.
I stated at the beginning of this post that it was speculative, and it is. But the predictive modeling, the simulations, certainly take place in some form or fashion. The speculative aspect is that the simulations are consciousness, that what lies outside of them is what we call the subconscious or unconscious, and what is in them are the contents of consciousness.
Given this speculation, I’d be very interested in any critiques, in particular in any examples that demonstrably violate this proposition. In other words, what have I overlooked?