Sean Carroll’s latest episode of his podcast, Mindscape, features an interview with neuroscientist Malcolm MacIver, one well worth checking out for anyone interested in consciousness.
Consciousness has many aspects, from experience to wakefulness to self-awareness. One aspect is imagination: our minds can conjure up multiple hypothetical futures to help us decide which choices we should make. Where did that ability come from? Today’s guest, Malcolm MacIver, pinpoints an important transition in the evolution of consciousness to when fish first climbed on to land, and could suddenly see much farther, which in turn made it advantageous to plan further in advance. If this idea is true, it might help us understand some of the abilities and limitations of our cognitive capacities, with potentially important ramifications for our future as a species.
The episode is about 80 minutes long. If your time is limited, there’s a transcript at the linked page.
MacIver largely equates imagination, the ability to plan, to think, to remember episodic memories and to simulate possible courses of action, with consciousness. I can see where he’s coming from. I’ve toyed with that idea myself. (I don’t use the word “imagination” in the linked post, but that’s what’s being discussed.)
But while I think imagination is an important component of consciousness, meeting a lot of the attributes many of us intuitively associate with it, it doesn’t appear to be the whole show. This is one reason why I often talk about a hierarchy of consciousness:
- Reflexes: survival circuits, primal instinctive reactions to stimuli
- Perception: predictive models of the environment based on sensory input, increasing the scope of what the reflexes react to
- Attention: prioritization of what the reflexes react to
- Imagination / sentience: simulations of possible courses of action based on reflexive reactions, decoupling the reflexes so that they become affective feelings
- Metacognitive self awareness / symbolic thought
The consciousness of a healthy mature human contains this entire hierarchy. Most vertebrates have 1-4, although as MacIver discusses, the imagination of fish is very limited, usually only providing a second or two of advance planning. Land animals have more, although most can only plan a few minutes into their future. The more intelligent mammals and birds can plan further. But to plan weeks, months, or years in the future seems to require the volitional symbolic thought that only humans seem to possess.
But many of us, if presented with an animal who only has 1-3, will still regard it as conscious to at least some degree. This is particularly true with humans who, due to brain pathologies, may lose 4 and 5. The fact that they are still aware of their environment and can respond habitually or reflexively to things still triggers most people’s intuition of consciousness.
Which view is right? Which layers must be present for consciousness? I don’t think there’s a fact-of-the-matter answer. Unless, of course, I’m missing something?
h/t James of Seattle
25 thoughts on “Malcolm MacIver on imagination and consciousness”
I’m glad to see that people are trying to figure out consciousness. I wonder if the answer is simple or complex. I read about one theory that the entire universe fits into one huge equation.
I think the answer will be complex. People used to wonder about the difference between life and non-life; today we know it comes down to chemistry and organization, but biochemistry is hideously complex. I’m expecting us to find the same for consciousness.
Heh, well, we’ve talked about this, and I’ve blogged about it, too. I’m very much onboard with imagination; I do think it’s one thing that sets us apart from all other “conscious” beings. As I wrote about, I think imaginative minds may be the one non-deterministic thing in the universe.
I’m not quite sure what the question is that you’re posing. Defining consciousness seems very much like the tree falling in the woods making a sound: it really depends on what you mean.
As we’ve also talked about, I believe more in “the gap” than “the spectrum” so I enjoy a clear distinction between “lower” forms of consciousness and our “higher” one. Clearly all the layers are necessary for higher consciousness (you say that yourself, I think).
Whatever it is, I think consciousness will turn out to be irreducible and holistic. These layers or functional blocks may well turn out to be just metaphors that describe parts of the whole, but it is only in the (physical) operation of the whole that consciousness emerges.
I think there are aspects of holism, but it’s not absolute. For example, perception and attention, at least bottom-up attention, seem so thoroughly linked that I occasionally debate whether having them as separate layers is productive. And imagination seems to dramatically enhance perception and attention.
But the thing about consciousness seems to be that you can lose large scale components of it and still be conscious. As long as you don’t lose too much, consciousness can continue, or at least our intuition of consciousness continues.
On the gap, I think the main difference is symbolic thought, the ability to substitute a symbol for complex sensory or action concepts. Planning far into the future seems difficult if you don’t have a concept of months, years, etc. And substituting symbols for mental concepts means having access to those concepts: metacognition.
Of course, all of this assumes we’re not just delusional with the dolphins indulging us to keep us happy while they ponder the secrets of the universe.
“And imagination seems to dramatically enhance perception and attention.”
I can think of situations where being unimaginative allows one to really focus, whereas imagination can be a distraction. It can also serve as a distraction when our pattern-matching imagines patterns that don’t exist.
But that probably speaks more to the difficulties of discussing an irreducible thing with words. It’s kind of the “blind men and elephant” problem. We see only parts, dimly perceived, and haven’t grasped the whole beast, yet.
“But the thing about consciousness seems to be that you can lose large scale components of it and still be conscious.”
Perhaps like an auto engine that can keep on working despite various kinds of damage or part removal. Performance can be impaired, but the engine still cranks out revs.
If the goal was saying which part of the engine is “the core” without which the engine cannot be an engine, I think there is no specific thing we can name. There are many. Bad fuel? No engine. Broken crankshaft? No engine. All sparking plug wires cut? No engine. And so on.
There is no fundamental part or sub-system that’s “really” the engine. The whole thing is the engine. Some parts are more crucial than others, but all parts are necessary for a completely healthy engine.
(Of course, auto engineers spend their lifetimes getting to know and understand those parts. If we ever got that far, we might be able to build some really interesting “engines.”)
“On the gap, I think the main difference is symbolic thought,…”
Certainly a necessary part of imagination! I like to think there’s a bit more to it than just manipulating symbols, though. 😀
(Pssst. Don’t let the dolphins know you’re on to them. People have been known to go swimming and are never heard from again.)
I think we have to be careful on the difference between perception and imagination. Perception is building a model based on sensory inputs. Often that model is more about what you expect to see than the sensory information coming in from the eyes, although the sensory information provides an ongoing error correction. But that’s why two witnesses to the same event can come away with wildly different accounts. Often they genuinely had wildly different perceptions.
Imagination is simulating perceptions, either to remember an episodic event or to plan a new one. It can often involve things we would never actually perceive, such as Superman, or a pink bear with butterfly wings. (Unless either of those things is in front of you right now, but if so I want to know where you are or what kind of drugs you’re using 🙂 )
I see what you’re getting at with the engine analogy. Another one might be the fleet of UPS vehicles (planes, trucks, etc.). Any one UPS vehicle can be down and UPS will still function. Even if 20% of the vehicles are down, UPS may well still function (albeit at reduced capacity). Even if 60% of the vehicles are down, UPS may still be able to function in some capacity. On the other hand, if all of UPS’s delivery planes are down (maybe UPS really invested in 737 Max planes), it might not matter much that all the trucks are working.
On manipulating symbols, remember that it is much more than just symbol manipulation, it’s manipulation of symbols given meaning via imagination, attention, perception, and primal reactions. It’s the tip of a hierarchy of functionality, much of which we still can’t touch technologically. But that tip dramatically extends the capabilities of a hominid from the African savanna.
“I think we have to be careful on the difference between perception and imagination.”
Yep. They different!
“On manipulating symbols, remember that it is much more than just symbol manipulation,…”
Yes, that’s all I was getting at. (They different! 😀 )
I take it from this that fish are totally useless at chess, right?
Depends on your definitions of “chess” and “useless” 🙂
I’ll admit that the subject of consciousness is something I have a hard time wrapping my head around as effectively as some people. You and Hariod above definitely seem better versed in the literature.
Nevertheless I had a very similar idea to what’s being expressed here. After listening to an interview with Anil Seth on Sam Harris’ podcast, I even sent this exciting new idea of consciousness to Anil Seth. He never responded, which means he either thought I was an idiot or such a genius that he will steal my idea. I am not going to be angry with him. This is his life’s work, and he will deserve the credit because he’s going to have to do all the experimentation to prove it true. I have young children and don’t have the time for such things. lol
However, it is the subject of time that got me thinking along these lines. I am very interested in time, both its history and how we perceive it.

In Seth’s interview he was talking about the fundamental ways we define consciousness. I don’t remember the details of the start of that inspiration, but it basically boiled down to consciousness being tied to our ability to predict. If you want a more physiological basis for this idea, I remember that there was a particular area of the brain he talked about as seemingly important to consciousness, which is also an area that seems important for our perception of time. If I were to say what one thing really separates us from other life, it is that our ability to predict is much stronger. Now maybe I don’t have a higher level of consciousness than a dog, but it does seem that way, and I think I am much better at predicting than a dog on average. I may not experience things in quite the same way as a dog, and a dog may have certain advantages in certain situations, but overall I’m going to put my predictive skills against a dog’s and I should likely win. 🙂

After I had this idea, I talked to some of my colleagues in the biology department about the predictive capabilities of both animals and plants. After those discussions it seemed to me that consciousness may not be a jump that happens at a certain brain size and number of neurons, but more of a continuum, where even plants could be said to have some low level of consciousness based on predictive abilities. So I got quite excited by all this, because if it’s true, it’s quite profound, and while it may not be a new idea, I felt like it was all mine. But of course I am not very well read on the subject, so I just tried to enjoy the excitement and leave it at that. 🙂
I think you’re on the right track with prediction, and you’ll either be delighted or disappointed to know that a lot of neuroscientists consider consciousness to be prediction. In this view, which I think Seth represents, the nervous system has reflexes, and it has reflexes modified by prediction circuits. In terms of the hierarchy I described in the post, everything after layer 1 is prediction.
When thinking about this, you have to remember that we’re not talking about conscious prediction, like me attempting to predict who will win a game, but about what kind of functionality is being added to a system that starts out as nothing but reflex arcs, that is, programmatic circuits.
When a fish perceives something in front of it, that perception is actually a prediction framework. Is the object food? A predator? Irrelevant to its needs? Get this prediction wrong and it could lead to the fish passing up a meal, or being eaten.
Suppose our fish sees some food but also a predator. Should it attempt to eat the food or flee? If it then imagines what might happen in each scenario, with reflexive reactions to each predicted outcome, it may then allow the eat reaction and inhibit (temporarily) the flee one, or inhibit the eat one and allow the flee one.
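The eat-or-flee scenario can be sketched as a toy program, with layer-1 reflexes as a simple stimulus-to-action mapping and layer-4 imagination as a step that scores the predicted outcome of each triggered reflex before releasing one. The stimulus names and payoff numbers are made up for illustration, not anything from the post or the podcast.

```python
# Toy sketch: a reflex-only agent vs. one that gates its reflexes
# by simulating the predicted outcome of each candidate action.
# Stimuli, actions, and payoffs here are all illustrative assumptions.

REFLEXES = {"food": "eat", "predator": "flee"}

# Hypothetical predicted payoffs when the scene contains both
# food and a predator.
PREDICTED_OUTCOME = {
    "eat":  -10,  # likely to be eaten while eating
    "flee": +1,   # survives, but passes up the meal
}

def reflex_agent(stimuli):
    """Pure layer-1 behavior: fire the reflex for the first stimulus seen."""
    return REFLEXES[stimuli[0]]

def imaginative_agent(stimuli):
    """Layer-4 behavior: simulate each triggered reflex's outcome,
    allow the best one, and inhibit the rest."""
    candidates = [REFLEXES[s] for s in stimuli]
    return max(candidates, key=lambda a: PREDICTED_OUTCOME[a])

scene = ["food", "predator"]
print(reflex_agent(scene))       # eat: the reflex wins, badly
print(imaginative_agent(scene))  # flee: simulation inhibits the eat reflex
```

The point of the sketch is only the structure: the imaginative agent uses the very same reflexes, just decoupled from immediate execution by the predicted-outcome loop.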
The nice thing about this view is it allows us to see how consciousness and intelligence could evolve. It would have started with very simple predictions and built from there.
The only thing I would ask here is, what leads you to conclude that plants make predictions?
That’s good news. I guess I meant to also add, more clearly, that our ability to perceive both the pace and the duration of events also seems advanced compared to other species, which I think is tied in to our increased ability to predict. And I agree prediction doesn’t necessarily mean just predicting the winner of a ball game.
In terms of your question about plants, I don’t remember the details of the conversation I had with the couple of botanists on campus, but they did say that in the plants they are most familiar with there does seem to be evidence of predictive behavior. It’s very low level. If consciousness indeed exists on a spectrum for life, it could be that plants are at such a low end of the spectrum that their predictive behaviors would be almost indistinguishable from reactive behavior. I don’t know… certainly on shaky ground here, but there is something about life in general that seems synonymous with a continuum of prediction, from high levels to very low levels.
In searching around on the internet, there does seem to be at least one guy doing this kind of research. Maybe there are others as I’m not overly familiar with biological research, but it might be worth a read.
Click to access 18–Plants—Adaptive-behavior–root-brains–and-minimal-cognition.pdf
I would add that in no way do I see any of this as settled science, but I do think it’s worth asking the question “What is it like to be a tree?” Given that our definitions of consciousness and intelligence tend to be very human-centered, it may not be far-fetched to consider the possibility. I promise you… this is about as “woo” as I get. lol
Thanks for the links! Just skimmed the abstracts, but they didn’t give me a good sense of their reasoning.
I have to admit I haven’t read much in this area. The little I have read elsewhere didn’t really seem convincing that what they were describing was prediction by the individual plant based on things it had learned. (Adaptive behavior based on genetic programming, while fascinating at times, doesn’t really strike me as prediction.)
But nature seems to delight in complicating our little categories, so I won’t be too shocked if there are details that challenge our assumptions.
I think at the very least it will be interesting to see where it leads. It feels like there is still far to go in solving the problem of consciousness. I like that so many people are working on it and trying to answer the question!
[deep breath … it’s okay when he says “prediction” because that’s the same as cybernetic control, which is the same as free energy minimization, of all things. Scott Alexander (Slate Star Codex) explained it nicely here … deep breath]
Okay, I think you’re largely on the right track, but I have some issues. First, I think you need to be precise about what you mean by “symbolic thought”. I suspect you will have to unpack both of those words. I say this because I have reason to believe your level 1 reflex action can involve symbols in the semiotic sense. Any time you have a reflex arc that involves one neuron triggering the next neuron, the neurotransmitter is a symbol. So what do you require for symbolic thought?
Also, I really think “sentience” belongs with Level 2, perception. Why do you put it with imagination, level 4?
I think you should find a place in your hierarchy for planning (i.e., goals). Maybe that’s what you’re thinking with level 4 simulations, but I could see a scenario like this:
1. See a potential prey animal in a certain setting,
2. Create a “plan” or “goal” of capturing said animal. This step does not create details of a plan, but only creates the fact that there is such a goal, and that goal can affect subsequent behavior.
3. Creation of the goal does two things:
a. queue up predatory behavior but suppress it for the time being
b. generate memories of seeing similar animals in similar settings
4. If memories of similar situations with bad outcomes predominate, suppress the predatory behavior and also the goal, otherwise un-suppress the behavior.
So what level of consciousness would this scenario represent?
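The four-step scenario above can be sketched as a small program, with the goal created in a suppressed state and memories of similar settings casting votes on whether to release it. The memory store, setting labels, and outcome tags are all invented for the example.

```python
# A rough sketch of the four-step goal scenario above.
# MEMORIES, the setting names, and outcome labels are illustrative only.

MEMORIES = [
    {"setting": "open field", "outcome": "bad"},
    {"setting": "open field", "outcome": "bad"},
    {"setting": "open field", "outcome": "good"},
]

def decide(setting):
    # Step 2: seeing prey creates a goal, with no plan details yet;
    # step 3a: the associated predatory behavior is queued but suppressed.
    goal = {"behavior": "stalk prey", "suppressed": True}
    # Step 3b: the goal cues up memories of similar animals in similar settings.
    similar = [m for m in MEMORIES if m["setting"] == setting]
    # Step 4: if bad outcomes predominate, suppress the behavior and the
    # goal itself; otherwise un-suppress the behavior.
    bad = sum(1 for m in similar if m["outcome"] == "bad")
    if bad > len(similar) - bad:
        return None               # goal dropped along with the behavior
    goal["suppressed"] = False
    return goal["behavior"]       # behavior released

print(decide("open field"))  # None: two bad memories outweigh one good
```

Note that nothing in the sketch requires details of a plan, only the existence of a goal that gates behavior, which is what makes the "what level is this?" question interesting.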
Finally, if you want a better presentation of MacIver’s ideas, see this YouTube video.
I’ll try to avoid the p-word so I don’t spike your heart rate. 🙂
On symbolic thought, I’m using it in the anthropological sense of volitional conscious use of symbols. For a sense of how this works, check out this article: https://www.nytimes.com/2014/12/07/magazine/hunting-for-the-origins-of-symbolic-thought.html
I think the disconnect is you’re thinking of the use of symbols at a different layer, the data processing layer. I see value in that viewpoint, although I remain a little uneasy about how we think of a sensory image map in terms of symbols. Still, you could argue that the redness of red is essentially a data processing convention, a way to communicate the pattern of retinal photoreceptor excitations to other brain regions, aka symbolic communication. But biology is messy, never as clean and distinct as technology, so the symbol concept doesn’t seem as sharp in a neurobiological system as it might in a computer system.
On sentience and the layers, I think we need to think about what feelings are. They are not sensory impressions. They’re a reaction to sensory impressions. They actually form on the motor side of the brain (the front), not the sensory perceiving side (the back). In other words, they’re part of the motorium, not the sensorium. They’re the lower level motorium communicating to the higher level planning aspect of the motorium (the part that does the imagining).
(Of course, biology again, so things aren’t always clean and tidy. It’s possible that some affects rise up through the back layers, notably through the insula cortex, but the lion’s share seem to come up through regions like the amygdala, hippocampus, anterior cingulate cortex, striatum, and ventromedial prefrontal cortex.)
Much of what you describe happens in layer 4. Just to be clear (I failed to put the disclaimer in the post, my bad), these layers are just my shorthand for keeping track of all the processing. It’s a vast oversimplification. I usually only pull them out when someone says “consciousness is X”. Usually “X” is one of these layers, and my point is always, “It ain’t that simple.” Of course, it ain’t as simple as these layers either, but communicating that without regurgitating a whole neuroscience book requires simplification.
BTW, I’m not saying MacIver is wrong. He’s oversimplifying too to communicate concepts to us. I suspect he’d agree with a lot of this. I’m just pointing out that saying imagination=consciousness requires some cleanup.
So the reason I jump on your use of “symbol” is that I’m convinced that understanding how minds (brains, computers) use symbols is exactly what is necessary to understand consciousness. It is the use of symbols which explains qualia, the hard problem.
The article you linked to pretty much equates “symbol” with an artifact which represents something. But I’m not sure how that applies to “symbolic thought”. It can’t just be thinking about such artifacts. I suppose the significance of symbolic thought is that it requires the “bringing to mind” of a concept that is not sensorily (?) motivated.
As for what “feelings” are, my idea is close to yours, but we can always just use yours as a definition. What I’m wondering about is how you explain simple perceptions, like my looking out the window and seeing a tree. Perceiving the tree is a feeling, yes? I also perceive several other things which I won’t tell you about, but I will remember, if briefly. So it sounds like you’re saying the feeling of the tree happens in the motorium, or doesn’t happen at all. Is that right?
On symbolic thought, the main point is that we associate symbols with sensory perceptions, feelings, or actions. The most basic example of this is language, where we associate a sound (word) with something like an apple. When someone says the “apple” sound to you, you imagine an apple. This also applies to art, music, mathematics, etc. It’s the one capability that seems to set us apart from non-human animals. Some animals do have a very basic version of this, but nothing like what humans have. The main thing it does is allow us to extend our thinking.
So, when you see a tree, an image forms in your visual cortex (topographically mapped from the retina), which causes cascades both dorsally into the parietal lobe and ventrally into the temporal lobe, but it all eventually converges on a point in the superior parietal lobe, sometimes called the posterior association cortex, where the tree concept exists.
This treeness gets sent to the frontal lobes. Structures like the amygdala either get it from the frontal lobes or directly from one of the other cortical structures, where the tree memory is linked with lower level reactions from the brainstem. These reactions get communicated back to the cortex, as feelings.
So the frontal lobes receive the tree concept, and perhaps request details about the tree, but as it is doing so, it’s also signalling and receiving feeling information from the limbic system. In other words, the frontal lobes are getting both the sensory perceptual information and the accompanying feeling information in an ongoing loop.
This fits when you consider the reported effects of a prefrontal lobotomy. Typically such a person can still be aware of their environment, and react habitually and reflexively, but their ability to feel emotionally seems largely destroyed, as well as most of their ability to plan or have executive control.
Okay, so you’re defining symbols as artifacts representing concepts, which is fine.
Re: feelings, you said above “They’re a reaction to sensory impressions. They actually form on the motor side of the brain (the front), not the sensory perceiving side (the back). In other words, they’re part of the motorium, not the sensorium.” I’m trying to reconcile this with
“Structures like the amygdala either get it from the frontal lobes or directly from one of the other cortical structures, where the tree memory is linked with lower level reactions from the brainstem. These reactions get communicated back to the cortex, as feelings.”
Are structures like the amygdala part of the motorium?
Also, I’m not convinced I have any feelings about the tree. Shouldn’t I be aware of feelings, by definition?
“Are structures like the amygdala part of the motorium?”
I would say they are. Think about it from a data processing standpoint. What the amygdala does is link specific reactions to cognitive states in the frontal lobes. In essence, it is communicating the brainstem’s reaction to the planning mechanisms in the cortex. The planning may be on what’s to happen in the next second, but it still uses the information, both to spur the planning and to evaluate options.
“Also, I’m not convinced I have any feelings about the tree. Shouldn’t I be aware of feelings, by definition?”
Up above you said: “Perceiving the tree is a feeling, yes?”
I think the feelings associated for most trees for most people are probably pretty low level and background. In other words, most of them aren’t very intense. (At least unless you’re an aggressive tree hugger.)
“2 Perception: predictive models of the environment based on sensory input …”
I’m pretty sure you’ve talked before about
2b. Models of (what are in fact, but usually not in conception) internal low-level brain states based on intra-brain communications
Which is a key aspect of what I would call “sentience”. For example, I predict that if I stick my left hand into 25 Celsius water, it will feel warm. I predict that if I stick my right hand into that same water, it will feel cold. My left hand has been in ice-water lately, while my right hand has been in hot water. In addition to representations about the objective world, I have subjective sensations. This is what causes so much philosophical trouble. This is what David Chalmers thinks is a Hard problem. (It’s not, so much. Unless you run away from it, in which case it definitely is.)
Note that the subjective sensations persist even if I know for a fact that the water is 25 C. And even if I’m so experienced with moving hands between baths that I can tell without looking at a thermometer that the water is 25 C.
On modeling lower level brain states, much of that happens in layers 4 and 5, although this is biology, which is never as cleanly organized as technological systems, so some of it does happen in layer 2. The most notable is the body sense in the insula cortex, which from what I’ve read may be the cortex modeling what the brainstem is doing. But modeling lower level functionality is much more pronounced in the frontal lobes.
On sentience, there are actually two aspects of it. The first is sensory perception. But the second is the valenced reaction to that perception. In other words, feeling hot or cold is the interoceptive perception, but the valence of hot or cold being good or bad is the valenced affect, the reaction that turns data into feeling.
Interestingly, the effect you describe with the hands is related to habituation. The sensory neurons in your hand fire less intensely when they get the same stimulus continuously. They fire much more intensely when it’s suddenly changed, which is why 25 C feels warm after being in ice water and cool after being in hot water. The signalling that the brain actually receives is different.
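The habituation point can be sketched in a couple of lines: what the brain receives is not the absolute water temperature but a signal relative to the adapted baseline of the sensory neurons. The function and the baseline numbers are illustrative assumptions, not a physiological model.

```python
# Minimal sketch of habituation in the two-hands experiment.
# The brain's input is the change from the adapted level, not the
# absolute temperature. Baselines here are made-up illustrative values.

def perceived_signal(stimulus_c, adapted_baseline_c):
    """Firing is driven by the difference from the adapted level."""
    return stimulus_c - adapted_baseline_c

left_hand = perceived_signal(25, adapted_baseline_c=5)    # was in ice water
right_hand = perceived_signal(25, adapted_baseline_c=45)  # was in hot water

print(left_hand)   # +20: the same 25 C water feels warm
print(right_hand)  # -20: ...and feels cold
```

This is why knowing the water is 25 C doesn’t dispel the sensation: the differing signals really do reach the brain.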
Hey, just wondering if you’ve read Marvin Minsky’s 2006 book, The Emotion Machine. He’s saying pretty much the same as you are, but he has six levels instead of five:
I’ve only read the first chapter, then skipped to chapter 4 (Consciousness, obv.), but this stuck out.
Just thought you needed more stuff to read. 🙂
Thanks. I haven’t read his book. The levels look interesting. Although I wonder what he means by “learned” in the second one, or what he sees as the differences between the last three.
BTW, have you read Michael Gazzaniga’s latest book, The Consciousness Instinct? He has speculations involving symbolic processing, semiotics, and other things that resonate with what you’ve discussed.
He relates the complementarity of quantum physics to a “complementarity” of biological life, saying that we can consider the logical symbolic version of what’s happening or the physical version, but not both at the same time. In some ways I think he’s right, but I think he oversells the significance of it, claiming that it’s uniquely biological. (I don’t think it is.) Still, interesting stuff.