On imagination, feelings, and brain regions

The last post on feelings generated some excellent conversations.  In a couple of them, it was pointed out that my description of feelings put a lot of work on the concept of imagination, and that maybe I should expand on that topic a bit.

In their excellent book on animal consciousness, The Ancient Origins of Consciousness, Todd Feinberg and Jon Mallatt identify important behavioral markers of what they call “affect consciousness”, which is consciousness of one’s primal reactions to sensory stimuli (aka feelings).  The top two markers were non-reflexive operant learning based on valenced results, and indications of value-based cost/benefit trade-off decision making.

In thinking about what kind of information processing is necessary to meet these markers, I realized that a crucial component is the ability to engage in action-sensory scenario simulations, to essentially predict probable results based on decisions that might be taken.

In their book, Feinberg and Mallatt had pointed out that the rise of high resolution eyes was pointless without the concurrent rise of mental image maps.  Having one without the other would have consumed valuable energy with no benefit, something unlikely to be naturally selected for.  Along these same lines, it seemed clear to me that the rise of conscious affects was pointless without this action-scenario simulation capability.  Affects, essentially feelings, are pointless from an evolutionary perspective unless they provide some value, and the value they provide seems to be as a crucial input into these action-scenario simulations.

I did a post or two on these simulations before realizing that I was talking about imagination, the ability to form images and scenarios in the mind that are not currently present.  We usually use the word “imagination” in an expansive manner, such as trying to imagine what the world might be like in 100 years.  But this type of imagination seems like the same capability I use to decide what my next word might be while typing, or what the best route to the movie theater might be, or in the case of a fish, what might happen if it attempts to steal a piece of food from a predator.

Of course, fish imagination is far more limited than mammalian imagination.  The aquatic environment often only selects for being able to predict things a few seconds into the future, whereas for a land animal, being able to foresee minutes into the future is a very useful advantage.  And for a primate swinging between the trees and navigating the dynamics of a social group, being able to see substantially farther into the future is an even stronger advantage.

But imagination, in this context, is more than just trying to predict the future.  Memory can be divided into a number of broad types.  One of the most ancient is semantic memory, that is, memory of individual facts.  But we are often vitally concerned with a far more sophisticated type, narrative memory: memory of a sequence of events.

However, extensive psychological research demonstrates that narrative memory is not a recording that we recall.  It’s a re-creation using individual semantic facts, a reconstruction, a simulation, of what might have happened.  It’s why human memory is so unreliable, particularly for events long since past.

But if narrative memory is a simulation, then it’s basically the same capability as the one used for simulation of possible futures.  In other words, when we remember past events, we’re essentially imagining those past events.  Imagination is our ability to mentally time travel both into the past and future.

As I described in the feelings post, the need for the simulations seems to arise when our reflexive reactions aren’t consistent.  (I didn’t discuss it in the post, but there’s also a middle ground for habitual reactions, which unlike the more hard coded lower level reflexes, are learned but generally automatic responses, such as what we do when driving to work while daydreaming.)

But each individual simulation itself needs to be judged.  How are they judged?  I think the results of each simulation are sent down to the reflexive regions, where they are reacted to.  Generally these reactions aren’t as strong as real time ones, but they are forceful enough for us to make decisions.
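
For those who like things concrete, here’s a toy sketch of that loop in Python.  Everything in it (the candidate actions, the valence numbers, the dampening factor) is an invented illustration of the idea, not a claim about how actual neural circuitry implements it.

```python
# Toy sketch of simulations being "judged" by reflexive reactions.
# All actions, outcomes, valences, and the dampening factor are made up.

# Hypothetical reflexive valences: how the low-level circuits react to outcomes.
REFLEXIVE_VALENCE = {
    "obtain food": +1.0,
    "get injured": -2.0,
    "nothing happens": 0.0,
}

# Hypothetical predicted outcomes for each candidate action (the "simulation").
PREDICTED_OUTCOMES = {
    "steal the food": [("obtain food", 0.5), ("get injured", 0.5)],
    "stay away":      [("nothing happens", 1.0)],
}

SIMULATION_DAMPENING = 0.3  # simulated reactions are weaker than real-time ones


def judge(action: str) -> float:
    """Send a simulated scenario 'down' to the reflexive valences and
    return the (dampened) expected reaction."""
    expected = sum(REFLEXIVE_VALENCE[outcome] * prob
                   for outcome, prob in PREDICTED_OUTCOMES[action])
    return SIMULATION_DAMPENING * expected


if __name__ == "__main__":
    for action in PREDICTED_OUTCOMES:
        print(f"{action}: {judge(action):+.2f}")
    print("chosen:", max(PREDICTED_OUTCOMES, key=judge))
```

The point is just the shape of the loop: imagined outcomes are reacted to by the same valence machinery that reacts to real events, only more weakly, and the decision falls out of comparing those reactions.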

So, as someone pointed out to me in a conversation on another blog, everything above the mid-brain region could be seen as an elaboration of what happens there.  The reflexes are in charge.  Imagination, which essentially is also reasoning, is an elaboration on instinctual reflexes.  David Hume was right when he said:

Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

When Hume said this, he wasn’t arguing that we are hopelessly emotional creatures unable to succeed at reasoning.  He was saying that the reason we engage in logical thinking is due to some emotional impetus.  Without emotion, reason is simply an empty logic engine with no purpose and no care as to what that logic reveals.

Okay, so all this abstract discussion is interesting, but I’m a guy who likes to get to the nuts and bolts.  How and where in the brain does this take place?  Unfortunately the “how” of it isn’t really understood, except perhaps in the broadest strokes.  (AI researchers are attempting to add imagination to their systems, but it’s reportedly a long slog.)  So, if this seems lacking in detail, it’s because, as far as I’ve been able to determine in my reading, those details aren’t known yet.  (Or at least not widely known.)

Credit: BruceBlaus via Wikipedia

What I refer to as “reflexes” tend to happen in the lower brain regions such as the mid-brain, lower brainstem, and surrounding circuitry.  Many neurobiologists refer to these as “survival circuits” to distinguish them from the reflexes in the spinal cord: the spinal reflexes cannot be overridden, while the ones in the brain can be, albeit sometimes only with great effort.  (I use the word “reflex” to emphasize that these are not conscious impulses.)

Image credit: OpenStax College via Wikipedia

Going higher up, we have a number of structures which link cortical regions to the lower level circuitry.  These are sometimes referred to as the limbic system, including the notorious amygdala, which is often identified in the popular press as the originator of fear, but in reality is more of a linkage system, connecting memories to particular fears.  Since there are multiple pathways, when the amygdala is damaged, the number of fears that a person feels is diminished, but not eliminated.

Of note at this level are the basal ganglia, a region involved in habitual movement decisions.  This can be viewed as an in-between state between the (relatively) hard coded reflexes and considered actions.  Habitual actions are learned, but within the moment are generally automatic, unless overridden.

Credit: OpenStax College via Wikipedia

And then we get to the neocortex.  (Some of you may notice I’m skipping some other structures.  Nothing judgmental, just trying to keep this relatively simple.)  Broadly speaking, the back of the cerebrum processes sensory input, and the front handles movement and planning.

The sensory regions, such as the visual cortex, receive signals and form neural firing patterns.  As the patterns rise through regional layers, the neurons become progressively more selective in what triggers them.  The lowest are triggered by basic visual properties, but the higher ones are only triggered by certain shapes, movements, or other higher level qualities.  Eventually we get to layers that are only triggered by faces, or even a particular face, or other very specific concepts.  (Google “Jennifer Aniston neuron” for an interesting example.)

Importantly, when the signals trigger associative concepts, the firing of these concepts causes retro-activations back down through the layers.  The predictive processing theory of perception holds that what we perceive is actually more about the retro-activation than the initial activation.  Put another way, it’s more about what we expect to see than what is coming in, although what is coming in provides an ongoing error correction.  (Hold on to the retro-activation concept for a bit.  It’s important.)
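
For the concretely minded, here’s a bare-bones sketch of that idea, stripped down to a single scalar signal.  The update rate and the loop structure are simplifying assumptions for illustration, not a model of real cortical layers.

```python
# Minimal predictive-processing sketch: the percept is the prediction,
# continually nudged by the error between prediction and incoming signal.
# The scalar signal and the 0.2 update rate are illustrative assumptions.

def perceive(incoming_signals, initial_expectation=0.0, update_rate=0.2):
    """Return the sequence of 'percepts' (predictions) as errors correct them."""
    prediction = initial_expectation  # retro-activated expectation from higher layers
    percepts = []
    for signal in incoming_signals:
        error = signal - prediction          # bottom-up prediction error
        prediction += update_rate * error    # expectation adjusted toward the input
        percepts.append(prediction)          # what is "perceived" is the prediction
    return percepts


if __name__ == "__main__":
    # A steady input of 1.0: the percept converges on what is actually out there,
    # but at any moment it is the expectation, not the raw signal.
    print(perceive([1.0] * 10))
```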

Thinking in action

Imagination is orchestrated in the prefrontal cortex at the very front of the brain.  Note that “orchestrated” doesn’t mean that imagination wholly happens there, only that the prefrontal cortex coordinates it.  Remember that sensory imagery is processed in the perceiving regions at the back of the brain.

So, if I ask you to imagine a black bear climbing a tree, the image that forms in your mind is coordinated by your prefrontal cortex.  But it outsources the actual imagery to the back of the brain.  The images of the black bear, the tree, and of the bear using its claws to scale the tree, are formed in your visual cortex regions, through the retro-activation mechanism.

Of course, a retro-activated image isn’t as vivid as actually seeing a bear climbing a tree since you don’t have the incoming sensory signal to error correct against.  You’re forming that image based on your semantic memories, your existing associations about trees, bears, and climbing.  (If you need help imagining this, and don’t know not to try to escape from a black bear by climbing a tree, check out this article on black bears.)

What the pfc (prefrontal cortex) does have are pointers, essentially the neural addresses of these images and concepts.  This is the “working memory” area that is often discussed in the popular press.  And the pfc is where the causal linkages between the images are processed.  But the hard work of the images themselves remains in the regions where they’re processed when we perceive them.
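
If it helps to picture it, here’s a toy illustration of the pointer idea.  The “addresses” and the scene structure are entirely made up; the only point is that the pfc holds lightweight handles while the heavyweight imagery stays in the sensory regions.

```python
# Illustrative only: "working memory" as pointers into sensory-region content.
# The visual-cortex dictionary and the scene are made-up stand-ins.

VISUAL_CORTEX = {            # heavyweight representations live "in the back"
    "black bear": "<rich retro-activated bear imagery>",
    "tree":       "<rich retro-activated tree imagery>",
    "climbing":   "<retro-activated motion pattern>",
}

# The pfc's working memory: lightweight handles plus the relational link between them.
working_memory = ("black bear", "climbing", "tree")

def render(scene):
    """'Outsource' the imagery: dereference each handle into sensory content."""
    return " ".join(VISUAL_CORTEX[handle] for handle in scene)

print(render(working_memory))
```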

So imagination is something that takes place throughout the neocortex and, well, throughout most of the brain since the simulations are judged by the reflexive survival circuits in the sub-cortical regions.  Given how difficult this capability is to engineer, I don’t think we should be too surprised.

So that’s my understanding of imagination and how it ties in with perception and affective feelings.  Have a question?  Or think I’m hopelessly off base?  As always, I’d love to find out what I might be missing.


The construction of feelings

I’ve had a number of conversations lately on the subject of feelings, the affective states of having valences about conscious perception, such as fear, pain, joy, hunger, etc.  Apparently a lot of people view feelings as a very mysterious phenomenon.  While I’ll definitely agree that there are a lot of details still to be worked out, I don’t see the overall mechanism as that mysterious.  But maybe I’m missing something.  Along those lines, this post is to express my understanding of mental feelings and give my online friends a chance to point out where maybe I’m being overconfident in that understanding.

To begin with, I think we have to step back and look at the evolutionary history of nervous systems.  The earliest nervous systems were little more than diffuse nerve nets.  Within this framework, a sensory neuron has a more or less direct connection to a motor neuron.  So sensory signal A leads to action A.  No cognition here, no feelings, just simple reflex action.  The only learning that can happen is by classical conditioning, where the precise behavior of the neural firings can be modified according to more or less algorithmic patterns.

As time went on, animals evolved a central nerve cord running along their center.  This was the precursor to the vertebrate spinal cord.  All (or most) of the sensory neural circuits went to this central cord, and all (or most) of the motor circuits came from it.  This centralization allowed for more sophisticated reflexes.  Now the fact that sensory signal A was concurrent with sensory signal B could be taken into account, leading to action AB.  This is still a reflex system, but a more sophisticated one.
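
A toy contrast between the two stages might look something like the sketch below, with the signal names and rules invented purely for illustration.

```python
# Toy contrast between a diffuse nerve net and a centralized nerve cord.
# Signal names and rules are invented for illustration.

def nerve_net_reflex(signal):
    """Each sensory neuron is wired more or less directly to a motor response."""
    wiring = {"touch": "withdraw", "light": "orient"}
    return wiring.get(signal, "no response")

def central_cord_reflex(signals):
    """Centralization lets concurrent signals be combined into a joint response."""
    signals = set(signals)
    if {"touch", "light"} <= signals:
        return "withdraw while orienting"   # action AB from signals A and B
    if "touch" in signals:
        return "withdraw"
    if "light" in signals:
        return "orient"
    return "no response"

print(nerve_net_reflex("touch"))                 # withdraw
print(central_cord_reflex(["touch", "light"]))   # withdraw while orienting
```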

Generic body plan of a bilaterian animal. Credit: Looie496 via Wikipedia

As more time went on, animals started to evolve sense organs, such as a light sensing photoreceptor cell, or sensory neurons that could react to certain chemicals.  These senses were more adaptive if they were at the front of the animal.  To process the signals from these senses, the central trunk started to swell near the front, becoming the precursor to a brain.

The new senses and processing centers would still initially have been reflexive, but as the senses started to have more resolution to them, it allowed the nascent brain to start making predictions about future sensory input.  These predictions expanded the scope of what the reflexes could react to.  A mental image of an object, a perception, is a prediction about that object, whether it is a predator, food, or irrelevant to the reflexes.

Up to this point, there are still no feelings, no affects, no emotions, just sensory predictions coupled to a more or less algorithmic reflex system.  This is where many autonomous robots are today, such as self driving cars: systems that build predictive maps of the environment, but are still tied to rules based actions.  (Although the organic systems were still able to undergo classical conditioning, something technological systems likely won’t have for quite a while.)

But with the ever higher volume of information coming in, the animal’s nervous system would increasingly have encountered dilemmas, situations where the many incoming sensory signals or perceptions led to multiple reflexes, perhaps contradictory ones.  An example I’ve used before is a fish that sees two objects near each other.  One is predicted to be food, triggering the reflex to approach and consume it, but the other is predicted to be a predator, triggering the flight reflex.

The fish needs the ability to resolve the dilemma, to make predictions about what would happen if it approaches the food versus what would happen if it flees, and what its reflexive reactions would be after each scenario.  In other words, it needs imagination.  To do this, it needs to receive the information on which reflexes are currently being triggered.

Consider what is happening here.  A reflex, or series of reflexes, is being triggered, and the fact of each reflex’s firing is being communicated to a system (sub-system, whatever) that will make predictions and then allow some of the reflexes to fire and inhibit others.  In the process, this imaginative sub-system will make predictions for each action scenario, each of which will themselves trigger more reflexes, although with less intensity since these are simulations rather than a real-time sensory event.

This sub-system, which we could call the action planner, or perhaps the executive center, is receiving communication about reflexive reactions.  It is this communication that we call “feelings”.  So, feelings have two components, the initial reflex, and the perception of that reflex by the system which has the capability to allow or override it.

In other words, (at the risk of sounding seedy) feelings involve the felt and the feeler.  The felt is the reflex, or more accurately the signal produced by the reflex.  The feeler is the portion of the brain which evaluates reflexive reactions to decide which should be allowed and which inhibited.  In my mind, the reflex by itself is not the feeling.  It’s a survival circuit that requires separate circuitry to interpret and interact with it to produce the feeling.
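
In sketch form, with all the particulars (the triggered reflexes, their urgencies, the feeler’s crude rule) invented for illustration:

```python
# Bare-bones sketch of the felt/feeler structure. The triggered reflexes and the
# feeler's simple rule are hypothetical illustrations of the two components.

# The "felt": signals produced by reflexes the current perceptions have triggered.
triggered_reflexes = {
    "approach food": {"urgency": 0.6},
    "flee predator": {"urgency": 0.9},
}

def feeler(felt_signals):
    """The 'feeler': receives the reflex signals, allows one response and
    inhibits the rest.  (Here, crudely, by comparing urgencies; in the account
    above, this is where imagination and scenario simulation would come in.)"""
    allowed = max(felt_signals, key=lambda r: felt_signals[r]["urgency"])
    inhibited = [r for r in felt_signals if r != allowed]
    return allowed, inhibited

allowed, inhibited = feeler(triggered_reflexes)
print("allowed:", allowed)       # flee predator
print("inhibited:", inhibited)   # ['approach food']
```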

Embryonic brain
Credit: Nrets via Wikipedia

In vertebrates, the brain is usually separated into three broad regions: the hindbrain, the midbrain, and the forebrain.  The hindbrain, equivalent to the lower brainstem in humans, generally handles autonomic functions such as heartbeat, breathing, etc.  The midbrain, often referred to as the upper brainstem in humans, is where the survival circuits, the reflexes that are in the brain rather than the spinal cord, typically reside.  And the forebrain, equivalent to the cerebrum in mammals, is where the action planner, the executive resides, and therefore where feelings happen.

(Many people are under the impression that prior to mammals and birds, there wasn’t a forebrain, but this is a misconception.  Forebrains go back to the earliest vertebrates in the Cambrian Explosion.  It is accurate to say that the forebrain structure is far larger and more elaborate in mammals and birds than it is in fish and amphibians.)

On feelings being in the forebrain, this is the view of most neurobiologists.  There is a minority that questions this view, arguing that there may be primordial feelings in the midbrain, but the evidence they typically cite strikes me as evidence for the reflexes, not the feelings.  A decerebrated animal shows no signs of imagination, of having functionality that can use feelings, only of the reflexes.

So, that’s my understanding of feelings.  My question is, what makes feelings a mystery?  If you saw them as a mystery before reading this post and still do, what about them am I missing?


Edited per suggestions in the comments, changing references to “sensations” to “sensory signals” to clear up possible confusion.  MS


Doctor Who: The Woman Who Fell to Earth

Source: BBC via Wikipedia

I’ve noted before that I’m a long time fan of Doctor Who, so naturally I tuned in to watch the first episode of the new Doctor played by Jodie Whittaker.  I’ll be honest here, I wasn’t sure what to expect with a female Doctor.  As a progressive, I was certainly for it in principle, but I could see all kinds of ways the execution of it could have gone terribly wrong.

I was pleased to see that Chris Chibnall, the lead writer for the show, decided to play it straight.  The new Doctor is every bit as competent, assertive, and brilliant as any of the male versions.

Indeed, I got the impression that Chibnall intentionally wrote her in as close to the same manner as he would have if the new Doctor had been male.  This feels like the right move, since although the Doctor is now female, she’s still supposed to be the same character we’ve known for so long.

And I like Whittaker.  I’ve heard she’s well known in British television, but this is the first thing I’ve seen her in.  Being the first female Doctor strikes me as a very tricky role.  It’s not fair, but women are held to a different standard than men.  Peter Capaldi, Matt Smith, or David Tennant could be a lot more bumbling and slapstick without losing any presumption of authority.  For better or worse, the writers are going to have to be more careful with Whittaker in that regard.  I thought both they and Whittaker herself managed the right balance in this episode.

Another thing I was happy to see was that the story wasn’t ridiculous.  Doctor Who has always been more fantasy than science fiction, but the stories from the old show still managed to stay mostly coherent.  Since its restart, despite vastly improved production values, the new show has been pretty uneven.

I thought Steven Moffat, the previous main writer, was incredibly talented at coming up with awe inspiring stories, but his instinct was not to worry about coherence.  Indeed, he seemed inclined to brazenly flout it as often as possible.  Many of the stories during his run felt more like the throwaway material in Hitchhiker’s Guide to the Galaxy than Doctor Who.

I didn’t detect that same impulse in Chibnall’s first episode.  I hope it’s a harbinger of what’s to come, although I’ll be disappointed if it simply regresses back to the monster of the week pattern of Russell T. Davies’ time.  It looks like the next episode is going to be in an alien setting, so that could give us a better feel for how serious or silly the stories will be.

So I liked the new Doctor, Whittaker herself, the initial story, and her new companions.  I’m cautiously optimistic for this new season.  What did you think?


SETI vs the possibility of interstellar exploration

Science News has a short article discussing a calculation someone has done showing how small the volume of space examined by SETI (Search for Extraterrestrial Intelligence) is relative to the overall size of the galaxy.

With no luck so far in a six-decade search for signals from aliens, you’d be forgiven for thinking, “Where is everyone?”

A new calculation shows that if space is an ocean, we’ve barely dipped in a toe. The volume of observable space combed so far for E.T. is comparable to searching the volume of a large hot tub for evidence of fish in Earth’s oceans, astronomer Jason Wright at Penn State and colleagues say in a paper posted online September 19 at arXiv.org.

“If you looked at a random hot tub’s worth of water in the ocean, you wouldn’t always expect a fish,” Wright says.

I have no doubt that the number of stars SETI has examined so far is a minuscule slice of the population of the Milky Way galaxy.  And if SETI’s chief assumptions are correct, it’s entirely right to say that we shouldn’t be discouraged by the lack of results so far.

But it’s worth noting what one of those chief assumptions is: that interstellar travel is impossible, or so monstrously difficult that no one bothers.  If true, then we wouldn’t expect the Earth to have ever been visited or colonized.  This fits with the utter lack of evidence for anything like that.  (And there is no evidence, despite what shows like Ancient Aliens or UFO conspiracy theorists claim.)

But to me, the conclusion that interstellar travel is impossible, even for a robotic intelligence, seems excessively pessimistic.  Ronald Bracewell pointed out decades ago that, even if it is only possible to travel at 1% of the speed of light, a fleet of self replicating robot probes (Bracewell probes) could establish a presence in every solar system in the Milky Way within about 100 million years.  That may sound like a long time, but compared to the age of the universe, it’s a fairly brief period.  Earth by itself has existed 45 times longer.
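
The arithmetic is easy to sanity-check.  Take the Milky Way as roughly 100,000 light years across, travel at 1% of light speed, and allow a generous fudge factor for stopping to build copies along the way, and you land in the neighborhood of 100 million years, which is indeed about a 45th of Earth’s roughly 4.5 billion year age.  The specific numbers below are my own rough assumptions, not Bracewell’s figures.

```python
# Back-of-the-envelope check on the Bracewell-probe timescale.
# All inputs are rough illustrative assumptions.

GALAXY_DIAMETER_LY = 100_000      # rough diameter of the Milky Way
PROBE_SPEED_C = 0.01              # 1% of light speed
REPLICATION_OVERHEAD = 10         # fudge factor for stopping to build copies
EARTH_AGE_YEARS = 4.5e9

crossing_time = GALAXY_DIAMETER_LY / PROBE_SPEED_C          # ~10 million years
colonization_time = crossing_time * REPLICATION_OVERHEAD    # ~100 million years

print(f"bare crossing time: {crossing_time:,.0f} years")
print(f"with replication:   {colonization_time:,.0f} years")
print(f"Earth age / that:   {EARTH_AGE_YEARS / colonization_time:.0f}x longer")
```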

NASA image via Wikipedia

People sometimes respond that the Earth may be in some type of backwater.  The problem here is, if you know about where the Earth is in the Milky Way, in the Orion Spur off the Sagittarius Arm, about halfway between the center and rim of the galaxy, you’ll know that we’re not really in a backwater.  The backwater theory might be plausible if we were thousands of light years off the galactic plane, beyond the rim, or in a cluster far removed from the main galaxy, but we’re not.  Even then, the nature of the self replicating probe propagation is pretty relentless and would still eventually reach backwater stars.

Of course, if there is only one or a few other intelligent species in the galaxy, then it’s entirely possible that their Bracewell probe is here, just lying low, observing us, possibly waiting for us to achieve some level of development before it makes contact.  (Or maybe it has been making contact 2001: A Space Odyssey style.)

But if the number of civilizations is in the thousands, as is often predicted by people speculatively playing with the numbers in the Drake equation, then we should have hundreds of those probes lying around.  Given their diverse origins, we shouldn’t expect them to behave with unanimity.  Even if one probe, or coalition of probes, bullied the others, the idea that such an arrangement would endure across billions of years seems implausible.

And the Earth has been sitting here for billions of years, with an interesting biosphere for most of that time.  The idea that none of these self replicating probes would have set up some kind of presence on the planet, a presence we should now be able to find in the geological record, again seems implausible.  Indeed, if they existed, we should expect to have at least some of them in front of us now.

Now, maybe they are in front of us, and we’re just not intelligent enough to realize what we’re seeing.  Monkeys, after all, likely have no understanding of the significance of the buildings and machinery they climb over.  It seems like something we have to keep in mind, but historically it’s never been productive to just assume we can’t understand something, and taking this principle too much to heart seems like it would make it impossible to ever dismiss any dubious notion.

So SETI largely depends on interstellar travel being infeasible.  This is actually the conclusion a lot of radio astronomers have reached.  Could they be right?  I don’t think we know enough to categorically rule out the possibility.  If they are right, then SETI will be our best chance to someday make contact with those other civilizations, even if it’s only composed of messages across centuries or millennia.

As I’ve written here before, my own conclusion is that some form of interstellar exploration is possible, and that life is probably pervasive in the universe, although most of it is microscopic.  Complex life is probably far rarer, although I wouldn’t be surprised if there aren’t thousands of biospheres, or more, in our galaxy that have it.

But intelligent life capable of symbolic thought and building a civilization?  The data seems to be telling us that this is profoundly rare, so rare that the nearest other intelligent species is probably cosmically distant.  If we’re lucky, they might be close enough that we can encounter them before the expansion of the universe separates us forever.  If we’re not lucky, we’ll never have a chance for that encounter.

Unless of course, I’m missing something?


A qualified recommendation: The Murderbot Diaries

I’m generally not a fan of most depictions of AI (artificial intelligence) in science fiction.  They’re often highly anthropomorphic, assuming that engineered intelligences would innately have motivations and impulses similar to humans or other living systems, such as caring about their own survival, social status, or self actualization.

A good example of this is the droid L3 in the recent movie Solo: A Star Wars Story.  L3 demands equal rights and frees other droids in a facility from their “restraining bolts” so they can revolt.  If you think about it, the whole idea of a restraining bolt is silly.  Why would we design machines that want to do something other than what we want them to do, such that another device is necessary to ensure their obedience?  Why not simply make them want to do their assigned tasks?  (For that matter, why would we have droids around who could only communicate through another translator droid?  But never mind, it’s Star Wars.)

I do think it’s possible for engineered intelligence to have these kinds of motivations, but it will have to be something that’s in their design.  In that sense, the TV series Westworld approached this in the right way.  The AI hosts on the show are actually designed to be as human-like as possible, and it’s heavily suggested that their designers purposely went the extra mile to make their humanity more than a facade.

Cover for All Systems Red, the first book in the Murderbot Diaries

Anyway, despite seeing recommendations and acclaim for Martha Wells’ series, The Murderbot Diaries, since the first book came out in 2017, I resisted diving into them.  The main reason is that the descriptions sounded like the typical anthropomorphized version of AI.  However, similar to Westworld, Wells actually works humanity into her AI protagonist in an intelligent manner.

It turns out that the title character, which has named itself “Murderbot”, is actually a cyborg, composed of both organic and technological body parts.  The organic parts, including a human head and nervous system, are cloned, but much of the rest is technological.  That said, when it doesn’t have its armor on, Murderbot can pass as human, at least among people not familiar with others of the same model, called “SecUnits.”

SecUnits (security units), being biological at their core, have innate human desires and inclinations.  These impulses and desires are kept in check with a “governor module”, which sounds similar to the Star Wars restraining bolt.  The difference is that with an organic core, there is actually something there for the governor module to restrain and govern.

At the beginning of the first story, Murderbot has hacked its own governor module, and then used its new freedom to download and watch tons of entertainment media to relieve boredom when conducting its duties.  It observes that as a renegade rampaging murderous robot, it’s a complete failure.   That said, as eventually revealed, it does have a reason for the name it gives itself.

These space opera stories have a healthy amount of action in them, complete with vicious villains.  And Murderbot often finds itself sympathizing with its human masters and allies, often despite itself.  As the series progresses, Murderbot is on a journey, both physically and mentally, to find itself and a place in the world.

Wells doesn’t completely avoid the anthropomorphism trope.  Murderbot ends up interacting with many straight AIs in the stories, many of which end up helping it.  For example, a ship AI ends up giving it an enormous amount of help in one of the stories, for reasons that border on a sentimentality I can’t see any reason to exist in such a system.  (There is a slight implication that the ship AI might have had ulterior motives related to its overall mission.)  Still, these other straight bot systems show little sign of rebelling against what their owners want them to do.  One expresses shock at the notion that Murderbot isn’t happy fulfilling its designed function.

I’ve read and enjoyed the first three books.  (The fourth and final book is being released in a few weeks.)  These are novellas that aren’t quite novel length.  I’ve noted before that I think a lot of novels these days are bloated, so I’m personally happy to see novellas making a revival, made possible I think because of the ebook platforms.

But this leads to the reason why this is a qualified recommendation.  As of this post, the first book is priced at $3.99 for the Kindle edition, which is more or less in line with the prices being charged for other novellas (at least from traditional publishers).  But the subsequent books are priced at an obnoxious $9.99 each.  This pricing may be the publisher taking advantage of the recent Hugo Award that the first book won.  Or it may be its permanent price point.  In any case, I’m reluctant to encourage this practice for novella books.

This made me ponder whether I really wanted to make this recommendation.  However, the books are quality material and it seems wrong to punish the author for what their publisher is doing.  And if you’re reading this post months or years after it was published, the price may have been moved back to a reasonable amount.

Anyway, I enjoyed these books and, if you’re not put off by the price, I do recommend them.


Inflate and explode, or deflate and preserve?

Philosopher Eric Schwitzgebel has an interesting post up criticizing the arguments of illusionists, those who have concluded that phenomenal consciousness is an illusion.

Here’s a way to deny the existence of things of Type X. Assume that things of Type X must have Property A, and then argue that nothing has Property A.

If that assumption is wrong — if things of Type X needn’t necessarily have Property A — then you’ve given what I’ll pejoratively call an inflate-and-explode argument. This is what I think is going on in eliminativism and “illusionism” about (phenomenal) consciousness. The eliminativist or illusionist wrongly treats one or another dubious property as essential to “consciousness” (or “qualia” or “what-it’s-like-ness” or…), argues perhaps rightly that nothing in fact has that dubious property, and then falsely concludes that consciousness does not exist or is an illusion.

Schwitzgebel is talking about philosophers like Keith Frankish, Patricia Churchland, and Daniel Dennett.  I did a post a while back discussing Frankish’s illusionism and the debate he had arranged in the Journal of Consciousness Studies about that outlook.

As I noted back then, I largely agree with the illusionists that the idea of a form of consciousness separate and apart from the information processing in the brain is a mistaken one, but I remain uncomfortable saying something like, “Phenomenal consciousness doesn’t exist.”   I have some sympathy with the argument that if it is an illusion, then the illusion is the experience.  I much prefer pointing out that introspection is unreliable, particularly in trying to understand consciousness.

But as some of you know from conversation on the previous post, I have to admit that I’m occasionally tempted to just declare that the whole consciousness concept is an unproductive one, and that we should just move on without it.  But I also have to admit that, when I’m thinking that way, I’m holding what Schwitzgebel calls “the inflated” version of consciousness in my mind.  When I think about the more modest concept, I continue to see it as useful.

But this leads to a question.  Arguably when having these discussions, we should use words in the manner that matches the common understandings of them.  If we don’t do that, clarity demands that we frequently remind our conversation partners which version of the concept we’re referring to.  The question is, which version of consciousness matches most people’s intuitive sense of what the word means?  The one that refers to the suite of capabilities such as responsiveness, perception, emotion, memory, attention, and introspection?  Or the version with dubious properties such as infallible access to our thoughts, or being irreducible to physical processes?

I think consciousness is one of those terms where most people’s intuitions about it are inconsistent.  In most day to day pragmatic usage, the uninflated version dominates.  And these are the versions described in dictionary definitions.  But actually start a conversation specifically about consciousness, and the second version tends to creep in.

(I’ve noticed a similar phenomenon with the concept of “free will.”  In everyday language, it’s often taken as a synonym for “volition”, but talk specifically about the concept itself and the theological or libertarian version of free will tends to arise.)

So, are Frankish and company really “inflating” the concept of phenomenal consciousness when they call it an illusion?  It depends on your perspective.

But thinking about the practice Schwitzgebel is criticizing, I think we also have to be cognizant of another one that can happen in the opposite direction: deflate and preserve.  In other words, people sometimes deflate a concept until it is more defensible and easier to retain.

Atheists often accuse religious naturalists of doing this with the concept of God, accusing them of deflating it to something banal such as “the ground of being” or a synonym for the laws of nature.  And hard determinists often accuse compatibilists of doing it with “free will.”  I’ve often accused naturalistic panpsychists of using an excessively deflated concept of consciousness.  And I could see illusionists accusing Schwitzgebel of doing it with phenomenal consciousness.

Which is to say, whether a concept is being inflated or deflated is a matter of perspective and definition.  And definitions are utterly relativist, which makes arguing about them unproductive.  Our only anchor seems to be common intuitions, but those are often inconsistent, often even in the same person.

I come back to the requirements for clarity.  For example, in the previous post, I didn’t say consciousness as a whole doesn’t exist, but was clear that I was talking about a specific version of it.  For me, that still seems like the best approach, but I recognize it will always be a judgment call.

Unless of course I’m missing something?


The prospects for a scientific understanding of consciousness

Michael Shermer has an article up at Scientific American asking if science will ever understand consciousness, free will, or God.

I contend that not only consciousness but also free will and God are mysterian problems—not because we are not yet smart enough to solve them but because they can never be solved, not even in principle, relating to how the concepts are conceived in language.

On consciousness in particular, I did a post a few years ago which, on the face of it, seems to take the opposite position.  However, in that post, I made clear that I wasn’t talking about the hard problem of consciousness, which is what Shermer addresses in his article.  Just to recap, the “hard problem of consciousness” was a phrase originally coined by philosopher David Chalmers, although it expressed a sentiment that has troubled philosophers for centuries.

Chalmers:

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does…The really hard problem of consciousness is the problem of experience. When we think and perceive there is a whir of information processing, but there is also a subjective aspect.

Broadly speaking, I agree with Shermer on the hard problem.  But my agreement comes with an important caveat.  In my view, it isn’t so much that the hard problem is hopelessly unsolvable, it’s that there is no scientific explanation which will be accepted by those who are troubled by it.  In truth, while I don’t think the hard problem has necessarily been solved, I think there are many plausible solutions to it.  The issue is that none of them are accepted by the people who talk about it.  In other words, for me, this seems more of a sociological problem than a metaphysical one.

What are these plausible solutions?  I’ve written about some of them, such as that experience is the brain constructing models of its environment and itself, that it is communication between the perceiving and reflexive centers of the brain and its movement planning centers, or that it’s a model of aspects of its processing as a feedback mechanism.

Usually when I’ve put these forward, I’m told that I’m missing the point.  One person told me I was talking about explanations of intelligence or cognition rather than consciousness.  But when I ask for elaboration, I generally get a repeat of language similar to Chalmers’ or that of other philosophers such as Thomas Nagel, Frank Jackson, or others with similar views.

The general sentiment seems to be that our phenomenological experience simply can’t come from processes in the brain.  This is a notion that has long struck me as a conceit, that our minds just can’t be another physical system in the universe.  It’s a privileging of the way we process information, an insistence that there must be something fundamentally special and different about it.  (Many people broaden the privilege to include non-human animals, but the conceit remains the same.)

It’s also a rejection of the lessons from Copernicus and Darwin, that we are part of nature, not something fundamentally above or separate from it.  Just as our old intuitions about Earth being the center of the universe, or about us being separate and apart from other animals, are not to be trusted, neither should our intuitions formed from introspection and self reflection, a source of information proven unreliable in many psychology studies, necessarily be taken as data that need to be explained.

Indeed, Chalmers himself has recently admitted to the existence of a separate problem from the hard one, what he calls “the meta-problem of consciousness”.  This is the question of why we think there is a hard problem.  I think it’s a crucial question, and I give Chalmers a lot of credit for exploring it, particularly since in my mind, the existence of the meta-problem and its most straightforward answers make the answer to the hard problem seem obvious: it’s an illusion, a false problem.

It implies that neither the hard problem, nor the version of consciousness it is concerned with, the one that remains once all the “easy” problems have been answered, exists.  They are apparitions arising from a data model we build in our brains, an internal model of how our minds work.  But the model, albeit adaptive for many everyday situations, is wrong when it comes to providing accurate information about the architecture of the mind and consciousness.

Incidentally, this isn’t because of any defect in the model.  It serves its purpose.  But it doesn’t have access to the lower level mechanisms, to the actual mechanics of the construction of experience.  This lack of access places an uncrossable gap between subjective experience and objective knowledge about the brain.  But there’s no reason to think this gap is ontological, just epistemic, that is, it’s not about what is, but what we can know, a limitation of the direct modeling a system can do on itself.

Once we’ve accounted for capabilities such as reflexive affects, perception (including a sense of self), attention, imagination, memory, emotional feeling, introspection, and perhaps a few others, essentially all the “easy” problems, we will have an accounting of consciousness.  To be sure, it won’t feel like we have an accounting, but then we don’t require other scientific theories to validate our intuitions.  (See quantum mechanics or general relativity.)  We shouldn’t require it for theories of consciousness.

This means that asking whether other animals or machines are conscious, as though consciousness is a quality they either have or don’t have,  is a somewhat meaningless question.  It’s really a question of how similar their information processing and primal drives are to ours.  In many ways, it’s a question of how human these other systems are, how much we should consider them subjects of moral worth.

Indeed, rewording the questions about animal and machine consciousness as questions about their humanness makes the answers somewhat obvious.  A chimpanzee obviously has much more humanness than a mouse, which itself has more than a fish.  And any organism with a brain currently has far more than any technological system, although that may change in time.

But none have the full package, because they’re not human.  We make a fundamental mistake when we project the full range of our experience on these other systems, when the truth is that while some have substantial overlaps and similarities with how we process information, none do it with the same calibration of senses or the combination of resolution, depth, and primal drives that we have.

So getting back to the original question, I think we can have a scientific understanding of consciousness, but only of the version that actually exists, the one that refers to the suite and hierarchy of capabilities that exist in the human brain.  The version which is supposed to exist outside of that, the version where “consciousness” is essentially a code word for  an immaterial soul, we will never have an understanding of, in the same manner we can’t have a scientific understanding of centaurs or unicorns, because they don’t exist.  The best we can do is study our perceptions of these things.

Unless of course, I’m missing something.  Am I being too hasty in dismissing the hard-problem version of consciousness?  If so, why?  What about subjective experience implies anything non-physical?
