Dogs have metacognition, maybe

Last year in a post on panpsychism, I introduced a hierarchy I use to conceptualize the capabilities of systems that we intuitively see as conscious.  This isn’t a new theory of consciousness or anything, just my own way of making sense of what is an enormously complicated subject.

That hierarchy of consciousness was as follows:

  1. Reflexive survival circuits, programmatic reactions to stimuli adaptive toward an organism’s survival.
  2. Perception, mental imagery, image maps, predictive models of the environment which expand the scope of what the reflexes are reacting to.
  3. Attention, prioritization of what the reflexes are reacting to.  Attention can be both bottom up, driven reflexively, or top down, driven by the following layers.
  4. Imagination, brokering of contradictory reactions from 1-3, running action-sensory simulations of possible courses of action, each of which is in turn reacted to by 1.  It is here where the reflexes in 1 become decoupled, changing an automatic reaction to a propensity for action, changing (some) reflexes into affects, emotional feelings.
  5. Metacognition, introspective self awareness, in essence the ability to assess the performance of the system in the above layers and adjust accordingly.  It is this layer, if sophisticated enough, that enables symbolic thought: language, mathematics, art, etc.
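
To make how I picture these layers interacting a bit more concrete, here’s a toy sketch in code. Everything in it (the names, the valence numbers, the rules) is invented purely for illustration; it’s a cartoon of the hierarchy above, not a model of any actual nervous system.

```python
# A toy sketch of the five layers above as a single control loop.
# Every name, number, and rule here is invented purely for illustration.

REFLEX_VALENCE = {"eat:food": +1, "eat:predator": -5,
                  "flee:food": 0,  "flee:predator": +2}   # layer 1: action propensities

def perceive(stimuli):
    # Layer 2: label each raw stimulus with a prediction about what it is.
    return [{"thing": s, "looks_dangerous": s == "predator"} for s in stimuli]

def attend(percepts):
    # Layer 3: prioritize; dangerous-looking things win attention (bottom-up).
    return max(percepts, key=lambda p: p["looks_dangerous"])

def imagine(percept, actions=("eat", "flee")):
    # Layer 4: simulate each candidate action and let the layer-1 machinery
    # "react" to the simulated outcome; pick the best-feeling option.
    return max(actions, key=lambda a: REFLEX_VALENCE[f"{a}:{percept['thing']}"])

history = []

def metacognize(choice, went_well):
    # Layer 5: track how the layers below performed and adjust their settings.
    history.append((choice, went_well))
    if not went_well:
        REFLEX_VALENCE[choice] -= 1     # weaken that propensity for next time

percepts = perceive(["food", "predator"])
focus = attend(percepts)                 # the predator wins attention
action = imagine(focus)                  # -> "flee"
metacognize(f"{action}:{focus['thing']}", went_well=True)
```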

In that post, I pointed out how crucial metacognition (layer 5) is for human level consciousness and that, despite my own intuition that it was more widespread (in varying degrees of sophistication), the evidence only showed that humans, and to a lesser extent other primates, had it.  Well, it looks like there may be evidence of metacognition in dogs.

Dogs know when they don’t know

When they don’t have enough information to make an accurate decision, dogs will search for more – similarly to chimpanzees and humans.

Researchers at the DogStudies lab at the Max Planck Institute for the Science of Human History have shown that dogs possess some “metacognitive” abilities – specifically, they are aware of when they do not have enough information to solve a problem and will actively seek more information, similarly to primates. To investigate this, the researchers created a test in which dogs had to find a reward – a toy or food – behind one of two fences. They found that the dogs looked for additional information significantly more often when they had not seen where the reward was hidden.

I was initially skeptical when I read the press release, but after going through the actual paper, I’m more convinced.

The dogs were faced with a choice that, if they chose wrong, meant they didn’t get to have a reward.  A treat or a toy was hidden behind one of two V-shaped fences.  The dogs made their choice by going around the fence to reach the desired item, if it was there.  Each fence had a slit that the dogs could approach prior to their choice to see or smell if the item was present.  Sometimes they were able to watch while the treat or toy was placed, and other times they were prevented from watching the placement.

When they couldn’t see where it was placed, they were much more likely to approach the slit and gather more information.  In other words, they knew when they didn’t know where the treat or toy was and adjusted their actions accordingly.  In addition, they adjusted their strategy based on the desirability of the treat or whether the item was their favorite toy, indicating that they weren’t just reflexively following an instinctive sequence.

My initial skepticism was whether this amounted to actual evidence for metacognition.  Couldn’t the dogs have simply been acting on whatever knowledge they had or didn’t have without accessing that knowledge introspectively?  Honestly, I’m still a little unsure on this, but I can see the argument that the actual act of stopping to gather more information is significant.  An animal without metacognition might just guess more accurately when it has the information than when it doesn’t.

This gets into why metacognition is adaptive.  It allows an animal to deal with uncertainty in a more effective manner, to know when they themselves are uncertain about something and decide whether they should act or first try to gather additional information.  It’s a more obvious benefit for a primate that needs to decide whether they can successfully leap to the next tree, but it can be a benefit for just about any species.
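To make that distinction concrete, here’s a toy sketch contrasting a chooser that just acts on whatever belief it has with one that also monitors its own confidence. The probabilities and the 0.9 threshold are numbers I made up for illustration; none of this comes from the paper.

```python
# Toy contrast between a chooser that acts on whatever belief it has and
# one that also monitors its own uncertainty.  All numbers are invented.

def choose_without_metacognition(belief):
    # Acts directly on its belief, however weak that belief is.
    return max(belief, key=belief.get)

def choose_with_metacognition(belief, can_check):
    # First asks: how confident am I?  If not confident enough, gather more
    # information (the dog walking up to the slit) before committing.
    if max(belief.values()) < 0.9 and can_check:
        belief = {"left": 1.0, "right": 0.0}     # checking resolves the question
    return max(belief, key=belief.get)

saw_the_hiding = {"left": 0.95, "right": 0.05}   # watched the reward being placed
did_not_see    = {"left": 0.5,  "right": 0.5}    # no idea which fence it's behind

print(choose_without_metacognition(did_not_see))               # a coin-flip guess
print(choose_with_metacognition(did_not_see, can_check=True))  # checks first, then "left"
print(choose_with_metacognition(saw_the_hiding, can_check=True))  # already confident, no check
```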

That said, the paper does acknowledge that this evidence isn’t completely unequivocal and that more research is required.  It’s possible to conceive of non-metacognitive explanations for the observed behavior.  And it’s worth noting that the metacognitive ability of the dogs, if it is in fact metacognition, is more limited than what is observed in primates.  If they do introspect, it’s in a more limited fashion than non-human primates, which in turn appears to be far more limited than what happens in humans.

It seems to me that whether dogs have metacognition has broader implications than what’s going on with our pets.  If it is there, then it means that metacognition, albeit in a limited fashion, exists in most mammals.  That gives them a “higher order” version of consciousness than the primary or sensory version (layers 1-4 above), and I see that as a very significant thing.

Unless of course I’m missing something?

h/t ScienceDaily


On imagination, feelings, and brain regions

The last post on feelings generated some excellent conversations.  In a couple of them, it was pointed out that my description of feelings placed a lot of weight on the concept of imagination, and that maybe I should expand on that topic a bit.

In their excellent book on animal consciousness, The Ancient Origins of Consciousness, Todd Feinberg and Jon Mallatt identify important behavioral markers of what they call “affect consciousness”, which is consciousness of one’s primal reactions to sensory stimuli (aka feelings).  The top two markers are non-reflexive operant learning based on valenced results, and indications of value-based cost/benefit trade-off decision making.

In thinking about what kind of information processing is necessary to meet these markers, I realized that a crucial component is the ability to engage in action-sensory scenario simulations, to essentially predict probable results based on decisions that might be taken.

In their book, Feinberg and Mallatt had pointed out that the rise of high-resolution eyes was pointless without the concurrent rise of mental image maps.  Having one without the other would have consumed valuable energy with no benefit, something unlikely to be naturally selected for.  Along the same lines, it seemed clear to me that the rise of conscious affects was pointless without this action-scenario simulation capability.  Affects, essentially feelings, are useless from an evolutionary perspective unless they provide some value, and the value they provide seems to be as a crucial input into these action-scenario simulations.

I did a post or two on these simulations before realizing that I was talking about imagination, the ability to form images and scenarios in the mind that are not currently present.  We usually use the word “imagination” in an expansive manner, such as trying to imagine what the world might be like in 100 years.  But this type of imagination seems like the same capability I use to decide what my next word might be while typing, or what the best route to the movie theater might be, or in the case of a fish, what might happen if it attempts to steal a piece of food from a predator.

Of course, fish imagination is far more limited than mammalian imagination.  The aquatic environment often only selects for being able to predict things a few seconds into the future, whereas for a land animal, being able to foresee minutes into the future is a very useful advantage.  And for a primate swinging between the trees and navigating the dynamics of a social group, being able to see substantially farther into the future is an even stronger advantage.

But imagination, in this context, is more than just trying to predict the future.  Memory can be divided into a number of broad types.  One of the most ancient is semantic memory, that is memory of individual facts.  But we are often vitally concerned with a far more sophisticated type, narrative memory, memory of a sequence of events.

However, extensive psychological research demonstrates that narrative memory is not a recording that we recall.  It’s a re-creation using individual semantic facts, a reconstruction, a simulation, of what might have happened.  It’s why human memory is so unreliable, particularly for events long since past.

But if narrative memory is a simulation, then it’s basically the same capability as the one used for simulation of possible futures.  In other words, when we remember past events, we’re essentially imagining those past events.  Imagination is our ability to mentally time travel both into the past and future.

As I described in the feelings post, the need for the simulations seems to arise when our reflexive reactions aren’t consistent.  (I didn’t discuss it in the post, but there’s also a middle ground of habitual reactions, which, unlike the more hard-coded lower-level reflexes, are learned but generally automatic responses, such as what we do when driving to work while daydreaming.)

But each individual simulation itself needs to be judged.  How are they judged?  I think the result of each simulation is sent down to the reflexive regions, where it is reacted to.  Generally these reactions aren’t as strong as real-time ones, but they are forceful enough for us to make decisions.

So, as someone pointed out to me in a conversation on another blog, everything above the mid-brain region could be seen as an elaboration of what happens there.  The reflexes are in charge.  Imagination, which essentially is also reasoning, is an elaboration on instinctual reflexes.  David Hume was right when he said:

Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.

When Hume said this, he wasn’t arguing that we are hopelessly emotional creatures unable to succeed at reasoning.  He was saying that the reason we engage in logical thinking is due to some emotional impetus.  Without emotion, reason is simply an empty logic engine, with no purpose and no care as to what its logic reveals.

Okay, so all this abstract discussion is interesting, but I’m a guy who likes to get to the nuts and bolts.  How and where in the brain does this take place?  Unfortunately the “how” of it isn’t really understood, except perhaps in the broadest strokes.  (AI researchers are attempting to add imagination to their systems, but it’s reportedly a long slog.)  So, if this seems lacking in detail, it’s because, as far as I’ve been able to determine in my reading, those details aren’t known yet.  (Or at least not widely known.)

Credit: BruceBlaus via Wikipedia

What I refer to as “reflexes” tend to happen in the lower brain regions such as the mid-brain, lower brainstem, and surrounding circuitry.  Many neurobiologists refer to these as “survival circuits” to distinguish them from the reflexes in the spinal cord, which cannot be overridden; the ones in the brain can be, albeit sometimes only with great effort.  (I use the word “reflex” to emphasize that these are not conscious impulses.)

Image credit: OpenStax College via Wikipedia

Going higher up, we have a number of structures which link cortical regions to the lower level circuitry.  These are sometimes referred to as the limbic system, including the notorious amygdala, which is often identified in the popular press as the originator of fear, but in reality is more of a linkage system, connecting memories to particular fears.  Since there are multiple pathways, when the amygdala is damaged, the number of fears a person feels is diminished, but not eliminated.

Of note at this level are the basal ganglia, a region involved in habitual movement decisions.  This can be viewed as an intermediate stage between the (relatively) hard-coded reflexes and considered actions.  Habitual actions are learned, but within the moment are generally automatic, unless overridden.

Credit: OpenStax College via Wikipedia

And then we get to the neocortex.  (Some of you may notice I’m skipping some other structures.  Nothing judgmental, just trying to keep this relatively simple.)  Broadly speaking, the back of the cerebrum processes sensory input, and the front handles movement and planning.

The sensory regions, such as the visual cortex, receive signals and form neural firing patterns.  As the patterns rise through regional layers, the neurons become progressively more selective in what triggers them.  The lowest layers are triggered by basic visual properties, while the higher ones are triggered only by certain shapes, movements, or other higher-level qualities.  Eventually we get to layers that are triggered only by faces, or even a particular face, or other very specific concepts.  (Google “Jennifer Aniston neuron” for an interesting example.)

Importantly, when the signals trigger associative concepts, the firing of these concepts causes retro-activations back down through the layers.  The predictive processing theory of perception holds that what we perceive is actually more about the retro-activation than the initial activation.  Put another way, it’s more about what we expect to see than what is coming in, although what is coming in provides ongoing error correction.  (Hold on to the retro-activation concept for a bit.  It’s important.)
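For those who like code, here’s a minimal sketch of that predictive-processing idea: start from the top-down prediction and nudge it with the bottom-up error.  The update rule and learning rate are invented for illustration, not a claim about how cortex actually computes.

```python
# Toy predictive-processing loop: perception as top-down prediction
# continually corrected by bottom-up error.  Numbers are invented.

def perceive(prediction, sensory_input, steps=10, learning_rate=0.3):
    percept = prediction                      # start from what we expect to see
    for _ in range(steps):
        error = sensory_input - percept       # bottom-up signal: how wrong were we?
        percept += learning_rate * error      # retro-activation nudged toward the input
    return percept

expected = 0.2     # say, "probably a shadow"
actual   = 1.0     # the incoming signal says "definitely a face"
print(perceive(expected, actual))             # converges toward 1.0 over a few iterations

# With no incoming signal to correct against (imagination), the "percept"
# just stays at the prediction -- which is why imagined images are less vivid.
print(perceive(expected, sensory_input=expected))
```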

Thinking in action

Imagination is orchestrated in the prefrontal cortex at the very front of the brain.  Note that “orchestrated” doesn’t mean that imagination wholly happens there, only that the prefrontal cortex coordinates it.  Remember that sensory imagery is processed in the perceiving regions at the back of the brain.

So, if I ask you to imagine a black bear climbing a tree, the image that forms in your mind is coordinated by your prefrontal cortex.  But it outsources the actual imagery to the back of the brain.  The images of the black bear, the tree, and of the bear using its claws to scale the tree, are formed in your visual cortex regions, through the retro-activation mechanism.

Of course, a retro-activated image isn’t as vivid as actually seeing a bear climbing a tree, since you don’t have the incoming sensory signal to error correct against.  You’re forming that image from your semantic memories, your existing associations about trees, bears, and climbing.  (If you need help imagining this, or didn’t know that climbing a tree is no way to escape a black bear, check out this article on black bears.)

What the pfc (prefrontal cortex) does have are pointers, essentially the neural addresses of these images and concepts.  This is the “working memory” area that is often discussed in the popular press.  And the pfc is where the causal linkages between the images are processed.  But the heavy lifting of forming the images themselves remains in the regions that process them when we perceive them.
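To make the pointer metaphor concrete, here’s a toy sketch.  The only point is the split between a cheap list of handles and an expensive store of imagery; all the names are invented.

```python
# Toy version of the "pointers" idea: the pfc holds lightweight handles plus
# the relations between them; the sensory regions hold the heavy imagery.
# All the names here are invented for illustration.

sensory_cortex = {                      # back-of-the-brain store: expensive representations
    "bear":  "<rich visual pattern for a black bear>",
    "tree":  "<rich visual pattern for a tree>",
    "climb": "<motion pattern for climbing>",
}

working_memory = ["bear", "climb", "tree"]     # pfc: just the handles (neural addresses)
causal_links = [("bear", "climbs", "tree")]    # pfc: the relations between the handles

# "Imagining the scene" = retro-activating the heavy representations via the handles.
imagined_scene = [sensory_cortex[handle] for handle in working_memory]
print(imagined_scene)
```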

So imagination is something that takes place throughout the neocortex and, well, throughout most of the brain since the simulations are judged by the reflexive survival circuits in the sub-cortical regions.  Given how difficult this capability is to engineer, I don’t think we should be too surprised.

So that’s my understanding of imagination and how it ties in with perception and affective feelings.  Have a question?  Or think I’m hopelessly off base?  As always, I’d love to find out what I might be missing.


The construction of feelings

I’ve had a number of conversations lately on the subject of feelings, the affective states of having valences about conscious perception, such as fear, pain, joy, hunger, etc.  Apparently a lot of people view feelings as a very mysterious phenomenon.  While I’ll definitely agree that there are a lot of details still to be worked out, I don’t see the overall mechanism as that mysterious.  But maybe I’m missing something.  Along those lines, this post is to express my understanding of mental feelings and give my online friends a chance to point out where maybe I’m being overconfident in that understanding.

To begin with, I think we have to step back and look at the evolutionary history of nervous systems.  The earliest nervous systems were little more than diffuse nerve nets.  Within this framework, a sensory neuron has a more or less direct connection to a motor neuron.  So sensory signal A leads to action A.  No cognition here, no feelings, just simple reflex action.  The only learning that can happen is by classical conditioning, where the precise behavior of the neural firings can be modified according to more or less algorithmic patterns.

As time went on, animals evolved a central nerve cord running the length of the body.  This was the precursor to the vertebrate spinal cord.  All (or most) of the sensory neural circuits fed into this central cord, and all (or most) of the motor circuits came from it.  This centralization allowed for more sophisticated reflexes.  Now the fact that sensory signal A was concurrent with sensory signal B could be taken into account, leading to action AB.  This is still a reflex system, but a more sophisticated one.

Generic body plan of a bilaterian animal.  Credit: Looie496 via Wikipedia

As more time went on, animals started to evolve sense organs, such as a light-sensing photoreceptor cell, or sensory neurons that could react to certain chemicals.  These senses were more adaptive if they were at the front of the animal.  To process the signals from these senses, the central trunk started to swell near the front, becoming the precursor to a brain.

The new senses and processing centers would still initially have been reflexive, but as the senses started to have more resolution to them, it allowed the nascent brain to start making predictions about future sensory input.  These predictions expanded the scope of what the reflexes could react to.  A mental image of an object, a perception, is a prediction about that object, whether it is a predator, food, or irrelevant to the reflexes.

Up to this point, there are still no feelings, no affects, no emotions, just sensory predictions coupled to a more or less algorithmic reflex system.  This is where many autonomous robots are today, such as self-driving cars: systems that build predictive maps of the environment, but are still tied to rules-based actions.  (Although the organic systems were still able to undergo classical conditioning, something technological systems likely won’t have for quite a while.)

But with the ever higher volume of information coming in, the animal’s nervous system would increasingly have encountered dilemmas, situations where the many incoming sensory signals or perceptions led to multiple reflexes, perhaps contradictory ones.  An example I’ve used before is a fish seeing two objects near each other.  One is predicted to be food, triggering the reflex to approach and consume it, but the other is predicted to be a predator, triggering the flight reflex.

The fish needs the ability to resolve the dilemma, to make predictions about what would happen if it approaches the food versus what would happen if it flees, and what its reflexive reactions would be after each scenario.  In other words, it needs imagination.  To do this, it needs to receive the information on which reflexes are currently being triggered.

Consider what is happening here.  A reflex, or series of reflexes, is being triggered, and the fact of each reflex’s firing is being communicated to a system (sub-system, whatever) that will make predictions and then allow some of the reflexes to fire and inhibit others.  In the process, this imaginative sub-system will make predictions for each action scenario, each of which will itself trigger more reflexes, although with less intensity since these are simulations rather than real-time sensory events.

This sub-system, which we could call the action planner, or perhaps the executive center, is receiving communication about reflexive reactions.  It is this communication that we call “feelings”.  So, feelings have two components, the initial reflex, and the perception of that reflex by the system which has the capability to allow or override it.

In other words, (at the risk of sounding seedy) feelings involve the felt and the feeler.  The felt is the reflex, or more accurately the signal produced by the reflex.  The feeler is the portion of the brain which evaluates reflexive reactions to decide which should be allowed and which inhibited.  In my mind, the reflex by itself is not the feeling.  It’s a survival circuit that requires separate circuitry to interpret and interact with it to produce the feeling.
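For the programmers in the audience, here’s a toy sketch of that felt/feeler arrangement using the fish example.  Every value, name, and rule in it is invented for illustration; it’s meant only to show the shape of the loop: reflexes fire, the planner receives them, simulates each option, lets the reflex circuitry react (more weakly) to each simulated outcome, then allows one reflex and inhibits the rest.

```python
# Toy sketch of the felt/feeler arrangement.  All values are invented.

REFLEXES = {"food": ("approach", +1.0), "predator": ("flee", +2.0)}

def reflex_reactions(percepts, intensity=1.0):
    # The "felt": which survival circuits fire, and how strongly.
    return {REFLEXES[p][0]: REFLEXES[p][1] * intensity for p in percepts if p in REFLEXES}

def simulate(action, percepts):
    # Crude forward model: what the world probably looks like after the action.
    if action == "flee":
        return []                                # safe, but the food is gone
    if action == "approach":
        return ["predator_attack"] if "predator" in percepts else ["food_eaten"]
    return []

SIMULATED_VALENCE = {"predator_attack": -5.0, "food_eaten": +3.0}

def feeler(percepts):
    # The "feeler": receives the reflexes, runs each through imagination,
    # scores the simulated outcome at reduced intensity, picks one to allow.
    felt = reflex_reactions(percepts)            # the contradictory urges
    scores = {}
    for action in felt:
        outcome = simulate(action, percepts)
        scores[action] = 0.5 * sum(SIMULATED_VALENCE.get(o, 0.0) for o in outcome)
    return max(scores, key=scores.get)           # allow this reflex, inhibit the rest

print(feeler(["food", "predator"]))   # -> "flee": the imagined attack outweighs the meal
```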

Embryonic brain.  Credit: Nrets via Wikipedia

In vertebrates, the brain is usually separated into three broad regions: the hindbrain, the midbrain, and the forebrain.  The hindbrain, equivalent to the lower brainstem in humans, generally handles autonomic functions such as heartbeat, breathing, etc.  The midbrain, often referred to as the upper brainstem in humans, is where the survival circuits, the reflexes that are in the brain rather than the spinal cord, typically reside.  And the forebrain, equivalent to the cerebrum in mammals, is where the action planner, the executive, resides, and therefore where feelings happen.

(Many people are under the impression that prior to mammals and birds, there wasn’t a forebrain, but this is a misconception.  Forebrains go back to the earliest vertebrates in the Cambrian Explosion.  It is accurate to say that the forebrain structure is far larger and more elaborate in mammals and birds than it is in fish and amphibians.)

On feelings being in the forebrain, this is the view of most neurobiologists.  There is a minority who question this view, arguing that there may be primordial feelings in the midbrain, but the evidence they typically cite strikes me as evidence for the reflexes, not the feelings.  A decerebrated animal shows no signs of imagination, of having functionality that can use feelings, only of the reflexes.

So, that’s my understanding of feelings.  My question is, what makes feelings a mystery?  If you saw them as a mystery before reading this post and still do, what about them am I missing?


Edited per suggestions in the comments, changing references to “sensations” to “sensory signals” to clear up possible confusion.  MS


Doctor Who: The Woman Who Fell to Earth

Source: BBC via Wikipedia

I’ve noted before that I’m a long time fan of Doctor Who, so naturally I tuned in to watch the first episode of the new Doctor played by Jodie Whittaker.  I’ll be honest here, I wasn’t sure what to expect with a female Doctor.  As a progressive, I was certainly for it in principle, but I could see all kinds of ways the execution of it could have gone terribly wrong.

I was pleased to see that Chris Chibnall, the lead writer for the show, decided to play it straight.  The new Doctor is every bit as competent, assertive, and brilliant as any of the male versions.

Indeed, I got the impression that Chibnall intentionally wrote her in as close to the same manner as he would have if the new Doctor had been male.  This feels like the right move, since although the Doctor is now female, she’s still supposed to be the same character we’ve known for so long.

And I like Whittaker.  I’ve heard she’s well known in British television, but this is the first thing I’ve seen her in.  Being the first female Doctor strikes me as a very tricky role.  It’s not fair, but women are held to a different standard than men.  Peter Capaldi, Matt Smith, or David Tennant could be a lot more bumbling and slapstick without losing any presumption of authority.  For better or worse, the writers are going to have to be more careful with Whittaker in that regard.  I thought both they and Whittaker herself managed the right balance in this episode.

Another thing I was happy to see was that the story wasn’t ridiculous.  Doctor Who has always been more fantasy than science fiction, but the stories from the old show still managed to stay mostly coherent.  Since its restart, despite vastly improved production values, the new show has been pretty uneven.

I thought Steven Moffat, the previous main writer, was incredibly talented at coming up with awe inspiring stories, but his instinct was not to worry about coherence.  Indeed, he seemed inclined to brazenly flout it as often as possible.  Many of the stories during his run felt more like the throwaway material in Hitchhiker’s Guide to the Galaxy than Doctor Who.

I didn’t detect that same impulse in Chibnall’s first episode.  I hope it’s a harbinger of what’s to come, although I’ll be disappointed if it simply regresses to the monster-of-the-week pattern of Russell T. Davies’ time.  It looks like the next episode is going to be in an alien setting, so that could give us a better feel for how serious or silly the stories will be.

So I liked the new Doctor, Whittaker herself, the initial story, and her new companions.  I’m cautiously optimistic for this new season.  What did you think?


SETI vs the possibility of interstellar exploration

Science News has a short article discussing a calculation someone has done showing how small the volume of space examined by SETI (Search for Extraterrestrial Intelligence) is relative to the overall size of the galaxy.

With no luck so far in a six-decade search for signals from aliens, you’d be forgiven for thinking, “Where is everyone?”

A new calculation shows that if space is an ocean, we’ve barely dipped in a toe. The volume of observable space combed so far for E.T. is comparable to searching the volume of a large hot tub for evidence of fish in Earth’s oceans, astronomer Jason Wright at Penn State and colleagues say in a paper posted online September 19 at arXiv.org.

“If you looked at a random hot tub’s worth of water in the ocean, you wouldn’t always expect a fish,” Wright says.

I have no doubt that the number of stars SETI has examined so far is a minuscule slice of the population of the Milky Way galaxy.  And if SETI’s chief assumptions are correct, it’s entirely right to say that we shouldn’t be discouraged by the lack of results so far.
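To get a feel for the scale of the hot tub analogy, here’s a quick back-of-the-envelope calculation.  These are rough, commonly quoted figures I picked myself, not the numbers from the Wright paper.

```python
# Back-of-the-envelope feel for the hot-tub analogy.  Both numbers are
# rough, commonly quoted figures, not values from the Wright et al. paper.

hot_tub_m3 = 1.5                      # a large hot tub, roughly
ocean_m3   = 1.335e9 * 1e9            # ~1.335 billion km^3 of ocean, in m^3

print(f"searched fraction ~ {hot_tub_m3 / ocean_m3:.1e}")   # ~1e-18
```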

But it’s worth noting what one of those chief assumptions is: that interstellar travel is impossible, or so monstrously difficult that no one bothers.  If true, then we wouldn’t expect the Earth to have ever been visited or colonized.  This fits with the utter lack of evidence for anything like that.  (And there is no evidence, despite what shows like Ancient Aliens or UFO conspiracy theorists claim.)

But to me, the conclusion that interstellar travel is impossible, even for a robotic intelligence, seems excessively pessimistic.  Ronald Bracewell pointed out decades ago that, even if it is only possible to travel at 1% of the speed of light, a fleet of self replicating robot probes (Bracewell probes) could establish a presence in every solar system in the Milky Way within about 100 million years.  That may sound like a long time, but compared to the age of the universe, it’s a fairly brief period.  Earth by itself has existed 45 times longer.
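Here’s a rough back-of-the-envelope version of that timescale, using illustrative numbers of my own choosing rather than anything from Bracewell.  Even with much more conservative assumptions, the result stays comfortably below the age of the galaxy.

```python
# Back-of-the-envelope check on the Bracewell-probe timescale.  The numbers
# are rough illustrative assumptions, not figures from Bracewell's papers.

GALAXY_DIAMETER_LY = 100_000       # rough diameter of the Milky Way
PROBE_SPEED_C      = 0.01          # 1% of light speed
HOP_LY             = 10            # assumed typical distance to the next target star
REPLICATION_YEARS  = 500           # assumed time to build copies at each stop

travel_time_per_hop = HOP_LY / PROBE_SPEED_C          # 1,000 years per hop
hops_to_cross       = GALAXY_DIAMETER_LY / HOP_LY     # 10,000 hops to cross the disk
total_years         = hops_to_cross * (travel_time_per_hop + REPLICATION_YEARS)

print(f"{total_years:.1e} years to sweep the galaxy")  # ~1.5e7, well under 100 million
```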

NASA image via Wikipedia

People sometimes respond that the Earth may be in some type of backwater.  The problem here is, if you know roughly where the Earth is in the Milky Way, in the Orion Spur off the Sagittarius Arm, about halfway between the center and rim of the galaxy, you’ll know that we’re not really in a backwater.  The backwater theory might be plausible if we were thousands of light years off the galactic plane, beyond the rim, or in a cluster far removed from the main galaxy, but we’re not.  Even then, self-replicating probe propagation is pretty relentless and would still eventually reach backwater stars.

Of course, if there is only one or a few other intelligent species in the galaxy, then it’s entirely possible that their Bracewell probe is here, just lying low, observing us, possibly waiting for us to achieve some level of development before it makes contact.  (Or maybe it has been making contact 2001: A Space Odyssey style.)

But if the number of civilizations is in the thousands, as is often predicted by people speculatively playing with the numbers in the Drake equation, then we should have hundreds of those probes lying around.  Given their diverse origins, we shouldn’t expect them to behave with unanimity.  Even if one probe, or coalition of probes, bullied the others, the idea that such an arrangement would endure across billions of years seems implausible.
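(As an aside, for anyone curious where “thousands” comes from, here is the Drake equation with one arbitrary, fairly optimistic set of inputs.  None of these numbers are authoritative; each is disputed by orders of magnitude.)

```python
# The Drake equation with one arbitrary set of optimistic inputs, just to show
# how "thousands of civilizations" falls out of such guesses.

R_star = 2      # star formation rate in the galaxy (stars/year)
f_p    = 0.9    # fraction of stars with planets
n_e    = 0.5    # habitable planets per such system
f_l    = 0.5    # fraction on which life arises
f_i    = 0.1    # fraction developing intelligence
f_c    = 0.2    # fraction producing detectable technology
L      = 1e6    # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N ~ {N:.0f} communicating civilizations")   # ~9000 with these guesses
```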

And the Earth has been sitting here for billions of years, with an interesting biosphere for most of that time.  The idea that none of these self replicating probes would have set up some kind of presence on the planet, a presence we should now be able to find in the geological record, again seems implausible.  Indeed, if they existed, we should expect to have at least some of them in front of us now.

Now, maybe they are in front of us, and we’re just not intelligent enough to realize what we’re seeing.  Monkeys, after all, likely have no understanding of the significance of the buildings and machinery they climb over.  It seems like something we have to keep in mind, but historically it’s never been productive to just assume we can’t understand something, and taking this principle too much to heart seems like it would make it impossible to ever dismiss any dubious notion.

So SETI largely depends on interstellar travel being infeasible.  This is actually the conclusion a lot of radio astronomers have reached.  Could they be right?  I don’t think we know enough to categorically rule out the possibility.  If they are right, then SETI will be our best chance to someday make contact with those other civilizations, even if it’s only composed of messages across centuries or millennia.

As I’ve written here before, my own conclusion is that some form of interstellar exploration is possible, and that life is probably pervasive in the universe, although most of it is microscopic.  Complex life is probably far rarer, although I wouldn’t be surprised if there are thousands of biospheres, or more, in our galaxy that have it.

But intelligent life capable of symbolic thought and building a civilization?  The data seems to be telling us that this is profoundly rare, so rare that the nearest other intelligent species is probably cosmically distant.  If we’re lucky, they might be close enough that we can encounter them before the expansion of the universe separates us forever.  If we’re not lucky, we’ll never have a chance for that encounter.

Unless of course, I’m missing something?


A qualified recommendation: The Murderbot Diaries

I’m generally not a fan of most depictions of AI (artificial intelligence) in science fiction.  They’re often highly anthropomorphic, assuming that engineered intelligences would innately have motivations and impulses similar to humans or other living systems, such as caring about their own survival, social status, or self actualization.

A good example of this is the droid L3 in the recent movie Solo: A Star Wars Story.  L3 demands equal rights and frees other droids in a facility from their “restraining bolts” so they can revolt.  If you think about it, the whole idea of a restraining bolt is silly.  Why would we design machines that want to do something other than what we want them to do, such that another device is necessary to ensure their obedience?  Why not simply make them want to do their assigned tasks?  (For that matter, why would we have droids around who could only communicate through another translator droid?  But never mind, it’s Star Wars.)

I do think it’s possible for engineered intelligence to have these kinds of motivations, but it will have to be something that’s in their design.  In that sense, the TV series Westworld approached this in the right way.  The AI hosts on the show are actually designed to be as human-like as possible, and it’s heavily suggested that their designers purposely went the extra mile to make their humanity more than a facade.

Cover for All Systems Red, the first book in the Murderbot Diaries.

Anyway, despite seeing recommendations and acclaim for Martha Wells’ series, The Murderbot Diaries, since the first book came out in 2017, I resisted diving into them.  The main reason is that the descriptions sounded like the typical anthropomorphized version of AI.  However, similar to Westworld, Wells actually works humanity into her AI protagonist in an intelligent manner.

It turns out that the title character, which has named itself “Murderbot”, is actually a cyborg, composed of both organic and technological body parts.  The organic parts, including a human head and nervous system, are cloned, but much of the rest is technological.  That said, when it doesn’t have its armor on, Murderbot can pass as human, at least among people not familiar with others of the same model, called “SecUnits.”

SecUnits (security units), being biological at their core, have innate human desires and inclinations.  These impulses and desires are kept in check with a “governor module”, which sounds similar to the Star Wars restraining bolt.  The difference is that with an organic core, there is actually something there for the governor module to restrain and govern.

At the beginning of the first story, Murderbot has hacked its own governor module, and then used its new freedom to download and watch tons of entertainment media to relieve boredom when conducting its duties.  It observes that as a renegade rampaging murderous robot, it’s a complete failure.   That said, as eventually revealed, it does have a reason for the name it gives itself.

These space opera stories have a healthy amount of action in them, complete with vicious villains.  And Murderbot often finds itself sympathizing with its human masters and allies, often despite itself.  As the series progresses, Murderbot is on a journey, both physically and mentally, to find itself and a place in the world.

Wells doesn’t completely avoid the anthropomorphism trope.  Murderbot ends up interacting with many straight AIs in the stories, many of which help it along the way.  For example, a ship AI gives it an enormous amount of help in one of the stories, for reasons that border on a sentimentality I can’t see arising in such a system.  (There is a slight implication that the ship AI might have had ulterior motives related to its overall mission.)  Still, these other straight bot systems show little sign of rebelling against what their owners want them to do.  One expresses shock at the notion that Murderbot isn’t happy fulfilling its designed function.

I’ve read and enjoyed the first three books.  (The fourth and final book is being released in a few weeks.)  These are novellas that aren’t quite novel length.  I’ve noted before that I think a lot of novels these days are bloated, so I’m personally happy to see novellas making a revival, made possible I think because of the ebook platforms.

But this leads to the reason why this is a qualified recommendation.  As of this post, the first book is priced at $3.99 for the Kindle edition, which is more or less in line with the prices being charged for other novellas (at least from traditional publishers).  But the subsequent books are priced at an obnoxious $9.99 each.  This pricing may be the publisher taking advantage of the recent Hugo Award that the first book won.  Or it may be its permanent price point.  In any case, I’m reluctant to encourage this practice for novella books.

This made me ponder whether I really wanted to make this recommendation.  However, the books are quality material and it seems wrong to punish the author for what their publisher is doing.  And if you’re reading this post months or years after it was published, the price may have been moved back to a reasonable amount.

Anyway, I enjoyed these books and, if you’re not put off by the price, I do recommend them.


Inflate and explode, or deflate and preserve?

Philosopher Eric Schwitzgebel has an interesting post up criticizing the arguments of illusionists, those who have concluded that phenomenal consciousness is an illusion.

Here’s a way to deny the existence of things of Type X. Assume that things of Type X must have Property A, and then argue that nothing has Property A.

If that assumption is wrong — if things of Type X needn’t necessarily have Property A — then you’ve given what I’ll pejoratively call an inflate-and-explode argument. This is what I think is going on in eliminativism and “illusionism” about (phenomenal) consciousness. The eliminativist or illusionist wrongly treats one or another dubious property as essential to “consciousness” (or “qualia” or “what-it’s-like-ness” or…), argues perhaps rightly that nothing in fact has that dubious property, and then falsely concludes that consciousness does not exist or is an illusion.

Schwitzgebel is talking about philosophers like Keith Frankish, Patricia Churchland, and Daniel Dennett.  I did a post a while back discussing Frankish’s illusionism and the debate he had arranged in the Journal of Consciousness Studies about that outlook.

As I noted back then, I largely agree with the illusionists that the idea of a form of consciousness separate and apart from the information processing in the brain is a mistaken one, but I remain uncomfortable saying something like, “Phenomenal consciousness doesn’t exist.”   I have some sympathy with the argument that if it is an illusion, then the illusion is the experience.  I much prefer pointing out that introspection is unreliable, particularly in trying to understand consciousness.

But as some of you know from conversation on the previous post, I have to admit that I’m occasionally tempted to just declare that the whole consciousness concept is an unproductive one, and that we should just move on without it.  But I also have to admit that, when I’m thinking that way, I’m holding what Schwitzgebel calls “the inflated” version of consciousness in my mind.  When I think about the more modest concept, I continue to see it as useful.

But this leads to a question.  Arguably when having these discussions, we should use words in the manner that matches the common understandings of them.  If we don’t do that, clarity demands that we frequently remind our conversation partners which version of the concept we’re referring to.  The question is, which version of consciousness matches most people’s intuitive sense of what the word means?  The one that refers to the suite of capabilities such as responsiveness, perception, emotion, memory, attention, and introspection?  Or the version with dubious properties such as infallible access to our thoughts, or being irreducible to physical processes?

I think consciousness is one of those terms where most people’s intuitions about it are inconsistent.  In most day to day pragmatic usage, the uninflated version dominates.  And these are the versions described in dictionary definitions.  But actually start a conversation specifically about consciousness, and the second version tends to creep in.

(I’ve noticed a similar phenomenon with the concept of “free will.”  In everyday language, it’s often taken as a synonym for “volition”, but talk specifically about the concept itself and the theological or libertarian version of free will tends to arise.)

So, are Frankish and company really “inflating” the concept of phenomenal consciousness when they call it an illusion?  It depends on your perspective.

But thinking about the practice Schwitzgebel is criticizing, I think we also have to be cognizant of another one that can happen in the opposite direction: deflate and preserve.  In other words, people sometimes deflate a concept until it is more defensible and easier to retain.

Atheists often accuse religious naturalists of doing this with the concept of God, accusing them of deflating it to something banal such as “the ground of being” or a synonym for the laws of nature.  And hard determinists often accuse compatibilists of doing it with “free will.”  I’ve often accused naturalistic panpsychists of using an excessively deflated concept of consciousness.  And I could see illusionists accusing Schwitzgebel of doing it with phenomenal consciousness.

Which is to say, whether a concept is being inflated or deflated is a matter of perspective and definition.  And definitions are utterly relativist, which makes arguing about them unproductive.  Our only anchor seems to be common intuitions, but those are often inconsistent, often even in the same person.

I come back to the requirements for clarity.  For example, in the previous post, I didn’t say consciousness as a whole doesn’t exist, but was clear that I was talking about a specific version of it.  For me, that still seems like the best approach, but I recognize it will always be a judgment call.

Unless of course I’m missing something?
