Does information require conscious interpretation to be information?

Peter Kassan has an article at Skeptic Magazine which sets out to disprove the simulation hypothesis, the idea that we’re all living in a computer simulation.

I personally find arguing about the simulation hypothesis unproductive.  Short of the simulation owner deciding to jump in and contact us, we can’t prove the hypothesis.  Even if the simulation has flaws we could find and perceive, we can never know whether we’re looking at an actual flaw or just something we don’t understand.  For example, is quantum wave-particle duality a flaw in the simulation, or just a puzzling aspect of nature?

Nor can we disprove the simulation.  There’s simply no way to prove to a determined skeptic that the world is real.  And if we are in a simulation, it appears to exact unpleasant consequences for not taking it seriously.  It effectively is our reality.  And we have little choice but to play the game.

But this post isn’t about the simulation hypothesis.  It’s about the central argument Kassan makes against it, that there can’t be a consciousness inside a computer system.  The argument Kassan uses to make this case is one I’m increasingly encountering in online conversations, involving assertions about the nature of information.

ASCII code for “Wikipedia”
Image credit: User:spinningspark at Wikipedia

The argument goes something like this.  Information is only information because we interpret it to be information.  With no one to do that interpretation, the patterns we refer to as information are just patterns, structures, configurations, with no inherent meaning.  Consequently, the physical machinations of computers are information processing only because of our interpretations of what we put into them, what they do with it, and what they produce.  However, brains do their work regardless of the interpretation, so they can’t be processing information, and information processing can’t lead to consciousness.
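To make the pattern/interpretation distinction concrete, here’s a quick Python sketch (my illustration, not Kassan’s) of the ASCII encoding shown in the image above: the string “Wikipedia” reduced to raw bits, and the same bits decoded back by a system that treats them as ASCII.

```python
# Toy illustration: "Wikipedia" as raw bits.  Whether these bits count as
# information or just a pattern depends on whether some system decodes them.
text = "Wikipedia"
bits = " ".join(f"{byte:08b}" for byte in text.encode("ascii"))
print(bits)  # 01010111 01101001 01101011 ...

# The same bytes, decoded by a system that treats them as ASCII:
decoded = bytes(int(b, 2) for b in bits.split()).decode("ascii")
print(decoded)  # Wikipedia
```

The bit string in the middle is exactly the kind of pattern the argument says is meaningless without an interpreter; the question is what kinds of systems can play the interpreter role.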

To be fair, this brief summary of the argument may not do it justice.  If you want to see the case made by someone who buys it, I recommend reading Kassan’s piece.

That said, I think the argument fails for at least two reasons.

The first is that it depends on a particularly narrow conception of information.  There are numerous definitions of information out there.  But for purposes of this post, we don’t need to settle on any one specific definition.  We just need to discuss an implied aspect of all of them, that information must be for something.

The people making the argument are right about one thing.  Information, in and of itself, is not inherently information.  To be information, something must make use of it.  But the assertion is that this role of making use of information can only be fulfilled by a conscious agent.  No conscious agent involved, then no information.  The problem is that this ignores the non-conscious systems that make use of information.

For example, if the long molecules that are DNA chromosomes somehow spontaneously formed by themselves somewhere, there would be nothing about them that made them information.  But when DNA is in the nucleus of a cell, the proteins that surround it create mRNA molecules based on sections of the DNA’s configuration.  These mRNA molecules physically flow to ribosomes, protein factories that assemble amino acids into specific proteins based on the mRNA’s configuration.  Arguably it’s the systems in the cell that make DNA into genetic information on how to construct its molecular machinery.
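The transcription and translation steps can be caricatured in code.  This is a deliberately toy sketch of my own: the complement map and codon table below are tiny illustrative subsets, not the real cellular machinery.

```python
# A simplified sketch of transcription/translation: the DNA pattern only
# functions as "information" because machinery (here, crude dictionaries)
# reads it.  The codon table is a tiny illustrative subset of the real one.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}  # DNA -> mRNA bases
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna):
    """Build an mRNA strand from a DNA template strand."""
    return "".join(COMPLEMENT[base] for base in dna)

def translate(mrna):
    """Assemble amino acids codon by codon, as a ribosome would."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3])
        if amino == "STOP":
            break
        if amino:
            protein.append(amino)
    return protein

dna = "TACAAACCGATT"            # a made-up template strand
mrna = transcribe(dna)          # "AUGUUUGGCUAA"
print(mrna, translate(mrna))    # AUGUUUGGCUAA ['Met', 'Phe', 'Gly']
```

The point of the sketch is that nothing in the `dna` string is information by itself; it becomes genetic information only in the presence of the reading machinery.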

Another example is a particular type of molecule that is allowed entry through the cell’s membrane.  There’s nothing about that molecule in and of itself that makes it information.  But if the chemical properties of the molecule cause the cell to change its development or behavior, then we often talk about the molecule, perhaps a hormone, being a chemical “signal”.  It’s the cell’s response to the molecule that makes it information.

But even in computer technology, there are often transient pieces of information that no conscious observer interprets.  The device you’re reading this on likely has a MAC address which it uses to communicate on your local network.  It probably contacted a DHCP server to get a dynamically assigned IP address for it to communicate on the internet.  It had to contact a domain name server to get the IP address for this website.  The various apps on it likely all have various internal system identifiers.  None of these things are anything you likely know or think about, but they’re vital for the device to do its job.  Many of the dynamically assigned items will come into and go out of existence without any conscious observer ever interpreting them.  Yet it seems perverse to say that these aren’t information.
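For the curious, Python exposes a couple of these identifiers directly.  This is just a peek at what’s there; the values will differ on every machine, and the hostname lookup below uses localhost so it works offline.

```python
# A small peek at the kind of transient, uninterpreted identifiers the
# paragraph describes.  Values are machine-specific.
import socket
import uuid

mac = uuid.getnode()          # this machine's MAC address, as an integer
print(f"MAC address: {mac:012x}")

ip = socket.gethostbyname("localhost")  # a name-to-address resolution
print(f"localhost resolves to: {ip}")
```

Identifiers like these are created, used, and discarded by the machinery itself, with no conscious observer ever in the loop.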

Of course, we could fall back to the etymology of “information” and insist on defining it only as something that inputs Platonic forms into a conscious mind (in-form).  But then we’ve created a need to come up with a new word for the patterns, such as DNA or transient IP addresses, that have causal effects on non-conscious systems.  Maybe we could call such patterns “causalation”.  Which means we could talk about brains being causalation processing systems.  Of course, computers would also be causalation processing systems, which just brings us right back to the original bone of contention.

And that in turn brings us to the second reason the argument fails.  Every information processing system is a physical system, and can be described in purely physical terms.  Consider the following description.

A system is constantly propagating energy, at small but consistent levels, through portions of its structure.  The speed and direction of the energy flows are altered by aspects of the structure.  But many of those structural aspects themselves are altered by the energy flow, creating a complex synergy between energy and structure.  The overall dynamic is altered by energy from the environment, and alters the environment by the energy it emits.  Interactions with the environment often happen through intermediate systems that modulate and moderate the inbound energy patterns to a level consistent with the central system, and magnify the causal effects of the emitted energy.

This description can pertain to both computers and central nervous systems.  The energy in commercial computers is electricity, the modifiable aspects of the structure are transistor voltage states, and the intermediate systems are I/O devices such as keyboards, monitors, and printers.  The energy in nervous systems is electrochemical action potentials, the modifiable aspects of the structure are the synapses between neurons, and the intermediate systems are the peripheral nervous system and musculature.

(It’s also worth noting that computers can be built in other ways.  For example, they can be built with mechanical switches, where the energy is mechanical force and the modifiable aspects are the opening and closing switches.  A computer could, in principle, also be built with hydraulic plumbing controlling the flow of liquids.  In his science fiction novel, The Three-Body Problem, Cixin Liu describes an alien computer implemented with a vast army of soldiers, with each soldier acting as a switch, raising or lowering their arms following simple rules based on what the soldiers next to them did.)
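Liu’s soldier-computer can even be sketched in a few lines.  In this toy version of mine, each “soldier” follows a single arm-raising rule (NAND), and the standard logic gates, and from them a half adder, are composed entirely from that one rule.

```python
# A toy version of Liu's human computer: each "soldier" follows one simple
# rule (NAND) based on the arm positions of two neighbors; any digital
# computation can be composed from that rule alone.
def soldier(a, b):
    """One soldier: raise arm (1) unless both neighbors' arms are up."""
    return 0 if (a and b) else 1

def NOT(a): return soldier(a, a)
def AND(a, b): return soldier(soldier(a, b), soldier(a, b))
def OR(a, b): return soldier(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """A half adder built entirely from soldiers: (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1) -- one plus one is binary 10
```

Whether the switches are transistors, plumbing, or soldiers, the computation is the same, which is exactly the point of the paragraph above.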

It’s the similarities between how these physical systems work that make it easy for neuroscientists to talk in terms of neural circuits and neural computation, and to see the brain as an information processing organ.  Engaging in linguistic jiu jitsu over the definition of “information” (or “computation” as often happens in similar arguments) doesn’t change these similarities.

Not that there aren’t major differences between a commercial digital computer and an organic brain.  (Although the differences between technology and biology are constantly decreasing.)  The issue isn’t whether brains are computers in the narrow modern sense, but whether they are computational information processing systems.

So, am I being too dismissive of this interpretation argument?  Or are there similar arguments that may make a better case?  How do you define “information”?

Posted in Mind and AI | 32 Comments

Layers of self awareness and animal cognition

In the last consciousness post, which discussed issues with panpsychism and simple definitions of consciousness, I laid out five functional layers of cognition which I find helpful when trying to think about systems that are more or less conscious.  Just to recap, those layers are:

  1. Reflexes, primal reactions to stimuli.
  2. Perception, sensory models of the environment that increase the scope of what the reflexes can react to.
  3. Attention, prioritizing which perceptions the reflexes are reacting to.
  4. Imagination, action planning, scenario simulations, deciding which reflexes to allow or inhibit.
  5. Metacognition, introspective access to portions of the processing happening in the above layers.
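As a very loose software caricature of these layers (my framing for this post, with made-up stimuli and rules), consider a tiny agent loop where each layer enriches or constrains what the layer below reacts to:

```python
# A deliberately toy sketch mapping the five layers onto an agent loop.
# The stimuli, actions, and rules are all invented for illustration.
class LayeredAgent:
    REFLEXES = {"loom": "flee", "food": "approach"}  # layer 1: reflexes

    def step(self, raw_senses):
        # Layer 2: perception -- build a (trivial) model from raw senses.
        percepts = [s for s in raw_senses if s in self.REFLEXES]
        # Layer 3: attention -- prioritize threats over everything else.
        focus = "loom" if "loom" in percepts else (percepts[0] if percepts else None)
        # Layer 1 fires on whatever attention selected.
        action = self.REFLEXES.get(focus)
        # Layer 4: imagination -- inhibit a reflex when simulation says no.
        if action == "approach" and "loom" in percepts:
            action = "flee"
        # Layer 5: metacognition -- the agent can report its own processing.
        self.last_report = {"percepts": percepts, "focus": focus, "action": action}
        return action

agent = LayeredAgent()
print(agent.step(["food", "loom"]))  # the threat wins: flee
```

Crude as it is, the sketch shows why the layers stack: attention is pointless without percepts to prioritize, and introspective reports are pointless without processing to report on.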

In the discussion thread on that post, self awareness came up a few times, particularly in relation to this framework.  As you might imagine, as someone who’s been posting under the name “SelfAwarePatterns” for several years, I have some thoughts on this.

Just like consciousness overall, I don’t think self awareness is a simple concept.  It can mean different things in different contexts.  For purposes of this post, I’m going to divide it up into four concepts and try to relate them to the layers above.

At consciousness layer 2, perception, I think we get the simplest form of self awareness, body awareness.  In essence, this is having a sense that there is something different about your body from the rest of the environment.  I think body awareness is phylogenetically ancient, dating back to the Cambrian explosion, and is pervasive in the animal kingdom, including any animal with distance senses (sight, hearing, smell).  As I’ve said before, distance senses seem pointless unless they enable modeling of the environment, and those models are themselves of limited use if they don’t include your body and its relation to that environment.

The next type is attention awareness, which models the brain’s attentional state.  I think of this as layer 4 modeling what’s happening in layer 3.  (These layers appear to be handled by different regions of the brain.)  This type of awareness is explored in Michael Graziano’s attention schema theory.  It provides what we typically think of as top down attention, as opposed to bottom up attention driven from the perceptions in layer 2.

The third type, affect awareness, is integral to the scenario simulations that happen in layer 4.  Affects can be thought of as roughly synonymous with emotions or feelings, although at a broader and more primal level.  Affects include states like fear, pleasure, anger, but also more primal ones like hunger.

Each action scenario needs to be assessed on its desirability, whether it should be the action attempted, and those assessments happen in terms of the affects each scenario triggers.  The results of the simulations are that some reflexes are inhibited and some allowed.  Arguably, it’s this change from automatic action to possible action that turns the reflexes into affects, so in a sense, affect awareness could be considered reflex awareness that enables the creation of affects.

The types of self awareness discussed so far are essentially a system modeling the function of something else.  Body awareness is the brain modeling the body, attention awareness is the planning regions of the brain modeling the attention regions, and affect awareness is the planning regions modeling the sub-cortical reflex circuits.  But the final type, metacognitive awareness, recursive self reflection, is different.  It’s the planning regions modeling their own processing.

Metacognitive awareness lives in layer 5, metacognition.  This is self awareness in its most profound sense.  It’s being aware of your own awareness, experiencing your own experience, thinking about your own thoughts, being conscious of your own consciousness.  But it’s more than that, because if you understand this paragraph, it shows you have the ability to be aware of the awareness of your awareness.  And if you understood the last sentence, it means you have the ability to do so to an arbitrary level of recursion.

This type of awareness is far rarer in the animal kingdom than the other kinds.  It requires a metacognitive capability, an ability to build models not just of the environment, your own body, your attention, or your affective states, but to build models of the models, to reason about your own reasoning.  This capability appears to be limited to only a few species.  But scientifically determining exactly which species is difficult.

Mirror test with a baboon
Image credit: Moshe Blank via Wikipedia

One test that’s been around for a few decades is the mirror test.  You sneak a mark or sticker onto the animal where it can’t see it, then put it in front of a mirror.  If the animal sees its reflection, notices the mark or sticker, and tries to remove it, then, the advocates of this test propose, it is aware of itself.  But this test seems to conflate the different types of self awareness noted above, so it’s not clear what’s being demonstrated.  It could be only body awareness, although I can also see a case that it might demonstrate attention awareness too.

Regardless, most species fail the mirror test.  Mammals that pass include elephants, chimpanzees, bonobos, orangutans, dolphins, and killer whales.  The only non-mammal that passes is the Eurasian magpie.  Gorillas, monkeys, dogs, cats, octopuses, and other tested species all fail.

But testing for the higher form of self awareness, metacognitive awareness, means testing for metacognition itself, which more recent tests try to get at directly.

One test looks at how animals behave when they’ve been given ambiguous information about how to get a reward (usually a piece of food).  If the ambiguity causes them to display uncertainty, the reasoning goes, then they must understand how limited their knowledge is.  Dolphins and monkeys seem to pass this test, but not birds.  However, this test has been criticized because it’s not clear that the displayed behavior comes from knowledge of uncertainty, or just uncertainty.  It could be argued that fruit flies display uncertainty.  Does that prove they have metacognition?

A more rigorous experiment starts by showing an animal information, then hides that information.  The animal then has to decide whether to take a test on what they remember seeing.  If they decide not to take the test, they get a moderately tasty treat.  If they do take the test and fail, they get nothing.  But if they take it and succeed, they get a much tastier treat.  The idea is that their decision on whether or not to take the test depends on their evaluation of how well they remember the information.  The goal of the overall experiment is to measure how accurately the animal can assess its own memory.
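The payoff logic of this opt-out design can be made explicit with a toy model (my own illustration, with made-up reward values, not numbers from the studies): an agent that can read its own memory confidence should decline the test exactly when that confidence is low.

```python
# Toy model of the opt-out memory test's payoff structure.  The reward
# values are invented for illustration.
MODERATE_TREAT = 1.0   # guaranteed reward for declining the test
BIG_TREAT = 3.0        # reward for passing the test
FAIL = 0.0             # reward for failing

def expected_value_of_taking(confidence):
    """Expected payoff if the animal takes the memory test."""
    return confidence * BIG_TREAT + (1 - confidence) * FAIL

def should_take_test(confidence):
    """A metacognitive agent opts in only when its self-assessed memory
    confidence makes the gamble worth more than the sure treat."""
    return expected_value_of_taking(confidence) > MODERATE_TREAT

print(should_take_test(0.9))  # strong memory: take the test (True)
print(should_take_test(0.2))  # weak memory: settle for the sure treat (False)
```

The experimental question is whether the animal’s opt-out behavior actually tracks its memory accuracy this way, which would require it to have access to some estimate of its own confidence.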

Some primates pass this more rigorous test, but nothing else seems to.  Dolphins and birds reportedly fail it, leaving this type of self reflective ability apparently restricted to primates.  (There was a study that seemed to show rats passing a similar test, but that test reportedly had a flaw: the rats might simply have learned an optimized sequence without any metacognition.)

What do all these tests mean?  Well, failure to pass them is not necessarily conclusive.  There may be confounding variables.  For example, all of these tests seem to require relatively high intelligence.  I think this is a particularly serious issue for the mirror test.  What it’s testing for is a fairly straightforward type of body or attention awareness, but the intelligence required to figure out who the reflection is seems bound to generate false negatives.

This seems like less of an issue for the metacognition tests.  Metacognition could itself be considered a type of intelligence.  And its functionality might not be a useful adaptation unless it’s paired with a certain level of intelligence.  Still, as I noted in the panpsychism post, any time a test shows that only primates have a certain ability, we need to be mindful of the possibility of an anthropocentric bias.

Again, my own sense is that body awareness is pervasive among animals.  I think attention and affect awareness are also relatively pervasive, although as this NY Times article that amanimal shared with me discusses, humans are able to imagine and plan far more deeply and much further into the future than other animals.  Most animals can only think ahead by a few minutes, whereas humans can do it days, months, years, or even decades into the future.

This seems to indicate that the level 4 capabilities of most animals, along with the associated attention and affect awareness, are far more limited than in humans.  And metacognitive awareness, the highest form of self awareness, only appears to exist in humans and, to a lesser extent, in a few other species.

Considering that our sense of inner experience likely comes from a combination of attention, affect, and metacognitive awareness, it seems like the results of these tests are a stark reminder that we should be careful to not project our own cognitive scope on animals, even when our intuitions are powerfully urging us to do so.

Unless of course there are aspects of this I’m missing?

Posted in Mind and AI | 40 Comments

Recommendation: We Are Legion (We Are Bob)

One of the things that many space enthusiasts find frustrating about the space age is how slow it’s moving, at least relative to its early years.  Humans made it to the moon almost 50 years ago, but since then seem to have retreated to low Earth orbit, working in space stations just above the atmosphere.  Although there is always lots of talk about going further out again, it always seems to be several years in the future.

But while this has been going on with human spaceflight, robots have been exploring the entire solar system.  We’ve sent probes on missions to every planet, and have some orbiting several of them.  Mars has been thoroughly mapped from orbit and has had rovers exploring its surface pretty much continuously for the last couple of decades.

When it comes to space, humans simply aren’t the pioneers.  That role now falls to robots.  I don’t see that changing in the future, particularly as AI (artificial intelligence) continues growing in capabilities.  This means that when humanity does reach the stars, it will inevitably be first, and possibly exclusively, with robots.

This puts science fiction authors in a bind.  Stories of humans sitting around drinking coffee and eating bagels as news comes in of all the things the robots are doing aren’t very compelling.  Some authors solve this by simply ignoring AI, or by imagining some limitation that AI development will run into that allows human characters to be at the center of the action again.  But some solve it by making the stories about the AIs, or even by making the AIs…us.

That’s the approach that Dennis Taylor takes with his Bobiverse books, the first of which is We Are Legion (We Are Bob).  The Bob in the title is Bob Johansson, a software entrepreneur who signs a contract with a cryonics company to be frozen at his death in hopes of revival in the future when medical technology improves.  Shortly afterward, he ends up getting killed in an accident.

He wakes up a century later inside a computer, as a software replicant of the original Bob, an uploaded mind.  America has become a theocracy, one that considers him in his new form to be technology rather than a person.  He is drafted into being the control system for an interstellar Von Neumann probe, a self replicating robotic spacecraft designed to reach other solar systems and build new copies of itself using local resources, and then send the new copies further out exploring, where they’ll eventually build copies of themselves, and so on.

Just as Bob is being launched, a full scale war erupts and he barely makes it out.  He then has to fight probes he encounters from other nations among the stars on his way to building BobNet, a network of replicated Bobs exploring the stars near Earth.  Eventually, some of his copies return to a devastated Earth to help the last remnants of humanity escape to the stars.

He also encounters a primitive but intelligent species on one of the planets he explores, setting up a situation that starts off similar to the one at the beginning of 2001: A Space Odyssey, but essentially seen from the perspective of the monolith.  And as the story progresses, he encounters a powerful existential threat to himself and humanity.

As a writer, I found the first book interesting, partly because, despite its lack of a tight plot structure, it was still very satisfying.  We follow Bob as he reaches other solar systems, fights dangers, replicates, and makes new discoveries.  There is a constantly increasing number of story threads as each new Bob comes online, and many different conflicts.  It’s a bit episodic, with the episodes overlapping with each other, but it works because the concept of self replicating interstellar probes is being explored.

I think the loose structure starts to make itself felt in the second book.  I sometimes found the earlier parts of that book tedious, with some of the conflicts and issues feeling a bit like filler.  But as the second book progresses, the existential threat becomes more apparent, which adds tension and excitement back to the story.

The story is told in first person, with the specific Bob instance (each has his own name, often drawn from classic science fiction stories, mythologies, or other sources) and his location listed at the beginning of each chapter.  Bob is a good natured and sympathetic character whose fairly positive and humorous viewpoint keeps the story approachable, even when it gets pretty dark.

These books aren’t super hard science fiction.  The Von Neumann concept is a serious one, but Taylor introduces some magical technologies in order to tell the story he wants.  These include a subspace concept that enables a reactionless drive, relativistic travel, faster than light detection and, eventually, faster than light communication.  He also has each Bob’s personality be a little different, which he doesn’t explain except to hint that quantum indeterminacy may be the cause.  And it seems like Taylor relies heavily on the panspermia concept for his aliens.

Of course, these compromises have benefits: they make each Bob a somewhat unique character who can be in physical jeopardy, allow a more interactive community across interstellar distances, make the aliens more relatable, and save the story from being far more concerned with the logistics of energy production and usage than it otherwise would have needed to be.

There’s a lot to like in these books.  I’ve read the first two and expect to quickly consume the third when it becomes available later this year.  If the ideas of mind uploading, self replicating interstellar probes, and space battles appeal to you, I highly recommend them.

Posted in Science Fiction | 8 Comments

Panpsychism and layers of consciousness

The Neoplatonic “world soul”
Source: Wikipedia

I’ve written before about panpsychism, the outlook that everything is conscious and that consciousness permeates the universe.  However, that previous post was within the context of replying to a TEDx talk, and I’m not entirely satisfied with the remarks I made back then, so this is a revisit of that topic.

I’ve noted many times that I don’t think panpsychism is a productive outlook, but I’ve never said outright that it’s wrong.  The reason is that with a sufficiently loose definition of consciousness, it is true.  The question is how useful those loose definitions are.

But first I think a clarification is needed.  Panpsychism actually seems to refer to a range of outlooks, which I’m going to simplify (perhaps overly so) into two broad positions.

The first is one I’ll call pandualism.  Pandualism takes substance dualism as a starting point.

Substance dualism assumes that physics, or at least currently known physics, is insufficient to explain consciousness and the mind.  Dualism ranges from the traditional religious versions to ones that posit that perhaps a new physics, often involving the quantum wave function, is necessary to explain the mind.  This latter group includes people like Roger Penrose, Stuart Hameroff, and many new age spiritualists.

Pandualists solve the mind-body problem by positing that consciousness is something beyond normal physics, but that it permeates the universe, making it something like a new fundamental property of nature similar to electric charge or other fundamental forces.  This group seems to include people like David Chalmers and Christof Koch.

I do think pandualism is wrong for the same reasons I think substance dualism overall is wrong.  There’s no evidence for it, no observations that require it as an explanation, or even any that leave it as the best explanation.  The only thing I can see going for it is that it seems to match a deep human intuition, but the history of science is one long lesson in not trusting our intuitions when they clash with observations.  It’s always possible new evidence for it will emerge in the future, but until then, dualism strikes me as an epistemic dead end.

The second panpsychist position is one I’m going to call naturalistic panpsychism.  This is the one that basically redefines consciousness in such a way that any system that interacts with the environment (or some other similarly basic definition) is conscious.  Using such a definition, everything is conscious, including rocks, protons, storms, and robots, with the differences being the level of that consciousness.

Interestingly, naturalistic panpsychism is ontologically similar to another position I’m going to call apsychism.  Apsychists don’t see consciousness as actually existing.  In their view it’s an illusion, an obsolete concept similar to vitalism.  We can talk in terms of intelligence, behavior, or brain functions, they might say, but introducing the word “consciousness” adds nothing to the understanding.

The difference between naturalistic panpsychism and apsychism seems to amount to language.  (In this way, it seems similar to the relationship between naturalistic pantheism and atheism.)  Naturalistic panpsychists prefer a more traditional language to describe cognition, while apsychists generally prefer to go more with computational or biological language.  But both largely give up on finding the distinctions between conscious and non-conscious systems (aside from emergence), one by saying everything is conscious, the other that nothing is.

I personally don’t see myself as either a naturalistic panpsychist or an apsychist, although I have to admit that the apsychist outlook occasionally appeals to me.  But ultimately, I think both approaches are problematic.  Again, I won’t say that they’re wrong necessarily, just not productive.  But their unproductiveness seems to arise from an overly broad definition of consciousness.  As Peter Hankins pointed out in an Aeon thread on Philip Goff’s article on panpsychism, a definition of consciousness that leaves you seeing a dead brain as conscious is not a particularly useful one.

Good definitions, ideally, include most examples of what we intuitively think belong to a concept while excluding those we don’t.  The problem is many pre-scientific concepts don’t map well to our current scientific understanding of things, and so make this a challenge.  Religion, biological life, and consciousness are all concepts that seem to fall into this category.

Of course, there are seemingly simple definitions of consciousness out there, such as “subjective experience” or “something it is like”.  But that apparent simplicity masks a lot of complex underpinnings.  Both of these definitions imply the metacognitive ability of a system to sense its own thoughts and experiences and to have the capability and capacity to hold knowledge of them.  Without this ability, what makes experience “subjective” or “like” anything?

Thomas Nagel famously pointed out that we can’t know what it’s like to be a bat, but we have to be careful about assuming that a bat knows what it’s like to be a bat.  If they don’t have a metacognitive capability, bats themselves might be as clueless as we are about their inner experience, if they can even be said to have an inner experience without the ability to know they’re having it.

So, metacognition seems to factor into our intuition of consciousness.  But for metacognition, also known as introspection, to exist, it needs to rest on a multilayered framework of functionality.  My current view, based on the neuroscience I’ve read, is that this can be grouped into five broad layers.

The first layer, and the most basic, is reflexes.  The oldest nervous systems were little more than stimulus response systems, and instinctive emotions are the current manifestation of those reflexes.  This could be considered the base programming of the system.  A system with only this layer meets the standard of interacting with the environment, but then so does the still working knee jerk reflex of a brain dead patient’s body.

Perception is the second layer.  It includes the ability of a system to take in sensory information from distance senses (sight, hearing, smell), and build representations, image maps, predictive models of the environment and its body, and the relationship between them.  This layer dramatically increases the scope of what the reflexes can react to, increasing it from only things that touch the organism to things happening in the environment.

Attention, selective focusing of resources based on perception and reflex, is the third layer.  It is an inherently action oriented capability, so it shouldn’t be surprising that it seems to be heavily influenced by the movement oriented parts of the brain.  This layer is a system to prioritize what the reflexes will react to.

Note that with the second and third layers, perception and attention, we’ve moved well past simply interacting with the environment.  Autonomous robots, such as Mars rovers and self driving cars, are beginning to have these layers, but aren’t quite there yet.  Still, if we considered these first three layers alone sufficient for consciousness, then we’d have to consider such devices conscious at least part of the time.

Imagination is the fourth layer.  It includes simulations of various sensory and action scenarios, including past or future ones.  Imagination seems necessary for operant learning and behavioral trade-off reasoning, both of which appear to be pervasive in the animal kingdom, with just about any vertebrate with distance senses demonstrating them to at least some extent.

Imagination, the simulation engine, is arguably what distinguishes a flexible general intelligence from a robotic rules based one.  It’s at this layer, I think, that the reflexes become emotions, dispositions to act rather than automatic action, subject to being allowed or inhibited depending on the results of the simulations.

Only with all these layers in place does the fifth layer, introspection, metacognition, the ability of a system to perceive its own thoughts, become useful.  And introspection is the defining characteristic of human consciousness.  Consider that we categorize processing from any of the above layers that we can’t introspect to be in the unconscious or subconscious realm, and anything that we can to be within consciousness.

How widespread is metacognition in the animal kingdom?  No one really knows.  Animal psychologists have performed complex tests, in which an animal must make decisions based on what it knows about its own memory, to demonstrate that introspection exists to some degree in apes and some monkeys, but haven’t been able to do so with any other animals.  A looser and more controversial standard, involving testing for behavioral uncertainty, may also show it in dolphins, and possibly even rats (although the rat study has been widely challenged on methodology).

But these tests are complex, and the animal’s overall intelligence may be a confounding variable.  And anytime a test shows that only primates have a certain capability, we should be on guard against anthropocentric bias.  For my part, the fact that the first four layers appear to be pervasive in the animal kingdom, albeit with extreme variance in sophistication, makes me suspect the same might be true of metacognition, but that’s admittedly very speculative.  It may be that only humans and, to a lesser extent, other primates have it.

So, which layers are necessary for consciousness?  If you answer one, the reflex one, then you may effectively be a panpsychist.  If you say layer two, perception, then you might consider some artificial neural networks conscious.  As I mentioned above, some autonomous robots are approaching layer three with attention.  But if you require layer four, imagination, then only biological animals with distance senses currently seem to qualify.

And if you require layer five, metacognition, then you can only be sure that humans and, to a lesser extent, some other primates qualify.  But before you reject layer five as too stringent, remember that it’s how we separate the conscious from the unconscious within human cognition.

What about the common criterion of an ability to suffer?  Consider that our version of suffering is inescapably tangled up with our metacognition.  Remove that metacognition, to the point where we wouldn’t know about our own suffering, and is it still suffering in the way we experience it?

So what do you think?  Does panpsychism remain a useful outlook?  Are the layers I describe here hopelessly wrong?  If so, what’s another way to look at it?

Posted in Mind and AI | 79 Comments

Recommendation: The Roboteer Trilogy

I’m sure anyone who’s paid attention to my science fiction novel recommendations has noticed that I love space opera.  But as much as I love the genre, I’m often aware of an issue many of its stories have.  In order to keep the characters in jeopardy, they often ignore the implications of artificial intelligence.  For instance, I love James S. A. Corey’s Expanse books, but the fact that the characters are often depicted doing dangerous jobs that robots could be doing has always struck me as a world-building flaw.

Alex Lamb’s Roboteer series, to some extent, addresses this issue.  It posits a universe where large interstellar warships have a small human crew (4-5 people), but where most of the work is actually done by robots.  In the first book, a crew specialist, a “roboteer”, mentally controls the robots with brain implants, although by the third book all the crew members are effectively roboteers.

The main protagonist, Will Kuno-Monet, is one of the early roboteers at the beginning of the first book.  His augmentations also give him access to a virtual reality, and so a substantial part of the story happens in virtual settings.

In this universe, humans have colonized other star systems, and have a faster than light technology based on the Alcubierre warp drive concept, but with constraints due to the physics of the drive that limit destinations to stars on a “galactic shell”, a thin area of roughly consistent distance from the galactic core.  In the shell, the spacetime properties are the same in front of and behind the warp ship.  Travelling between shells, where the spacetime properties vary, fouls the warp drive, making faster than light travel between the shells impossible.

This effectively puts limits on interstellar expansion, allowing travel in a circle around the galaxy but not toward its center or edge, and explains, along with other story elements, the Fermi paradox, the question of why Earth has never been colonized by aliens.  Part of the plot is the discovery of regions that serve as bridges to other shells, and new unexplored regions of the galaxy.

In the first book, Earth is ruled by a theocracy that is asserting its dominion over all the other human worlds.  Many of the colonies are resisting, but they are falling one by one.  The main characters are from a world called Galatea whose citizens engage in genetic editing, controlling the traits of their children.  Earth’s theocracy regards this and any resulting offspring as an abomination that must be eradicated.  So the Galateans see the war as one of survival.

Earth appears to have developed a new weapon.  Will and his shipmates are sent on a mission to learn about it.  It quickly becomes evident that Earth is getting the weapon technology from an alien source, a very advanced and powerful alien civilization.  But the aliens have their own agenda, one that involves assessing humanity’s worth and deciding whether to wipe it out or guide it to a higher level of maturity.  Will ends up establishing a connection with the aliens, and finds himself on a broader mission to save the overall human race.

As the series progresses, the situation for humanity becomes increasingly precarious, with a new threat introduced in the second book.  By the beginning of the third book, the humans are in a desperate fight for survival, and losing, making the tension in the third book very high.

My reason for recommending this series is its overall exploration of what it means to be human.  The early portions are dominated by the clash between different human cultures, but toward the end it becomes a sublime exploration of how human evolution may progress, looking at questions of free will, personal identity, the architecture of the mind, and the nature of happiness, particularly whether happiness achieved by altering the mind counts as the real thing.

A couple of quick caveats.

The first may actually attract some of you but leave others uncomfortable.  Religion features heavily in this series, but its depiction is consistently and relentlessly negative, particularly in the first two books.  The third book rarely mentions it explicitly, but explores religious themes, and again those themes are presented in a pretty harsh light.

The second caveat is that, although the series has a pretty satisfying ending, the overall message about reality ends up being pretty stark.  It’s one a lot of people will intensely dislike.  I enjoyed the books, but I’m not sure myself how to feel about that final message.

That said, if you like hardcore but intelligent space opera, then you’ll find a lot to like here.  There’s a lot of nerd candy in these stories.  Lamb does an excellent job of exploring cool technologies and extremely strange alien cultures and biology.  Whatever my feelings about the ending, he makes the journey a lot of fun.  And he is very skilled at creating dramatic tension and suspense.  The books are thrilling adventure stories where you can often feel the desperate pinch the characters are in.

I enjoyed them enough that I’m going to keep a close eye out for future work by Lamb.  I think the blurbs on the covers from Stephen Baxter are right: he’s a major new talent.

Posted in Science Fiction | 6 Comments

Having productive internet conversations

Anyone who’s frequented this blog knows I love having discussions, and can pontificate all day on subjects I’m interested in.  I’ve actually been participating in online discussions, on and off, for decades.

My earliest conversations were on dial-up bulletin boards.  Those were usually tightly focused discussions about technology and gaming.  With the rise of services like CompuServe, AOL, and eventually the web, the conversations broadened to include other topics.

BBS signon screen. Image credit: massacre via Wikipedia

A lot has changed since the old bulletin board chat rooms, but much of the interpersonal dynamics hasn’t.  There has always been a mix of different types of people: those looking for cogent conversation, others wanting to sell an agenda of some sort (technical, political, religious, etc), and trolls simply looking to rile everyone up under the cover of anonymity.

Debates have always been there.  The earliest I recall were about which programming languages were the best.  (Anyone remember 8088 assembler, BASIC, Pascal, Pilot?)  Or about which computing platform was superior (think Apple II vs Atari vs Commodore).  It’s interesting how often time renders old debates moot.

One thing I’ve learned repeatedly over the years is that you can virtually never change anyone’s mind about anything during a debate.  I can count on my fingers the number of times I’ve seen it happen, and in that small number of cases, it was always someone who wasn’t particularly committed to the point of view they started the conversation with.

That’s not to say that I haven’t seen people change their mind on even the most dug in subject, but it’s almost always been over a period of weeks, months, or years.  If a conversation I participated in contributed to that change, I generally only heard about it long after the change had happened, and then only if the conversation ended on cordial terms.

Why then participate in these conversations?  For me personally, a big part of the draw is testing my own ideas by seeing what faults others can find in them.  It’s one of the things that brought me back to online discussions, including blogging, after a break of several years.

But I’ll admit persuasion remains part of the motivation, although I’ve known for a long time that persuasion is by necessity a long term game.  The best we can hope to do in any one conversation is to lay the seeds of change.  Whether those seeds take root is completely up to the recipient.  Of course, to have any hope of changing someone else’s mind, they have to get the sense that we’re at least open to changing our own.

All of which is why I generally try to avoid getting into acrimonious debates, at least in recent years.  (Not that I always succeed.)  In my view, Dale Carnegie was right, you can’t win an argument.  Trying to win only causes people to dig in deeper and, if the argument goes on too long, causes hard feelings and wounded relationships.  Even if your argument is unassailable, people won’t recognize it in their urge to save face.

This is why my approach is usually to lay out a position, explain the reasons for that position, and then address any questions someone may ask.  If someone lays out their position, I try to ask for their reasons (if they haven’t already given them), and if I disagree, lay out my reasons for disagreeing.  As long as that’s happening in the conversation, an exchange of viewpoints and the reasons for them, I think it’s a productive one, one that I, the other person, or maybe some third party reader might learn from.

One of the things I try to watch out for is when points previously made start getting repeated.  This is easy to miss when a discussion has been going on for days or weeks.  But when we reach that point, the discussion is in danger of morphing, or has already morphed, into an argument.  Long experience has taught me that continuing the conversation further is unlikely to be productive.  (There are exceptions, but they’re rare ones.)

For a long time, I tended to end the conversation by announcing we were starting to loop and that I thought it was time to stop.  This seemed like the polite thing to do.  But just in the last year or so, I’ve concluded something many of you already knew: that final announcement message is itself counter-productive, particularly if the debate has become intense.  It’s far better to let the other person have the last word and move on.

This raises an important point, one that also took me a long time to learn and internalize.  Just because someone says something, I’m not necessarily obligated to respond.  This is particularly true if the other person is being nasty.  I always have the option of just moving on.

If I do choose to respond, I’m also not obligated to respond to every point the other person made.  Maybe the point has already been addressed earlier in the thread, or it might be a subject matter I’m not particularly knowledgeable about, or responding to it might involve a lot of effort I don’t feel like putting in right then.  Sometimes it’s a point I’m simply not interested in discussing.

Discussions about science and philosophy have a special burden, because often the topic is difficult to describe, to put into language.  That means for the discussions to be productive, everyone has to exercise at least a degree of interpretational charity.  Just about every philosophical proposition can be interpreted in a strawman fashion, in a way that’s obviously wrong and easy to knock down.  Doing so is easy but it has a tendency to rush a discussion into the argument phase.   A rewarding philosophical or scientific discussion requires that both parties try to find the intelligent interpretation of the other person’s words, and respond to that rather than the strawman version.

When I’m in doubt about how to interpret someone’s statement, I usually either ask for clarification or restate what I think their thesis is before addressing it.  A lot of misunderstandings have been cleared up with those restatements.

If science and philosophy can be difficult, political discussions are often impossible, especially these days.  But again, I find value in stating a position and then laying out the reasons for it.  When people disagree, it again helps to have them explain why.  Often what we take to be a hopelessly uninformed or selfish outlook has more substantive grounds than we might want to admit.  Even when it doesn’t, treating the other person as though they’re immoral or an idiot is pretty much surrendering any chance of changing their mind.

Not that I’m a saint about any of this, as anyone who goes through the archive of this blog or my Twitter or Facebook feeds can attest.  Much of what I’ve described here is aspirational.  Still, since I’ve been striving to meet these standards, my online conversations have become much richer.

All that said, there are undeniably a lot of trolls out there who have no interest in having real conversation.  I think one important aspect of enjoying an online life is knowing how to block jerks.  Every major platform has mechanisms for doing this, and they’re well worth learning about.  I’ve personally never had to resort to these measures, but it’s  nice to know they’re there.

What do you think?  Is my way too namby-pamby?  Too unwilling to reap the benefits of gladiatorial discussion?  Or are there other techniques I’m missing that could make for better conversations?

Posted in Society | 51 Comments

The system components of pain

Image credit: Holger.Ellgaard via Wikipedia

Peter Hankins at Conscious Entities has a post looking at the morality of consciousness, which is a commentary on a piece at Nautilus by Jim Davies on the same topic.  I recommend reading both posts in their entirety, but the overall gist is that which animals or systems are conscious has moral implications, since only conscious entities should be of moral concern.

From Peter’s post:

There are two main ways my consciousness affects my moral status. First, if I’m not conscious, I can’t be a moral subject, in the sense of being an agent (perhaps I can’t anyway, but if I’m not conscious it really seems I can’t get started). Second, I probably can’t be a moral object either; I don’t have any desires that can be thwarted and since I don’t have any experiences, I can’t suffer or feel pain.

Davies asks whether we need to give plants consideration. They respond to their environment and can suffer damage, but without a nervous system it seems unlikely they feel pain. However, pain is a complex business, with a mix of simple awareness of damage, actual experience of that essential bad thing that is the experiential core of pain, and in humans at least, all sorts of other distress and emotional response. This makes the task of deciding which creatures feel pain rather difficult…

I left a comment on Peter’s post, which I’m repeating here and expanding a bit.

I think it helps to consider what an organism needs to have in order to experience pain.  It seems to need an internal self-body image (Damasio’s proto-self) built by continuous signalling from an internal network of sensors (nerves) throughout its body.  It needs to have strong preferences about the state of that body so that when it receives signals that violate those preferences, it has powerful defensive impulses, impulses it cannot dismiss and can only inhibit with significant energy.

We could argue about whether it needs to have some level of introspection so it knows that it’s in pain, but it’s not clear that newborn babies have that capability, yet I wouldn’t be comfortable saying a newborn can’t feel pain.  (Although it used to be a common medical sentiment that they couldn’t, few people seem to believe that today.)

When asking if plants feel pain, you could argue that they can be damaged, and may respond to that damage, but I can’t see any evidence that they build an internal body image.  They do seem to have impulses about finding water, catching sunlight, spreading seeds, etc, but it doesn’t seem to amount to anything above robotic action, very slow robotic action by our standards.

Things get a little hazy with organisms that have nervous systems but no central brain, such as the C. elegans worm.  These worms will respond to noxious stimuli, but it’s hard to imagine they have any internal image in their diffuse and limited nervous systems.  You could argue that their responses to stimuli constitute preferences, but these seem, again, like largely robotic impulses, although subject to classical conditioning.

But any vertebrate or invertebrate with distance senses has a central brain or ganglia.  They build image maps, models of the environment and of its relation to themselves.  Which means they have some notion of themselves as distinct from that environment, and likely have at least an incipient body image.  Coupled with the impulse responses they inherited from their worm forebears, it seems like even the simplest such species have the necessary components.

I often read that insects don’t feel pain, but when I spray one, it sure buzzes and convulses like it’s in serious distress, enough so that I usually try to put it out of its misery if I can. Am I just projecting?  Perhaps, but I prefer to err on the side of caution (admittedly not to the extent of letting the bug continue to live in my house).

I think people resist the idea of animal consciousness because we eat them, use them for scientific research, or, in many cases, eradicate them when they cross our interests, and taking the stance that they’re not conscious avoids having to deal with difficult questions.  Myself, I don’t think the research or pest control should necessarily stop, but we should be clear about what we’re doing and carefully weigh the benefits against the cost.

But what about something like an autonomous mine sweeping robot?  It presumably has sensors to monitor its body state, and I’m sure given the option, its programming is to maintain its body’s functionality as long as possible.  When it becomes damaged from setting off a mine, is there any basis to conclude that it’s in pain?

I did a post on the question of machine suffering last year.  My thoughts now are much the same as then, that unless we engineered the machine’s information processing systems with a certain architecture, it wouldn’t undergo what we think of as suffering.

Above, I said that to feel pain, the system would need to have strong preferences about the state of its body image, resulting in impulses it could not dismiss and could only inhibit with significant energy.  I think that’s what’s missing in the robot example.  It presumably can monitor its body state and take action to correct it if there is opportunity, but if there isn’t opportunity, it can log the issue and then calmly adjust to its current state and continue its mission as much as possible.

Living systems obviously don’t have this capability.  We don’t have the option to decide whether feeling pain is useful, to have the distress of what it is conveying go away.  (At least without drugs.)
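To make the contrast concrete, here is a minimal sketch of the two architectures (all class names, method names, and numbers are my own invention for illustration): one where damage is just another log entry the system can calmly set aside, and one where damage raises an impulse that can’t be dismissed, only inhibited at an ongoing cost.

```python
class MineSweeperBot:
    """Damage is just another log entry; the mission calmly continues."""

    def __init__(self):
        self.log = []

    def on_damage(self, severity):
        self.log.append(("damage", severity))  # note it and move on
        return "continue_mission"


class PainfulSystem:
    """Damage raises an impulse that persists; it can only be
    inhibited at an ongoing cost to the system's other resources."""

    def __init__(self, energy=10):
        self.energy = energy
        self.active_impulses = []

    def on_damage(self, severity):
        self.active_impulses.append(severity)  # the impulse can't be dismissed
        return "defensive_action"

    def inhibit(self):
        # Suppression drains energy in proportion to the impulse;
        # the impulse itself never goes away.
        self.energy -= sum(self.active_impulses)
        return "suppressed_for_now"


bot = MineSweeperBot()
bot.on_damage(5)      # -> "continue_mission"; just a log entry

org = PainfulSystem()
org.on_damage(5)      # -> "defensive_action"; impulse now active
org.inhibit()
print(org.energy)     # prints 5: inhibition cost energy, and the impulse remains
```

The difference isn’t in the sensors, which both systems have, but in what the damage signal is allowed to do: in the first architecture it’s data, in the second it’s an unrelenting demand on the rest of the system, which is closer to what I mean by pain.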

The robot is also missing another important quality.  It isn’t a survival machine in the way that all living organisms are.  It likely has programming to preserve its functionality as long as possible, but that’s only in service to its primary goal: finding mines.  It has no dread of being damaged or of being destroyed entirely.

Which brings us back to the original question that Hankins and Davies were looking at.  Regardless of how intelligent it might be, could we ever regard such a robot as conscious?  If not, what does this tell us about our intuitive feeling of what consciousness fundamentally is?

I’ve done a lot of posts on this blog about consciousness.  A lot of what I’ve described in those posts, models, simulations, etc, could often be said to amount to a description of intelligence.  I’ve mentioned to a few of you recently in conversations that this realization is bringing me back to a position I held when I first started this blog, that consciousness is, intuitively, intelligence plus emotions, that is, intelligence in service of survival instincts.

But maybe I’m missing something?

Posted in Mind and AI | 29 Comments