The difficulty of interstellar travel for humans

Futurism.com has an article reviewing the results of a survey they conducted with their readers asking when the first human might leave the solar system.  The leading answer was after the year 2100, which makes sense given our current level of progress just getting humans back out of low Earth orbit.  But I think human interstellar exploration is far more difficult than most people realize.

First, when pondering this question, we can forget about the traditional sci-fi answers of hyperspace, warp drive, or similar notions.  These concepts are fictional plot devices with little or no basis in science.  Even concepts that have been considered by scientists, such as wormholes and Alcubierre warp drives, are extremely speculative, requiring that unproven aspects of physics turn out to be true.

Getting anything like these technologies would require new physics.  Undoubtedly, we will learn new things about physics in the decades and centuries to come, but the probability that what we learn will enable interstellar travel to function like sea travel (the preoccupation of most space opera stories) is an infinitesimal slice of the possibilities.

The only way to speculate scientifically on this kind of thing is to take the science that we currently have and try to extrapolate from it.  When we do that, the obstacles to human interstellar flight seem pretty daunting.

Worrying about the speed of light limit, which current physics tells us is the ultimate speed limit in the universe, is sour grapes to a large extent.  Even getting to an appreciable percentage of the speed of light turns out to require astounding amounts of energy.  Doing it with our current chemical rocket technology is a lost cause.  According to Paul Gilster, in his book Centauri Dreams: Imagining and Planning Interstellar Exploration (which I recommend for anyone interested in this subject, as well as his blog), it would take more mass than exists in the visible universe to propel a chemical rocket to a substantial fraction of the speed of light.

An artist’s concept of nuclear pulse propulsion.
Image credit: NASA

Of course, there are better, more efficient propulsion options that might eventually be available.  For purposes of this post, let’s leap to the most efficient and plausible near term option, nuclear pulse propulsion.  This is a refined version of an original idea that involved lobbing nuclear bombs behind a spacecraft to push it forward.

Gilster, in his book, notes that a nuclear pulse propulsion spacecraft, to reach 10% of light speed, would need a mass ratio of 100:1.  This means that for every kilogram of spacecraft and payload, you'd need 100 kilograms of fuel.  Initially, that doesn't sound too bad, since the Apollo missions had an overall mass ratio of 600:1.  But that was for the entire mission, and all we've considered so far is the mass ratio to accelerate to 10% of light speed.  We haven't talked about slowing down at the destination.

In space, given inertia and no air friction, slowing down takes just as much energy as speeding up.  And the kicker is that you have to accelerate the fuel you'll later need to decelerate.  So slowing down doesn't just double the mass ratio from 100:1 to 200:1.  The deceleration fuel has to be on the "1" side of the initial acceleration ratio.  That means the ratio for the overall mission (and we're only talking about a one way mission here) has to be squared, taking it from 100:1 to 10,000:1.

Traveling at 10% lightspeed gets us to Proxima Centauri, the nearest star to the sun, in about 43 years.  When you consider what kind of living space a human crew would need for that time span, and multiply it out by 10,000, an interstellar mission starts to look like the most expensive thing human civilization might ever attempt.  It gets worse if we try to shorten the time span.  Increasing the speed to 20% of light speed raises the ratio to 100,000,000:1.
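To make the squaring concrete, here's a quick back-of-the-envelope sketch in Python.  It's purely illustrative: it assumes the classical Tsiolkovsky rocket equation (relativistic corrections are modest at these speeds) and infers an effective exhaust velocity from the 100:1 figure, rather than from any real engine design.

```python
# A rough sketch of one-way mission mass ratios, assuming the
# classical Tsiolkovsky rocket equation: R = exp(delta_v / v_e).
import math

C = 299_792_458.0  # speed of light in m/s

def mass_ratio(delta_v, v_e):
    return math.exp(delta_v / v_e)

# If reaching 0.1c takes a 100:1 ratio, the implied effective
# exhaust velocity is:
v_e = 0.1 * C / math.log(100)  # roughly 6,500 km/s

# Accelerate to 0.1c, then decelerate: the deceleration fuel has to
# be carried as payload during the first burn, so the ratios multiply.
print(f"{mass_ratio(0.1 * C, v_e) ** 2:,.0f}:1")  # 10,000:1

# Doubling the cruise speed to 0.2c squares the one-way ratio again.
print(f"{mass_ratio(0.2 * C, v_e) ** 2:,.0f}:1")  # 100,000,000:1
```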

Imagining antimatter technology might improve the mass ratio substantially.  But it adds new difficulties.  Producing antimatter itself takes tremendous amounts of energy.  It would have to be manufactured and stored in deep space, since any containment failure of any substantial amount would likely result in a gigaton level explosion.  We might save on the mass ratio of the spacecraft, but only at the expense of vast resources dedicated to generating and storing the fuel.  And human crews would likely have to be heavily shielded from the gamma rays generated by antimatter reactions, increasing mass throughout the overall craft.

No discussion of this type is complete without at least mentioning the Bussard ramjet, the idea of a spacecraft with an immense ram scoop that takes in interstellar hydrogen to use as fusion fuel.  There was a lot of excitement about this concept in the 60s and 70s, but further study has shown that the interstellar medium isn't nearly as dense as the initial design hoped, and many think the ram scoop would generate as much drag as thrust.

Another option is to forgo rocketry altogether and go with something like light sails.  Robert Forward, decades ago, put forth a design where a gigantic laser on Mercury would send out a beam to an interstellar light sail spacecraft, steadily accelerating it.  At some point, the craft would separate its sail into two components, one of which would be hit by the laser and reflect it back to the remaining component attached to the craft, decelerating it.  Forward's design is ingenious, but it would still require titanic amounts of energy, and precise coordination across centuries and light years to work.

Things get a lot easier if we just think about sending uncrewed probes.  That's the current direction of the Breakthrough Starshot initiative.  The idea is to propel a small, perhaps gram-sized, probe to 20% of light speed using Earth based lasers.  The probes would reach Proxima Centauri in about 22 years, taking pictures and readings as they fly through the system, and transmitting the information back to Earth.  There are still major technological hurdles to overcome with this idea, but they all seem achievable within reasonable time periods and with reasonable amounts of energy.

The big drawback to the Starshot design is that it doesn’t have any way to slow the probe down, so everything would have to be learned in the few hours available as it sped through the target system.  An alternate design has been proposed, using the unique topology of the Alpha Centauri / Proxima Centauri system to slow down the probe, but at the cost of increasing the travel time to over a century.

But once we give up the idea of crewed missions, the rocket solutions actually become more plausible.  A 10,000:1 ratio doesn’t seem problematic if the ultimate payload is a one gram probe.  Even the 100,000,000:1 ratio associated with a 20% light speed mission starts to look conceivably manageable.

And when we consider the ongoing improvements in artificial intelligence and the idea of probes building their own daughter probes to explore the destination system, and perhaps even to eventually launch toward systems further out, the possibilities start to look endless.

All of which is to say that it's much easier to conduct interstellar exploration with robots, particularly very small ones, than with humans.  It seems likely that we're going to be exploring the stars with robots for a long time before humans get there, if they ever do.

Unless of course I’m missing something?


Does information require conscious interpretation to be information?

Peter Kassan has an article at Skeptic Magazine which sets out to disprove the simulation hypothesis, the idea that we’re all living in a computer simulation.

I personally find arguing about the simulation hypothesis unproductive.  Short of the simulation owner deciding to jump in and contact us, we can't prove the hypothesis.  Even if the simulation has flaws that we're able to find and perceive, we can never know whether we're looking at an actual flaw or just something we don't understand.  For example, is quantum wave-particle duality a flaw in the simulation, or just a puzzling aspect of nature?

Nor can we disprove the simulation.  There’s simply no way to prove to a determined skeptic that the world is real.  And if we are in a simulation, it appears to exact unpleasant consequences for not taking it seriously.  It effectively is our reality.  And we have little choice but to play the game.

But this post isn’t about the simulation hypothesis.  It’s about the central argument Kassan makes against it, that there can’t be a consciousness inside a computer system.  The argument Kassan uses to make this case is one I’m increasingly encountering in online conversations, involving assertions about the nature of information.

ASCII code for “Wikipedia”
Image credit: User:spinningspark at Wikipedia

The argument goes something like this.  Information is only information because we interpret it to be information.  With no one to do that interpretation, the patterns we refer to as information are just patterns, structures, configurations, with no inherent meaning.  Consequently, the physical operations of computers are information processing only because of our interpretations of what we put into them, what they do with it, and what they produce.  However, brains do their work regardless of any interpretation, so they can't be processing information, and information processing can't lead to consciousness.

To be fair, this brief summary of the argument may not do it justice.  If you want to see the case made by someone who buys it, I recommend reading Kassan’s piece.

That said, I think the argument fails for at least two reasons.

The first is that it depends on a particularly narrow conception of information.  There are numerous definitions of information out there.  But for purposes of this post, we don’t need to settle on any one specific definition.  We just need to discuss an implied aspect of all of them, that information must be for something.

The people making the argument are right about one thing.  A pattern, in and of itself, is not inherently information.  To be information, something must make use of it.  But the assertion is that this role of making use of information can only be fulfilled by a conscious agent.  No conscious agent involved, then no information.  The problem is that this ignores the non-conscious systems that make use of information.

For example, if the long molecules that make up DNA chromosomes somehow formed spontaneously somewhere, there would be nothing about them that made them information.  But when DNA is in the nucleus of a cell, the proteins that surround it create mRNA molecules based on sections of the DNA's configuration.  These mRNA molecules physically flow to ribosomes, protein factories that assemble amino acids into specific proteins based on the mRNA's configuration.  Arguably it's the systems in the cell that make DNA into genetic information on how to construct its molecular machinery.

Another example is a particular type of molecule that is allowed entry through the cell’s membrane.  There’s nothing about that molecule in and of itself that makes it information.  But if the chemical properties of the molecule cause the cell to change its development or behavior, then we often talk about the molecule, perhaps a hormone, being a chemical “signal”.  It’s the cell’s response to the molecule that makes it information.

But even in computer technology, there are often transient pieces of information that no conscious observer interprets.  The device you’re reading this on likely has a MAC address which it uses to communicate on your local network.  It probably contacted a DHCP server to get a dynamically assigned IP address for it to communicate on the internet.  It had to contact a domain name server to get the IP address for this website.  The various apps on it likely all have various internal system identifiers.  None of these things are anything you likely know or think about, but they’re vital for the device to do its job.  Many of the dynamically assigned items will come into and go out of existence without any conscious observer ever interpreting them.  Yet it seems perverse to say that these aren’t information.
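As a tiny concrete illustration (in Python, using a reserved example hostname), here's the kind of transient lookup that happens beneath our notice every time we load a page:

```python
# Resolve a hostname to an IP address, the way a browser does before
# every connection.  The resulting address is consumed by the network
# stack and then discarded; no human ever interprets it.
import socket

ip = socket.gethostbyname("example.com")  # example.com is a reserved test domain
print(ip)
```

The address that comes back steers real causal machinery, routers and network cards, whether or not a human ever looks at it.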

Of course, we could fall back to the etymology of “information” and insist on defining it only as something that inputs Platonic forms into a conscious mind (in-form).  But then we’ve created a need to come up with a new word for the patterns, such as DNA or transient IP addresses, that have causal effects on non-conscious systems.  Maybe we could call such patterns “causalation”.  Which means we could talk about brains being causalation processing systems.  Of course, computers would also be causalation processing systems, which just brings us right back to the original bone of contention.

And that in turn brings us to the second reason the argument fails.  Every information processing system is a physical system, and can be described in purely physical terms.  Consider the following description.

A system is constantly propagating energy, at small but consistent levels, through portions of its structure.  The speed and direction of the energy flows are altered by aspects of the structure.  But many of those structural aspects themselves are altered by the energy flow, creating a complex synergy between energy and structure.  The overall dynamic is altered by energy from the environment, and alters the environment by the energy it emits.  Interactions with the environment often happen through intermediate systems that modulate and moderate the inbound energy patterns to a level consistent with the central system, and magnify the causal effects of the emitted energy.

This description can pertain to both computers and central nervous systems.  The energy in commercial computers is electricity, the modifiable aspects of the structure are transistor voltage states, and the intermediate systems are I/O devices such as keyboards, monitors, and printers.  The energy in nervous systems is electrochemical action potentials, the aspects of modifiable structure are the synapses between neurons, and the intermediate systems are the peripheral nervous system and musculature.

(It’s worth noting that computers can also be built in other ways.  For example, they can be built with mechanical switches, where the energy is mechanical force and the modifiable aspects are the opening and closing switches.  A computer could, in principle, also be built with hydraulic plumbing controlling the flow of liquids.  In his science fiction novel, The Three-Body Problem, Cixin Liu describes an alien computer implemented with a vast army of soldiers, with each soldier acting as a switch, raising or lowering their arms following simple rules based on what the soldiers next to them did.)
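The substrate independence in these examples can be made concrete.  Here's a minimal sketch, purely my own toy illustration, of the standard result that any system able to implement a NAND operation, whether with transistors, valves, or arm raising soldiers, can compose the rest of Boolean logic, and from there, in principle, any computation:

```python
# NAND is functionally complete: every other Boolean gate can be
# composed from it.  The physical "switch" implementing nand() is
# irrelevant to the logic.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

assert not_(True) is False
assert and_(True, True) is True
assert or_(False, True) is True
```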

It’s the similarities between how these physical systems work that make it easy for neuroscientists to talk in terms of neural circuits and neural computation, and to see the brain as an information processing organ.  Engaging in linguistic jiu jitsu over the definition of “information” (or “computation” as often happens in similar arguments) doesn’t change these similarities.

Not that there aren’t major differences between a commercial digital computer and an organic brain.  (Although the differences between technology and biology are constantly decreasing.)  The issue isn’t whether brains are computers in the narrow modern sense, but whether they are computational information processing systems.

So, am I being too dismissive of this interpretation argument?  Or are there similar arguments that may make a better case?  How do you define “information”?


Layers of self awareness and animal cognition

In the last consciousness post, which discussed issues with panpsychism and simple definitions of consciousness, I laid out five functional layers of cognition which I find helpful when trying to think about systems that are more or less conscious.  Just to recap, those layers are:

  1. Reflexes, primal reactions to stimuli.
  2. Perception, sensory models of the environment that increase the scope of what the reflexes can react to.
  3. Attention, prioritizing which perceptions the reflexes are reacting to.
  4. Imagination, action planning, scenario simulations, deciding which reflexes to allow or inhibit.
  5. Metacognition, introspective access to portions of the processing happening in the above layers.

In the discussion thread on that post, self awareness came up a few times, particularly in relation to this framework.  As you might imagine, as someone who’s been posting under the name “SelfAwarePatterns” for several years, I have some thoughts on this.

Just like consciousness overall, I don’t think self awareness is a simple concept.  It can mean different things in different contexts.  For purposes of this post, I’m going to divide it up into four concepts and try to relate them to the layers above.

At consciousness layer 2, perception, I think we get the simplest form of self awareness, body awareness.  In essence, this is having a sense that there is something different about your body from the rest of the environment.  I think body awareness is phylogenetically ancient, dating back to the Cambrian explosion, and is pervasive in the animal kingdom, including any animal with distance senses (sight, hearing, smell).  As I’ve said before, distance senses seem pointless unless they enable modeling of the environment, and those models are themselves of limited use if they don’t include your body and its relation to that environment.

The next type is attention awareness, which models the brain’s attentional state.  I think of this as layer 4 modeling what’s happening in layer 3.  (These layers appear to be handled by different regions of the brain.)  This type of awareness is explored in Michael Graziano’s attention schema theory.  It provides what we typically think of as top down attention, as opposed to bottom up attention driven from the perceptions in layer 2.

The third type, affect awareness, is integral to the scenario simulations that happen in layer 4.  Affects can be thought of as roughly synonymous with emotions or feelings, although at a broader and more primal level.  Affects include states like fear, pleasure, anger, but also more primal ones like hunger.

Each action scenario needs to be assessed on its desirability, on whether it should be the action attempted, and those assessments happen in terms of the affects each scenario triggers.  The results of the simulations are that some reflexes are inhibited and some allowed.  Arguably, it’s this change from automatic action to possible action that turns the reflexes into affects, so in a sense, affect awareness could be considered reflex awareness that enables the creation of affects.

The types of self awareness discussed so far are essentially a system modeling the function of something else.  Body awareness is the brain modeling the body, attention awareness is the planning regions of the brain modeling the attention regions, and affect awareness is the planning regions modeling the sub-cortical reflex circuits.  But the final type, metacognitive awareness, recursive self reflection, is different.  It’s the planning regions modeling their own processing.

Metacognitive awareness lives in layer 5, metacognition.  This is self awareness in its most profound sense.  It’s being aware of your own awareness, experiencing your own experience, thinking about your own thoughts, being conscious of your own consciousness.  But it’s more than that, because if you understand this paragraph, it shows you have the ability to be aware of the awareness of your awareness.  And if you understood the last sentence, it means you have the ability to do so to an arbitrary level of recursion.

This type of awareness is far rarer in the animal kingdom than the other kinds.  It requires a metacognitive capability, an ability to build models not just of the environment, your own body, your attention, or your affective states, but to build models of the models, to reason about your own reasoning.  This capability appears to be limited to only a few species.  But scientifically determining exactly which species have it is difficult.

Mirror test with a baboon
Image credit: Moshe Blank via Wikipedia

One test that’s been around for a few decades is the mirror test.  You sneak a mark or sticker onto the animal where it can’t see it, then put it in front of a mirror.  If the animal sees its reflection, notices the mark or sticker, and tries to remove it, then, the advocates of this test propose, it is aware of itself.  But this test seems to conflate the different types of self awareness noted above, so it’s not clear what’s being demonstrated.  It could be only body awareness, although I can also see a case that it might demonstrate attention awareness too.

Regardless, most species fail the mirror test.  Mammals that pass include elephants, chimpanzees, bonobos, orangutans, dolphins, and killer whales.  The only non-mammal that passes is the Eurasian magpie.  Gorillas, monkeys, dogs, cats, octopuses, and other tested species all fail.

But testing for the higher form of self awareness, metacognitive awareness, means testing for metacognition itself, which more recent tests try to get at directly.

One test looks at how animals behave when they’ve been given ambiguous information about how to get a reward (usually a piece of food).  If the ambiguity causes them to display uncertainty, the reasoning goes, then they must understand how limited their knowledge is.  Dolphins and monkeys seem to pass this test, but not birds.  However, this test has been criticized because it’s not clear that the displayed behavior comes from knowledge of uncertainty, or just uncertainty.  It could be argued that fruit flies display uncertainty.  Does that prove they have metacognition?

A more rigorous experiment starts by showing an animal information, then hides that information.  The animal then has to decide whether to take a test on what they remember seeing.  If they decide not to take the test, they get a moderately tasty treat.  If they do take the test and fail, they get nothing.  But if they take it and succeed, they get a much tastier treat.  The idea is that their decision on whether or not to take the test depends on their evaluation of how well they remember the information.  The goal of the overall experiment is to measure how accurately the animal can assess its own memory.
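The logic of this opt-out design can be captured in a toy expected-value model.  To be clear, this is my own illustration, not anything from the studies, and the payoff numbers are arbitrary:

```python
# An animal with metacognitive access to its memory confidence
# p_correct should gamble on the test only when the expected payoff
# beats the guaranteed moderate treat.
def should_take_test(p_correct, tasty=1.0, moderate=0.4):
    return p_correct * tasty > moderate  # expected value of taking vs. opting out

print(should_take_test(0.9))  # True:  strong memory, take the test
print(should_take_test(0.3))  # False: weak memory, take the sure treat
```

An animal whose opt-out choices track its actual memory performance in this way behaves as if it can assess its own memory, which is the metacognitive signal the experiment looks for.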

Some primates pass this more rigorous test, but nothing else seems to; dolphins and birds reportedly fail it.  This type of self reflective ability appears to be restricted to primates.  (There was a study that seemed to show rats passing a similar test, but the specific test reportedly had a flaw where the rats might simply have learned an optimized sequence without any metacognition.)

What do all these tests mean?  Well, failure to pass them is not necessarily conclusive.  There may be confounding variables.  For example, all of these tests seem to require relatively high intelligence.  I think this is a particularly serious issue for the mirror test.  What it’s testing for is a fairly straightforward type of body or attention awareness, but the intelligence required to figure out who the reflection is seems bound to generate false negatives.

This seems like less of an issue for the metacognition tests.  Metacognition could itself be considered a type of intelligence.  And its functionality might not be a useful adaptation unless it’s paired with a certain level of intelligence.  Still, as I noted in the panpsychism post, any time a test shows that only primates have a certain ability, we need to be mindful of the possibility of an anthropocentric bias.

Again, my own sense is that body awareness is pervasive among animals.  I think attention and affect awareness are also relatively pervasive, although as this NY Times article that amanimal shared with me discusses, humans are able to imagine and plan far more deeply and much further into the future than other animals.  Most animals can only think ahead by a few minutes, whereas humans can do it days, months, years, or even decades into the future.

This seems to indicate that the level 4 capabilities of most animals, along with the associated attention and affect awareness, are far more limited than in humans.  And metacognitive awareness, the highest form of self awareness, only appears to exist in humans and, to a lesser extent, in a few other species.

Considering that our sense of inner experience likely comes from a combination of attention, affect, and metacognitive awareness, it seems like the results of these tests are a stark reminder that we should be careful to not project our own cognitive scope on animals, even when our intuitions are powerfully urging us to do so.

Unless of course there are aspects of this I’m missing?


Recommendation: We Are Legion (We Are Bob)

One of the things that many space enthusiasts find frustrating about the space age is how slow it’s moving, at least relative to its early years.  Humans made it to the moon almost 50 years ago, but since then seem to have retreated to low Earth orbit, working in space stations just above the atmosphere.  Although there is always lots of talk about going further out again, it always seems to be several years in the future.

But while this has been going on with human spaceflight, robots have been exploring the entire solar system.  We’ve sent probes on missions to every planet, and have some orbiting several of them.  Mars has been thoroughly mapped from orbit and has had rovers exploring its surface pretty much continuously for the last couple of decades.

When it comes to space, humans simply aren’t the pioneers.  That role now falls to robots.  I don’t see that changing in the future, particularly as AI (artificial intelligence) continues growing in capabilities.  This means that when humanity does reach the stars, it will inevitably be first, and possibly exclusively, with robots.

This puts science fiction authors in a bind.  Stories of humans sitting around drinking coffee and eating bagels as news comes in of all the things the robots are doing aren’t very compelling.  Some authors solve this by simply ignoring AI, or by imagining some limitation that AI development will run into that allows human characters to be at the center of the action again.  But some solve it by making the stories about the AIs, or even by making the AIs… us.

That’s the approach that Dennis Taylor takes with his Bobiverse books, the first of which is We Are Legion (We Are Bob).  The Bob in the title is Bob Johansson, a software entrepreneur who signs a contract with a cryonics company to be frozen at his death in hopes of revival in the future when medical technology improves.  Shortly afterward, he ends up getting killed in an accident.

He wakes up a century later inside a computer, as a software replicant of the original Bob, an uploaded mind.  America has become a theocracy, one that considers him in his new form to be technology rather than a person.  He is drafted into being the control system for an interstellar Von Neumann probe, a self replicating robotic spacecraft designed to reach other solar systems and build new copies of itself using local resources, and then send the new copies further out exploring, where they’ll eventually build copies of themselves, and so on.

Just as Bob is being launched, a full scale war erupts and he barely makes it out.  He then has to fight probes he encounters from other nations among the stars on his way to building BobNet, a network of replicated Bobs exploring the stars near Earth.  Eventually, some of his copies return to a devastated Earth to help the last remnants of humanity escape to the stars.

He also encounters a primitive but intelligent species on one of the planets he explores, setting up a situation that starts off similar to the one at the beginning of 2001: A Space Odyssey, but essentially seen from the perspective of the monolith.  And as the story progresses, he encounters a powerful existential threat to himself and humanity.

As a writer, I found the first book interesting, partly because, despite its lack of a tight plot structure, it was still very satisfying.  We follow Bob as he reaches other solar systems, fights dangers, replicates, and makes new discoveries.  There is a constantly increasing number of story threads as each new Bob comes online, and many different conflicts.  It’s a bit episodic, with the episodes overlapping with each other, but it works because the concept of self replicating interstellar probes is being explored.

I think the loose structure starts to make itself felt in the second book.  I sometimes found the earlier parts of that book tedious, with some of the conflicts and issues feeling a bit like filler.  But as the second book progresses, the existential threat becomes more apparent, which adds tension and excitement back to the story.

The story is told in first person, with the name of the specific Bob instance (each has his own name, often drawn from classic science fiction stories, mythologies, or other sources) and his location listed at the beginning of each chapter.  Bob is a good natured and sympathetic character whose fairly positive and humorous viewpoint keeps the story approachable, even when it gets pretty dark.

These books aren’t super hard science fiction.  The Von Neumann concept is a serious one, but Taylor introduces some magical technologies in order to tell the story he wants.  These include a subspace concept that enables a reactionless drive, relativistic travel, faster than light detection and, eventually, faster than light communication.  He also has each Bob’s personality be a little different, which he doesn’t explain except to hint that quantum indeterminacy may be the cause.  And it seems like Taylor relies heavily on the panspermia concept for his aliens.

Of course, these compromises have their benefits: they make each Bob a somewhat unique character who can be in physical jeopardy, allow a more interactive community across interstellar distances, make the aliens more relatable, and save the story from being far more concerned with the logistics of energy production and usage than it otherwise would have needed to be.

There’s a lot to like in these books.  I’ve read the first two and expect to quickly consume the third when it becomes available later this year.  If the ideas of mind uploading, self replicating interstellar probes, and space battles appeal to you, I highly recommend them.


Panpsychism and layers of consciousness

The Neoplatonic “world soul”
Source: Wikipedia

I’ve written before about panpsychism, the outlook that everything is conscious and that consciousness permeates the universe.  However, that previous post was within the context of replying to a TEDx talk, and I’m not entirely satisfied with the remarks I made back then, so this is a revisit of that topic.

I’ve noted many times that I don’t think panpsychism is a productive outlook, but I’ve never said outright that it’s wrong.  The reason is that with a sufficiently loose definition of consciousness, it is true.  The question is how useful those loose definitions are.

But first I think a clarification is needed.  Panpsychism actually seems to refer to a range of outlooks, which I’m going to simplify (perhaps overly so) into two broad positions.

The first is one I’ll call pandualism.  Pandualism takes substance dualism as a starting point.

Substance dualism assumes that physics, or at least currently known physics, is insufficient to explain consciousness and the mind.  Dualism ranges from the traditional religious versions to ones that posit that perhaps a new physics, often involving the quantum wave function, is necessary to explain the mind.  This latter group includes people like Roger Penrose, Stuart Hameroff, and many new age spiritualists.

Pandualists solve the mind-body problem by positing that consciousness is something beyond normal physics, but that it permeates the universe, making it something like a new fundamental property of nature, similar to electric charge.  This group seems to include people like David Chalmers and Christof Koch.

I do think pandualism is wrong for the same reasons I think substance dualism overall is wrong.  There’s no evidence for it, no observations that require it as an explanation, or even any that leave it as the best explanation.  The only thing I can see going for it is that it seems to match a deep human intuition, but the history of science is one long lesson in not trusting our intuitions when they clash with observations.  It’s always possible new evidence for it will emerge in the future, but until then, dualism strikes me as an epistemic dead end.

The second panpsychist position is one I’m going to call naturalistic panpsychism.  This is the one that basically redefines consciousness in such a way that any system that interacts with the environment (or some other similarly basic definition) is conscious.  Using such a definition, everything is conscious, including rocks, protons, storms, and robots, with the differences being the level of that consciousness.

Interestingly, naturalistic panpsychism is ontologically similar to another position I’m going to call apsychism.  Apsychists don’t see consciousness as actually existing.  In their view it’s an illusion, an obsolete concept similar to vitalism.  We can talk in terms of intelligence, behavior, or brain functions, they might say, but introducing the word “consciousness” adds nothing to the understanding.

The difference between naturalistic panpsychism and apsychism seems to amount to language.  (In this way, it seems similar to the relationship between naturalistic pantheism and atheism.)  Naturalistic panpsychists prefer a more traditional language to describe cognition, while apsychists generally prefer to go more with computational or biological language.  But both largely give up on finding the distinctions between conscious and non-conscious systems (aside from emergence), one by saying everything is conscious, the other that nothing is.

I personally don’t see myself as either a naturalistic panpsychist or an apsychist, although I have to admit that the apsychist outlook occasionally appeals to me.  But ultimately, I think both approaches are problematic.  Again, I won’t say that they’re wrong necessarily, just not productive.  But their unproductiveness seems to arise from an overly broad definition of consciousness.  As Peter Hankins pointed out in an Aeon thread on Philip Goff’s article on panpsychism, a definition of consciousness that leaves you seeing a dead brain as conscious is not a particularly useful one.

Good definitions, ideally, include most examples of what we intuitively think belong to a concept while excluding those we don’t.  The problem is many pre-scientific concepts don’t map well to our current scientific understanding of things, and so make this a challenge.  Religion, biological life, and consciousness are all concepts that seem to fall into this category.

Of course, there are seemingly simple definitions of consciousness out there, such as “subjective experience” or “something it is like”.  But that apparent simplicity masks a lot of complex underpinnings.  Both of these definitions imply the metacognitive ability of a system to sense its own thoughts and experiences, and the capacity to hold knowledge of them.  Without this ability, what makes experience “subjective” or “like” anything?

Thomas Nagel famously pointed out that we can’t know what it’s like to be a bat, but we have to be careful about assuming that a bat knows what it’s like to be a bat.  If they don’t have a metacognitive capability, bats themselves might be as clueless as we are about their inner experience, if they can even be said to have an inner experience without the ability to know they’re having it.

So, metacognition seems to factor into our intuition of consciousness.  But for metacognition, also known as introspection, to exist, it needs to rest on a multilayered framework of functionality.  My current view, based on the neuroscience I’ve read, is that this can be grouped into five broad layers.

The first layer, and the most basic, is reflexes.  The oldest nervous systems were little more than stimulus response systems, and instinctive emotions are the current manifestation of those reflexes.  This could be considered the base programming of the system.  A system with only this layer meets the standard of interacting with the environment, but then so does the still working knee jerk reflex of a brain dead patient’s body.

Perception is the second layer.  It includes the ability of a system to take in sensory information from distance senses (sight, hearing, smell), and build representations, image maps, predictive models of the environment and its body, and the relationship between them.  This layer dramatically increases the scope of what the reflexes can react to, increasing it from only things that touch the organism to things happening in the environment.

Attention, selective focusing of resources based on perception and reflex, is the third layer.  It is an inherently action oriented capability, so it shouldn’t be surprising that it seems to be heavily influenced by the movement oriented parts of the brain.  This layer is a system to prioritize what the reflexes will react to.

Note that with the second and third layers, perception and attention, we’ve moved well past simply interacting with the environment.  Autonomous robots, such as Mars rovers and self driving cars, are beginning to have these layers, but aren’t quite there yet.  Still, if we considered these first three layers alone sufficient for consciousness, then we’d have to consider such devices conscious at least part of the time.

Imagination is the fourth layer.  It includes simulations of various sensory and action scenarios, including past or future ones.  Imagination seems necessary for operant learning and behavioral trade-off reasoning, both of which appear to be pervasive in the animal kingdom, with just about any vertebrate with distance senses demonstrating them to at least some extent.

Imagination, the simulation engine, is arguably what distinguishes a flexible general intelligence from a robotic rules based one.  It’s at this layer, I think, that the reflexes become emotions, dispositions to act rather than automatic action, subject to being allowed or inhibited depending on the results of the simulations.

Only with all these layers in place does the fifth layer, introspection, metacognition, the ability of a system to perceive its own thoughts, become useful.  And introspection is the defining characteristic of human consciousness.  Consider that we categorize processing from any of the above layers that we can’t introspect to be in the unconscious or subconscious realm, and anything that we can to be within consciousness.
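To make the layering concrete, here's a deliberately toy sketch in Python.  It's purely my own illustration of the framework above, not a model from the neuroscience literature; each layer widens or filters what ultimately drives the layer one reflexes:

```python
def reflexes(stimulus):                  # layer 1: fixed stimulus -> response
    return {"threat": "flee", "food": "approach"}.get(stimulus, "ignore")

def perceive(raw_senses):                # layer 2: model what's out in the world
    return [s for s in raw_senses if s in ("threat", "food")]

def attend(percepts):                    # layer 3: prioritize one percept
    return min(percepts, key=lambda p: 0 if p == "threat" else 1, default=None)

def imagine(focus, percepts):            # layer 4: simulate, allow or inhibit
    proposed = reflexes(focus)
    if proposed == "approach" and "threat" in percepts:
        return "flee"                    # inhibit the feeding reflex
    return proposed

def act(raw_senses):                     # layer 5, metacognition, would model
    percepts = perceive(raw_senses)      # this very pipeline; it's omitted here
    focus = attend(percepts)
    return imagine(focus, percepts) if focus else "ignore"

print(act(["noise", "food", "threat"]))  # 'flee'
```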

How widespread is metacognition in the animal kingdom?  No one really knows.  Animal psychologists have performed complex tests, involving the animal needing to make decisions based on what it knows about its own memory, to demonstrate that introspection exists to some degree in apes and some monkeys, but haven’t been able to do so with any other animals.  A looser and more controversial standard, involving testing for behavioral uncertainty, may also show it in dolphins, and possibly even rats (although the rat study has been widely challenged on methodology).

But these tests are complex, and the animal’s overall intelligence may be a confounding variable.  And anytime a test shows that only primates have a certain capability, we should be on guard against anthropocentric bias.  For myself, the fact that the first four layers appear to be pervasive in the animal kingdom, albeit with extreme variance in sophistication, makes me suspect the same might be true for metacognition, but that’s admittedly very speculative.  It may be that only humans and, to a lesser extent, other primates have it.

So, which layers are necessary for consciousness?  If you answer one, the reflex one, then you may effectively be a panpsychist.  If you say layer two, perception, then you might consider some artificial neural networks conscious.  As I mentioned above, some autonomous robots are approaching layer three with attention.  But if you require layer four, imagination, then only biological animals with distance senses currently seem to qualify.

And if you require layer five, metacognition, then you can only be sure that humans and, to a lesser extent, some other primates qualify.  But before you reject layer five as too stringent, remember that it’s how we separate the conscious from the unconscious within human cognition.

What about the common criteria of an ability to suffer?  Consider that our version of suffering is inescapably tangled up with our metacognition.  Remove that metacognition, to where we wouldn’t know about our own suffering, and is it still suffering in the way we experience it?

So what do you think?  Does panpsychism remain a useful outlook?  Are the layers I describe here hopelessly wrong?  If so, what’s another way to look at it?


Recommendation: The Roboteer Trilogy

I’m sure anyone who’s paid attention to my science fiction novel recommendations has noticed that I love space opera.  But as much as I love the genre, I’m often aware of an issue many of its stories have.  In order to have the characters be in jeopardy, they often ignore the implications of artificial intelligence.  For instance, I love James S. A. Corey’s Expanse books, but the fact that the characters are often depicted doing dangerous jobs that robots could be doing has always struck me as a world building flaw.

Alex Lamb’s Roboteer series, to some extent, addresses this issue.  It posits a universe where large interstellar warships have a small human crew (4-5 people), but where most of the work is actually done by robots.  In the first book, a crew specialist, a “roboteer”, mentally controls the robots with brain implants, although by the third book all the crew members are effectively roboteers.

The main protagonist, Will Kuno-Monet, is one of the early roboteers at the beginning of the first book.  His augmentations also give him access to a virtual reality, and so a substantial part of the story happens in virtual settings.

In this universe, humans have colonized other star systems, and have a faster than light technology based on the Alcubierre warp drive concept, but with constraints due to the physics of the drive that limit destinations to stars on a “galactic shell”, a thin area of roughly consistent distance from the galactic core.  In the shell, the spacetime properties are the same in front of and behind the warp ship.  Traveling between shells, where the spacetime properties vary, fouls the warp drive, making faster than light travel between the shells impossible.

This effectively puts limits on interstellar expansion, allowing travel in a circle around the galaxy but not toward its center or edge, and explains, along with other story elements, the Fermi paradox, the question of why Earth has never been colonized by aliens.  Part of the plot is the discovery of regions that serve as bridges to other shells, and new unexplored regions of the galaxy.

In the first book, Earth is ruled by a theocracy that is asserting its dominion over all the other human worlds.  Many of the colonies are resisting, but they are falling one by one.  The main characters are from a world called Galatea whose citizens engage in genetic editing, controlling the traits of their children.  Earth’s theocracy regards this and any resulting offspring as an abomination that must be eradicated.  So the Galateans see the war as one of survival.

Earth appears to have developed a new weapon.  Will and his shipmates are sent on a mission to learn about it.  It quickly becomes evident that Earth is getting the weapon technology from an alien source, a very advanced and powerful alien civilization.  But the aliens have their own agenda, one that involves assessing humanity’s worth and deciding whether to wipe it out or guide it to a higher level of maturity.  Will ends up establishing a connection with the aliens, and finds himself on a broader mission to save the overall human race.

As the series progresses, the situation for humanity becomes increasingly precarious, with a new threat introduced in the second book.  By the beginning of the third book, the humans are in a desperate fight for survival, and losing, making the tension in the third book very high.

My reason for recommending this series is its overall exploration of what it means to be human.  The early portions are dominated by the clash between different human cultures, but toward the end it becomes a sublime exploration of how human evolution may progress, looking at questions of free will, personal identity, the architecture of the mind, and the nature of happiness, particularly whether happiness achieved by altering the mind counts as the real thing.

A couple of quick caveats.

The first may actually attract some of you but leave others uncomfortable.  Religion features heavily in this series, but its depiction is consistently and relentlessly negative, particularly in the first two books.  The third book rarely mentions it explicitly, but explores religious themes, and again those themes are presented in a pretty harsh light.

The second caveat is that, although the series has a pretty satisfying ending, the overall message about reality ends up being pretty stark.  It’s one a lot of people will intensely dislike.  I enjoyed the books, but I’m not sure myself how to feel about that final message.

That said, if you like hard core but intelligent space opera, then you’ll find a lot to like here.  There’s a lot of nerd candy in these stories.  Lamb does an excellent job of exploring cool technologies and extremely strange alien cultures and biology.  Whatever my feelings about the ending, he makes the journey a lot of fun.  And he is very skilled at creating dramatic tension and suspense.  The books are thrilling adventure stories where you can often feel the desperate pinch the characters are in.

I enjoyed them enough that I’m going to keep a close eye out for future work by Lamb.  I think the blurbs on the covers from Stephen Baxter are right: he’s a major new talent.


Having productive internet conversations

Anyone who’s frequented this blog knows I love having discussions, and can pontificate all day on subjects I’m interested in.  I’ve actually been participating in online discussions, on and off, for decades.

My earliest conversations were on dial up bulletin boards.  Those were usually tightly focused discussions about technology and gaming.  With the rise of services like CompuServe, AOL, and eventually the web, the conversations broadened to include other topics.

BBS signon screen. Image credit: massacre via Wikipedia

A lot has changed since the old bulletin board chat rooms, but much of the interpersonal dynamics hasn’t.  There has always been a mix of different types of people: those looking for cogent conversation, others wanting to sell an agenda of some sort (technical, political, religious, etc), and trolls simply looking to rile everyone up under the cover of anonymity.

Debates have always been there.  The earliest I recall were about which programming languages were the best.  (Anyone remember 8088 assembler, BASIC, Pascal, Pilot?)  Or about which computing platform was superior (think Apple II vs Atari vs Commodore).  It’s interesting how often time renders old debates moot.

One thing I’ve learned repeatedly over the years is that you can virtually never change anyone’s mind about anything during a debate.  I can count on my fingers the number of times I’ve seen it happen, and in that small number of cases, it was always someone who wasn’t particularly committed to the point of view they started the conversation with.

That’s not to say that I haven’t seen people change their mind on even the most dug in subject, but it’s almost always been over a period of weeks, months, or years.  If a conversation I participated in contributed to that change, I generally only heard about it long after the change had happened, and then only if the conversation ended on cordial terms.

Why then participate in these conversations?  For me personally, a big part of the draw is testing my own ideas by seeing what faults others can find in them.  It’s one of the things that brought me back to online discussions, including blogging, after a break of several years.

But I’ll admit persuasion remains part of the motivation, although I’ve known for a long time that persuasion is by necessity a long term game.  The best we can hope to do in any one conversation is to lay the seeds of change.  Whether those seeds take root is completely up to the recipient.  Of course, to have any hope of changing someone else’s mind, they have to get the sense that we’re at least open to changing our own.

All of which is why I generally try to avoid getting into acrimonious debates, at least in recent years.  (Not that I always succeed.)  In my view, Dale Carnegie was right: you can’t win an argument.  Trying to win only causes people to dig in deeper and, if the argument goes on too long, causes hard feelings and wounded relationships.  Even if your argument is unassailable, people won’t recognize it in their urge to save face.

This is why my approach is usually to lay out a position, explain the reasons for that position, and then address any questions someone may ask.  If someone lays out their position, I try to ask for their reasons (if they haven’t already given them), and if I disagree, lay out my reasons for disagreeing.  As long as that’s happening in the conversation, an exchange of viewpoints and the reasons for them, I think it’s a productive one, one that I, the other person, or maybe some third party reader might learn from.

One of the things I try to watch out for is when points previously made start getting repeated.  This is easy to miss when a discussion has been going on for days or weeks.  But when we reach that point, the discussion is in danger of morphing, or has already morphed, into an argument.  Long experience has taught me that continuing the conversation further is unlikely to be productive.  (There are exceptions, but they’re rare ones.)

For a long time, I tended to end the conversation by announcing we were starting to loop and that I thought it was time to stop.  This seemed like the polite thing to do.  But just in the last year or so, I’ve concluded something many of you already knew: that final announcement message is itself counter-productive, particularly if the debate has become intense.  It’s far better to let the other person have the last word and move on.

This raises an important point, one that also took me a long time to learn and internalize.  Just because someone says something, I’m not necessarily obligated to respond.  This is particularly true if the other person is being nasty.  I always have the option of just moving on.

If I do choose to respond, I’m also not obligated to respond to every point the other person made.  Maybe the point has already been addressed earlier in the thread, or it might be a subject matter I’m not particularly knowledgeable about, or responding to it might involve a lot of effort I don’t feel like putting in right then.  Sometimes it’s a point I’m simply not interested in discussing.

Discussions about science and philosophy have a special burden, because often the topic is difficult to describe, to put into language.  That means for the discussions to be productive, everyone has to exercise at least a degree of interpretational charity.  Just about every philosophical proposition can be interpreted in a strawman fashion, in a way that’s obviously wrong and easy to knock down.  Doing so is easy but it has a tendency to rush a discussion into the argument phase.  A rewarding philosophical or scientific discussion requires that both parties try to find the intelligent interpretation of the other person’s words, and respond to that rather than the strawman version.

When I’m in doubt about how to interpret someone’s statement, I usually either ask for clarification or restate what I think their thesis is before addressing it.  A lot of misunderstandings have been cleared up with those restatements.

If science and philosophy can be difficult, political discussions are often impossible, especially these days.  But again, I find value in stating a position and then laying out the reasons for it.  When people disagree, it again helps to have them explain why.  Often what we take to be a hopelessly uninformed or selfish outlook has more substantive grounds than we might want to admit.  Even when it doesn’t, treating the other person as though they’re immoral or an idiot is pretty much surrendering any chance of changing their mind.

Not that I’m a saint about any of this, as anyone who goes through the archive of this blog or my Twitter or Facebook feeds can attest.  Much of what I’ve described here is aspirational.  Still, since I’ve been striving to meet these standards, my online conversations have become much richer.

All that said, there are undeniably a lot of trolls out there who have no interest in having real conversation.  I think one important aspect of enjoying an online life is knowing how to block jerks.  Every major platform has mechanisms for doing this, and they’re well worth learning about.  I’ve personally never had to resort to these measures, but it’s nice to know they’re there.

What do you think?  Is my way too namby pamby?  Too unwilling to reap the benefits of gladiatorial discussion?  Or are there other techniques I’m missing that could make for better conversations?
