Panpsychism and layers of consciousness

The Neoplatonic “world soul”
Source: Wikipedia

I’ve written before about panpsychism, the outlook that everything is conscious and that consciousness permeates the universe.  However, that previous post was within the context of replying to a TEDx talk, and I’m not entirely satisfied with the remarks I made back then, so this is a revisit of that topic.

I’ve noted many times that I don’t think panpsychism is a productive outlook, but I’ve never said outright that it’s wrong.  The reason is that with a sufficiently loose definition of consciousness, it is true.  The question is how useful those loose definitions are.

But first I think a clarification is needed.  Panpsychism actually seems to refer to a range of outlooks, which I’m going to simplify (perhaps overly so) into two broad positions.

The first is one I’ll call pandualism.  Pandualism takes substance dualism as a starting point.

Substance dualism assumes that physics, or at least currently known physics, is insufficient to explain consciousness and the mind.  Dualism ranges from the traditional religious versions to ones that posit that perhaps a new physics, often involving the quantum wave function, is necessary to explain the mind.  This latter group includes people like Roger Penrose, Stuart Hameroff, and many new age spiritualists.

Pandualists solve the mind-body problem by positing that consciousness is something beyond normal physics, but that it permeates the universe, making it something like a new fundamental property of nature, similar to electric charge or mass.  This group seems to include people like David Chalmers and Christof Koch.

I do think pandualism is wrong for the same reasons I think substance dualism overall is wrong.  There’s no evidence for it, no observations that require it as an explanation, or even any that leave it as the best explanation.  The only thing I can see going for it is that it seems to match a deep human intuition, but the history of science is one long lesson in not trusting our intuitions when they clash with observations.  It’s always possible new evidence for it will emerge in the future, but until then, dualism strikes me as an epistemic dead end.

The second panpsychist position is one I’m going to call naturalistic panpsychism.  This is the one that basically redefines consciousness in such a way that any system that interacts with the environment (or some other similarly basic definition) is conscious.  Using such a definition, everything is conscious, including rocks, protons, storms, and robots, with the differences being the level of that consciousness.

Interestingly, naturalistic panpsychism is ontologically similar to another position I’m going to call apsychism.  Apsychists don’t see consciousness as actually existing.  In their view it’s an illusion, an obsolete concept similar to vitalism.  We can talk in terms of intelligence, behavior, or brain functions, they might say, but introducing the word “consciousness” adds nothing to the understanding.

The difference between naturalistic panpsychism and apsychism seems to amount to language.  (In this way, it seems similar to the relationship between naturalistic pantheism and atheism.)  Naturalistic panpsychists prefer a more traditional language to describe cognition, while apsychists generally prefer to go more with computational or biological language.  But both largely give up on finding the distinctions between conscious and non-conscious systems (aside from emergence), one by saying everything is conscious, the other that nothing is.

I personally don’t see myself as either a naturalistic panpsychist or an apsychist, although I have to admit that the apsychist outlook occasionally appeals to me.  But ultimately, I think both approaches are problematic.  Again, I won’t say that they’re wrong necessarily, just not productive.  But their unproductiveness seems to arise from an overly broad definition of consciousness.  As Peter Hankins pointed out in an Aeon thread on Philip Goff’s article on panpsychism, a definition of consciousness that leaves you seeing a dead brain as conscious is not a particularly useful one.

Good definitions, ideally, include most examples of what we intuitively think belong to a concept while excluding those we don’t.  The problem is many pre-scientific concepts don’t map well to our current scientific understanding of things, and so make this a challenge.  Religion, biological life, and consciousness are all concepts that seem to fall into this category.

Of course, there are seemingly simple definitions of consciousness out there, such as “subjective experience” or “something it is like”.  But that apparent simplicity masks a lot of complex underpinnings.  Both of these definitions imply the metacognitive ability of a system to sense its own thoughts and experiences and to hold knowledge of them.  Without this ability, what makes experience “subjective” or “like” anything?

Thomas Nagel famously pointed out that we can’t know what it’s like to be a bat, but we have to be careful about assuming that a bat knows what it’s like to be a bat.  If they don’t have a metacognitive capability, bats themselves might be as clueless as we are about their inner experience, if they can even be said to have an inner experience without the ability to know they’re having it.

So, metacognition seems to factor into our intuition of consciousness.  But for metacognition, also known as introspection, to exist, it needs to rest on a multilayered framework of functionality.  My current view, based on the neuroscience I’ve read, is that this can be grouped into five broad layers.

The first layer, and the most basic, is reflexes.  The oldest nervous systems were little more than stimulus-response systems, and instinctive emotions are the current manifestation of those reflexes.  This could be considered the base programming of the system.  A system with only this layer meets the standard of interacting with the environment, but then so does the still-working knee-jerk reflex of a brain-dead patient’s body.

Perception is the second layer.  It includes the ability of a system to take in sensory information from distance senses (sight, hearing, smell), and build representations, image maps, predictive models of the environment and its body, and the relationship between them.  This layer dramatically increases the scope of what the reflexes can react to, increasing it from only things that touch the organism to things happening in the environment.

Attention, the selective focusing of resources based on perception and reflex, is the third layer.  It is an inherently action-oriented capability, so it shouldn’t be surprising that it seems to be heavily influenced by the movement-oriented parts of the brain.  This layer is a system to prioritize what the reflexes will react to.

Note that with the second and third layers, perception and attention, we’ve moved well past simply interacting with the environment.  Autonomous robots, such as Mars rovers and self-driving cars, are beginning to have these layers, but aren’t quite there yet.  Still, if we considered these first three layers alone sufficient for consciousness, then we’d have to consider such devices conscious at least part of the time.

Imagination is the fourth layer.  It includes simulations of various sensory and action scenarios, including past or future ones.  Imagination seems necessary for operant learning and behavioral trade-off reasoning, both of which appear to be pervasive in the animal kingdom, with just about any vertebrate with distance senses demonstrating them to at least some extent.

Imagination, the simulation engine, is arguably what distinguishes a flexible general intelligence from a robotic rules based one.  It’s at this layer, I think, that the reflexes become emotions, dispositions to act rather than automatic action, subject to being allowed or inhibited depending on the results of the simulations.

Only with all these layers in place does the fifth layer, introspection, metacognition, the ability of a system to perceive its own thoughts, become useful.  And introspection is the defining characteristic of human consciousness.  Consider that we categorize processing from any of the above layers that we can’t introspect to be in the unconscious or subconscious realm, and anything that we can to be within consciousness.
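To make the dependency between these layers a bit more concrete, here is a minimal sketch in Python.  All of the class and method names are hypothetical illustrations of the framework described above, not a model of any actual nervous system.

```python
# A toy sketch of the five-layer framework described above.  All names are
# hypothetical illustrations; this is not a model of any actual nervous system.

class Reflexes:                      # layer 1: fixed stimulus-response rules
    def react(self, stimulus):
        return {"withdraw": True} if stimulus == "damage" else {}

class Perception:                    # layer 2: models built from distance senses
    def model(self, sensory_input):
        return {"objects": sensory_input, "body": "intact"}

class Attention:                     # layer 3: prioritizes what the reflexes react to
    def focus(self, world_model):
        return max(world_model["objects"], key=lambda o: o["salience"], default=None)

class Imagination:                   # layer 4: simulates candidate actions and outcomes
    def choose(self, candidate_actions):
        return max(candidate_actions, key=lambda a: a["expected_value"])

class Introspection:                 # layer 5: perceives the system's own processing
    def report(self, focus, chosen_action):
        return f"attending to {focus['name']}, chose {chosen_action['name']}"
```

The point of the sketch is only the ordering: each layer consumes what the layers below it produce, and the fifth layer has nothing to perceive or report on until models, priorities, and simulated options already exist.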

How widespread is metacognition in the animal kingdom?  No one really knows.  Animal psychologists have performed complex tests, involving the animal needing to make decisions based on what it knows about its own memory, to demonstrate that introspection exists to some degree in apes and some monkeys, but haven’t been able to do so with any other animals.  A looser and more controversial standard, involving testing for behavioral uncertainty, may also show it in dolphins, and possibly even rats (although the rat study has been widely challenged on methodology).

But these tests are complex, and the animal’s overall intelligence may be a confounding variable.  And anytime a test shows that only primates have a certain capability, we should be on guard against anthropocentric bias.  Myself, the fact that the first four layers appear to be pervasive in the animal kingdom, albeit with extreme variance in sophistication, makes me suspect the same might be true for metacognition, but that’s admittedly very speculative.  It may be that only humans and, to a lesser extent other primates, have it.

So, which layers are necessary for consciousness?  If you answer one, the reflex one, then you may effectively be a panpsychist.  If you say layer two, perception, then you might consider some artificial neural networks conscious.  As I mentioned above, some autonomous robots are approaching layer three with attention.  But if you require layer four, imagination, then only biological animals with distance senses currently seem to qualify.

And if you require layer five, metacognition, then you can only be sure that humans and, to a lesser extent, some other primates qualify.  But before you reject layer five as too stringent, remember that it’s how we separate the conscious from the unconscious within human cognition.

What about the common criterion of an ability to suffer?  Consider that our version of suffering is inescapably tangled up with our metacognition.  Remove that metacognition, to the point where we wouldn’t know about our own suffering, and is it still suffering in the way we experience it?

So what do you think?  Does panpsychism remain a useful outlook?  Are the layers I describe here hopelessly wrong?  If so, what’s another way to look at it?


Recommendation: The Roboteer Trilogy

I’m sure anyone who’s paid attention to my science fiction novel recommendations has noticed that I love space opera.  But as much as I love the genre, I’m often aware of an issue many of its stories have.  In order to have the characters be in jeopardy, they often ignore the implications of artificial intelligence.  For instance, I love James S. A. Corey’s Expanse books, but the fact that the characters are often depicted doing dangerous jobs that robots could be doing has always struck me as a world building flaw.

Alex Lamb’s Roboteer series, to some extent, addresses this issue.  It posits a universe where large interstellar warships have a small human crew (4-5 people), but where most of the work is actually done by robots.  In the first book, a crew specialist, a “roboteer”, mentally controls the robots with brain implants, although by the third book all the crew members are effectively roboteers.

The main protagonist, Will Kuno-Monet, is one of the early roboteers at the beginning of the first book.  His augmentations also give him access to a virtual reality, and so a substantial part of the story happens in virtual settings.

In this universe, humans have colonized other star systems, and have a faster than light technology based on the Alcubierre warp drive concept, but with constraints due to the physics of the drive that limit destinations to stars on a “galactic shell”, a thin area of roughly consistent distance from the galactic core.  In the shell, the spacetime properties are the same in front of and behind the warp ship.  Travelling between shells, where the spacetime properties vary, fouls the warp drive, making faster than light travel between the shells impossible.

This effectively puts limits on interstellar expansion, allowing travel in a circle around the galaxy but not toward its center or edge, and explains, along with other story elements, the Fermi paradox, the question of why Earth has never been colonized by aliens.  Part of the plot is the discovery of regions that serve as bridges to other shells, and new unexplored regions of the galaxy.

In the first book, Earth is ruled by a theocracy that is asserting its dominion over all the other human worlds.  Many of the colonies are resisting, but they are falling one by one.  The main characters are from a world called Galatea whose citizens engage in genetic editing, controlling the traits of their children.  Earth’s theocracy regards this and any resulting offspring as an abomination that must be eradicated.  So the Galateans see the war as one of survival.

Earth appears to have developed a new weapon.  Will and his shipmates are sent on a mission to learn about it.  It quickly becomes evident that Earth is getting the weapon technology from an alien source, a very advanced and powerful alien civilization.  But the aliens have their own agenda, one that involves assessing humanity’s worth and deciding whether to wipe it out or guide it to a higher level of maturity.  Will ends up establishing a connection with the aliens, and finds himself on a broader mission to save the overall human race.

As the series progresses, the situation for humanity becomes increasingly precarious, with a new threat introduced in the second book.  By the beginning of the third book, the humans are in a desperate fight for survival, and losing, making the tension in the third book very high.

My reason for recommending this series is its overall exploration of what it means to be human.  The early portions are dominated by the clash between different human cultures, but toward the end it becomes a sublime exploration of how human evolution may progress, looking at questions of free will, personal identity, the architecture of the mind, and the nature of happiness, particularly whether happiness achieved by altering the mind counts as the real thing.

A couple of quick caveats.

The first may actually attract some of you but leave others uncomfortable.  Religion features heavily in this series, but its depiction is consistently and relentlessly negative, particularly in the first two books.  The third book rarely mentions it explicitly, but explores religious themes, and again those themes are presented in a pretty harsh light.

The second caveat is that, although the series has a pretty satisfying ending, the overall message about reality ends up being pretty stark.  It’s one a lot of people will intensely dislike.  I enjoyed the books, but I’m not sure myself how to feel about that final message.

That said, if you like hard core but intelligent space opera, then you’ll find a lot to like here.  There’s a lot of nerd candy in these stories.  Lamb does an excellent job of exploring cool technologies and extremely strange alien cultures and biology.  Whatever my feelings about the ending, he makes the journey a lot of fun.  And he is very skilled at creating dramatic tension and suspense.  The books are thrilling adventure stories where you can often feel the desperate pinch the characters are in.

I enjoyed them enough that I’m going to keep a close eye out for future work by Lamb.  I think the blurbs on the covers from Stephen Baxter are right, he’s a major new talent.


Having productive internet conversations

Anyone who’s frequented this blog knows I love having discussions, and can pontificate all day on subjects I’m interested in.  I’ve actually been participating in online discussions, on and off, for decades.

My earliest conversations were on dial-up bulletin boards.  Those were usually tightly focused discussions about technology and gaming.  With the rise of services like CompuServe, AOL, and eventually the web, the conversations broadened to include other topics.

BBS signon screen. Image credit: massacre via Wikipedia

A lot has changed since the old bulletin board chat rooms, but many of the interpersonal dynamics haven’t.  There has always been a mix of different types of people: those looking for cogent conversation, others wanting to sell an agenda of some sort (technical, political, religious, etc), and trolls simply looking to rile everyone up under the cover of anonymity.

Debates have always been there.  The earliest I recall were about which programming languages were the best.  (Anyone remember 8088 assembler, BASIC, Pascal, Pilot?)  Or about which computing platform was superior (think Apple II vs Atari vs Commodore).  It’s interesting how often time renders old debates moot.

One thing I’ve learned repeatedly over the years is that you can virtually never change anyone’s mind about anything during a debate.  I can count on my fingers the number of times I’ve seen it happen, and in that small number of cases, it was always someone who wasn’t particularly committed to the point of view they started the conversation with.

That’s not to say that I haven’t seen people change their mind on even the most dug in subject, but it’s almost always been over a period of weeks, months, or years.  If a conversation I participated in contributed to that change, I generally only heard about it long after the change had happened, and then only if the conversation ended on cordial terms.

Why then participate in these conversations?  For me personally, a big part of the draw is testing my own ideas by seeing what faults others can find in them.  It’s one of the things that brought me back to online discussions, including blogging, after a break of several years.

But I’ll admit persuasion remains part of the motivation, although I’ve known for a long time that persuasion is by necessity a long term game.  The best we can hope to do in any one conversation is to lay the seeds of change.  Whether those seeds take root is completely up to the recipient.  Of course, to have any hope of changing someone else’s mind, they have to get the sense that we’re at least open to changing our own.

All of which is why I generally try to avoid getting into acrimonious debates, at least in recent years.  (Not that I always succeed.)  In my view, Dale Carnegie was right, you can’t win an argument.  Trying to win only causes people to dig in deeper and, if the argument goes on too long, causes hard feelings and wounded relationships.  Even if your argument is unassailable, people won’t recognize it in their urge to save face.

This is why my approach is usually to lay out a position, explain the reasons for that position, and then address any questions someone may ask.  If someone lays out their position, I try to ask for their reasons (if they haven’t already given them), and if I disagree, lay out my reasons for disagreeing.  As long as that’s happening in the conversation, an exchange of viewpoints and the reasons for them, I think it’s a productive one, one that I, the other person, or maybe some third party reader might learn from.

One of the things I try to watch out for is when points previously made start getting repeated.  This is easy to miss when a discussion has been going on for days or weeks.  But when we reach that point, the discussion is in danger of morphing, or has already morphed, into an argument.  Long experience has taught me that continuing the conversation further is unlikely to be productive.  (There are exceptions, but they’re rare ones.)

For a long time, I tended to end the conversation by announcing that we were starting to loop and that I thought it was time to stop.  This seemed like the polite thing to do.  But just in the last year or so, I’ve concluded something many of you already knew: that final announcement message is also counter-productive, particularly if the debate has become intense.  It’s far better to let the other person have the last word and move on.

This raises an important point, one that also took me a long time to learn and internalize.  Just because someone says something, I’m not necessarily obligated to respond.  This is particularly true if the other person is being nasty.  I always have the option of just moving on.

If I do choose to respond, I’m also not obligated to respond to every point the other person made.  Maybe the point has already been addressed earlier in the thread, or it might be a subject matter I’m not particularly knowledgeable about, or responding to it might involve a lot of effort I don’t feel like putting in right then.  Sometimes it’s a point I’m simply not interested in discussing.

Discussions about science and philosophy have a special burden, because often the topic is difficult to describe, to put into language.  That means for the discussions to be productive, everyone has to exercise at least a degree of interpretational charity.  Just about every philosophical proposition can be interpreted in a strawman fashion, in a way that’s obviously wrong and easy to knock down.  Doing so is easy but it has a tendency to rush a discussion into the argument phase.   A rewarding philosophical or scientific discussion requires that both parties try to find the intelligent interpretation of the other person’s words, and respond to that rather than the strawman version.

When I’m in doubt about how to interpret someone’s statement, I usually either ask for clarification or restate what I think their thesis is before addressing it.  A lot of misunderstandings have been cleared up with those restatements.

If science and philosophy can be difficult, political discussions are often impossible, especially these days.  But again, I find value in stating a position and then laying out the reasons for it.  When people disagree, it again helps to have them explain why.  Often what we take to be a hopelessly uninformed or selfish outlook has more substantive grounds than we might want to admit.  Even when it doesn’t, treating the other person as though they’re immoral or an idiot is pretty much surrendering any chance of changing their mind.

Not that I’m a saint about any of this, as anyone who goes through the archive of this blog or my Twitter or Facebook feeds can attest.  Much of what I’ve described here is aspirational.  Still, since I’ve been striving to meet these standards, my online conversations have become much richer.

All that said, there are undeniably a lot of trolls out there who have no interest in having a real conversation.  I think one important aspect of enjoying an online life is knowing how to block jerks.  Every major platform has mechanisms for doing this, and they’re well worth learning about.  I’ve personally never had to resort to these measures, but it’s nice to know they’re there.

What do you think?  Is my way too namby-pamby?  Too unwilling to reap the benefits of gladiatorial discussion?  Or are there other techniques I’m missing that could make for better conversations?


The system components of pain

Image credit: Holger.Ellgaard via Wikipedia

Peter Hankins at Conscious Entities has a post looking at the morality of consciousness, which is a commentary on a piece at Nautilus by Jim Davies on the same topic.  I recommend reading both pieces in their entirety, but the overall gist is that which animals or systems are conscious has moral implications, since only conscious entities should be of moral concern.

From Peter’s post:

There are two main ways my consciousness affects my moral status. First, if I’m not conscious, I can’t be a moral subject, in the sense of being an agent (perhaps I can’t anyway, but if I’m not conscious it really seems I can’t get started). Second, I probably can’t be a moral object either; I don’t have any desires that can be thwarted and since I don’t have any experiences, I can’t suffer or feel pain.

Davies asks whether we need to give plants consideration. They respond to their environment and can suffer damage, but without a nervous system it seems unlikely they feel pain. However, pain is a complex business, with a mix of simple awareness of damage, actual experience of that essential bad thing that is the experiential core of pain, and in humans at least, all sorts of other distress and emotional response. This makes the task of deciding which creatures feel pain rather difficult…

I left a comment on Peter’s post, which I’m repeating here and expanding a bit.

I think it helps to consider what an organism needs to have in order to experience pain.  It seems to need an internal self-body image (Damasio’s proto-self) built by continuous signalling from an internal network of sensors (nerves) throughout its body.  It needs to have strong preferences about the state of that body so that when it receives signals that violate those preferences, it has powerful defensive impulses, impulses it cannot dismiss and can only inhibit with significant energy.

We could argue about whether it needs to have some level of introspection so it knows that it’s in pain, but it’s not clear that newborn babies have that capability, yet I wouldn’t be comfortable saying a newborn can’t feel pain.  (Although it used to be a common medical sentiment that they couldn’t, few people seem to believe that today.)

When asking if plants feel pain, you could argue that they can be damaged, and may respond to that damage, but I can’t see any evidence that they build an internal body image.  They do seem to have impulses about finding water, catching sunlight, spreading seeds, etc, but it doesn’t seem to amount to anything above robotic action, very slow robotic action by our standards.

Things get a little hazy with organisms that have nervous systems without any central brain, such as the C. elegans worm.  These worms will respond to noxious stimuli, but it’s hard to imagine they have any internal image in their diffuse and limited nervous system.  You could argue that their responses to stimuli constitute preferences, but these seem, again, like largely robotic impulses, although subject to classical conditioning.

But any vertebrate or invertebrate with distance senses has a central brain or ganglia.  They build image maps, models, of the environment and of their relation to it.  Which means they have some notion of themselves as distinct from that environment, and likely have at least an incipient body image.  Coupled with the impulse responses they inherited from their worm forebears, it seems like even the simplest such species have the necessary components.

I often read that insects don’t feel pain, but when I spray one, it sure buzzes and convulses like it’s in serious distress, enough so that I usually try to put it out of its misery if I can. Am I just projecting?  Perhaps, but I prefer to err on the side of caution (admittedly not to the extent of letting the bug continue to live in my house).

I think people resist the idea of animal consciousness because we eat them, use them for scientific research, or, in many cases, eradicate them when they cross our interests, and taking the stance that they’re not conscious avoids having to deal with difficult questions.  Myself, I don’t think the research or pest control should necessarily stop, but we should be clear about what we’re doing and carefully weigh the benefits against the cost.

But what about something like an autonomous mine sweeping robot?  It presumably has sensors to monitor its body state, and I’m sure given the option, its programming is to maintain its body’s functionality as long as possible.  When it becomes damaged from setting off a mine, is there any basis to conclude that it’s in pain?

I did a post on the question of machine suffering last year.  My thoughts now are much the same as then, that unless we engineered the machine’s information processing systems with a certain architecture, it wouldn’t undergo what we think of as suffering.

Above, I said that to feel pain, the system would need to have strong preferences about the state of its body image, resulting in impulses it could not dismiss and could only inhibit with significant energy.  I think that’s what’s missing in the robot example.  It presumably can monitor its body state and take action to correct it if there is opportunity, but if there isn’t opportunity, it can log the issue and then calmly adjust to its current state and continue its mission as much as possible.

Living systems obviously don’t have this capability.  We don’t have the option to decide whether feeling pain is useful, to have the distress of what it is conveying go away.  (At least without drugs.)

The robot is also missing another important quality.  It isn’t a survival machine in the way that all living organisms are.  It likely has programming to preserve its functionality as long as possible, but that’s only in service to its primary goal, which is finding mines.  It has no dread of being damaged or of being destroyed entirely.
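To make that contrast concrete, here is a toy sketch in Python.  The class and method names are hypothetical, and no claim is made about how any real robot is programmed; it only contrasts damage handled as a dismissible log entry with damage that produces an impulse the system can merely suppress at an ongoing cost.

```python
# A toy contrast between damage as a dismissible log entry and damage as a
# non-dismissible impulse.  All names are hypothetical illustrations.

class MineSweepingRobot:
    def __init__(self):
        self.fault_log = []

    def on_damage(self, report):
        self.fault_log.append(report)         # note the damage...
        return "continue mission"             # ...and calmly carry on

class SurvivalMachine:
    def __init__(self):
        self.body_image = {"limb": "intact"}  # internal model of the body's state
        self.defensive_impulse = 0.0

    def on_damage(self, report):
        self.body_image[report["part"]] = "damaged"
        self.defensive_impulse = 1.0          # the impulse can't be deleted...
        return "interrupt everything"

    def inhibit(self, effort):
        # ...only suppressed, and only by spending ongoing effort against it
        self.defensive_impulse = max(0.0, self.defensive_impulse - effort)
```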

Which brings us back to the original question that Hankins and Davies were looking at.  Regardless of how intelligent it might be, could we ever regard such a robot as conscious?  If not, what does this tell us about our intuitive feeling of what consciousness fundamentally is?

I’ve done a lot of posts on this blog about consciousness.  A lot of what I’ve described in those posts, models, simulations, etc, could often be said to amount to a description of intelligence.  I’ve mentioned to a few of you recently in conversations that this realization is bringing me back to a position I held when I first started this blog, that consciousness is, intuitively, intelligence plus emotions, that is, intelligence in service of survival instincts.

But maybe I’m missing something?


Predicting far future technologies

Prediction is very difficult, especially about the future.
Niels Bohr

If you’re a science fiction writer, one of the things you do is try to predict what future technologies will come along.  If you’re not writing hard science fiction, this is relatively easy.  You just come up with a cool capability and throw in some plausible sounding technical jargon.  It’s like adding a magical ability in a fantasy story.  As long as you make sure the rules of magic are consistent, you’re in business.

But if you are aiming for harder science fiction, or you’re a futurist, what guiding principles can you use for making grounded predictions?  The difficulty is actually much greater when making near term predictions, partly because your predictions will be assessed for accuracy in your lifetime, but also because it requires a pretty thorough immersion in current technologies: how they work, what the trend lines are, and what room exists for future improvement.

It gets a little easier for making longer term predictions, because instead of trying to figure out what technical breakthroughs will happen in the next few years, you’re focusing on what might eventually be possible, where the laws of physics may eventually be the deciding factor.

Of course, we could well discover new laws of physics down the road, and who knows what capabilities that new knowledge might enable?  As Arthur C. Clarke once observed, any sufficiently advanced technology becomes indistinguishable from magic for an observer from a less developed society.  We don’t know most of what we don’t know, and attempting to make predictions about future knowledge is basically just wild guessing.

But if we’re trying to be somewhat grounded, keeping our predictions to things that have a reasonable chance of being true, then it might pay to stick to known science, or at least science that isn’t too speculative.  When thinking about this, it pays to remember what technology actually is, which is the manipulation of natural forces for our benefit.  If the future technology you’re imagining isn’t based on some natural force, or a combination of natural forces, then you’re essentially positing magic.

This might be a little clearer if we think about the earliest technologies.  Many animals use sticks as tools to get food out of tight places, what Douglas Adams called “stick technology”.  Early humans developed a technology no other animal had by taming fire and using it for cooking, protection, and many other purposes.  And starting with breeding dogs from wolves, humans began domesticating a number of animals for a variety of purposes, and controlling how plants grow for food, again making use of existing natural resources.

If you don’t think these things count as technology, then consider plumbing.  Developed by ancient societies, plumbing makes use of the natural tendencies of water (hydraulics) for human convenience.  Or consider electricity, which Larry Niven and Jerry Pournelle in their novel, Lucifer’s Hammer, referred to as tamed lightning.

A modern car is built to harness natural forces: electricity, air flow, the combustive reaction of gasoline (refined oil, which is stored concentrated solar energy), and mechanical force.  Without these natural forces, there can be no car.  Or any other kind of technology.

For future technologies, this means we need to find plausible natural forces which could be used to construct them.  It’s easy to imagine something like, say, a Star Trek style teleporter, until we try to envision what kind of system could deconstruct, track, transmit, and reconstruct the 7 × 10^27 atoms in a human body along with their precise physical configuration.  Even if we could come up with a computational system that could store that much information and get around the quantum no-cloning theorem, transmitting it using any variant of electromagnetism might take orders of magnitude longer than the age of the universe.  It seems difficult to imagine such a system without resorting to new physics, in other words without reaching for magic.
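As a rough back-of-envelope check on that transmission-time claim, here is a quick calculation in Python.  The bits-per-atom and bandwidth figures are assumptions chosen purely for illustration; encoding the full quantum state of each atom would almost certainly require far more.

```python
# Back-of-envelope transmission time for a teleporter data stream.  The
# bits-per-atom and bandwidth figures are illustrative assumptions only.

atoms           = 7e27            # approximate atom count of a human body
bits_per_atom   = 100             # assume type, position, and bonding state
bandwidth_bps   = 1e9             # assume a gigabit-per-second link

transmit_seconds = atoms * bits_per_atom / bandwidth_bps
age_of_universe  = 13.8e9 * 3.15e7          # ~13.8 billion years, in seconds

print(transmit_seconds / age_of_universe)   # roughly 1,600 times the age of the universe
```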

I think a good starting point when evaluating potential future technologies is to ask: does it happen in nature?  Or do all of its components happen in nature?  Once we observe it in nature, the question ceases to be whether it’s possible, and becomes whether humans can do it.  Historically, assuming that the answer to that last question is “no” hasn’t been a winning bet.

By this simple metric, it should have been obvious to people in the 19th century that eventually flight would be possible, since birds were doing it everywhere.  And the fact that meteors and other natural phenomena routinely exceed the speed of sound should have clued in early 20th century pilots that aircraft would eventually be able to do so.

So, what does this mean for individual future technologies?  Well, organic life is built on molecular nanomachinery, which seems to indicate that nanomachines will definitely be possible at some point.  (Nanomachines that exist in attack swarms?  Not so much.  What would be the means of propulsion and levitation for such swarms?)  An interesting question is whether manufacturing nanomachines could happen without mutations creeping in, something evolved nanomachines haven’t accomplished.

Intelligent machines?  If you accept that we are intelligent machines, just evolved ones, then the question is how long it will take for us to build engineered intelligent machines.   Another question is what the potential might be for those engineered intelligences.  Many people seem to assume that they could have the capacities of human brains paired with the speed of silicon processors, making them super god-like entities.

But nothing like this yet exists in nature, and we may well find that achieving the necessary capacities requires inescapable trade-offs in performance.  (For instance, maybe that much information density requires water cooled operation.)  While human minds are highly unlikely to be the most intelligent systems that can exist, we shouldn’t assume that AI (artificially intelligent) minds will automatically be thousands of times more powerful than the human variety, particularly a human brain that has itself been integrated with technology.

The criterion becomes more problematic when we consider things like warp drives, hyperspace, or other putative FTL (faster than light) technologies.  We have no evidence for anything in nature that travels faster than light, aside from a couple of apparent exceptions that don’t seem like much help.

One exception is quantum entanglement, but whether it counts as FTL seems to depend on which interpretation of quantum mechanics you favor, and it allows for no actual communication.  (If you try to manipulate the quantum state of one of the entangled particles, you don’t affect its partner, you only destroy the entanglement.)

Another exception is galaxies beyond our cosmological horizon.  Due to the expansion of the universe, they’re moving away from us faster than light (from our vantage point).  But those galaxies are causally disconnected from us.  Since they moved over the horizon, they can have no effect on us, nor us on them.  In other words, they’re now effectively in a different universe.  No objects that can causally interact have ever been observed to move faster than light relative to each other.

People sometimes talk about concepts such as Alcubierre Drives or wormholes.  But these concepts require speculative phenomena such as negative energy or imaginary mass to exist, which puts us back in the realm of new physics, in other words, speculative guessing.

And under special and general relativity, any FTL capability, by whatever means, effectively allows for time travel.  Is time travel possible (that is, aside from our normal forward progression)?  Again, we have no observable phenomena in nature that seem to do it, nor any identifiable method that could be built on to do it.  And the absence of tourists from the future seems to hint that time travel to arbitrary destinations isn’t possible.  (People sometimes imagine a code of ethics that prevents time travelers from making their presence known, but the idea that such a code would hold for all time travelers from all future societies seems improbable.)

Dyson Sphere
Image credit: Bibi Saint-Pol via Wikipedia

But the criterion does allow for some pretty mind-bending concepts such as artificial planets, stars, even black holes, not to mention megastructures such as Dyson swarms.  All of which are rarely seen in science fiction.

What do you think?  Do you agree that looking for natural phenomena is a good criterion for evaluating the possibility of future technologies?  If not, what additional or alternative criteria would you add?


Why embodiment does not make mind copying impossible

A while back, I highlighted a TEDx talk by Anil Seth where he discussed the idea that cognition is largely a matter of prediction.  Apparently Seth more recently gave another talk at the full TED conference, which is receiving rave reviews.  Unfortunately, that talk doesn’t appear to be online yet.

But one article reviewing the talk focuses on something Seth purportedly said in it, that uploading minds is impossible because the mind and body are tightly bound together.

Seth’s work has shown compelling evidence that consciousness doesn’t just consist of information about the world traveling via our senses as signals into our brains. Instead, he’s found that consciousness is a two-way street, in which the brain constantly uses those incoming signals to make guesses about what is actually out there. The end result of that interplay between reality and the brain, he says, is the conscious experience of perception.

“What we perceive is [the brain’s] best guess of what’s out there in the world,” he said, explaining that these guesses are constantly in flux.

…“We don’t passively see the world,” he said, “we actively generate it.” And because our bodies are complicit in the generation of our conscious experience, it’s impossible to upload consciousness to some external place without somehow taking the body with it.

Everything Seth describes conforms with most of the neuroscience I’ve read.  To be clear, the brain is indeed tightly bound with the body.  Most of it is tightly focused on interpreting signals sent to it from throughout the peripheral nervous system, and much of the rest is focused on generating movement or hormonal changes.  The portions involved in what we like to think of as brainy stuff: mathematics, art, culture, etc, are a relatively small fraction.

And the brain appears to have very strong expectations about the body it’s supposed to be in.  That expected body image may actually be genetic.  In his book, The Tell-tale Brain, neuroscientist V.S. Ramachandran describes a neurological condition called apotemnophilia where patients want part of their body removed because they don’t feel like it should be there (to the extent that 50% of people with this condition go on to actually have the body part amputated).  It’s as though their expected body image has become damaged in some way, missing a part of their actual physical body.

If apotemnophilia is a standard brain mechanism gone awry, then a normal human mind is going to have very strong expectations of what kind of body it will be in.  This makes science fiction scenarios of removing someone’s brain and installing it as the control center of a machine an unlikely prospect, at least without dealing with far more complex issues than successfully wiring the brain into a machine.

But ultimately, I don’t think this makes copying a mind impossible, although it does put constraints on the type of environment the copied mind might function well in.  If the mind has strong expectations about its body, then a copied mind will need to have a body.  A mind uploaded into a virtual environment would need a virtual body, and a mind installed in a robotic body would need that body to be similar to its original body.  (At least initially.  For this discussion, I’m ignoring the possibility of later altering the mind to be compatible with alternative bodies.)

But doesn’t the tight integration require that we take the entire body, as Seth implies?  We could insist that copying a mind requires that the person’s entire nervous system be copied.  This would raise the difficulty, since instead of just copying the brain, the entire body would have to be copied.

Alternatively, a new nervous system could be provided, one that sends signals similar to the original one.  This requires that we have an extremely good understanding of the pathways and signalling going to and from the brain.  But if we’ve developed enough knowledge and technology to plausibly copy the contents of a human brain, understanding those pathways seems achievable.

The question is, what exactly is needed to copy a person’s mind?  If we omit the peripheral nervous system, are we perhaps leaving out something crucial?  What about the spinal cord?  When pondering this question, it’s worth noting that patients who’ve suffered a complete severing of their upper spinal cord remain, mentally, the same person they were before.

Such patients still have a functioning vagus nerve, the connection between the brain and internal organs.  But in the past, patients with severe peptic ulcer conditions would sometimes have vagotomies, where the vagus nerve to the stomach was partially or completely severed, without compromising their mental abilities.

Certainly the severing of these various nerve connections might have an effect on a person’s cognition, but none of them seem to make that cognition impossible.  Every body part except the brain has been lost by somebody who continued to mentally be the same person.  The human mind appears to be far more resilient than some scientists give it credit for.

Indeed, the fact that a person can remain partially functional despite damage to various regions of the brain demonstrates that this resilience doesn’t stop at the spinal cord.  Which raises an interesting question: does the entire brain have to be copied to copy a human mind?

The short answer appears to be no.  The lower brain stem seems to be well below the level of consciousness and is very tightly involved in running autonomous functions of the body.  In a new body, it could probably be replaced.

The same could be said for the cerebellum, the compact region at the lower back of the brain involved in fine motor coordination.  Replace the body, and there’s no reason this particular region would need to be preserved.  In fact, patients who have suffered catastrophic damage to their cerebellum are clumsy, but appear to remain mentally complete.

That leaves the mid-brain region and everything above, including the overall cerebrum.  Strangely enough, of the 86 billion neurons in the brain, these regions appear to contain fewer than 25 billion of them.  (Most of the brain’s neurons are actually in the cerebellum.  Apparently fine motor coordination takes a lot of processing capacity.)  It’s even conceivable that lower levels of the cerebral sensory processing regions could be replaced to match the new sensory hardware in a new body without destroying human cognition.
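For a very crude sense of scale, here is a back-of-envelope storage estimate in Python for just those regions.  Every figure is an illustrative assumption (real connectomes are vastly more complicated), but it suggests the problem is one of enormous engineering rather than new physics.

```python
# Back-of-envelope storage estimate for copying the mid-brain and above.
# Every figure here is an illustrative assumption, not a measured value.

neurons             = 25e9     # the rough upper bound mentioned above
synapses_per_neuron = 1e4      # a commonly cited order-of-magnitude figure
bytes_per_synapse   = 4        # assume connectivity plus a single weight

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 1e15)      # ~1 petabyte: enormous, but not new physics
```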

Obviously all of this is very speculative, but then people are often content to entertain concepts like faster-than-light spaceships, which would require a new physics, as merely a matter of can-do spirit.  All indications are that mind copying wouldn’t require a new physics, only an ability to continue studying the physics of the brain.

Unlike the singularity enthusiasts, I doubt this capability will happen in the next twenty years.  It seems more likely to be something much farther in the future, although it’s unlikely to be developed by those who’ve already concluded it’s impossible.

But is there an aspect of this I’m missing?  Something (aside from incredulity) that does in fact make mind copying impossible?


Recommendation: Dark Intelligence

I’ve been meaning to check out Neal Asher’s books for some time.  They keep coming up as recommendations on Amazon, Goodreads, and in various other venues, and they sound enticing, like the kind of fiction I’d enjoy.  Last week, I finally read the first book of his most recent trilogy, ‘Dark Intelligence‘.

The universe described in Dark Intelligence has some similarities to Iain Banks’ Culture novels.  Earth lies at the center of an interstellar society called the Polity.  The Polity isn’t nearly as utopian as Banks’ Culture, but it’s similarly ruled and run by AIs.  Humans are still around, but range from baseline humans to ones augmented in various ways, either physically or mentally.  In this particular novel, most of the action takes place outside of the Polity itself.

The Polity has an enemy, the Prador Kingdom, composed of a brutal crab like alien species called the prador.  The Polity and the prador fought a war about a century before the novel begins, which ended with a tentative truce.  What I’ll call the anchor protagonist, the awesomely named Thorvald Spear, was a soldier killed in the war, but at the beginning of the book is resurrected from a recently discovered mind recording.

It turns out that Spear was killed by a rogue AI named Penny Royal, who also took out a large number of Spear’s fellow soldiers when it went berserk.  Penny Royal is still at large when Spear is revived, and he has a burning desire for revenge, so he sets out to find and destroy it.  His chief lead to find Penny Royal is a woman and criminal boss named Isobel Satomi, who may know the AI’s location because she once visited it to attain new abilities, which it provided, but at a cost.  As a result of receiving those abilities, Satomi is now slowly transforming into an alien predator.

Yeah, obviously there is a lot going on in this book, and everything I’ve just described is revealed in the opening chapters.  The book has a substantial cast of viewpoint characters: humans, AIs, and aliens.  Penny Royal is at the center of several ongoing threads, its actions affecting many lives.  It turns out it is regarded by the Polity AIs as dangerous, a “potential gigadeath weapon and paradigm-changing intelligence”.

There are a lot of references to events that I assume happened in previous books, particularly on one of the planets, Masada.  Somewhere in the book I realized that I had already read about one of the aliens in a short story by Asher: Softly Spoke the Gabbleduck.  He appears to have written a large number of books and short stories in this universe.

I found Asher’s writing style enticing but at times tedious.  Enticing because he enjoys describing technology, weapons, and space battles in detail, and a lot of it ends up being nerd candy for the mind.  Tedious because he enjoys detail all around, often describing settings and characters in more detail than I really care to know, making his book read slower as a result.

Asher also has a tendency to evoke things like quantum computing or fusion power as a means for describing essentially magic technologies.  Much of it is standard space opera fare, such as faster than light travel or artificial gravity.  Some of the rest involve things like thousands of human minds being recorded on a shard of leftover AI material.  This isn’t necessarily hard science fiction, although it remains far harder than typical media science fiction.

But what kept me riveted were the themes he explores.  The story often focuses on the borders between human, AI, and alien minds.  Satomi’s transformation in particular is described in gruesome detail throughout the book.  (It reminded me of the movie ‘The Fly’, particularly the 1986 version.)  But most of what makes her transformation interesting, as well as the similar transformations other characters are going through in the book, is how their minds change throughout the process.  Their deepest desires and instincts start to change in ways that really demonstrate just how contingent our motivations are on our evolutionary background or, in the case of AIs, engineering.

Not that this book was only an intellectual exercise.  There is a lot of action, including space battles, combat scenes, and AI conflict, not to mention scenes of an alien predator hunting down humans, from the predator’s point of view.

Warning: this book has its share of  gore and violence.  I think it’s all in service to the story, but if  you find vividly described gore off putting, this might not be your cup of tea.

This book is the first in a trilogy, so it ended with lots of loose unresolved threads.  I’ve already started the second book, and will probably be reading a lot more of Asher’s books in the coming months.
