Massimo Pigliucci’s pessimistic view of mind uploading


Massimo Pigliucci wrote a paper on his skepticism about the possibility of mind uploading, the idea that our minds are information that might someday be copied into a computer, a virtual reality system, or some other type of technology.  His paper appears to be one chapter in a broader book, ‘Intelligence Unbound: The Future of Uploaded and Machine Minds’, which I think I will have to read.

Interestingly, Massimo’s paper is in response to a paper by David Chalmers which apparently supports the idea of MU (mind uploading).  I didn’t realize that Chalmers was open to something like that.  Usually I think of Chalmers as the philosopher hopelessly preoccupied with the mystery of consciousness, but it looks like he doesn’t let his fascination with that mystery preclude him from considering possibilities like MU.  This is causing me to reassess my views of him and wonder if I should be reading more of his material.

In his paper, I think Massimo makes some important cautionary points, but I think his conclusions from those points are unwarranted and overconfident.  Unfortunately, the paper at the link is a scanned PDF, so I can’t paste snippets and respond to them; instead you’re going to get my own quick summation of each point.  But you shouldn’t take my word on these; his paper deserves to be read in full.

Massimo’s assertions are in bold, with my responses following.

The brain is not a digital computer.

Yep.  There are some advocates of MU who do seem to think that the brain is a digital computer, but I think anyone who has done any serious reading about the brain knows that isn’t true.  The brain appears to be a massively parallel loose cluster of analog processors.  Instead of transistors with discrete states, it uses synapses with smoothly varying strengths.

However, the brain takes in inputs from the senses, processes and stores information, and produces outputs in the form of movement.  If we built a device that did that, regardless of its architecture, we would almost certainly call it a computer.

MU depends on the computational theory of mind, which is flawed because “we now know a number of problems” that can’t be computed.

All indications are that the brain is a physical system that works in this universe.  If there are problems that modern computers can’t process, but that the brain can, that’s a flaw with modern computer architecture.  But it’s a major leap from that observation to saying that no technology could ever solve it.

If Massimo means that there are problems that could never be computed with any machine architecture ever, then he’s effectively saying that the human mind has a non-physical aspect outside of this universe, which I doubt is an assertion he wants to make.

A simulated mind is not the same as a functional mind.  Simulated photosynthesis doesn’t produce sugar, no matter how closely it models the actual physical process.  

Simulated photosynthesis produces simulated sugar.  A human mind is evolved to produce bodily movement.  A simulated mind would produce simulated movement, which might be quite satisfactory in a simulated environment.  But if the hardware running the simulated mind were connected to the right machinery, it could produce actual movements, turning the “simulation” into an arguably functional mind.

Would a simulated mind produce real consciousness?  (Whatever “real consciousness” is.)  It depends on how far down into the mechanisms a simulation goes, and at what layer in those mechanisms that consciousness actually resides.  If consciousness resides in the quantum layer, then it’s hard to see a simulation ever capturing it.  If it resides in the organization of neural and synaptic circuits, then I think it’s entirely doable, someday.

Human consciousness may be strongly dependent on its biological substrate.  In other words, human consciousness may require human biology.  Thinking otherwise is dualism, and dualism “has no place in modern philosophy of mind”.

This is entirely possible, although I think Massimo’s stand on this is filled with far too much certitude.  But even if it’s true, that doesn’t mean human technology won’t someday be able to duplicate that biological substrate.  It may be centuries in the future, but saying it will never happen strikes me as remarkably pessimistic about what human ingenuity might eventually be able to do.

Personally, I think a human consciousness uploaded into a silicon (or whatever) substrate will be unavoidably different.  The question is, would it be so different that friends and family wouldn’t recognize their loved one?  Of course, if the upload was not destructive, so that the original person was still around, the differences might be more noticeable.

I think the dualism assertion is, frankly, silly.  There’s no requirement for Cartesian “ghost in the machine” dualism.  The only type of dualism that would be required is the software/hardware dualism you accept by running Windows, Mac OS X, Linux, or whatever, on the device you’re using to read this.

Massimo feels it is self-evident that Captain Kirk dies every time he steps into the Star Trek transporter.  Since transporting is effectively a type of MU, this means no one should want to be uploaded, at least not if it’s destructive.  Destructive uploading is high-tech suicide.

I have to admit that I wouldn’t be eager to submit to a destructive (i.e. fatal) type of uploading.

But suppose I’m on my death bed.  Regardless of what I do, my current physical manifestation is about to end.  If I don’t upload, my pattern will disappear from the universe.  Uploading might produce an imperfect copy, but something of me would continue after I was gone, something far more intimate than my work or even my children.  That version of me would consider itself to be me.  If that’s all I had left, I think I’d take it.

It’s worth noting that, due to the body’s never-ending repair and waste-removal processes, the physical me that exists today isn’t the physical me from ten years ago.  Nearly every atom in my brain has been replaced over those years.  My current mind is a very imperfect copy of that me from ten years ago.  Actually, it’s an imperfect copy of me from yesterday.  Yet, I’m never really tempted to wonder if tomorrow’s me will be the real me.

Again, I think Massimo raises a number of important cautionary points.  It might turn out that MU is impossible.  For example, despite all indications, we might discover that something like Cartesian dualism actually is reality.  Or human consciousness might be so fragile and so tightly tuned to the body it arose in that any attempt to copy it would render it non-functional.  Or it might reside in quantum layers of reality that we may never understand.  But I think these possibilities are unlikely.

My own prediction is that engineers will eventually produce something that resembles MU, that the uploaded minds will be different from the biological ones, that some people will be horrified by those differences, but that most will eventually learn to live with them, and simply come to see uploading as one of existence’s transitions.

It might be several centuries before this happens.  Even singularity enthusiasts only see it happening in the near future with the help of super-intelligent AIs.  But for many people, MU that is physically possible, but not achievable in our lifetimes, is the worst scenario, because it means that we might be among the last generations that have to disappear from the universe.  For these people, it’s far better to conclude that it will never be possible.

I can understand this impulse.  But if it has any hope at all of being doable within any of our lifetimes, it’s unlikely to be accomplished by those who have already decided it is impossible.


Philosophy Tech Support

(Click through for the rest, and for a caption explaining the philosophy referenced.)

via Philosophy Tech Support – Existential Comics.

Does philosophy have a responsibility to be relevant to real-world problems?  This is a question often asked of science.  I think the answer is complicated, because we never know where a real-world solution might come from.  Most of philosophy is a waste, but the problem is that there is no agreement on which parts are useful and which are a waste, and you can never be sure that something which initially appears utterly irrelevant to the real world won’t turn out to have profound consequences.

That said, I’ve noticed a pattern in recent years of publications about scientific studies having a short blurb added explaining what the study’s possible real-world benefits might eventually be.  Should philosophy contemplate doing something similar?

Many might argue that no one expects mathematical proofs to have this kind of real-world application, and they’d be right.  Of course, I doubt anyone would expect an abstract logical proof to have one either.  It’s only when someone is attempting to apply math and logic to entities in the world that pragmatic applicability starts to become expected.

Personally, I think that both philosophy and science should be free to explore areas that might not have real-world applicability, on the premise that many of those pursuits will stumble onto pragmatic solutions.  But I can understand the other side of this argument given never-ending budget pressures.

What do you think?


Charlie Stross discusses life lessons at 50

Charlie Stross just turned 50 and put up a post discussing his major life lessons, things he wished he could tell his 15-year-old self, which briefly are:

  1. Don’t die.  (Try not to fail at this one as long as you can.)
  2. Idiots abound.  (And recognize that correcting them is usually not your problem.)
  3. Follow the Golden Rule.  (He prefers the negative Confucian version: “do not do unto others that which would be repugnant were it done unto you”)
    1. Charlie does have some caveats for self-defense and what-not.

Check out his post for the details.

I’m not quite 50 yet (I just turned 48 a few weeks ago), but I found Charlie’s list to be reasonable, although I think trying to distill all of life’s lessons into a short list violates Einstein’s rule that things should be as simple as possible, but no simpler.  Reality is complicated, and many people are too impatient with that complexity.

Maybe that’s why at least three additional important life lessons quickly came to mind:

  1. Conventional wisdom is often wrong.  (Strikingly, this appears to include conventional wisdom among those familiar with this lesson.)
    1. Yes, I know many might see this as a detail of 2, but in my experience many smart people buy into the conventional wisdom.
  2. Most people are more concerned about your evaluation of them than they are about their evaluation of you.  This is usually true even if they are in the more powerful position.
  3. Cherish your real friends, and don’t make enemies when you don’t have to.

There are lots of others, but in the vein of prioritizing what I wish I could tell my 15-year-old self, these are big ones.

What would your additions be?


A Layperson’s guide to basic brain structure!

Originally posted on gwizlearning:

Following on from last week I thought it would be useful to start with a basic look at the “geography” of the brain and what is currently thought of as an overview of function.

I include here an image I drew so must first apologise if it is not completely in proportion… Do check out other images!

Brain lobes with label

The view shown here shows the frontal, parietal, occipital and temporal lobes. It also shows the cerebellum (more on this in later blogs).

You probably already know that the brain has two hemispheres, left and right. The lobes are sub-divisions of the hemispheres and appear in both.

The primary responsibility of the occipital lobe is vision. Damage to this lobe leads to blindness in part of the visual field.

The parietal lobe deals with body sensations while the temporal lobe contributes to hearing and to complex aspects of vision.

The frontal…



Multiverse theories: “meta-cosmology”?


Level 2 multiverse (Photo credit: Wikipedia)

Marianne Freiberger reports on a discussion she had with Bernard Carr on whether or not multiverse theories are science.  He has a suggestion for how we should classify these theories.

With the possibility for indirect evidence in the future, maybe we shouldn’t dismiss the multiverse as mere speculation, especially since it has many features that are theoretically attractive. So attractive that some have even suggested we change the criteria of science in order to accommodate it. “The key question is: how crucial is testability?,” says Carr. “My view is that it is crucial; you do have to be able to test a theory to make it science.” He advocates classifying ideas like the multiverse in a special category he calls meta-cosmology: outside the present boundary of science, but not on the far end of fiction. “It’s a sort of intermediate state, a state of purgatory, before you’ve decided whether [something] is proper science or not.”

“Meta-cosmology” seems like an obvious dance around the term “metaphysics”, a term physicists seem to hate having applied to any theories they discuss.  But that label does seem to make sense for speculations about unseen and untestable realms.  Of course, accepting it means accepting that physicists engage in philosophy, at least to some extent.  We should remember that many of today’s scientific concepts, such as atomism, began as metaphysical speculation.

Personally, if it makes cosmologists happier, I don’t see a problem with referring to multiverse theories as speculative science, provided that the “speculative” isn’t dropped.  As for the calls Carr references to change the criteria of science to accommodate the multiverse, I think doing that would, at a minimum, damage cosmology’s credibility.


Video on what exactly a gene is

There’s a video on the evidence for evolution going around, but it turns out the artist who made that video has made a number of them, including this one on the scientific understanding of a gene.

via Videos / What Exactly is a Gene? – Stated Clearly.

What’s interesting about this is that the definition of “gene” has changed over the decades.  As I understand it, when the word was originally coined, it meant a discrete unit of inheritance, but now it refers to a cistron, a discrete stretch of DNA that codes for a protein.  These coding sequences make up only a small percentage of DNA overall, with the remainder initially receiving the nickname “junk DNA”.

As molecular biologists learn more about DNA and inheritance, it’s becoming increasingly evident that these coding sequences aren’t the whole story, that vast swaths of what was thought to be junk DNA are turning out to be part of the process, which is causing many to declare that genes aren’t the whole story.  And they’re not, using the modern definition.  But by the classic definition, which would include the coding sequences and any supporting framework in non-coding DNA, they arguably remain the main story.


The attention schema theory of consciousness deserves your…attention


Neural Correlates Of Consciousness (Photo credit: Wikipedia)

Michael Graziano published a brief article in the New York Times on his attention schema theory of consciousness, which a number of my fellow bloggers have linked to and discussed.  I’m not sure this article was the clearest description of it that he’s given, and I suspect the title biased readers to think his theory is another consciousness-is-an-illusion one, which affected some of the discussion.

I’ve written about this theory before when I reviewed his book, ‘Consciousness and the Social Brain’, and alluded to it in several other posts.  I’m doing another post on it, partially to take another shot at describing it, partially to reaffirm my understanding of it, and partially to do my small part to call attention to a scientific theory of consciousness that I think deserves your attention.

Before starting on the theory, I think it’s important to understand that the scientific evidence doesn’t point to the brain operating under any central control.  There’s no homunculus, no little person inside controlling the brain.  The brain is more of a distributed set of modules that operate somewhat independently.

The first thing to understand with the theory is the distinction between attention and awareness.  Attention is the process of your brain deciding which sensory inputs to give priority processing to.  It’s  a messy emergent process with, again, no central control.  It can be top down, such as your attention to reading this blog entry, or bottom up, such as the attention you’d give to a spider crawling up your arm.

These sensory signals are constantly streaming into your brain, with each signal striving for attention.  There is an ongoing contest in your brain, with signals effectively forming coalitions, coming to prominence, and then receding in favor of the next ascendant coalition.

Some philosophers of mind stop here and say that this is consciousness, and that the feeling that there is anything else, that there is an inner experience of some kind, is an illusion.  But if this is an illusion, then what is experiencing the illusion?  And how is the illusion arising?  And how are the top down attentional states referenced above developed?

The answer may be awareness.  Awareness is not attention.  Your attention can be drawn to something without you being aware of it.  This is something every magician and illusionist knows.  They often misdirect your attention, without you being aware of it, which allows them to perform seeming feats of magic.

But if awareness isn’t attention, then what is it?  According to this theory, it is information.  Awareness is a model, an executive summary in your brain of the messy and emergent process of attention.  Like any executive summary, it lacks a lot of detailed information, it isn’t always accurate, and is by nature incomplete.

Compare this to what we know about the relationship between consciousness and the subconscious.  We are conscious of many things, but a lot more things go on within our subconscious that we have only incomplete or hazy information about, and much goes on that we simply have no information on.

In other posts, I’ve used the metaphor of a city newspaper.  The city is the brain, and the newspaper is awareness.  The newspaper gathers information, summarizes and simplifies it, and then makes it available to the rest of the city.  It is a feedback mechanism that allows the components of the city to know a summary of what is happening with all the other components of the city.

Awareness serves the same function in your brain.  It’s a feedback mechanism that allows the brain to monitor its attentional state.  According to the theory, it’s this feedback mechanism, this schema, that gives us our feeling of inner experience, of essentially experiencing our experience.
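
To make the shape of this arrangement a little more concrete, here’s a toy sketch in Python.  It’s purely my own illustration, not anything from Graziano’s book: a few hypothetical sensory signals, a crude contest for attention, and a lossy “schema” that summarizes the outcome and is all the rest of the system gets to see.

```python
import random

# Toy illustration of the attention schema idea (my own sketch, not Graziano's model).
# Sensory signals compete for attention; the strongest one wins at each moment.
# The "schema" is a lossy, simplified summary of that competition, and it is the
# only description of the attentional state that downstream processes can access.

signals = {"spider_on_arm": 0.0, "blog_text": 0.0, "background_hum": 0.0}

def compete(signals):
    """Crude bottom-up competition: each signal's strength fluctuates,
    and whichever is strongest wins attention this moment."""
    for name in signals:
        signals[name] = max(0.0, signals[name] + random.uniform(-0.2, 0.5))
    return max(signals, key=signals.get)

def attention_schema(signals, winner):
    """An executive summary: it reports what won and a rough confidence,
    but throws away the detailed dynamics of the competition."""
    total = sum(signals.values()) or 1.0
    return {"attending_to": winner,
            "confidence": round(signals[winner] / total, 2)}  # coarse, often inaccurate

for t in range(3):
    winner = compete(signals)
    summary = attention_schema(signals, winner)
    # Downstream processes (reporting, planning) see only the summary,
    # not the raw competition -- the newspaper, not the city.
    print(f"t={t}: {summary}")
```

The point of the toy is only the arrangement: the summary is cheap, incomplete, and sometimes wrong, yet it’s the thing that gets reported downstream, which is roughly the role the theory assigns to awareness.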

Another aspect of this theory is the idea that, just as we have an attention schema for our own attention state, we also have attention schemata for other minds.  The idea is that the same brain circuitry that processes awareness for our inner experience also processes our perceptions of what others are thinking.  For example, when we watch another person look at an apple, we model their attentional state and understand that their attention is on the apple.

In other words, consciousness is our theory of mind pointed back at ourselves, and our theory of mind is our awareness feedback mechanism pointed at other perceived minds.  (I’m tempted to go off on a tangent here about the importance of understanding yourself in order to understand others, but I think I’ll save that for some other time.)

Graziano feels that consciousness has at least some control over our actions, and that asserting it doesn’t, as many epiphenomenal theories of consciousness do, ignores the main thing we can know for sure about consciousness: that we can describe it.  I think that’s why his preferred metaphor for the attention schema is of a general plotting strategies with a map and toy soldiers serving as a model of the real battlefield.

I’m sure Graziano has his expert reasons for believing this, but based on all I’ve read, I’m less sure about consciousness being in control; maybe a better description is to say that consciousness has causal influence.  I think this is one reason why I prefer the newspaper metaphor.  Unlike a general, a newspaper doesn’t have control over what happens in the city (at least not directly), but it has substantial causal influence through the information that it makes available.  The city, or more accurately the various factions within the city, may or may not use the information provided by the newspaper in their decisions.

This conception also melds well with Michael Gazzaniga‘s description of the interpreter functionality that seems to be revealed by split-brain patient experiments.  These experiments are among the indications we have that the brain isn’t controlled from any one central point.  The mechanism producing the attention schema may well be the interpreter, or at least a crucial part of it.

So, why am I enthusiastic about this theory?  Well, first, it seems solidly rooted in neuroscience and psychology.  In his book, Graziano discusses the empirical support for the theory.  He admits that the support is still incomplete, and that the theory may have to be modified as more data becomes available.  This is normal for a scientific theory.

Second, the theory doesn’t invoke an unknown magical step.  For example, the integrated information theory posits that consciousness arises from the integration of information without being able to describe exactly how much integration is necessary, or why integrated entities like the internet or the tax code aren’t conscious (at least not without making counter-intuitive assertions that they are conscious but with no ability to communicate with us).  The attention schema theory sees integration as necessary for consciousness, but not sufficient by itself.

Third, the theory doesn’t dismiss inner experience as an illusion.  Its description of a feedback mechanism actually gives an explanation for the intuitive feeling of the homunculus that we all have.

And fourth, it gives insight into the type of architecture that might eventually be necessary for an artificial intelligence to be conscious, while showing how unlikely it is that such an architecture will come about by accident.

Is this theory the “natural selection” of consciousness, the kind of unifying explanation Graziano admits he is looking for?  I don’t know, but it feels like at least an important step toward that theory.  This theory will rise or fall on whether or not the data support it, but its grounding in the data already available makes me think it’s a closer approximation of that final theory than most of the other theories that get tossed around.
