Massimo Pigliucci’s pessimistic view of mind uploading


Massimo Pigliucci wrote a paper on his skepticism of the possibility of mind uploading, the idea that our minds are information that it might someday be possible to copy into a computer, a virtual reality system, or some other type of technology.  His paper appears as one chapter of a broader book, ‘Intelligence Unbound: The Future of Uploaded and Machine Minds’, which I think I will have to read.

Interestingly, Massimo’s paper is in response to a paper by David Chalmers which apparently supports the idea of MU (mind uploading).  I didn’t realize that Chalmers was open to something like that.  Usually I think of Chalmers as the philosopher hopelessly preoccupied with the mystery of consciousness, but it looks like he doesn’t let his fascination with that mystery preclude him from considering possibilities like MU.  This is causing me to reassess my views of him and wonder if I should be reading more of his material.

In his paper, I think Massimo makes some important cautionary points, but his conclusions from those points strike me as unwarranted and overconfident.  Unfortunately, the paper at the link is a scanned PDF, so I can’t paste snippets and respond to them; instead, you’re going to get my own quick summation of each point.  But you shouldn’t take my word on these; his paper deserves to be read in full.

Massimo’s assertions are in bold, with my responses following.

The brain is not a digital computer.

Yep.  There are some advocates of MU who do seem to think that the brain is a digital computer, but I think anyone who has done any serious reading about the brain knows that isn’t true.  The brain appears to be a massively parallel loose cluster of analog processors.  Instead of transistors with discrete states, it uses synapses with smoothly varying strengths.

However, the brain takes in inputs from the senses, processes and stores information, and produces outputs in the form of movement.  If we built a device that did that, regardless of its architecture, we would almost certainly call it a computer.

MU depends on the computational theory of mind, which is flawed because “we now know a number of problems” which can’t be computed.

All indications are that the brain is a physical system that works in this universe.  If there are problems that modern computers can’t process, but that the brain can, that’s a flaw with modern computer architecture.  But it’s a major leap from that observation to saying that no technology could ever solve it.

If Massimo means that there are problems that could never be computed with any machine architecture ever, then he’s effectively saying that the human mind has a non-physical aspect outside of this universe, which I doubt is an assertion he wants to make.

A simulated mind is not the same as a functional mind.  Simulated photosynthesis doesn’t produce sugar, no matter how closely it models the actual physical process.  

Simulated photosynthesis produces simulated sugar.  A human mind is evolved to produce bodily movement.  A simulated mind would produce simulated movement, which might be quite satisfactory in a simulated environment.  But if the hardware running the simulated mind were connected to the right machinery, it could produce actual movements, turning the “simulation” into an arguably functional mind.

Would a simulated mind produce real consciousness?  (Whatever “real consciousness” is.)  It depends on how far down into the mechanisms a simulation goes, and at what layer in those mechanisms that consciousness actually resides.  If consciousness resides in the quantum layer, then it’s hard to see a simulation ever capturing it.  If it resides in the organization of neural and synaptic circuits, then I think it’s entirely doable, someday.

Human consciousness may be strongly dependent on its biological substrate.  In other words, human consciousness may require human biology.  Thinking otherwise is dualism, and dualism “has no place in modern philosophy of mind”.

This is entirely possible, although I think Massimo’s stand on this is filled with far too much certitude.  But even if it is possible, that doesn’t mean that human technology may not be able to someday duplicate that biological substrate.  It may be centuries in the future, but saying it will never happen strikes me as remarkably pessimistic about what human ingenuity might eventually be able to do.

Personally, I think a human consciousness uploaded into a silicon (or whatever) substrate will be unavoidably different.  The question is, would it be so different that friends and family wouldn’t recognize their loved one?  Of course, if the upload was not destructive, so that the original person was still around, the differences might be more noticeable.

I think the dualism assertion is, frankly, silly.  There’s no requirement for Cartesian “ghost in the machine” dualism.  The only type of dualism that would be required is the software/hardware dualism you accept by running Windows, Mac OS X, Linux, or whatever, on the device you’re using to read this.

Massimo feels it is self-evident that Captain Kirk dies every time he steps into the Star Trek transporter.  Since transporting is effectively a type of MU, this means no one should want to be uploaded, at least not if it’s destructive.  Destructive uploading is high-tech suicide.

I have to admit that I wouldn’t be eager to submit to a destructive (i.e. fatal) type of uploading.

But suppose I’m on my death bed.  Regardless of what I do, my current physical manifestation is about to end.  If I don’t upload, my pattern will disappear from the universe.  Uploading might produce an imperfect copy, but something of me would continue after I was gone, something far more intimate than my work or even my children.  That version of me would consider itself to be me.  If that’s all I had left, I think I’d take it.

It’s worth noting that, due to the body’s never-ending repair and waste removal processes, the physical me that exists today isn’t the physical me from ten years ago.  Every atom in my brain has been replaced over those years.  My current mind is a very imperfect copy of the me from ten years ago.  Actually, it’s an imperfect copy of me from yesterday.  Yet I’m never really tempted to wonder if tomorrow’s me will be the real me.

Again, I think Massimo raises a number of important cautionary points.  It might turn out that MU is impossible.  For example, despite all indications, we might discover that something like Cartesian dualism actually is reality.  Or human consciousness might be so fragile and so tightly tuned to the body it arose in that any attempt to copy it would render it non-functional.  Or it might reside in quantum layers of reality that we may never understand.  But I think these possibilities are unlikely.

My own prediction is that engineers will eventually produce something that resembles MU, that the uploaded minds will be different than the biological ones, that some people will be horrified by those differences, but that most will eventually learn to live with them, and simply come to see uploading as one of existence’s transitions.

It might be several centuries before this happens.  Even the singularity enthusiasts who see it coming in the near future are counting on the help of super-intelligent AIs.  But for many people, MU that is physically possible, but not achievable in our lifetimes, is the worst scenario, because it means that we might be among the last generations to disappear from the universe.  For these people, it’s far better to conclude that it will never be possible.

I can understand this impulse.  But if it has any hope at all of being doable within any of our lifetimes, it’s unlikely to be accomplished by those who have already decided it is impossible.


19 Responses to Massimo Pigliucci’s pessimistic view of mind uploading

  1. Hi SAP,

    I of course disagree with Massimo, but I think many of your criticisms are not quite right. He’s wrong for slightly different reasons!

    “The brain is not a digital computer.”

    Well, it isn’t physically a digital computer. That doesn’t mean that there is any information processing it can do that a digital computer cannot. Computationalism should be defended not by saying that we can make parallel, analog computers, but by making the point that the architecture doesn’t matter at all. Different architectures are equivalent in principle, the differences being only in terms of efficiency and complexity.

    “If there are problems that modern computers can’t process, but that the brain can, that’s a flaw with modern computer architecture.”

    If Massimo is alluding to problems which cannot be computed, he presumably means uncomputable problems. These are problems which have been proven to be impossible to solve by means of any computation, no matter what the architecture.

    If there were any uncomputable problems which brains could solve, that would be a fatal problem for computationalism. But there are no problems that have been shown to be uncomputable which human brains can solve. It appears very much as though uncomputable problems are completely impossible to solve by any means at all (within this universe anyway).
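    The distinction here between architectural limits and uncomputability can be made concrete with a small sketch (entirely my own illustration, with hypothetical names, not anything from Massimo’s paper): a halting check *within a step budget* is trivially computable, while what Turing proved impossible, for every architecture, is the same check with no bound at all.

```python
# Illustrative sketch: bounded halting is computable; unbounded halting
# is Turing's classic uncomputable problem, regardless of architecture.

def halts_within(prog, arg, max_steps):
    """Run prog(arg), written as a generator that yields once per 'step',
    for at most max_steps steps. Return True if it finished in budget."""
    gen = prog(arg)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True   # the program halted within the budget
    return False          # budget exhausted; in general this settles nothing

def countdown(n):         # a program that clearly halts
    while n > 0:
        n -= 1
        yield

def forever(_):           # a program that clearly loops
    while True:
        yield

print(halts_within(countdown, 5, 100))   # True
print(halts_within(forever, None, 100))  # False, but no finite budget can
                                         # certify "never halts" in general
```

    The point of the sketch is that no matter how clever the `halts_within` replacement, removing the `max_steps` bound provably cannot be done, on brains-as-computers or anything else.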

    “If consciousness resides in the quantum layer, then it’s hard to see a simulation ever capturing it.”

    I’m not sure this is the right way of thinking about it. I don’t think it resides at any particular level. What you mean to say is we don’t know what degree of fidelity is required to reproduce consciousness. I prefer to answer the “sugar” argument by pointing out that simulated complexity is complexity. An emulated calculator is itself a calculator. The question is whether consciousness is like these or like sugar.

    I also feel your answer to this point seems to indicate that you think it might be possible for the simulation to basically work (producing the appearance of intelligent cognition) without producing consciousness. If that is your view, I doubt it very much. I think consciousness is inextricably bound to the information processing abilities it affords. You can’t have one without the other. If the simulation is accurate enough to produce intelligent behaviour, my view is that it must also be conscious.

    “Personally, I think a human consciousness uploaded into a silicon (or whatever) substrate will be unavoidably different.”

    I don’t see why, unless as a practical matter. Speaking in principle, I see no reason why the consciousness could not be arbitrarily similar.

    You’re dead right on dualism though.

    On whether destructive uploading is death, there is no right answer, because the question is meaningless. In a world where we can create duplicates of ourselves, it’s up to us to define what death means. There is no fact of the matter on whether you personally continue to exist, because there is no fact of the matter on what you, personally are. It’s a troubling question because, psychologically, and for evolutionary reasons, we feel we ought to try to continue to exist (because we fear death), and we don’t know whether uploading achieves this goal. There are two apparently equally valid ways to think about it yielding contradictory answers. The problem is Massimo only explores one.

    For the reasons you outline (atoms being replaced and so on), I choose not to identify with my body. I identify with my mind, which I believe to be a kind of pattern (a view I assume you are sympathetic with in light of your name). Patterns can be reproduced, indeed particular copies of a pattern are not the pattern itself but merely instantiations of the pattern. The same operating system can be installed on a number of different devices, and it is really the same operating system. Back up a file to a remote server and you have really done something to preserve it, not merely duplicated it.

    For the reasons outlined by Massimo, it is difficult to imagine how we could think of our own existence in this way. He fails to do so in his paper, but there is a coherent way to approach it. I’m going to quote what I have written about it elsewhere.

    “Massimo is operating on the assumption that when Kirk is non-destructively teleported, we are left with an original Kirk and a clone Kirk. That is not my view. My view is that we are simply left with two Kirks. Neither is any more Kirk than the other, since the two are identical. Think of what happens when an embryo splits in the womb to produce a pair of identical twins. We don’t imagine that one twin is original and the other a clone. They are both as much descendants of the original zygote as each other, but take on distinct identities from the point of fission. This is how I think of the Kirk thought experiment and of mind-uploading. If I were to non-destructively upload my mind, the self pre-upload has two futures, one physical and one virtual. If I were to destructively upload my mind, I would have one future, just as I would if I had not. I think of such situations as like forking an identity, in the same way that a river forks or a software project forks. Each fork of the river is as much the descendant of the unforked river as each other, yet the forks take on distinct identities from that point on. If one fork were to be terminated immediately after the river forks (so that the forking is not even visible), this is the same thing as the river continuing unforked, and so in the same way I view destructive teleportation (or mind-uploading) as somewhat equivalent to transportation.”
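    The fork picture can be sketched as a toy Python analogy (entirely my own illustration; the “mind” here is just a dictionary standing in for a pattern, and all the names are hypothetical):

```python
import copy

# Toy illustration of pattern vs. instantiation: forking yields two equal
# instantiations; neither is privileged, and they take on distinct
# identities only after the fork point.
mind = {"name": "Kirk", "memories": ["academy", "first command"]}

fork_a = copy.deepcopy(mind)
fork_b = copy.deepcopy(mind)

print(fork_a == fork_b)   # True: identical patterns at the fork point
print(fork_a is fork_b)   # False: two distinct instantiations

fork_a["memories"].append("beamed down to the planet")
fork_b["memories"].append("stayed aboard the ship")
print(fork_a == fork_b)   # False: distinct identities after the fork
```

    Neither copy is “the original” in any sense the data itself records; they differ only in the histories they accumulate after the split.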

    I also highly recommend this cartoon (although it seems to paint a picture more supportive of Massimo’s viewpoint than mine):


    • Hi DM,
      Excellent remarks! In general, I don’t think there’s that much daylight between our positions, except perhaps I’m a little more epistemically cautious.

      “Well, it isn’t physically a digital computer. That doesn’t mean that there is any information processing it can do that a digital computer cannot. ”
      I tend to agree, but it hasn’t been empirically demonstrated yet, and I felt my case was strong enough without contesting it.

      “But there are no problems that have been shown to be uncomputable which human brains can solve.”
      Thanks. Massimo didn’t mention (or realize) that part.

      “What you mean to say is we don’t know what degree of fidelity is required to reproduce consciousness.”
      That is what I meant. Actually, I tend to suspect that capturing consciousness “perfectly” requires going down to at least the molecular level, but capturing it “effectively” can happen at the neural-synaptic circuitry level.

      “…simulated complexity is complexity. An emulated calculator is itself a calculator. The question is whether consciousness is like these or like sugar.”
      I actually think we’re saying the same thing here. I just articulated it differently.

      “I also feel your answer to this point seems to indicate that you think it might be possible for the simulation to basically work (producing the appearance of intelligent cognition) without producing consciousness.”
      I don’t see that we can currently rule out the possibility. I agree that if a simulation *perfectly* reproduced the output of the original mind, then it must be conscious, but what about if it only *mostly* reproduced it? And how would we know the difference?

      On differences in consciousness between different substrates, I’m mostly talking pragmatically. But speaking in principle, any system that’s not the original is going to be different, to at least some degree unless, like software and hardware in modern computers, it is engineered to be interchangeable. Brains and minds aren’t engineered to be interchangeable. I think it’s excessively pessimistic to conclude this makes uploading impossible, but it is realistic to expect differences.

      “On whether destructive uploading is death, there is no right answer, because the question is meaningless.”
      Agreed, but our instinctive reactions to it aren’t, and they’re not necessarily modifiable by reasoning.

      I agree with just about everything you write from here on, but I do think the transition from organic to technological substrate will be weirder than subsequent transitions, clonings, etc. With subsequent copying, we’ll have a much higher degree of confidence that the subsequent copies are 100% accurate, and I think everything you say makes perfect sense.

      Entertaining video. Thanks!

      BTW, the science fiction novel ‘Ancillary Justice’ has an interesting take on an issue that could arise with several copies of a mind running around.


  2. ejwinner says:

    I am with Massimo on this. The brain evolved to satisfy biological needs. The complexity of the brain has spun off these needs into a myriad of desires, but the needs remain the foundations of the brain, and hence of consciousness.

    Let’s get to the nub here. Mind uploading is the offer of an afterlife. Specifically it is a technological variant of Thomas Aquinas’ interpretation of the City of God, which I once described as:

    “Civitas Dei; as imagined by Augustine and articulated by Aquinas: The saved soul enters a community of scholars and they float around communicating their knowledge with one another.
    – Again, for all time; and, again, boring.
    Aquinas seems to depend on the temporal contingency of the earth to provide ever new knowledge from recently released souls, to mitigate this boredom. In other words, just when the community has fully shared all the knowledge it has, and every soul knows what every other soul knows, another soul pops up to announce, ‘hey guys, guess what I learned while still in a material body?’ City of God becomes an eternal library where every newly arriving soul is the new best-seller added to the shelves.”

    The way to convince me this isn’t so, would be to argue, in what way MU would improve my (afterlife) sex life; how it would get me better food, allow me to drink as much as I want, what it would feel like, how it would taste and smell. But then it might end up looking like the Orthodox Jewish/ Islamic afterlife, which I once remarked as:

    “Incarnation (of the soul) into its original body, but on a re-created paradisaical earth where everything is beautiful and nothing hurts. One gets to eat, drink, fornicate, laugh, stay stoned all the time – throughout all time. (…)
    – That ‘through all times’ gives me problems here, and will continue to haunt us in this discussion. I can’t think of any existence more boring.
    I mean, I like eating, drinking, fornicating, etc., as much as anyone, and more than some, but try to imagine having sex with your favorite sex partner throughout eternity. One doesn’t even get an orgasm, since orgasms have specific biological functions which would be nullified on a perfected earth. So ‘the old in-out’ literally just goes on and on… and on…. In out in out in out in out, etc., to infinity. Pleasure doesn’t look like so much fun anymore.”

    However phrased, the matter is not at all frivolous. First, this is why MU afterlife raises charges of dualism. Consciousness divorced of body, whether it is a soul or just synaptic patterns, inevitably leads to dualism. (DM noted, “I choose not to identify with my body. I identify with my mind” – regardless of the reasoning, this is the raw stuff of dualism, this is what dualism is all about.)

    But there’s a larger question here which should be at some point addressed. I personally don’t want an afterlife, of any kind. I’ve thought about this to the point of seeing it on the other side, where I can now ask the question, why would anybody want to live forever? Rather than being the desire that ought to be satisfied, perhaps it’s a problem.

    I’m not saying that MU is impossible (although thankfully not probable within my lifetime – which is satisfyingly finite). I’m not even saying that it would be a bad thing for those who want it. I am suggesting that there are questions yet unaddressed that cannot be elided, if it is to gain stronger claim as a defensible project.


    • You make many excellent points. MU would be a type of afterlife, and it does imply a type of dualism. And there are some transhumanists who have an almost religious faith in the idea. I’d be lying if I said these people didn’t make me nervous.

      But I think the fact that something resembles a discredited concept (at least among non-believers) shouldn’t cause us to automatically dismiss it. Dualism and an afterlife may not be built into nature, but that doesn’t mean they can’t be engineered. MU doesn’t violate the laws of nature as we currently understand them, and minds exist in nature, which makes me optimistic that we will eventually be able to duplicate them. (I do think it’s likely centuries in the future though.)

      As to how it could improve your sex life, etc., I think being in a virtual reality means that you could improve it any way you wanted. I can’t see any reason why you couldn’t have as many orgasms as you wanted, as rapidly as you wanted. (Although I could see orgasms actually losing their appeal if you overdid it.) Likewise with eating: you could eat the best meals as much as you wanted, with no health consequences.

      I completely agree with your point about eternal life. It’s hard to imagine that the boredom of eternal life wouldn’t eventually become a type of hell, with the kindest option simply being able to end existence. But I wouldn’t mind having the option to continue until I decided that I’d had enough.


  3. agrudzinsky says:

    I don’t think MU is possible. What we call “mind” appears to me to be a mechanism for interaction between the environment and the body, and also between the parts of the body. As you mentioned, the result of such interaction can be movement or secretion of substances: sweat, hormones, etc. The “state of our mind” and our thoughts seem to be determined not only by synapses, but also by chemical and physical processes in the body. I think I read somewhere that one can control dreams by changing the temperature of different parts of the brain. Apparently, injecting a drug into a computer isn’t going to cause the same reaction as injecting the drug into the body. How the body reacts to external stimuli is a property of that body. I don’t think it is possible to “replicate the mind” without cloning the body. But even cloning the body won’t replicate the mind, because the clone would have a different history than the original.


    • Keith Wiley says:

      How much body do you require for your body-dependent theory of mind? Is a quad-amputee at the hips and shoulders a person? What if we had sufficient medical and engineering technology to enable survival of further and further loss of biology, say from the torso down…or, in the limit, from the neck down? A head, fully healthy, but amputated from the neck down? With futuristic technology this “purported” person might actuate a robotic body and be more mobile than today’s paraplegics and amputees. Would you consider such a person to be a complete mind? What if they were blind? And deaf? What if they lost the ability to move their jaw and tongue and could not speak, even with robotic lungs and larynx? But their mind is as sharp as anyone else’s, and they interact in society as a full person for all intents and purposes. Where does your dependency on the body end? With the last hair follicle? The last skin cell? The last cubic millimeter of skull? The mind cannot possibly depend on the biological body. It might require some sort of interaction with the physical world in order to avoid falling into an “attractor state” of mental circularity…but it cannot conceivably depend on the conventional body unless we are going to grant some sort of diminished personhood to amputees, as demonstrated above.

      Keith Wiley
      Author of A Taxonomy and Metaphysics of Mind-Uploading
      http://www.amazon.com/dp/B00NJZHGM8


      • agrudzinsky says:

        I guess the debate boils down to the definition of mind. What exactly are we uploading? Perhaps most people would agree here that mind is not just a collection of information. There is also will, feelings (pleasure, suffering), passions. Some may choose to argue, but I’d say we do make choices based on our preferences. Can a machine be said to have will and preferences except those of its makers? Can we say that a machine suffers?

        You seem to have a point. How much of the body and its functions can we take away to still say that a person has mind? But let’s not stop at the body. What makes the essence of mind without which it stops being a mind? Ability to communicate or, at least, react to stimuli in a detectable way? This seems to be necessary, but not sufficient because we could say that a fire alarm has a mind. It reacts to chemicals in the air and communicates with a loud beep. Can we say that a fire alarm has will? It seems to make a decision when to beep.

        There are more questions than answers in what I say. I suspect that these questions cannot be answered. The problem with the subject of mind is that all reasoning about it takes place in our mind. Therefore, the reasoning is circular by definition. So, I doubt that we can fundamentally reach any definite conclusions here.


  4. agrudzinsky says:

    There is another fundamental problem with MU. An image is never identical with the thing the image reflects. A painting is only a canvas with paint. The “image” is not the painting – it’s what we imagine looking at the painting. A simulation is only related to what it simulates through the mind of whoever created the model, the algorithm, etc. You may see the problem. To have an “image” of a mind, one needs another mind to imagine what’s in the image.


    • I agree that for a human mind to work properly, at least initially, it would need a simulated body. But I can’t see that being much of an obstacle if we’re advanced enough to capture the mechanics of consciousness.

      Your other point could be more difficult. How would we ever know that the uploaded entity is the person? We can’t tell by the entity saying so, because we’d suspect that was just part of the simulation. To some extent, I think part of the answer is knowing what was involved in establishing the simulation. If it is closely modelling the actual mechanics of a brain, without a lot of legerdemain on the part of the technology, then I think it would be reasonable to say we had the person. If running the simulation required extensive “substitute functionality” (because we didn’t understand the original well enough), then I think I’d say we should be suspicious.


  5. Liam Ubert says:

    The more I think about it, the less enamored I become of MU, assuming that means a high fidelity emulation, and not just something that has the ability to superficially fool us.

    The mind-body dualism problem is quite serious, not ‘silly’. If there is no duality, and most of us agree there isn’t, then either the upload must include information about the body, or consciousness must be entirely representative of the human mind. The empirical evidence against the latter (i.e. that consciousness is only an epiphenomenon, reflective of the workings of our mind and brain) is quite strong, almost ‘settled science’, to use a cliché.

    In the absence of duality, it would be necessary to upload (essential parts of?) the body as well, which is tantamount to cloning. This creates a whole set of serious ethical and moral problems.

    What is the point of the upload anyway? So that one’s consciousness survives and can interact with friends and loved ones. Would its children love the upload? Would the upload love its children? I am not saying one should not try; I’m just predicting that it will never happen.

    There is no physical law that makes it impossible, but that does not make it feasible. I would not be surprised if there is some biological law against it.


    • The dualism issue seems to be one that many are finding to be a serious obstacle. Maybe I’m missing something, but while I think it’s rational to conclude that duality doesn’t naturally exist, in my view that’s entirely separate from saying that we couldn’t engineer it into existence.

      On the epiphenomenon view of consciousness, my question is, if it is an epiphenomenon, then how are we discussing it? How do our language centers get information on it and translate that into talking, keystrokes, etc? I think the evidence definitely points to consciousness not being in control, but to it still having causal influence.

      I totally agree that there are serious ethical and moral concerns with MU. But I think the technology will eventually come anyway. How society may end up dealing with it is interesting to contemplate. Charlie Stross has predicted that it would cause the mother of all religious wars, but that feels overly pessimistic to me.

      On your final point, I guess I would say that I’m totally open to being convinced of something like that, that there is some fundamental obstacle in nature that prevents it, but it would have to be based on more than just incredulity. I’ve seen too many things that I was initially incredulous of turn out to be reality.


  6. Pingback: Reaching the stars will require serious out-of-the-box thinking | SelfAwarePatterns

  7. I’m still amazed by cloning, so who knows what could happen. I don’t know enough about the brain and the science behind it to weigh in on this matter, but I do have to agree with Charlie Stross in saying it would cause the mother of all religious wars. My first thought was: Uh oh. Search for immortality = hubris. Then, of course, Dracula-Nosferatu scenarios.


    • I think the hubris reaction is very common. There’s a sense that maybe we’re playing with something we have no right to play with, that if we succeed, we may be creating something monstrous. Your example of Dracula-Nosferatu is insightful: creatures that should be dead, but continue on in some soulless manner.

      In some ways, it’s similar to the Frankenstein-like fear of AI development, that we’re playing with forces we don’t understand, and that our creations may turn on us. I’ve talked about why I think the AI reaction is misguided in various posts, but I have to admit that I don’t really have an answer for the uploading one, other than perhaps to name it.


      • To be honest, I’d probably upload myself if I thought it would really work. And it’d probably turn out to be a bad idea, but I’d certainly do it!

        I wonder if I’d get bored in eternity. So far, it looks like I wouldn’t. But who knows?


        • I often get bored in my current life, so getting bored in eternity sounds likely to me. I’d like to think that the only way I’d accept eternal life is with the option to voluntarily end it when I decided that I’d had enough, but I’m not sure I’d have the discipline to insist on that.

          Indeed, once you’ve learned everything, done everything, and known everyone billions of times over, I can easily see eternal life without that termination option becoming a type of unbearable hell.

          Still, it’d be nice to be able to continue until I’m ready to cry uncle.


  8. Liam Ubert says:

    SAP wrote “.. if (consciousness) is an epiphenomenon, then how are we discussing it?” Language seems to be integral to consciousness, also an epiphenomenon therefore, and specifically communicates the phenomenological contents of our consciousness. We all share this highly evolved feature, language, and it has a number of very powerful effects on us social beings. First of all, it standardizes and calibrates how we think, so much so that we sometimes suffer the illusion that we are actually sharing contents of our individual consciousnesses with others. Or, more commonly, we suffer the illusion that we are fully integrated in our culture – moved by drama, uplifted by religious passion, suffering the pain of others. Language thus is the great homogenizer and integrator. (We can both agree that an apple is green, but our subjective experience of the color green may be different. Thus far, we have no way of directly comparing our actual subjective experiences. The same applies to hearing, smell, taste, pain, happiness, sadness, anxiety, depression, anger, etc.)

    On the other hand, while each of us has their own unique genome, large parts of it are ultra-conserved, being 100% identical across individuals and even species. It appears that newborns come preloaded with schemas for sensation, learning and language. This helps to explain the rapid acquisition of prodigious vocabularies by some, for example, but the specifics of language always have to be learned from the absolute beginning.

    However, our genome could provide us with common neurological structures that could ensure that we have similar subjective experiences. We simply do not know much about these things. It does present yet another large obstacle for MU, insuperable in my humble opinion.


  9. Pingback: The mind / body dualism of ‘Edge of Tomorrow’ | SelfAwarePatterns

  10. Pingback: Biology uses quantum effects. | SelfAwarePatterns
