Mind uploading and the philosophy of self

This video does a pretty good job of outlining the idea of mind uploading and the stark challenges it faces. (Watching it isn’t necessary to understand this post, unless you’re completely unfamiliar with the idea. It’s 14 minutes long, although the last few minutes are an advertisement.)

Kurzgesagt: Can You Upload Your Mind & Live Forever

I’m in the camp that sees mind uploading, or perhaps more broadly, mind copying, as a viable proposition. It may be that anything like the computing technology we have today is hopelessly insufficient, and new principles will have to be discovered. But if the mind is a system in this universe and operates according to the laws of physics, then there shouldn’t be anything in principle that prevents reproducing it.

That said, I’m not in the singularity camp that expects it in the next twenty years. I think it’s more likely to take a century or two, or possibly longer. Those optimistic about it happening in our lifetime usually focus on the idea of emulating the brain’s operations, and maybe getting by without having to learn the functional roles of all that activity. But as the video explains, that may take an unrealistic amount of computing power. Although who knows what neuromorphic and quantum computing might enable?
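
For a sense of scale, here’s a rough back-of-the-envelope calculation, in Python, of what a neuron-and-synapse-level emulation might demand. Every figure is a commonly cited ballpark number or an assumption on my part, not an established requirement, and modeling at the protein or molecular level would multiply the result by many orders of magnitude.

```python
# Rough estimate of the compute a neuron/synapse-level brain emulation might
# need. Every figure here is a ballpark assumption, not a measurement.

NEURONS = 86e9                 # rough human neuron count
SYNAPSES_PER_NEURON = 7e3      # rough average; estimates vary widely
AVG_FIRING_RATE_HZ = 1.0       # assumed average spike rate
FLOPS_PER_SYNAPTIC_EVENT = 10  # assumed cost of updating one synapse per spike

synapses = NEURONS * SYNAPSES_PER_NEURON
flops = synapses * AVG_FIRING_RATE_HZ * FLOPS_PER_SYNAPTIC_EVENT

print(f"synapses: {synapses:.1e}")
print(f"compute:  {flops:.1e} FLOP/s (~{flops / 1e15:.0f} petaFLOP/s)")
```

With these assumptions the answer comes out to a few petaFLOP/s, which the largest supercomputers can already manage. The catch is that nobody knows whether this level of modeling is sufficient, and every additional level of biochemical detail inflates the estimate enormously.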

Attitudes toward mind uploading expose people’s philosophy of self. Some say even an atom by atom copy of themselves would not be them. Of course, given quantum no-cloning theorems, such a copy is probably not in the cards. (Although if Everettian physics is reality, such copies are being created all the time.)

It’s interesting to ponder the extremes. Many of us would see an atom by atom copy as us, at least initially. But suppose we had a technology that watched our behavior for years and created a simulation of us based on that? Such a simulation might be conscious and think of itself as us, but most of us wouldn’t regard it as us. It would be missing our most private thoughts and memories.

But now imagine we have a scan of your brain. We don’t have the computing power to emulate its full operation. But say we do know enough to create a simulation of you based on that scan. The simulation wouldn’t work exactly the way your original brain worked. But it would have all your memories and think of itself as you. Your friends and relatives might see some differences, but they might amount to the changes after someone goes through a major life event.

Is such a simulation you? It depends on your philosophy of self. I don’t think there’s a fact of the matter answer. Myself, if I had a chance to have such a simulation created, I’d do it. Although unless there was some mechanism for us to share memories, I’d prefer it not be fired up until after I’m gone.

But maybe I’m missing something?

74 thoughts on “Mind uploading and the philosophy of self”

  1. The ability that distinguishes us from other animals is our stout imagination. This is one of those things that we can now imagine but possibly might not be able to do. There are many such things; we just don’t always know which ones fall into that category.

    The Netflix series “Altered Carbon” addresses many of these same issues. If we had the ability to make a back-up image of our minds, we could have a back-up system, involving cloned bodies and those images, that would allow us to live an extremely long time. Of course, virtual environments would be part of that picture, and we could retire from reality and go somewhere where there were no aches and pains and diseases, etc.

    While all of this is a great source of entertainment, I don’t think we are headed this way. We can’t even agree that getting medical treatment when it is needed is a good idea, or that climate change is potentially very, very threatening. We may be up to this imaginatively but not so much socially … at least right now.

    1. I wouldn’t want my clone to be forced to become me, and I suspect he wouldn’t do it voluntarily, if he had any sort of mind of his own. Of course, should these kinds of tech actually happen, my view might be a tiny-minority view, given how inconvenient it would be. (I haven’t watched Altered Carbon, so I dunno if this issue is already addressed.)

  2. “I’m in the camp that sees mind uploading, or perhaps more broadly, mind copying, as a viable proposition.”

    By contrast, I’m in the camp that sees uploading as absurd.

    Think of a computer. What it contains are ink marks on paper, or electrical charges or magnetic polarizations. What the computer contains is completely meaningless.

    Yes, it all seems meaningful to us. But the meanings come from us, not from the computer. The mind, by contrast, is full of meaning.

    If you try to upload the mind, you will have to strip out the meaning because there is nothing equivalent to meaning that can be uploaded to a computer. The important stuff cannot be uploaded.

    That’s how I see it. I’ll agree with what I think Steve Ruis is saying. We humans are far more like animals than we are like computers. It is the animal part of us, not the computer part of us, that makes us what we are.

      1. “Meaning” is embedded in the structure, James, and that structure “is” the biological brain; but I think you already know that.

        Is hanging around with those religious fanatics over at Kastrup’s blog site making your brain squishy?

      2. Isn’t the theory that “meaning” would just come along with cloned information processing?

        That’s how some would put it. I see meaning as tightly coupled to behavior. And I see behavior as finely tuned to our physical bodies. I don’t think they can be easily decoupled.

        1. Yeah, I’m agreeing with you. The brain doesn’t exist in isolation from the rest of the body, or, to some extent, from the natural and social context in which it exists. Emulating the brain really turns almost into a project to emulate or simulate the world.

    1. That’s a pretty common view.

      The issue with meaning though, is you have to do a complete comparison. You mention that there is no meaning in the marks, charges, or polarizations. But what meaning is there in the synaptic molecules, neurotransmitters, and action potentials in a nervous system? In and of themselves, none.

      Any meaning in neural processing gets its meaning through evolution and interaction with the environment, in the causal effects that impinge on it, and on what causal effects it’s able to bring to that environment. But that’s exactly what gives meaning to the marks, charges, and polarizations in technological systems.

      I know this is a favorite talking point of some philosophers, but I haven’t seen a clear identification of what meaning an evolved system has that an engineered system can’t have. But maybe I’m missing something?

      1. “Any meaning in neural processing gets its meaning through evolution and interaction with the environment, in the causal effects that impinge on it, and on what causal effects it’s able to bring to that environment.”

        So would the uploaded mind be without meaning since it wouldn’t evolve or have any interaction with the environment?

    2. Do you mean “meaning” as in “linguistic/cognitive meaning” or “meaning” as in “a valuable (aka meaningful) life”?

      Your “ink marks on paper” remark makes me think linguistic, but “The important stuff” makes me think you are talking about value.

      For what it’s worth, if you’re talking about linguistic meaning, I disagree. I think computers with sensors and actuators can achieve linguistic meaning. But for value, I tend toward agreeing, though I’m not entirely sure.

    3. Spot on! No computer knows anything. The meaning of inputs and outputs is assigned by people.

      Was it Searle who pointed out that a simulation of digestion doesn’t digest anything?

  3. I agree that mind uploading is ultimately doable, but I think you have underestimated the computational requirements. Given that we still don’t completely understand the functionality of the brain, it wouldn’t be sufficient to completely copy or scan an entire brain, even quark by quark (and I’m not even talking about the Heisenberg uncertainty principle); we’d have to do it every microsecond, possibly down to every Planck-moment, from beginning to end, to replay it or understand how it varies over time. Once we completely understand the brain’s functionality, we can extract or extrapolate or interpolate the purely mental functionality from the purely biological support functionality, and simulate a far less storage-intensive and computation-intensive model of the brain. A thought-provoking post, Mike. 🙂

    1. Thanks Mike.

      It’s not clear to me we’d have to model the brain to that extent. My take is that a lot of the modeling could happen at the cellular level, but some of it, notably in the synapses, might have to drop to the protein level, along with the genetic and epigenetic factors involved. (A toy sketch of what cellular-level modeling might look like is at the end of this comment.)

      My point with the simulation was that we don’t necessarily need to reproduce things at that level, just find functional equivalents. Maybe a better way of describing this, instead of “copying” a mind would be “porting” one.
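
      To make the cellular-level idea a bit more concrete, here’s a toy sketch in Python. It’s purely illustrative, not anyone’s actual proposal: a leaky integrate-and-fire neuron, the kind of functional abstraction that reproduces spiking behavior while ignoring the underlying protein chemistry entirely. The parameter values are arbitrary ones I’ve assumed for the example.

      ```python
      # Toy leaky integrate-and-fire neuron: a cellular-level functional model
      # that skips molecular detail. Parameter values are arbitrary.

      class LIFNeuron:
          def __init__(self, threshold=1.0, leak=0.9, reset=0.0):
              self.v = 0.0               # membrane potential (arbitrary units)
              self.threshold = threshold
              self.leak = leak           # fraction of potential kept each step
              self.reset = reset

          def step(self, input_current):
              """Advance one time step; return True if the neuron spikes."""
              self.v = self.v * self.leak + input_current
              if self.v >= self.threshold:
                  self.v = self.reset    # fire and reset
                  return True
              return False

      neuron = LIFNeuron()
      spike_times = [t for t in range(50) if neuron.step(0.15)]
      print("spike times:", spike_times)
      ```

      The point isn’t that this particular model is adequate, only that functional equivalents at this level are enormously cheaper than simulating the molecules that implement them.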

      1. Yes, but we have to understand the mind’s functionality better in order to be able to port it. I don’t believe biological infrastructure is a prerequisite for consciousness, but we need to understand how to disentangle the biological support components from the higher level mental support system. IMHO, of course. What do you think?

        1. I agree completely. It’s why I don’t think this is any sort of short term thing. We need to understand the mind and brain far more thoroughly than we do today. Most neuroscientists put that full understanding still a century or more away.

  4. If we uploaded a brain that is expecting a body and all of its inputs, how would the uploaded brain handle the missing inputs? I doubt it would seamlessly adapt to no body. It might enter into an NDE or OBE type state.

    This isn’t exactly trivial, because I think you posted something a while back about a significant portion of brain activity being involved with such things as compensating for eye blinks and head movement. So the uploaded mind would suddenly find itself with a significant amount of programming that would be useless or perhaps even counter-adaptive in its new state.

    Would it have sleep and wake cycles without neurotransmitters? Would it get hungry or thirsty? What would happen in the gut-brain axis? Would it be hot or cold? Would it feel no heart beat?

    All in all, I’m not sure the mind/brain is easily dissociated from the body. Sure, I guess you could virtualize all of the body, but then you would also need to virtualize an external world for the uploaded mind to interact with. Or maybe you could upload a subset that removes the expectation of a body. But then that wouldn’t be you. And what would it do? Ruminate on its glory days when it had a body?

    1. I think the medical cases of paralyzed people and other conditions show that the mind is far more robust than many people assume.

      But I also think if it’s going to be a happy human mind, it does need to interact with a body and environment. As you note, those could be virtual. Or it could be in a replacement body with sensory and motor abilities, so that its environment would again be the physical world.

      Backing up to the simulation does allow us to get around many of your other questions about sleep and neurotransmitters. A simulation has more options than a straight brain emulation, which, as the video points out, has serious obstacles.

  5. I don’t think I’d consider a copy of myself to be the same person as myself. Then again, I’m not fully convinced that the person I was five years ago is the same person I am today. So I guess if I have the chance to upload my mind into a new body, I might just see that as moving on to a new phase of life.

    1. That’s a good way to look at it. And someone just argued to me that my desire that my simulation not be activated until after I’m gone means I don’t really see it as me. Maybe. But similar to your view, I think it would be comforting to know that some version of me would be around after I was gone.

        1. That, and I’ve often wondered if I would like other-me. I would know too much about him and so would recognize every polite fiction, every nervous tic, etc., for what they were. And he would know entirely too much about me-me for comfort.

          No, best to keep him as a backup until he’s needed.

    2. I think the question for me is whether I would go with the mind. In other words, would the mind be me? If not, then it is just another chat bot on the web. How could we upload a mind such that my consciousness goes with it? And then, if we could, what is the status of my consciousness here in my human mind? Mike.

      1. From original-you’s perspective, you would not go with the copy. But from upload-you’s perspective, you would remember being original-you. Upload-you would perceive yourself as having simply been moved from being in original-you’s body to whatever new situation you were in (virtual environment, new body, etc).

        Who’s right? Original-you? Or upload-you? They’re both right, each for themselves. Our notion of a person would have to adjust to this reality.

    3. > I don’t think I’d consider a copy of myself to be the same person as myself.
      Your copy also will not consider you the same person as themselves. But you and they both will consider your common past self as the same person.
      And what if the copying process is done in a way to hide which instance is the copy and which instance is the original? Would that information even matter?

  6. Would a desire to upload our minds come from our fear of mortality? Doesn’t our fear of mortality come from a DNA programmed sense of automatic self-preservation?

    DNA’s sole “goal” (if goal it could be said to possess) is existence in perpetuity. Would DNA’s goal be served by the copying of an existing manifestation of cosmic interpretation (our current mind-state) into another persistent mind-state? Maybe. Can we transcend DNA’s mindless directives? Have we already?

    A copy always leaves the original. What of a copy of a copy? Does it cease to matter after the nth iteration?

    I’d have to wonder about Purpose and its influence on the justification behind copying our images (software word here) into an operational medium. I personally consider the Universe to be Absurd and without Purpose. The association here being that if the whole is without Purpose, so too are the parts.

    If given the opportunity, right this instant, to copy my mind into an alien, River World construct or galactic quantum computer, I’d pass.

    (Too many questions? Or not enough?)

    1. I definitely think fear of mortality is part of it, although I’m not sure that’s it exactly. We know our biological self will end. We might develop technologies to stave it off for a while, but eventually entropy will win. Of course, it will also eventually win in the case of an uploaded entity, but we might have staved it off far longer. Eventually the heat death of the universe may end every possible extension.

      I do think our desire to go on as long as possible, to remain a part of the story as long as we can, comes from programming, ultimately resulting from genes. That programming was put there because it does benefit “selfish” genes. But as Dawkins himself pointed out, when genes stumbled on sapience, they created both a boon for themselves and a system that may transcend them.

      We already frustrate the “goals” of our genes with things like birth control. It allows us to satisfy our genetic programming without providing the benefit to the genes that originally put the programming in.

      Uploading seems like it would be a disaster for our genes. But genes aren’t conscious. They have no ability to react to rapid developments. They depend on mutation and natural selection to adjust to changing conditions. The development of technologies like birth control, and possibly mind uploading, is simply too fast. The genes may have sown the seeds of their own extinction.

      I have no desire to copy my mind into River World, or any other environment that I don’t have some knowledge of beforehand. But if we are uploading people into a virtual environment, it should be possible for the living to visit that environment and see what they’d be jumping into.

      Not that I think anyone alive today is likely to get that chance, at least unless someone invents life extension technologies sometime soon. But my view leans Epicurean, so I won’t exist to care that I didn’t make it.

      1. I think I’ve read far too many sci-fi novels to have an original thought on the concept. To be expected I suppose. As an assembly of information, collected and stored in biological RAM, I am, if nothing more, a biased recording of perceived events, many having come from the likes of science fiction’s finest.
        Would I want my recording to continue posthumously? Does a DVD exhibit free will?

        1. Of course, your uploaded self might just be a snapshot of maybe the last ten minutes of your life, and you might get caught in a Groundhog Day loop of the doctor immobilizing you with a tranquillizer to take the MRI. That would be a bitch.

          1. Of course, “I’d” be dead and dust. It would be the skipping DVD that would suffer. Or not; a forever reset might have eased Phil Connors’s early angst. However, his inner Camus would never have been realized. I’ve often considered Groundhog Day’s main character to be the epitome of Sisyphus, achieving happiness in such a state—nirvana.

  7. As you know, I *am* in the singularity camp, but I also do not think mind uploading will come in the next 20 years. People who equate mind uploading with the singularity do not understand the singularity.

    I think I’ve said this before, but by the time we could upload a mind, or a reasonable facsimile (say, duplicating the function of each neuron without simulating molecules), our capabilities will be such that uploading will be just an act of vanity, akin to making a full body portrait of yourself today. It (likely) won’t serve any important purpose.

    *

    1. It’s worth noting that the earliest writings about the singularity were pretty dark. Rather than uploading, they had humans simply being replaced by machines. Writers like Kurzweil were the ones who developed it into more of a sunny optimistic vision, sort of a rapture for nerds.

      I don’t think people would see wanting to continue existing as an act of vanity. Or maybe I should ask, what would make it an act of vanity instead of an understandable act of survival? Are you thinking people might modify themselves so they wouldn’t care anymore whether they survived?

      1. My first source of singularity-type literature was K. Eric Drexler’s “Engines of Creation”, which essentially described what nanotech can do, and more specifically, why immortality can be engineered (barring war, accident, foul play, etc.). Later I came to Vernor Vinge’s use of the term “singularity”, which simply means the point at which you cannot reliably predict the future. Kurzweil provided a good explanation of an expected timeline of the development of artificial intelligence, which seems on track as far as I can tell. Unfortunately, some of his other ideas, like uploading, got roped into the “singularity package”. Finally, David Deutsch’s “Beginning of Infinity” provides pretty good support for the whole shebang.

        The point about vanity isn’t about existing. As I’ve indicated, immortality comes with the package. I suppose you could make a point for back-ups, but I don’t think that’s what we’re mainly discussing. And then there is the discussion about whether the backup is “you”. I think our understanding of identity will need some radical re-work when copies become a real thing. For me, a backup would not be “me”, and the point of generating a backup would depend on how that entity can serve a useful purpose. And this brings us to the purpose of any individual life, which is a whole ‘nother discussion.

        *

        1. I was thinking about Vinge and his use of the singularity as the point at which futurists and science fiction writers could no longer plausibly make predictions about what might happen. (As if they were ever all that accurate anyway.) I recall his point being that the future of humanity, including its survival, could not be taken for granted through such a process.

          But I was also thinking about someone like Arthur C. Clarke. In his book on the future, he predicted we’d eventually be replaced by machines. But we should be okay with that, because the machines would be our children as much as our genetic offspring are, and no one bemoans being replaced by their children. Something you might expect from the author of Childhood’s End.

          It’s worth noting that another sci-fi author who started out writing about the singularity in the upload sense, Charlie Stross, later turned to writing it in Clarke’s sense, of humanity being replaced by a race of robots, who just happened to look and behave a lot like humans. (Have to have a story after all.) In the second book of the series, biological humans are referred to as “fragiles” because they die so easily.

          Okay, I see where you’re coming from. You’re expecting life extension in our current bodies, so uploads would be superfluous, except as backups. Still, accidents happen, so I’d want those backups to happen regularly. And things like space travel will always be easier in the form of a machine than as a biological human, so I can see utilitarian reasons for uploading as well.

          I could see humanity diverging in a number of ways. Some would have your attitude. Stay in a biological body enhanced with nanotech. While others would explore other options.

  8. Simulation is a technical undertaking, and by definition a simulation is far from an exact copy. As with any technical task, a very important part is the assessment and validation of the final state. A related question is who the judges would be. A close analogue here is the Turing test.

    This is where all the questions arise. How do we define whether the simulation is successful? To what degree? How do we test it and compare it with the original? How intrusive would such a test be for the source individual? The list goes on and on. When would simulation be possible? Any timeframe for such estimates also depends on the validation procedure.

    No wonder our discussion is mostly not about how to do the simulation but about what the simulation means. That’s where the major disagreements and problems would be.

    1. Definitely validation would be a thing. Ultimately we could never know what the experience of a simulation is, until we are one. All we could do is test runs, and see how well the resulting system works. I imagine there would be years of animal testing and fine tuning before doing it on humans. (At least in general. Rogue scientists will do rogue things.)

      Someone has already done a simulation of a C. elegans worm, put that simulation in a small robot body, and showed it moving around much like a worm. Of course, that robot didn’t have to actually find food, try to reproduce, or do many of the other things the living version would have. And it’s worth noting that the simulation was based on a generic connectome, a composite of hundreds of individual worms, not the connectome of any one worm. Baby steps, or really more like baby finger and toe flexing. (A toy sketch of the basic idea follows below.)
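
      As a purely illustrative sketch (not the actual project’s code, and with wiring and weights I’ve made up), a connectome-driven controller can be as simple as a weighted graph whose activations propagate one step per tick, with the motor-neuron outputs setting the robot’s wheel speeds:

      ```python
      # Toy "connectome-driven" robot controller. The neurons, wiring, and
      # weights are invented for illustration; the real C. elegans connectome
      # has ~302 neurons and thousands of connections.
      import math

      # presynaptic neuron -> list of (postsynaptic neuron, weight)
      CONNECTOME = {
          "nose_touch": [("inter_rev", 1.0)],
          "food_smell": [("inter_fwd", 1.0)],
          "inter_rev":  [("motor_left", -0.8), ("motor_right", -0.8)],  # back away
          "inter_fwd":  [("motor_left", 0.6), ("motor_right", 0.6)],    # move ahead
      }

      def step(activations):
          """Propagate activations one tick through the connectome."""
          targets = {post for outs in CONNECTOME.values() for post, _ in outs}
          nxt = {name: 0.0 for name in set(activations) | targets}
          for pre, outs in CONNECTOME.items():
              for post, weight in outs:
                  nxt[post] += activations.get(pre, 0.0) * weight
          return {name: math.tanh(v) for name, v in nxt.items()}  # keep outputs bounded

      sensors = {"food_smell": 1.0, "nose_touch": 0.0}
      state = dict(sensors)
      for _ in range(3):
          state = step(state)
          state.update(sensors)  # sensory input stays clamped each tick

      print(f"wheels: left={state['motor_left']:+.2f}, right={state['motor_right']:+.2f}")
      ```

      Scaling that up to a real nervous system, with real dynamics and a real body to close the loop, is of course where all the hard problems live.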

  9. Leaving aside the practicalities, imagine a copy of you being taken for the wrong reasons by the wrong people. Hell could be created just as easily as heaven. Like you, if the mind is purely physical then I do not see any objection in principle to re-creating it in another medium. I do worry about being created and not wanting to be! Rather like being born – I had no choice in that and in retrospect might have passed on the opportunity given the choice.

    1. Now that’s Banks, notably Surface Detail. A world of mind copying does raise the possibility of having to protect your identity, at a much more primal level than we talk about today. Death would no longer be the worst thing that could happen to you. Of course, if it’s happening to a copy of you, you might be horrified by it, but it would be more distress about the fate of someone very close to you than you yourself suffering.

      Notions of self become very complicated in such a world.

        1. So much in that novel. The slave girl who gets killed and downloaded into a new body. The woman who gets trapped in one of the hells and evolves into an angel of death. The soldier who fights in many different physical manifestations, and turns out to be someone we already knew.

          I really wish Banks was still with us.

    1. From the transcript:

      And he said, “Oh, Rushkoff, you’re just saying that because you’re human.” As if it’s hubris, right? “Oh, I’m just defending my little team.” And that’s where I got the idea, “All right, fine, I’m a human, I’m on team human.” And it’s not team human against the algorithms or against anything other than those who want to get rid of the humans. I think humans deserve a place.

      Indeed, or as Steve Martin said:
      “Sure I’m self-centered. I feel it would be impractical to be centered somewhere else.”
      There is no need to be ashamed of one’s starting-point. All journeys begin “here”, and if you “just can’t get there from here” (to quote an anonymous comedian) then you can’t get there.

      1. A lot here depends on how we define “human”. Is a ported human mind still human? I personally think so. But if we define “human” as only the biological species, Homo sapiens, then it isn’t.

        Of course, the longer such a mind goes on, the less attachment it might have to its original biology. But then, if we figure out a way to live indefinitely, a human might resemble a natural human less and less over the centuries. And that’s before we get into things like genetic engineering.

        Any way you cut it, the traditional sense of the word “human” will almost certainly have to change, or its role in language will be taken over by some other word.

    1. The “sideshow” is a straw man – or should be, except that there are philosophers who endorse it. But there are also philosophers who don’t – like me – but whose views are ignored in favor of the easily attacked and vaguely similar position.

      More later, time to get back to work!

      1. Instead of a “sideshow”, there is a – let’s call it brainshow. Hallucinations happen in the brainshow, but they are not epiphenomenal, not shunted to the side. They plow on through and have noticeable effects, such as the person saying “I just hallucinated a dagger!”

        Beyond hallucinations, the brainshow includes subjective qualia accompanying many (but not all) objective perceptions of external reality. When I feel how cold or hot a fluid is, I get a convolution of internal states of my body (e.g., cold-adapted) and the external reality of the fluid. By keeping track of the temperature of my recent environment, I can compensate for the subjective perception and, with practice, get really good at measuring the actual temperature of the fluid. Alternatively if I know the objective temperature of the fluid, I can use my sensation to estimate the temperature at which my skin became adapted.

        When I approach closer to a red apple, my experience gets larger and larger amounts of red. But I don’t perceive this as the apple seeming to grow. The brain automatically parses subjective from objective data. It seems reasonably likely that there is evolutionary value in this ability. First, such a dual awareness lets you learn to avoid being fooled by certain illusions. This potentially beats being hardwired to see only the objective reality as it probably (in the evolutionary ancestral environment) is, because the probabilities might change with circumstance. Second, if you know how things seem, you are in a good position to deceive others.

    2. I personally agree with the illusionists ontologically, but I don’t like their terminology. See this recent post: https://selfawarepatterns.com/2020/11/28/the-problem-with-the-theater-of-the-mind-metaphor/
      And this one: https://selfawarepatterns.com/2020/02/11/do-qualia-exist-depends-on-what-we-mean-by-exist/

      The TL;DR is that a lot of the things illusionists say don’t exist, I say exist only subjectively. The problem with calling them illusions is that the word “illusion” implies “mirage”, but these things have adaptive causes and effects. I think it’s better to say they’re part of our adaptive subjective model, but that the objective reality is far from obvious from the impressions they provide.

  10. I admire you for having the courage to ask the big questions. Most people do not even bother. But asking the right questions is the start of knowledge. Sooner or later you will find the answers if your questions pass through your heart.

  11. Interesting food for thought, Mike. I like the way you’ve framed the question of selfhood.

    “Such a simulation might be conscious and think of itself as us, but most of us wouldn’t regard it as us. It would be missing our most private thoughts and memories.”

    Exactly. It also would be missing our experiences, including the numerous little things I’ve learned and the manner in which I learned them. And if we were to create an atom by atom copy, and create a perfectly matching simulation of the environment, including a replication of all my little experiences…what would be the point? If that’s what it takes to replicate me, why would I want to live the same life again and again? It’s like that movie Groundhog Day.

    1. Thanks Tina.

      Definitely no one wants to be stuck in Groundhog Day hell. The idea would be to have new experiences and go on living after our original selves had ended. Our copied self, assuming their source material included all our memories, would think of themselves as us, as just having gone through a transition.

      Of course, for us after the copy, that wouldn’t be the reality. It’s why I really wouldn’t want my copy running until after I was gone. I don’t want constant reminders that I’m the old version. But if we wait until after original-me kicks the bucket, then copy-me will see himself as a continuation. And it’s easy for original-me to think of myself as becoming copy-me, as long as copy-me isn’t around making me feel different.

      But it definitely centers on our philosophy of self. How do we conceive ourselves? What’s a continuation? And what’s another being? I don’t think there’s a fact of the matter answer. Just how we personally decide to think about it.

  12. This is why we might want to upload our minds… Not for our cognitive immortality but for our descendants’ continuity: https://thereader.mitpress.mit.edu/chatting-with-the-dead-chatbots/

    I signed up at Replika, the site referenced in the article, and was soon chatting with a bot. This was simple enough, but I can imagine somehow uploading all of my written words and ideas and preloading such a thing with “me” so that my kids can keep “me” around. The whole GPT-3 (4, 5, …?) news seems to speak to us having near replicas of ourselves mimicked in silicon. I can see myself making a copy of my mind, if nothing more than to ease the feeling of loss my kids will experience when I’m dead.

    1. Interesting. Thanks!

      That fits the scenario I discussed in the post of something that observed our behavior and emulated us to others. Eventually such constructs might even be conscious. But without our deepest memories, most wouldn’t see them as us.

      But it’s not hard to imagine a future scenario where an emulation is a hybrid of something like this and what could be gleaned from a postmortem brain scan, which would blur the distinction. Most of us who die before that technology is ready can only hope to be reproduced based on what we leave behind.

      All that said, I have to wonder what kind of fidelity something like this has. How noticeable is the difference for anyone who really knew them?

      1. What this article did was to redirect my focus away from my own brain and the continuation of me and direct it toward those who might benefit from a near-facsimile of me.
        Initial fidelity of such a representation would no doubt be a deal breaker. But if it started out as more of a diary/biography where others could wax nostalgic about my past, that would be a start. If they hung around and actually chatted and got somewhat true-to-form responses, then who knows. But it’s the perspective switch that I found revealing.

        1. Good point. It makes me think of all the parents who worry about what would happen to their kids, or other dependents, if they should die. If you can leave an avatar behind that would at least resemble you, then it gives them something that might be pretty significant. It might not put food on the table, but maybe it could provide guidance and moral support.

  13. That would be very interesting. The upload would take over where you left off practically. Imagine what one could accomplish in an eternity. Cool blog.
