Mind uploading and continuity

As a computational functionalist, I think the mind is a system that exists in this universe and operates according to the laws of physics. Which means that, in principle, there shouldn’t be any reason why the information and dispositions that make up a mind can’t someday be recorded and copied into another substrate, such as a digital environment.

To be clear, I think this is unlikely to happen anytime soon. I’m not in the technological singularity camp that sees us all getting uploaded into the cloud in a decade or two, the infamous “rapture of the nerds”. We need to understand the brain far better than we currently do, and that seems several decades to centuries away. Of course, if it is possible to do it anytime soon, it won’t be accomplished by anyone who’s already decided it’s impossible, so I enthusiastically cheer efforts in this area, as long as it’s real science.

There have always been a number of objections to the idea of uploading. Many people just reflexively assume it’s categorically impossible. Certainly we don’t have the technology today, but short of assuming the mind is at least partially non-physical, it’s hard to see what the ultimate obstacle might be. Even with that assumption, who can say that a copied mind wouldn’t have those non-physical properties? David Chalmers, a property dualist, sees those non-physical properties as corresponding with the right functionality, so for him AI consciousness and mind copying remain a possibility.

One objection that I often hear is the break in continuity. Most people, including most philosophers, feel like a copied mind just wouldn’t be them. Often they’ll acknowledge that we have breaks in continuity every night. Or the Ship of Theseus issue: the matter that makes up our brain is constantly being recycled and refreshed, so that the matter we were composed of years ago is not the matter we’re made up of today. But uploading seems like an abrupt shift from one set of matter to another, a very different kind of break.

Interestingly, this is an issue for the original mind, not the copied one, who should be able to remember being the original. But to the original, a new being seems to have been created that acts like them, but isn’t them. (Assuming that the copying process doesn’t result in the destruction of the original, a possibility that itself might make people reluctant to volunteer, at least before they’re on their deathbed.)

I was reminded of these issues when listening to David Eagleman’s interview of Max Hodak. Hodak is the founder of a company that works on neural prosthetics, with a current focus on helping people with vision issues. His ideas on how to make progress in this area are fascinating. They involve growing new neurons to interface with the brain, rather than directly attaching technology to the brain, with all the autoimmune issues that typically result. (You don’t have to watch the video to follow the rest of the post; I’m just embedding it for reference.)

Ep 90: What’s the future of connecting our tech to our brains? | INNER COSMOS WITH DAVID EAGLEMAN

Toward the end of the discussion, Hodak discusses the longer range implications, such as substrate independence, aka mind uploading. He notes that the technology to link two brains together should be possible in the near future, and sees that as possibly getting around the continuity issue. He seems to envisage it working sort of like a Vulcan mind meld.

I’m personally not sure that’s what would happen, but it highlights some possible workarounds for the continuity concern. It raises the possibility that in a mind upload scenario, the original biological mind could be linked with the copy, possibly being able to experience both sides of the divide.

Another possibility often covered in science fiction is the ability of various copies of a mind to share memories with each other. It’s much easier to think of those other copies as you if you can remember being them. But there are a couple of obstacles to making this work.

One is that a biological brain isn’t like a commercial computer. It doesn’t have a data port or addressable memory. So simply copying things in, as we see in the Matrix movies, isn’t really feasible. The brain’s neural network is constantly changing, but there’s no mechanism for making those changes except through experience in the normal fashion. This means there’s no way for the biological version of me to simply receive my digital twin’s memories.

The second issue is that, even among the digital copies, it’s not clear how to copy memories from one version of the mind to another with fidelity. Consider that learning and forming memories in a neural network means changing the synapses (connection weights), often in a distributed fashion throughout the network. And those changes are relative to the current state of that network. So two copies of a mind, once they start to have different experiences, will diverge pretty quickly, changing what a memory means for the different versions.
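
To make the divergence point concrete, here’s a toy sketch. Nothing in it resembles a real brain: the network size, the Hebbian-style update rule, and the random “experiences” are all invented for illustration. The point is just that each update depends on the current weights, so a transplanted weight change wouldn’t mean the same thing in a copy that has diverged.

```python
# Toy sketch: two identical "minds" diverge under different experiences.
# All sizes, update rules, and inputs are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
original = rng.normal(size=(8, 8))   # stand-in for synaptic weights
copy = original.copy()               # perfect copy at "upload" time

def experience(weights, stimulus, lr=0.1):
    """Hebbian-style update: the change depends on the CURRENT weights,
    so the same stimulus alters a diverged network differently."""
    activity = np.tanh(weights @ stimulus)
    weights += lr * np.outer(activity, stimulus)  # distributed change

for _ in range(50):
    experience(original, rng.normal(size=8))  # one stream of life
    experience(copy, rng.normal(size=8))      # a different stream

drift = np.linalg.norm(original - copy) / np.linalg.norm(original)
print(f"relative divergence after 50 experiences: {drift:.2f}")
```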

One possible way around both issues might be recording the signals coming in on the sensory pathways to the mind, and then allowing another copy of that mind to play those signals back. This could give the second copy something like the experience of the original. It would involve an enormous amount of data, although if we’re able to record minds, we’re probably able to handle it.
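
For a rough sense of the scale, here’s a back-of-envelope sketch. Every figure in it is an assumption (roughly a million optic nerve axons per eye, an average spike rate around ten per second, a crude allowance for the other senses), so treat the result as an order-of-magnitude guess rather than a spec:

```python
# Back-of-envelope estimate of raw sensory recording volume.
# All figures are rough assumptions, not measured values.
axons = 2 * 1_000_000          # optic nerve axons, both eyes (assumed)
rate_hz = 10                   # average spikes per second (assumed)
bytes_per_event = 8            # 4-byte fiber id + 4-byte timestamp
seconds_per_day = 86_400

visual = axons * rate_hz * bytes_per_event * seconds_per_day
other = 0.5 * visual           # crude allowance for the other senses
print(f"~{(visual + other) / 1e12:.0f} TB per day of sensory events")
```

Tens of terabytes a day is a lot, but it’s the kind of volume data centers already handle routinely.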

However, it seems like it would be far more time-consuming than the sci-fi versions of memory swapping, since the receiving mind would have to take the time to undergo the experience. It might also be a very strange experience, since the receiving mind would be experiencing all the sensory effects of any actions taken by the sending mind during the original experience, without going through the same volitional thought processes.

Still, with implants similar to the type Hodak is discussing, we might imagine the original biological mind being able to share in the experience of being the copy, either through a link with the copy, or with sensory playback, which might address the continuity concerns.

Unless of course I’m missing something. What do you think? Are there conceptual difficulties I’m overlooking? Or other possibilities that might alleviate the continuity concern?

59 thoughts on “Mind uploading and continuity”

  1. Uh, I think Step One would be decoding the human brain. What are the signals, how are they sent, how are things stored? We aren’t close to understanding this at the moment, but unless we know what it is that is to be transferred to another substrate, we won’t know how to build that substrate nor how to transfer the information from one system to another.

    Speculations about such a transfer are fun but very, very, very premature. They do serve as a guide for future research, however.


    1. It is speculation, and I did acknowledge in the post that this is probably a long term proposition rather than anything that’s going to happen in our lifetimes. But I don’t think it’s any more premature than speculation about interstellar exploration, Dyson spheres, or many other concepts I occasionally ponder. It can also be approached in the spirit of a philosophical thought experiment. And sometimes it’s just fun to speculate.


  2. Nice post! As a fellow computational functionalist, I want to emphasize a point or two.

    Keep in mind that Singularitarian =/= belief in mind uploading. That’s Kurzweil projecting his desires into the concept.

    As a (Vingean) Singularitarian I accept that mind uploading may be possible in principle, but not in practice. The nuances of an individual human mind are such that you would have to simulate essentially all of the molecules in the brain and probably the body (because hormones circulated in the blood have an impact). It is likely the informational requirements exceed what’s available in the universe.

    And I think you are correct in dismissing the idea of shared memories. As you say, long term memories involve the creation and reorganization of synapses at least, and I don’t see how you could make that work.

    On the other hand, sharing experiences may be a real possibility, and soonish. We know about the twins who share some thalamus, and thereby share some experiences. The thalamus acts as a relay from one area to the next, e.g., from retina to cortex, and then from that cortex to a higher order part of thalamus to a higher order part of cortex, and so on. I could see co-opting a little-used part of cortex to receive from an outside source, which then uses the normal machinery to send that input to the next part of the thalamus and up the chain. Guess I’m gonna have to watch that video you linked.

    *


    1. Thanks!

      On the technological singularity camp, right. I originally wrote it in a way to make that clear, but it made the text flow awkwardly, and I was exceeding my word count, so it got streamlined. But as someone who often gets lumped into the Kurzweil camp just for talking about this stuff, I totally understand where you’re coming from.

      I disagree that we’d have to simulate every molecule to simulate a mind. A lot of the molecular machinery in neurons, such as the way action potentials propagate, is just repeating patterns. Once we understand how those patterns work (and we have a decent understanding of that part already), we can abstract what we need. There may be some regions where the individual molecules become important, such as synapses, and possibly gene expression, but I don’t see a reason to assume every one is.
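
      As an example of what I mean by abstracting a repeating pattern, here’s a minimal leaky integrate-and-fire sketch. It’s a textbook-style toy with made-up parameters, not a model of any real neuron, but it reproduces the spiking pattern without tracking a single molecule:

```python
# Leaky integrate-and-fire: the action potential abstracted to a
# threshold-and-reset rule. All parameters are illustrative only.
dt, tau = 0.1, 10.0                               # ms timestep, time constant
v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0   # mV
v, spikes = v_rest, []
for step in range(1000):                          # 100 ms of simulated time
    i_in = 20.0 if 200 <= step < 800 else 0.0     # injected current (arb. units)
    v += dt * (-(v - v_rest) + i_in) / tau        # leaky integration
    if v >= v_thresh:                             # threshold crossing...
        spikes.append(step * dt)
        v = v_reset                               # ...stands in for the spike
print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms" if spikes else "no spikes")
```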

      Also, as I noted a few years ago, if we back off from requiring a perfect copy in favor of just an effective copy, always bearing in mind that my mind isn’t a perfect copy of itself from yesterday, much less last year, then things become much easier, albeit still very hard with our current knowledge and technology. I think the informational requirements only exceed the universe if we insist on an atom by atom copy.

      For an example of pushing this to extremes, check out Eric Schwitzgebel’s short story on it: http://www.unlikely-story.com/stories/the-dauphins-metaphysics-by-eric-schwitzgebel/

      I wouldn’t call this version a real copy. I think the person’s private memories need to be included for that, but imagine if we had an AI designed to mimic a person, fortified with a streamlined network based on approximate scans of their brain.

      I think you’d find the video interesting. (There are also audio versions, which is what I used, although the visuals in the video help.)


        1. My point was we’d have to simulate every molecule to get the exact same mind. If we take shortcuts then we get a slightly different mind. E.g., the gut biome may actually have some effect on your mood. Is your mood part of your mind? For myself, if we get to the point where we can come close, technology will be such that I don’t see the point. I guess maybe it would be like life insurance, not for your benefit but for the benefit of those who depend on you.

        re: the video — Wow. I didn’t realize how close we were to a real interface. I’m wondering if Hodak is aware of and/or interacting with Michael Levin. Getting cells to grow into a functional role based on their environment is Levin’s shtick.

        *

        [come to think of it, there was an old SNL skit based on the life insurance idea, except the new daddy was just a different daddy, not a copy of the old one.]


        1. Life insurance could be a way to think about it. Accidents happen, and it seems like the probability would increase with an extended lifetime. There’s also the fact that every physical system is subject to entropy, so no body could last forever. Of course, that’s also true for information copied between physical instantiations, but it seems like the day of reckoning could be delayed exponentially farther into the future.

          Didn’t Levin’s name get mentioned in that discussion? I’m sure he knows who Levin is.


  3. What a coincidence, I have been thinking along exactly the same lines today. You will be surprised to hear that I agree with most of what you have to say. I don’t feel that consciousness is “just” physics and I do not feel that it is an illusion. It is, perhaps, an emergence. As you know I lean towards consciousness potentially being a fundamental element in the universe which arises given sufficient complexity.

    But enough with the details of stale arguments.

    I was reading an article about that young chap who has had a Neuralink chip implanted in his brain and who can now operate a computer on signals from his brain.

    Imagine the reverse happening – signals from the outside operating on his brain. Iain Banks or what!

    I happen to suppose that eventually we may merge with AI. I happen to hope that may be the case.

    Now here is something you may find very odd and uncharacteristic. My meditation leads me towards an ever clearer understanding of my own character, my own strengths and weaknesses. While this is quite different to claiming that it gives me actual knowledge of how my brain works it does nonetheless make me ever more certain that my “being”, my consciousness is indeed “plastic”, changeable.

    That my techniques are making physical changes in my brain. That my efforts are making me a different person, or at least more the sort of person I want to be.

    I am not waxing lyrical, Mike, and I am not claiming spooky metaphysics (for a change). I am suggesting that mind and matter are perhaps not as clearly separated or divided as we might think.

    You might say that I am claiming mind can affect matter. Well, as our poor friend from Neuralink might tell you, that is indeed the case. Exactly “what” it is that is emitting signals and altering and directing the computer is another matter. In one sense it is just the electrochemical signals.

    And back to me – I have no doubt that if I had an MRI it would show physical changes in my brain of the same sort as have been recorded by research into long term meditators. You might say that that is little different from a decision to pursue exercise building up my physical muscles. But even the latter is still a mental decision affecting matter.

    Mind uploading, mind continuity – yes. And mind changing too.

    Perhaps the only difference in our views is that I always sense wonder in it. Your approach is always more phlegmatic and clinical.

    Anyway I ramble but your thoughts are interesting.


    1. You do surprise me, but your reference to Banks reminded me that you’re no stranger to these concepts. Right, signals coming from outside affecting the brain is what might enable some form of continuity in these scenarios. But given the nature of how minds work, it will never be airtight.

      My approach in these posts does tend to be logical. It’s a manner I picked up from the bloggers and philosophers I admired prior to starting this blog. But my thinking prior to these posts is just as emotional and wonder-filled (or revulsion-filled) as anyone’s. I just don’t depend on that to make my point, since I know many have exactly the opposite emotions. If I can’t express it in a logical manner, I consider it not entirely cooked yet and hold off (well, usually).

      Interesting comment, as always!


  4. I tentatively agree that there’s no reason a mind couldn’t be uploaded to another substrate, but I’m hesitant to say a digital substrate is a possibility. I think a number of organic characteristics may actually be essential.

    I haven’t yet thought this through properly, but I suspect the thermodynamics of dissipative structures may be an essential component. This seems to undergird life, biological minds, and all of the mind-like self organising behaviours we see in nature. I don’t suppose it would be sufficient to simulate it either, since then the reality isn’t the spontaneous self organisation of energy flows, but a forced/artificial imitation that wouldn’t self sustain.

    I also think that it may require a living substrate, based on the physical self organisation and responsiveness required. Switching ones and zeros seems extremely insufficient to me. I also think the brain may be more analogue than digital.

    I suppose I think the nature of the substrate of our minds is extremely relevant, and the cells and organic components of them likely play a meaningful role.

    Or it may be that we could do an upload, but the new substrate would have serious implications and change the nature of the mind significantly. Like how there’s a change when you digitise old photos.

    Supposing it were possible, I wonder if it might be less appealing if the original survived. I would feel very awkward having to deal with another “me”, and it would make the uploaded “me” seem less like it’s truly myself.


    1. The importance of a biological substrate is a pretty common take. I’m open to the possibility, but I need to have specific reasons for it. One possibility, perhaps related to some of the points you make, is that reproducing mental processing may never be as efficient as it is in biology. Maybe it’s right at the thermodynamic limit.

      Of course, as we get better at engineered life, we can expect the boundary between biology and technology to get increasingly blurred. At some point the distinction might become to what extent it evolved naturally vs was engineered in some manner. If life is required, is engineered life sufficient? Again, there would need to be specific reasons for me to accept “no”.

      I definitely wouldn’t want to have uploaded-me around until bio-me was gone. Assuming the process had been proven, I’d be content to think of it as me waking up after a medical procedure. If I’m wrong, I wouldn’t be around to regret it. (Well, unless I woke up in an afterlife and found out I should have been making offerings to Osiris this whole time.)


      1. Yes, I think engineered life would be a much more promising potential substrate. Although I suppose the difficulty with transferring a mind to a different biological substrate is that its nature as a mind is developed in the process of its living. The mind isn’t something separate from the brain for it to be neatly transferred from one to another. We would have to build a whole new brain (or other organic system), in a state as if it had undergone all its formative experiences. We wouldn’t be transferring those experiences so much as faking their effects, a bit like doctoring evidence at a crime scene. Still, I suppose if we could, it would from that point go on more or less as if it had had those experiences.

        It’s seeming to me now as if the idea of mind uploading requires a degree of dualism that may be unwarranted. The information takes the place of an immaterial soul that might move between physical bodies that house it. But the information and its physical embodiment seem to me to be largely inseparable. At least, not without some loss or transformation.


        1. It could be argued that putting a system in a state as though it had had formative experiences is essentially what happens when artificial neural networks are trained on extensive data, then copied and made available to the public. And in many ways, it’s a continuation of the fact that every time we read a book, we’re reading a copy of something that someone ultimately had to painstakingly create (assuming an AI didn’t write it). To a medieval scribe, printed books might have seemed like something faking the creation of a hand crafted manuscript.

          The dualism point is a frequent criticism of mind uploading. But it seems like the same kind of dualism as the software running on the hardware you’re using right now to read this. It’s basically a dualism of functionality and a specific implementation of that functionality. Again it seems like something we have long accepted with books. And a key difference from substance or property dualism is it’s completely mechanistic throughout.

          I do agree that copying an analog system inevitably involves loss. But then dynamically continuing such a system, as biology does, also involves that kind of loss. So I think the standard is not to avoid all loss, but to keep it below the variance of normal biological change. And if it’s converted to a digital substrate, the requirement might be to ensure the quantization noise is no higher than the variance noise from the analog substrate.
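
          To make that last standard concrete, here’s a toy calculation using the usual uniform-quantizer noise formula (step squared over twelve). The dynamic range and the biological variance figure are invented placeholders:

```python
# Pick enough bits that quantization noise power (step**2 / 12 for a
# uniform quantizer) falls below the analog system's own variance.
# The range and noise figures below are made-up placeholders.
full_scale = 100.0        # mV, assumed dynamic range of the signal
analog_noise_var = 0.01   # mV^2, assumed intrinsic biological variance

bits = 1
while ((full_scale / 2**bits) ** 2) / 12 > analog_noise_var:
    bits += 1
step = full_scale / 2**bits
print(f"{bits} bits suffice: {step**2 / 12:.4f} mV^2 quantization noise "
      f"< {analog_noise_var} mV^2 analog variance")
```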


          1. I don’t think that’s the right way to look at ANNs. The training process isn’t trying to simulate formative experiences, so much as it’s trying to mould the system towards giving the right kinds of responses.

            The book example is very illustrative, but there the important information is separable and replicable. If we took an example like a famous painting, there is a real loss between the original and the forgery. We cannot extract the essence of the painting from its medium and transfer it to a new one. It’s embodied in its particular medium, and the process by which it was made is a crucial part of it being what it is.

            Re software and hardware, my understanding is that the brain does not fit nicely with these categories, and that’s exactly the issue I’m trying to get at. The brain’s physical structures are constantly changing themselves, taking on the roles of both hardware and software, such that we cannot separate the two. I don’t have an issue with dualism in this sense for computers, and actually I think the general hylemorphic picture of form and matter is basically correct, but I don’t think we can separate the two when it comes to brains.

            I’ll give one more analogy: imagine trying to transfer the river Thames to a colony on Mars. The river is more than just the information and functionality of its structure and dynamics. It cannot be separated from its embodied and historical context. You could make an imitation easily enough, but it would always be an imitation and not the river Thames.


          2. Aside from different degrees of sophistication and self reference, what would you say is the difference between “moulding the system” and “formative experiences”?

            On the painting example, a 16th century painter certainly couldn’t copy another painting without loss of information. But today a high resolution photograph of that painting seems to do a very good job. Of course, it’s not the exact same experience as seeing the painting in a museum, standing next to the canvas, seeing the surface, smelling it, etc. But as far as conveying the same information, and generating the same emotions the artist originally meant to evoke, the photograph seems to work pretty well. Or am I missing something?

            I agree that the brain makes no distinction between hardware and software. That’s an innovation of technological computing. But it’s that very innovation that makes it conceivable for the functionality of a brain to be reproduced in a general computing substrate. Put another way, we can’t load a different mind into an organic brain. It’s not that kind of system. But that doesn’t mean the brain’s functionality can’t be loaded into another system that is so equipped.

            I agree that a river can’t be separated from its structure and relations, at least without also reproducing much of its geographical context. And that’s also true of minds, to an extent. But we have to remember that minds are the control systems of motile organisms. A mind can exist in many more contexts than a river. I do agree that a copied mind would need a body, although I think it could be a virtual one.


          3. Re the difference between “moulding the system” and “formative experiences”, I mean that the training of ANNs isn’t about trying to simulate the effects of particular experiences. As far as I know, we don’t have a complete enough understanding of the workings of the brain to do that. Instead, we have a target output that we are trying to shape the ANN to produce. We’re not trying to input any simulated experiences, e.g. of actually reading all its training data. We’re more trying to evolve a system that reflects that data.

            The photo does do a good job, but it’s still just not the painting itself, but merely an image of it. That’s why people still go to museums. The photo abstracts away an image of the painting, but the full particularities and history and this-ness of the painting cannot be captured. “The medium is the message.” That won’t stop us taking photos or producing copies of an artwork in other mediums, but these are not the original, even though there is a kind of kinship between them.

            I think the question comes down, at least in part, to whether we consider a mind as a universal or as a particular. If it is a universal, such as a particular pattern or process considered in the abstract, there is little issue with transferring or reproducing it. If it is a particular, such as a particular pattern/process as embedded within the wider structure of the world, then the greater the abstraction, the less the continuity and unity.

            I think that’s what I was trying to get at with the Thames example. The Thames is defined less by any abstraction, and more by its this-ness – its geographical and historical context.

            Re the mind as a “control system” for the body, I think this may be backwards, and again subtly dualistic. It places mind over matter, rather than mind in matter. We can instead look at the mind as “belonging to” the body and being the expression of the body’s self coordination. That seems to fit better with the evolution of minds.


          4. On moulding vs experiences, I guess it comes down to what we think experiences are. But going back to the original point you made above, about producing a system in the same form as one that had had experiences: I think it remains true that those experiences are still in the causal chain for the next system. That chain just has an extra step, the copying one. If the original hates orange juice because it once made them sick, then it seems like the copy’s hatred of orange juice will still be because of that illness.

            In other words, this isn’t a swamp man type situation. https://en.wikipedia.org/wiki/Donald_Davidson_(philosopher)#Swampman The copy isn’t completely unrelated to the original.

            On universals vs particulars, is there any reason a mind can only be one or the other? Consider Microsoft Windows. It’s not a universal, but it is a widely copied pattern. It’s just that some patterns are standalone, like organic minds in their natural state. But just because that’s their natural state, I can’t see that it follows that it’s the only state they can be in.

            Of course, even between copied minds, there would immediately be differences since minds are continually changing patterns, but a copied mind is going to be much more similar to the original than any other mind out there, except of course for more recent copies. We could almost think of a copied mind as a type of fork (to use an open source software term). https://en.wikipedia.org/wiki/Fork_(software_development) Two forks of the same project are much more alike than something developed completely independently.
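
            As a toy illustration of the fork point (the “traits” and numbers are invented), two forks of one mind-state drift apart with experience, yet stay far closer to each other than to an independently developed mind:

```python
# Toy fork illustration: copies diverge, but much less than strangers.
import copy
import random

random.seed(1)
mind = {f"trait_{i}": random.random() for i in range(100)}
fork_a, fork_b = copy.deepcopy(mind), copy.deepcopy(mind)
stranger = {k: random.random() for k in mind}   # independently developed

def live(state, steps=50):
    """Small random drift standing in for ongoing experience."""
    for _ in range(steps):
        key = random.choice(list(state))
        state[key] += random.gauss(0, 0.01)

live(fork_a)
live(fork_b)
dist = lambda x, y: sum((x[k] - y[k]) ** 2 for k in x) ** 0.5
print(f"fork to fork:     {dist(fork_a, fork_b):.3f}")
print(f"fork to stranger: {dist(fork_a, stranger):.3f}")
```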

            If a control system counts as dualism, then again, it seems like every robot, self driving car, or other autonomous system with a control center is dualist.


          5. Re moulding vs experience, I actually don’t think it comes down to what experiences are, except in a trivial sense. I think under any idea of experience, it doesn’t make sense to see the training phase as simulating it. It’s more like simulating knowledge.

            You make a very good point about the experiences still being in the causal chain, and I really appreciate the swampman reference. There’s certainly some degree of causal continuity, and that does strengthen the sense of identification.

            Is Microsoft Windows not a universal? I think it might be, but I’m not sure. I think minds as actual real things are particulars, as they necessarily exist at a particular location in a network of relations, and that network of relations is essential to it being what it is. I think universals are abstractions from the complexity and interrelatedness of reality, as well as abstract potentials that may be actually realized in multiple ways.

            Re dualism, I think practically all technology is dualist, in this sense. It’s mind being imposed upon lifeless matter. I’d contrast this with art, which I think is much more organic and works with its matter. Of course there’s not always a clear line between technology and art so there’s some nuance missing here.

            I think if I try to be more consistent with my recent post about death, I have to grant that there’s at least some validity to seeing an upload as your self/an extension of your self.


          6. Honestly I’m not sure how the concept of universals is used in the literature. Does having a second copy make something a universal? Consider a traditional author who writes a book with pen and paper. When they finish, their book seems like a particular, at that moment. But once they type it up is it a universal? Or when it gets published (and all the editing has taken place)?

            If we’re talking in platonic terms then even the first copy seems like a universal. But then so is a mind, and not only that, but every state the mind is in, will ever be in, or could ever be in. (Which is one reason I don’t find contemporary platonism a particularly productive outlook.)

            Good point on your post about death. I didn’t even think about that.


          7. I *think* a universal is basically any idea that isn’t limited to a particular instantiation. Anything you can actually point to is a particular, and the universal is what it is a particular example of, that might have been realised in a different way. A particular is somewhere, sometime, but a universal is a possibility for any place or time. That’s how I understand them anyway.

            My understanding is that the mind considered in the abstract would indeed be a universal at each stage, but only considered in the abstract. In actuality, every real thing is particular and constituted by its place amongst everything else. The same with the book.

            Yeah I felt a bit silly when I realised how much I was arguing against myself XD


      2. one example:

        ”What were we talking about?”

        the gradual degradation of brain signals, biological, electrical, and chemical, seems to be related to recollections of near recent vs temporally distant conversations.

        these physical properties would need to be simulated as well, otherwise the hormonal layers would get out of sync with the electrical layers, such that emotions would be too brief or would linger too long for the digital conscious mind.
        race conditions between simulation layers might even render an otherwise healthy digital copy dysfunctional.
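
        to make that concrete, a toy sketch of one such mis-sync: a decay factor tuned for a slow hormonal layer’s timestep, mistakenly applied at a fast electrical layer’s rate. all constants are invented:

```python
# Toy layer mis-sync: a per-step decay tuned for a 100 ms hormonal tick,
# applied at every 1 ms electrical tick. All constants are invented.
dt_slow, dt_fast, tau = 0.1, 0.001, 10.0       # seconds
decay_per_slow_step = 1 - dt_slow / tau        # correct at the 10 Hz layer

level_ok = level_bug = 1.0                     # normalized hormone level
for t in range(10_000):                        # 10 s at the fast rate
    if t % 100 == 0:                           # hormonal layer ticks at 10 Hz
        level_ok *= decay_per_slow_step
    level_bug *= decay_per_slow_step           # same factor every 1 ms tick

print(f"after 10 s: correct {level_ok:.3f}, mis-synced {level_bug:.3e}")
```

        in the mis-synced case the “fear” is gone almost instantly, just as described above.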


        1. Thanks for commenting!

          I agree the physical properties need to be simulated, or at least their functional equivalents. That qualifier gives us more flexibility.

          Race conditions as in what happens in a multithreaded or multi-process environment with shared resources? I think those conditions result from having code separate from data. That’s the paradigm with standard technological computing. But brains don’t make that distinction. Data and action are one. I did a post on this a while back that you might find interesting.

          https://selfawarepatterns.com/2023/10/21/the-unity-of-storage-and-processing-in-nervous-systems/


          1. I was thinking more of things like hormones in the body like adrenaline, or pain signals. If the simulation of these physical properties isn’t accurately timed, then things like fear or pain could dissipate too quickly, degrading the fidelity of the duplicated consciousness.


          2. True. Particularly since things like fear or pain involve an interoceptive loop through the body to have their full effect. Which just reinforces the idea that for the new mind to be like us, it will almost certainly need at least a simulated body. Although it is worth noting that paralyzed patients retain most of their cognition, so there is a lot of variance to work with. (And I can imagine paralyzed people being among the earliest volunteers for mind uploading, even if it’s destructive.)


  5. Interesting topic, Mike. One thing I’m not sure of is whether our sense of personal identity consists in psychological continuity. That in itself is problematic enough, as you’ve pointed out here with the various practical problems involving memory transfers. But even supposing memories could be carried over to my duplicate, I’m not sure that would suffice for me to say “That’s me”.

    This reminds me of Parfit’s teletransportation and brain transplant thought experiments. According to the lecturer I watched a while back on the Great Courses, Parfit concludes what matters most to us is survival (including memories) over identity. It seems like this is what you’re assuming here as well, but I think this may be overlooking another possibility, that what matters is identity, or the unity of experience. As I see it, many things can happen to me over the course of my life such that it’s difficult to see what gets transferred from one moment to the next, and it’s hard to believe any of the contents that get transferred remains throughout my whole life. So yeah, Ship of Theseus. So what is the explanation for our sense of self? I think it’s the unity of experience (which I realize probably sounds like nothing at all, and it may be nothing if you think of it in terms of memories and content).

    As for breaks in continuity, I’m not sure we ever experience them. A paradoxical thing to say, I realize, since every night I go to sleep and then wake up and it seems many hours have passed. But I don’t think this is really a break in continuity—it’s an assumed break in continuity, given that we know a certain amount of time has passed during which we can’t remember what happened. But if you look at this from the experience side of things, where’s the break? All along there has been my experience. It was certainly never anybody else’s experience. The contents of me may not be clear or persisting, but the unity of experience has never been disturbed, and so it seems this unity is the necessary condition of selfhood.

    Perhaps “mind melding” is possible to some degree, though there will be many questions here as well, but I think there will at some point be a boundary that cannot be crossed without some sort of destruction of self (or selves), the self being the unified experience. If someone made a copy of me with all my memories and such and introduced me to myself, I would say, “That’s not me.” And presumably my copy would do the same, even if we had the same memories and physical makeup. Imagine if we not only had the same memories and manner of thinking about them, and physical makeup and so on, but everything else we went through in life was the same (not sure how this could pan out, practically speaking, but whatever). I still wouldn’t say, “That’s me.” That one could die, and I would still be alive, or vice versa. I wouldn’t feel like I would survive in any meaningful way if you told me I have five minutes of life left but my exact copy would go on living. It’s not just that we occupy different places, but that we inhabit our own separate unified experiences…even if those experiences are the same content.

    Mind boggling stuff!


    1. Thanks Tina.

      “But even supposing memories could be carried over to my duplicate, I’m not sure that would suffice for me to say “That’s me”.”

      We can look at it from two perspectives: original-you and duplicate-you.  For duplicate-you who did receive original-you’s memories, they remember being original-you.  So for them, they’re going to feel like the same person, at least initially.  But that will fade very quickly as the two you-s start having diverging experiences.  

      But what if they could share memories or experiences on an ongoing basis?  So original-you gets to remember being duplicate-you yesterday and duplicate-you gets to remember being original-you, with a daily sync happening.  Suddenly the sense that there’s one overall person might be much more powerful. 

      Of course, as I noted in the post, sharing memories this way may not be possible or feasible. Would sharing sensory experiences be enough? I don’t know.

      Definitely this is along the same lines as the teleporter dilemmas.  Does Captain Kirk die every time he is transported?  Does it matter whether his exact atoms are transmitted and used in the reconstruction or if he’s just constructed with whatever’s available at the destination?  (Star Trek kept it vague on which way it worked.  There are two episodes I know of where duplicates get created, in one a good and evil version of Kirk, and in another a complete duplicate of Riker.)

      Right, we don’t experience a break in continuity.  We really only infer them based on the clues from the change in circumstances.  I know when I wake up, if there isn’t daylight coming in or a clock nearby, I have no idea whether five minutes or hours have passed.  It’s even more pronounced when put under anesthesia.  But that’s exactly what I imagine a copied mind would feel like when first started.  If the virtual environment were enough like their original, they might not even realize the change.

      Definitely mind boggling stuff.  If it ever becomes reality, society would have to change the way it currently thinks about personal identity, ownership, and relationships.  


      1. “We can look at it from two perspectives: original-you and duplicate-you.  For duplicate-you who did receive original-you’s memories, they remember being original-you.  So for them, they’re going to feel like the same person, at least initially.  But that will fade very quickly as the two you-s start having diverging experiences.”  

        Our intuitions about this vary depending on whether we get to meet our double. If we get to co-exist, that would be disconcerting for both of us because we know we’re not the same person. Imagine if you knew you could live forever as a duplicate of yourself, but in order to do so, you would have to die right now. Would you do it? I wouldn’t. I wouldn’t believe my duplicate was really me.

        “But what if they could share memories or experiences on an ongoing basis?  So original-you gets to remember being duplicate-you yesterday and duplicate-you gets to remember being original-you, with a daily sync happening.  Suddenly the sense that there’s one overall person might be much more powerful.”

        This rubs up against my main problem with mind melding and duplicates. If my duplicate had different experiences (and how could she not?), and we exchanged information about what happened to each other, it’s hard to see how that could amount to one unified self. How would those different experiences feel like one person? To me it would feel like I’m watching someone else’s stream of experience as if it were a video. There would always be the “I” accompanying all of Tina’s experiences, but Tina would be observing “Tina 2’s” experiences. Maybe I’d be thinking, “You had a good day. I’m jealous.”

        The conjoined twins case comes to mind here. One likes ketchup, the other doesn’t. They both taste ketchup even though only one is eating it—crazy!—but the one who isn’t eating it is disgusted by the taste. How would you unify that experience of disgusting ketchup with the experience of delicious ketchup?

        “But that’s exactly what I imagine a copied mind would feel like when first started.”

        Maybe so. But who is the copied mind? Me? I’d be more inclined to think I’m just dead and someone else gets to have all my memories. Which is creepy.


        1. “Imagine if you knew you could live forever as a duplicate of yourself, but in order to do so, you would have to die right now. Would you do it?”

          It would depend on how sure I was of the duplicating process. If I was confident in it, then yes, I’d do it. I’d see it as me just moving over to the duplicate. If I’m wrong, it’s not like I’d be around to regret it. Granted, this would be a lot easier if the duplicate wasn’t active until after I was gone.

          I made this point to Jim. One of the issues is we have both innate and learned aspects of our model of self. The innate parts are hard to change. So seeing another copy of myself makes it very hard to see them as me. I do think being able to remember being them makes a difference, but I’ll admit that’s very hard to imagine. It’d be a bit like going back in time and meeting ourselves. It’s just too alien to our experiences up to this point.

          “I’d be more inclined to think I’m just dead and someone else gets to have all my memories. Which is creepy.”

          It does seem easier to cope with if we’re about to die anyway. I could see a popular approach, if feasible, being to ensure our mind is regularly backed up, but without the backup coming online until the original is gone. Whether to view that as leaving a legacy for friends and family, or survival, might amount to how well we can reconceptualize our autobiographical self.


  6. The premise of all this speculation is that the mind is separate from the body. Ignatius has already picked up on this point, but I think it deserves closer consideration. It’s as if the body were the vehicle, with an occupant we call a “mind” — and if you build an identical vehicle, then the occupant gets to be in both places. This is so disconcerting that we are tempted to wonder whether the first vehicle must necessarily be destroyed in the process, or whether each vehicle must be so physically intricate as to guarantee its uniqueness, or whether some other bizarre constraint must be placed on reproducing the vehicle, perhaps involving its “memory,” whatever that adds to the question.

    Entertaining though it may be, all this activity is founded on a confused way of thinking about “minds,” bequeathed to us by Descartes. It seems pretty obvious that if you build two vehicles you will build two minds, and they are not the same mind any more than they are the same vehicle.


    1. On dualism, I’ll repeat part of my response above to Ignatius.

      The dualism point is a frequent criticism of mind uploading. But it seems like the same kind of dualism as the software running on the hardware you’re using right now to read this. It’s basically a dualism of functionality and a specific implementation of that functionality. It seems like something we have long accepted with books. And a key difference from substance or property dualism is it’s completely mechanistic throughout.

      On whether the original vehicle is destroyed, it does seem to make a difference to our intuitions. As I noted above to Ignatius, I don’t think I’d want my copy running while I’m still alive. At least unless there could be some way found for us to share memories. If I can remember being copied-me, and vice versa, then it’s much easier for us to think of ourselves as one person. But without that, I agree that as original-me and copied-me’s experiences diverged, we’d very quickly see ourselves as separate. But if original-me is gone when copied-me first comes awake, there’s nothing to dissuade copied-me from considering himself a continuation of original-me.

      I think this is far from anything like the dualism Descartes had in mind. He exempted the mind from mechanism. This doesn’t. Its treating of mind as functionality goes places traditional dualism doesn’t seem to want to touch.


      1. I think the confusion becomes clearer if we consider that no one needs to write philosophical papers about the problem of identical software on two computers, or identical information in two books. There is no philosophical problem worth considering. Yet when we talk about minds, suddenly it’s a problem. Is the mind in two places? How does it have continuity? Which one is the original? The only way to get substance out of such discussions is to suppose, implicitly, some difference that makes the “minds” case worth talking about. If by way of explanation we just retreat to the books-and-computers cases, we’re claiming that there is no problem, without throwing any real light on why we ever felt compelled to talk about it in the first place.

        Suppose for the sake of argument that we build a computer that has a mind, and then we build another just like it. Is there any puzzle about where “the computer’s mind” is? Do we think for a moment that the two computers have “the same mind,” in any interesting sense? If not, then why do we get so wrapped up with such questions about our own minds, when we talk about reproducing them in other substrates?

        The reason is that we think there’s something special about our minds, and we’re convinced there’s nothing special about the software or the substrates. This leaves us with a nagging, poorly articulated problem. What’s special about minds, I submit, is the sense of “self” that keeps coming up in these discussions. This is never going to go away. We’re not comfortable with the idea of the self being transplanted, and yet in the frame of the discussion we’re haunted by the possibility. The frame of the discussion, of course, is that the self, like the mind, is a transplantable thing, separate from its substrate.


        1. I think you’re right that there’s a widespread conviction that the mind is different. And it is in some ways, but I can’t find any evidence for a categorical difference. (A point I would expect a panpsychist to be onboard with, but I frequently guess wrong about which view panpsychists take.) I think the real issue is that minds are us in the most intimate way imaginable, and we struggle to be objective about it.

          The self is definitely an issue. In my view, the self is a model, with innate and learned aspects. Antonio Damasio makes a distinction between three layers of self, which he calls the proto-self, core self, and autobiographical self. There’s probably not much we can do to change our core self model, but the autobiographical self is something we can learn to reconceptualize.

          Having a copy of ourselves in front of us would, I think, cause a deep reaction related to our core self. But as long as the copy only comes online after the original version is gone, it’s easier for the original to regard the copy as a continuation of their autobiographical self, if they so choose.

          In the end, whether the copy is us or another self is not a fact of the matter. It comes down to how we want to think about it.


          1. Some of us struggle to be objective about minds or selves. Others, sighing in exasperation, struggle to get the subjective aspect taken seriously. If you ask anyone, they will say they have a sense or experience of “me.” You yourself, Mike, have a sense of “me.” How much more evidence do we need? But this is anecdotal evidence, and what’s worse, supposedly “private” evidence — not to be found under a microscope or inside an equation, so it doesn’t count. This despite having all the force and immediacy of a stubbed toe.

            We keep coming up against this problem, and I guess it’s the way it must be. Apparently I can’t speak for all panpsychists, but my take is that this clear “evidence of a categorical difference” is exactly what a panpsychist would point to — despite its being disputed as evidence. The categorical difference is precisely the sense of “me” that goes with existing.

            What I especially like about the panpsychist approach is that it cuts through all the weirdness about copies. If you have a separate instantiation, you have a separate “me.” It’s as simple as that. New Kirk will point to old Kirk and say, “That’s not me,” and old Kirk will point to new Kirk and say, “That’s not me,” and both will be right. Where’s the puzzle?

            I suppose we should go far enough along this path to ask whether the new Kirk, having all the memories of the old Kirk, might insist he’s the old Kirk. If he does, he’s mistaken. Everyone else knows that. Only when we entertain delusions about the self being separable from its substrate do we start feeling puzzled about these questions of “mind transference.”


          2. Certainly I have a sense of self. We don’t disagree about the sense itself. We just disagree, I think, about what’s behind it, what causes it. For me it isn’t anything fundamental, something simple and irreducible, but an adaptive product of an organism modeling itself.

            My point about panpsychists and categorical differences is that I perceive the core of panpsychism to be the idea that there is no categorical difference between what happens in brains vs what happens everywhere else, that it’s just a matter of degree. If so, then I wouldn’t think examples like software and books would be out of band.

            On copies and the self, fair enough. I’ll let what I’ve said stand.


          3. If the self is caused by a pattern of some sort which may be freely reproduced, then the self may be freely reproduced, leading to questions of mind transference. We still have the option to say that when a pattern is reproduced, it forms a different self. But then the question arises, what makes it a different self? The answer is the same as what makes one copy of a book different than the next: the instantiation.

            But if the instantiation is essential to differentiating one self from the next, then the binding between the self and its physical instantiation takes on a deep importance, one that goes to the metaphysical level. Panpsychism faces the implications. I am my body, and it is metaphysically impossible to reproduce me, or my mind, or whatever, as a different body. The structuralist alternative seems to be that theoretically this is possible, since the mind is only a structure, which may be reproduced in any body. This leads to conundrums of the sort being raised in these threads.

            It’s true that there is no categorical difference here between minds and books. Two copies of a book are not the same book, because they have different instantiations. If we think of a book as having a “self,” this self has something to do with its content, but should not be confused with its content. The book is itself because of its paper. And every book is a self, in the sense of being self-contained. That’s not to say it experiences itself, but the panpsychist position (one panpsychist position anyway) is that its raw existence is defined by relationships which need to be understood as experiential, if we are ever to explain experience at all.


          4. It seems to all come down to what we mean by certain words. The word “book” for instance, outside of context, seems ambiguous. If I say, “Hand me that book,” then the context makes it clear that I’m using it in the sense of a single instantiation. But if I ask, “Have you read David Chalmers’ ‘The Conscious Mind’?” it becomes clear that I’m referring to a category of instantiations, many of which might be patterns of bits (as my copy is), itself a category of instantiations.

            Even a mind, which right now is usually associated with a particular body, admits to being something of a category. Consider when we compare early Wittgenstein to later Wittgenstein, or early Frank Jackson to later Frank Jackson. Were these the same mind? The atoms in their brains would have been completely recycled between their early and late phases. We can talk about the continuity between them, but that assumes a copying operation isn’t itself a continuity bridge.

            All of which is to say, reality seems to delight in complicating our long held categories and definitions. They never seem to have the scope we think they do.


          5. I think it’s simpler to say that it all comes down to the word “same.” One always has to specify the same what. There’s a sense in which, if you could copy Kirk’s body and mind, the new Kirk would be the same as the old Kirk. But when the old Kirk protests, “That’s not me!” it’s a sign that he’s thinking of a different “what.”

            What “what” is the old Kirk talking about? It has to do with his being a subject. I feel no hesitation in saying that Wittgenstein, or Frank Jackson, or you, or I, continue as the same subject, and indeed occupy the same body, continuously from birth to death. When I make this claim, I have in mind a certain referent to which “the same” applies uncontroversially. Since the constituents of the body are constantly changing, I can’t mean the same constituents; nor can I mean the same relationships, or the same outer form. All of these change over time. What gives the body its continuity, in the sense that we all take for granted, is its unity from the viewpoint of an experiencing subject. What makes it “the same” across all these changes is a felt experiential consistency, which is somehow perfectly evident to us notwithstanding periods of sleep, and which therefore probably runs deeper than consciousness.


  7. I’ve found the idea of uploading “minds” weirdly akin to paranormal beliefs about ghosts and telepathy – a sort of supernaturalism for the materialist. Scott Aaronson has speculated that cloning a mind would be impossible because measuring the states of matter in the brain without disturbing the brain and those states would be impossible. Perhaps doable would be a partial snapshot of brain states, but the information would need to be run through a translator to run on a digital computer, which would lead to further degradation of the original “mind.” The result might be an interesting chatbot that could imitate the cloned “mind” at the point of cloning.


    1. If mind uploading did require supernatural posits, then I’d lose interest in the concept. But it’s exactly its possibility under the laws of physics that makes it interesting.

      The no-cloning theorem does guarantee that an atom-for-atom copy of a brain is impossible, but having read extensively on neuroscience, I don’t see any evidence a copy would have to go down that far. Of course, there could be new evidence at any time that changes that.

      The conversion from analog to digital processing does mean the digital system has to have enough capacity and resolution so that the digitization noise is less than the variance noise from the original analog system. But we’ve been doing this for decades with things like music and movies.

      It is possible we’ll discover that part of the brain’s operating principles are incompatible with contemporary computational paradigms. But if the mind is a physical system, then in principle we should eventually be able to build a system that can reproduce its actions. It might well take centuries to reach this step, but I haven’t found grounds to rule it out as forever impossible, at least not yet.


  8. Thinking about this took me off on the tangent of how a new generation of thinking system relates to its predecessors. In humans, we reproduce and our brain structure is copied, with combinations and mutations in DNA giving it a modified structure. Then learning takes place in which the child picks up a lot from its parents and teachers but also adapts to the new circumstances and technologies it is exposed to. Unlike with mind uploading, this generational change is crucial to long term survival.

    In machines, we have the option to replicate hardware and software exactly, but more generally hardware moves to a new, more powerful generation, while software is improved and made more efficient. Training of the system may start again, or progress incrementally, using new training datasets.

    Maybe as AI and LLMs (Large Language Models) come further into mainstream use, a clearer strategy will be needed for how we partially inherit performance from previous generations, whilst allowing for incremental adaptation to new realities.
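
    One existing technique along these lines is knowledge distillation, where a new model learns partly from a previous generation’s outputs rather than only from raw data. Here’s a minimal sketch in Python/PyTorch; the model shapes, temperature, and loss weights are arbitrary assumptions, not a recipe:

    ```python
    import torch
    import torch.nn.functional as F

    # Toy "generations": a frozen predecessor and a fresh successor.
    teacher = torch.nn.Linear(32, 10)
    student = torch.nn.Linear(32, 10)
    teacher.eval()

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # softening temperature, a common but arbitrary choice

    x = torch.randn(64, 32)           # stand-in inputs ("new realities")
    y = torch.randint(0, 10, (64,))   # ground-truth labels for the new data

    with torch.no_grad():
        teacher_logits = teacher(x)   # the previous generation's judgments

    student_logits = student(x)

    # Inherit from the predecessor (soft targets) while still adapting
    # to the new data (hard labels).
    distill = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T
    task = F.cross_entropy(student_logits, y)
    loss = 0.5 * distill + 0.5 * task

    loss.backward()
    optimizer.step()
    ```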

    1. That sounds similar to an issue Charlie Stross pointed out years ago: that uploaded minds would have a static design, one that would eventually get left behind by improving AI architectures.

      At least unless those minds were willing to be reconstructed in the new architecture. But of course if they are, then the continuity becomes more precarious. Still, if the alternative is becoming less and less relevant, I could see a lot of people going for it.

      Which is another way of saying that the distinction between evolved and engineered minds might eventually become so blurred as to be meaningless. It leads to a future that would be unimaginably strange, the technological singularity in the original sense of something we can’t imagine or predict.

  9. I’m trying to picture this.

    I’m looking out the window at tree limbs on an overcast day. Birds are chirping.

    Suddenly my mind is cloned.

    I (cloned mind) am now looking … at a row of computers on a raised floor. The AC is running.

    Or do we need to install some more software to make it work: a body program that allows me to eat and exercise, a world program to provide external stimuli, perhaps a sleep program – do I get tired and sleep?

    1. Probably so on simulating an environment, although if we put the new mind in a robotic body that can move around in the world, maybe not.

      Good question on sleep. We still don’t fully understand it. It might be that a well-running mind needs that time for logical consolidation, apart from the physiological cleaning that happens during sleep.

  10. When it comes to copying or uploading the human mind, the only objection I can think of is quantum uncertainty. We may not fully understand how our brains work, but some amount of brain activity must be quantum in nature. Heisenberg’s uncertainty principle suggests that we may not be able to measure that quantum activity, or at least not accurately enough to copy it.

    In every other respect, I agree: copying and/or uploading a human mind should be possible. The Heisenberg uncertainty principle is the only barrier I can think of that might get in the way.

    P.S.: Sorry if you end up with duplicate comments from me. I got a weird error the first time I tried to comment on this post.

    1. The no-cloning theorem does state that a complete quantum copy of the brain is impossible. So the Star Trek style transporter duplication isn’t in the cards. (At least unless someone can invent a real Heisenberg compensator.)

      But it’s worth noting that the mind, in its own operations, is subject to the same rules. So anytime a memory gets rewritten, it can’t be a perfect copy of the previous version. Of course, the brain’s memory operations have limitations long before quantum factors become an issue.

      So we can’t make a perfect copy. The question is whether we can make one close enough that the original, the copy, or their friends and family can’t tell the difference, or at least not enough for it to be an issue.

      No worries on duplicates, although I didn’t see any. WordPress has been changing things again, and I’m not wild about the latest revisions. Although I did try the AI art thing for the first time with this post (something they copied from Substack), to which I can only say: meh, it’s better than nothing.

      1. I never thought about what you said about memories. From now on, I’m going to blame all my forgetfulness on quantum mechanics.

        I guess I agree that a copy doesn’t need to be a 100% perfect copy. And just how important are quantum effects inside the human brain anyway? I’m sure we don’t have a clear answer to that yet. Maybe it’s a negligible issue.

        But if mind copying technology existed today, and if I got something in the mail offering 30% off my first brain upload at the brain upload center, these are the issues I’d want to ask the sales clerk about before I got the procedure.

        1. Hey, everyone else blames quantum mechanics, so why not!

          Quantum mechanics really doesn’t factor much into mainstream neuroscience, other than in the scanning technologies. There are lots of people looking for it more generally in biology, but the popular press tends to overplay what’s been established.

          I think my main concern with the mind copying business would be seeing lots of testimonials, or being able to talk with people on both ends of the process. Not sure I’d want to be one of the pioneers on this one, at least unless I was about to die anyway.

  11. Sorry I’m late to the party. Ignatius and AJOwens have hit upon one issue which seems to make substrates important. In philosophy terms, I think they’re emphasizing the type/token distinction. Oneself is not a type (a universal, a repeatable pattern) – it’s a token (an individual thing). I’m not particularly bothered by that distinction, myself. I’d happily be split into two persons, if my left brain and right brain could regrow their counterparts like a planarian worm does. If I could be duplicated, I would risk my life to save my duplicate’s life without a second thought. But I don’t have a good argument why I don’t care about there being a unique individual entitled to call himself me. And neither do you, AFAICT.

    But I have another issue which seems to make substrates important. It’s that I care about what kinds of experiences I have (or my successors, in “uploading” cases). And that seems to depend on the particular mechanisms by which I perceive sights, sounds, tastes, etc. Blind people can see (in some sense) with the help of vibrating prosthetics on their skin, hooked to cameras. But that’s a different experience than my visual experience. Function doesn’t define subjective experience.

    How far down in size scales does the dependence of subjective experience on physical structures go? I think the only way to answer that is to think about how concepts glom onto the world. One crucial fact about that is that the world contains robustly clustered properties and relations and processes. Approximately all rabbits have hearts, lungs, fur, skin, ears, etc. etc. All electrons are negatively charged and attract protons. Etc, etc. And approximately all mammals feel pain when stabbed by thorns. These common properties allow us to attach words to these repeating patterns, and understand one another. They also allow us to form concepts and remember them. By default, the meaning of a word or concept is given by the entire panoply of properties and relations that have explanatory power in collecting (e.g.) the rabbits into the “rabbit” group and letting us distinguish them from other groups. At least, that’s how I think it works.
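
    That “robust clustering” point can be made concrete with a toy sketch in Python/scikit-learn. The property vectors below are completely made up, but they show how co-occurring properties let an unsupervised learner recover the groups we attach words to:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)

    # Invented property vectors: [body mass (kg), has fur (0/1), wingspan (m)].
    rabbits = rng.normal([2.0, 1.0, 0.0], 0.05, size=(50, 3))
    sparrows = rng.normal([0.03, 0.0, 0.25], 0.01, size=(50, 3))
    data = np.vstack([rabbits, sparrows])

    # Because the properties cluster robustly, the groups fall out with
    # no labels at all.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)

    # Each original group lands entirely in one cluster; attaching the words
    # "rabbit" and "sparrow" to the clusters is then the easy part.
    print(len(set(labels[:50])), len(set(labels[50:])))  # 1 1
    ```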

    1. No worries on timing. I sometimes get comments on posts that are years old. I welcome discussion whenever it happens.

      Definitely I can’t logically prove a copied mind is the same self, because there really is no fact of the matter on it. Selves aren’t fundamental entities. (At least not for a reductionist.) Assuming the copying technology is good enough, it comes down to our personal philosophy. All we can do is talk about what might make it easier or harder to think of the copy as us, and that’s pretty much the approach I took in the post.

      On function not defining subjective experience, maybe the approximately equivalent functionality you describe wouldn’t define it. But that approximation loses a lot of specific functionality. If the causal interactions of the visual system that go on to affect behavior and episodic memory can be reproduced, that is, the detailed functionality, then it should be a very different story. (At least under functionalism.)

      You could of course argue that all we’ll ever get with a mind copy is approximate functionality, too approximate to reproduce the same experience. In terms of what effects the experience could have on behavioral capabilities, that seems like an empirical question. In terms of experience that doesn’t have those effects, how would the copy ever know the difference? For that matter, how would the original?

      On how far down in size scales, along the same lines, my take is it has to make a difference to the rest of the system, and ultimately behavior. If it doesn’t, then how does the rest of the system ever know it?

      1. I failed to highlight the difference between ship-of-Theseus tinkering and build-a-whole-new-being-and-then-“upload”. My bad, because it’s pretty important to my thinking on this.

        Ship of Theseus: suppose I’ve lost a few neurons that somehow are particularly vital. Doctors take stem cells from me and grow new neurons, insert them in place, and somehow know how to train them (or maybe the rest of my brain does that automatically). Here it seems highly plausible – ignoring the implausibility of the setup of the scenario – that my mental life continues very much intact. After all, the new devices are human neurons, with my DNA, and the vast majority of my brain is the original equipment, and “knows” exactly how to interact with the new stuff.

        Radically New Being: A new humanoid robot is built with a quantum computing CPU and GPUs, and software that leaves GPT-4 very far behind. It has vision and olfaction sensors that far outstrip the color and chemical discriminations that humans can make. Etc. It is proposed to “upload” my mind into the robot, after which it will behave like me, modulo being able to detect more shades of color and odor and being able to calculate much faster.

        In Radically New, I see no reason to think that the robot’s “sensations” of color and smell are anything like mine. They are less similar even than the “sight” of the blind person with the vibrating prosthetic. At least that person is mapping the scene in a human brain. If original-me is dying, I’d be glad to have an intelligent agent out there who will look out for the interests of my wife and child. But I wouldn’t look forward to that life.

        1. I think we have to remember that by the time the brain gets them, sensory signals are just that: signals.

          If the mind is being copied into a body with radically different sensory capabilities, then it seems like the engineers handling the copying have some decisions to make. They can try to translate the sensory signals into the way human minds process them. Or they can modify the mind to deal with the enhanced capabilities.

          My preference would be to start off with the translation, with the option to then add the enhancements as desired, giving my neural network time to assimilate and deal with them. And maybe remove them if I’m finding it too disorienting.
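
          As a toy sketch of that translation option (Python/NumPy, with an invented 4-channel sensor and made-up mixing weights):

          ```python
          import numpy as np

          # One "pixel" from an enhanced sensor: R, G, B, plus an ultraviolet
          # channel humans can't perceive. Values are arbitrary.
          enhanced = np.array([0.8, 0.4, 0.2, 0.6])

          # Fold the UV channel into the three human channels; these weights
          # are purely illustrative, not based on any real psychophysics.
          to_human = np.array([
              [1.0, 0.0, 0.0, 0.1],   # UV bleeds slightly into red
              [0.0, 1.0, 0.0, 0.2],   # ... a bit into green
              [0.0, 0.0, 1.0, 0.7],   # ... and mostly into blue
          ])

          human_view = to_human @ enhanced
          print(human_view)  # a 3-channel signal the copied mind already handles
          ```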

          But let’s say I did turn on the enhancements and got used to them. If I then remembered my life before the copy, how would I remember it? In the old way of perceiving or the new one? I suspect it would be in the new one. A while back I noticed that when I remembered car rides from when I was a kid, I tended to reconstruct that memory in terms of the way cars are today, or at least the way they were much later in life. I think before long, the copied mind wouldn’t be able to even remember the difference. Although they likely would remember intellectually that there was a difference.

          1. The Persian Immortals were so named because, whenever one of these soldiers was killed or seriously wounded, another would take his place. According to some versions of their history, the designated replacement for each position was set in advance, and would inherit the title. Let’s rewrite history and say that each Immortal thought of himself this way, too. He would think thoughts like “if I am slain in this battle, I will learn how my opponent defeated me, and study how to overcome him,” where the second “I” refers to the Immortal of the same title after the previous human body was replaced.

            If you were drafted into such an army next year, would you think that way? Are there any limits on to whom you will attach an “I” thought? If there are, how does the robot in Radically New earn your “I” thought? Is it just because he (thinks he) remembers your life and can accurately report many facts about it?

          2. I would say if the copy has my memories and proclivities, then I would intellectually regard it as me, at least initially. If we both continued existing, with diverging experiences, then I’d be progressively less inclined to regard us as the same person.

            In the Persian army scenario, as described, I wouldn’t think of my replacement as me. On the other hand, if they were a copy that had my memories and dispositions, I would regard them as a backup, and take comfort from their existence. But our evolutionary programming is what it is, and I’m pretty sure I’d still be scared as hell when going into combat.

          3. Sure, if my replacement had my memories and dispositions (copied from me) and was human, I’d take a lot of comfort in that too. But that’s in part because I know what it’s like to watch a sunset with human eyes and hear Otis Redding or Beethoven with human ears. That’s a life I could relate to. But the robot in Radically New doesn’t perceive with remotely similar equipment – the brain is not a quantum computer. (Or if it is, if Hameroff and Penrose are right, then switch the scenario appropriately.)

            It seems to me that the robot’s inner “life” (interesting word there) is literally inconceivable to us.

  12. It seems to me we’re going to have to understand where the seat(s) of consciousness live in the brain and how they function before we can really make inroads in developing artificial consciousness. As for mind uploads or mind melds, I believe those issues are secondary. First things first, and baby steps are key to making progress and not getting side-tracked. Let’s first catalogue all the mental functions necessary to simulate functional consciousness, down to the level of operable algorithms. We don’t even know how or why anesthetics work on the brain. Once we find out what turns consciousness off and on, and where the “switch” is, we can find out how it connects to the rest of the brain. We need to see whether we can copy the brain’s functions in silicon before we make it do things the brain has never done before, like uploads and mind melds.

    1. I’m more in the camp of not seeing this as only a linear thing. Various people can investigate different things simultaneously. There are people working to understand anesthetics (and there has been progress in recent years for at least some anesthetics), others to understand visual perception, attention, memory, etc. It’s messy, but I think we can let a thousand flowers bloom and make progress everywhere we can.

      I noted in the post that we’re a long way from understanding everything we need to before mind copying becomes feasible. But we talk about things like interstellar exploration before working out every prerequisite technology. I don’t think it’s too out of bounds to discuss mind copying, as long as we’re honest about the most likely timeframes.

      1. I’m not bothered by talking about mind melds or copying; only that we’re on rather shaky ground trying to discuss what it would be like and the problems or benefits of it. I’m afraid readers will believe it’s all too complicated to know where to start; that’s why I prefer to discuss first things first. I’m not so sure about the benefits or the dangers of artificial consciousness or artificial superintelligence, but I’m pretty confident in breaking down, or decomposing, the problem of how to build working models. I’m sure that you are too. Of course, programmers are not sufficient; we also need brain and neuro researchers to identify the functions to be simulated.
