X-Men: Days of Future Past, and multiple instances of a mind

This weekend, I watched X-Men: Days of Future Past, which I enjoyed.  This post discusses some aspects of that movie, most notably the ending, so if you haven’t seen it yet and don’t want to be spoiled, you might consider skipping this post until later.

In the movie, mankind is in a devastating war with the mutants, with the Sentinels (killer robots) relentlessly hunting them down and destroying them.  In the process, much of Earth has become a wasteland.

One of the mutants has the ability to send a person’s consciousness into their past self, with knowledge of the future and the ability to alter that future while inhabiting their past self.  In an attempt to stop the Sentinel war before it begins, Wolverine’s consciousness is sent back to 1973 to work with the past selves of Xavier and Magneto (the two main mutant leaders, and enemies in most of the films) to prevent the event that enables the war.

Right off the bat, this notion of a consciousness going back in time is similar to the one in Edge of Tomorrow, which I discussed in a previous post.  Just as in that movie (and in countless other sci-fi scenarios), having someone’s consciousness go back in time is inherently dualistic (as in mind-body dualism).  But Xavier’s powers themselves seem inherently dualistic, so this isn’t necessarily a change in assumption from what’s always been there.

Anyway, after many trials and tribulations, our heroes succeed in altering time.  The Sentinel war winks out of existence.  I was impressed that the film went out of its way to make sure we know that the war timeline disappears.  So there’s no possibility that a version of our heroes continues to suffer the consequences of the war in a separately existing timeline.  (As some of us discussed, this was something the Edge of Tomorrow film failed to make clear, and possibly even implied the opposite.)

The war timeline is replaced with one where all of our favorite characters are still alive and apparently living in a much better reality.  Wolverine, after his adventures in the past, wakes up in this new future and is astonished to find Jean Grey, his unrequited love, still alive (along with Scott Summers to make sure he keeps his hands off her).  He immediately goes to Xavier’s office and reveals that he is the Wolverine from the alternate timeline, whereupon Xavier welcomes him and they begin a long talk.

And here is the reason for this post.  There remains a tragic event here.  The Wolverine that lived from 1973 to the present dies.  (The film only shows the alternate Wolverine for a few seconds in 1973, when war-future Wolverine’s consciousness is momentarily yanked back.)  Oh, we don’t see him die, but we know he winks out of existence because his consciousness is replaced by the one from the war timeline.  The people who knew and loved that Wolverine, who had shared experiences with him throughout the years of the new timeline, are bound to view it as a shocking loss, something the film ignores.

Of course, you could point out that all of the surviving characters at the end of the war timeline die when that timeline ends, and that would be true.  But since their entire timeline ends, there’s no one around to mourn them.  That isn’t true for the peacetime Wolverine.  Who knows how much better his life might have been in the alternate four decades (we learn he is a teacher), or how his friends will react to this (more) war-weary version of Wolverine from the war timeline, who may no longer know many of them.

And if mind-body dualism is true, how is it even possible for there to be multiple versions with different memories?  Would that mean that each timeline has its own disembodied soul for each person?  If so, which one might go on to an afterlife?  Or would there be an afterlife for each timeline?  The film almost avoids these difficulties by not having that other timeline continue to exist, except that it lets them leak back in by having two versions of Wolverine in the film.

Of course, the films have made Wolverine the main character.  (They’re actually getting a lot of mileage out of that character, and out of Hugh Jackman.)  And having the version of the character that experienced all the losses we’ve seen over those films be the one to see a reality where all of that tragedy is undone makes the final resolution in this film all the better.  Unless you’re peacetime Wolverine, or one of his friends or lovers.

The movie is fantasy and I’m almost certainly applying much more thought to this than the filmmakers did (or most of the audience), although it would be interesting to see any character fallout in a sequel.

But this fictional dilemma raises an issue a society might have to actually deal with someday if mind uploading, or any other kind of mind duplicating or cloning technology, is ever developed.

Which version of a mind is the one true one?  Is that even a reasonable question anymore?  And if they fork at some point, how far do the alternate versions have to go before we regard the end of a particular version as a death?  Would such a mind ever really be dead if at least one instance of it was still around?

If one of those versions committed a crime, who should be held responsible for that crime?  (If we only held the particular version responsible, what’s to stop anyone who wanted to commit a crime from spawning a version of themselves for just that purpose?)

Many might be tempted to say that the issues involved are simply too difficult.  That it’s easier to conclude that such mind duplicating is impossible.  Of course, reality is under no obligation to make things easy for us, as it has frequently demonstrated throughout history.  But this raises the interesting possibility that a society might be so disturbed by these questions that it makes multiple instances of a mind taboo, a forbidden practice.

15 thoughts on “X-Men: Days of Future Past, and multiple instances of a mind”

    1. I tend to agree. Not because I’m worried about paradoxes or violations of causality, but because we don’t have a flood of tourists from the future. Of course, we can’t rule out time travel or time communication being possible once someone has built a receiver or receiving gateway of some sort.

  1. Yeah, I think you’re analyzing a lump of coal as if it were a diamond. 🙂 Time travel stories are kind of the “funny money” of science fiction due to major problems of causality. (I can’t remember the author or story, but one I read really highlights the problem of time travel in that we get time travel by having the information given to us by time travelers who are using technology that was once brought to them by time travelers…. well, you see the problem. Killing grandpa is the least of it!)

    One question that popped into my mind was: how was it possible to yank Wolvie back to a timeline that never sent him? That war Wolverine should have vanished once he accomplished the task of altering the future.

    Of course, the altered future never sent him, so who went back to fix things? 😀

    One serious theory is that, if TT were possible, and you went back, you’d instantly create a new timeline à la MWI. You would be free to kill your grandparents or bring them new information without contradiction, since you’re no longer in your timeline. But you can’t ever alter the future you came from. (The whole topic of the unalterable past and the undetermined future, separated by the sliding moment of “now”, is a fascinating one.)

    By coincidence, I’m re-reading John Varley’s The Ophiuchi Hotline (1977), which has a reality of fast cloning (six months to full-grown adult) plus the ability to download and upload minds. Periodic recordings of your mind allow a form of immortality, since if you’re killed, a clone is grown and your last recording is downloaded.

    Two points I found interesting about the book. Legally, only one version of any given mind is allowed to be corporeal at one time. Having two (or more) of you is easy to do, but highly illegal (death-penalty illegal). The second thing is that, since the mind is an active, ever-changing “device”, the snapshot of your mind has to be exactly that: an instant snapshot of the entire mental state at one instant. Otherwise the picture isn’t accurate — like trying to photograph a fast car with a slow shutter.

    It occurred to me that (assuming mind downloading is possible at all, which I doubt) this could represent a real limit. You can’t read the mind out like a tape, you have to capture the instantaneous state in one given instant. (I also wonder if Heisenberg might put insurmountable limits on mind downloading.)

    On the flip side, I can also see a society having a very pragmatic approach to cloning and not viewing it as anything more than having children.

    1. I love the phrase “analyzing a lump of coal as if it were a diamond”! It perfectly expresses the sentiment I often feel for commentaries on ancient works, where the commentary is often far more sophisticated than what it’s analyzing.

      That occurred to me too, that Wolverine’s consciousness shouldn’t have been able to cross timelines, and I agree that that’s before we even get to paradoxes. The film did try to make clear that the war timeline disappears, that it doesn’t continue MWI style. Of course, if MWI is true, then both timelines exist and every other variation of possibilities.

      The Ophiuchi Hotline sounds interesting. A few years ago I read Richard K. Morgan’s Takeshi Kovacs novels (warning: they are very noir), where everyone’s mind was recorded in an implant at the brainstem and so could be easily moved between bodies (or androids), but having duplicates active at the same time was illegal (also death-penalty illegal; in other words, taboo).

      On uploading, the snapshot or Heisenberg objections don’t trouble me too much. They may indeed be barriers to a perfect copy, but not necessarily to an effective copy, and remember that my mind isn’t a perfect copy of my mind from last month. For quantum physics to be a real barrier, the mind would have to be crucially dependent on details of a sub-quantum reality. It couldn’t just be dependent on standard quantum effects that average out, because those could be duplicated.

      I think the children comparison could be good. It might be that you could make a clone, but once you do, you have a critical responsibility to provide for it and may be responsible for its actions for a time. I’d say that maybe there’d be a problem with deadbeat cloners, but cloning wouldn’t have the strong evolutionary impulse that intercourse does.

      1. Oh, I have no doubt the film follows the usual idea of time travel: change the past and the (one and only) timeline changes. It’s just that the idea is inherently contradictory under close examination.

        I thought the recent film, Looper, was one of the best time travel stories I’ve seen in a while (it does follow the pattern I just mentioned). For a story built on an impossible concept, it did very well and was a pretty watchable film.

        I think Primer is another one that tries to get it right, but I’ll have to see that one a bunch more times before I really understand it. Even the infographics I found that try to delineate it are just as confusing as the movie.

        “On uploading, the snapshot or Heisenberg objections don’t trouble me too much.”

        The second one might be formidable. In the Varley book, the technology handles the need for the instant snapshot, and I agree it’s an engineering problem. I just liked the idea that you need to capture everything at the same instant. It makes sense; you need to capture the state at a given moment, not spread out over several. You would otherwise end up with parts that don’t synch with each other.

        The importance of being able to capture the complete quantum state could be a bigger deal. We don’t have any idea what’s involved, so it’s hard to predict.

        The degree to which the quantum state affects our conscious minds is completely unknown (and as you know, many suspect “free will”, if it exists, must involve the quantum state as the only source of (apparent) randomness we know).

        Put it this way: a brain “box” is an engineering challenge we will solve in the foreseeable future. Whether we get a mind out of it remains to be seen (maybe it will have to spend years “growing up” and learning).

        But downloading an existing human mind into a machine strikes me as a much more significant technical challenge mostly because, at this point, we really don’t even have a clue what’s involved.

        1. I enjoyed Looper. It also was inherently dualistic, but in its case the dualism showed in consciousness transcending timelines. I watched Primer and missed almost every significant detail, although I did appreciate its rigorous approach. (I subsequently learned about the missed details by reading the Wikipedia article. The friend who had recommended the movie to me was upset that I didn’t watch it five times like he did.)

          I do know that many hope quantum mechanics will rescue free will. I’m a compatibilist, so I accept free will regardless, but it’s hard for me to see freedom in the mere unpredictability that QM may provide.

          On mind copying, I wouldn’t say that we don’t have a clue. If the mind is completely dependent on the brain (which I think we have good reason to conclude it is, but I know you disagree) then we know we have to record the state of the brain, although we don’t currently know how far down we need to go. Is the connectome (neurons, synapses, etc) sufficient? Or do we have to go down to the molecular level, or even the quantum level? (I wouldn’t be overly surprised if the brain uses quantum effects for its processing, but I would be surprised if it stored information in it.) I personally suspect that a perfect copy would require the molecular level, but that we don’t have to go down that far for an effective copy.

          I’d say only time will tell, but even if it is ever done successfully, I fear some people will regard copied minds as philosophical zombies, as abominations, and reject them. Particularly if there are differences between the copied and original, which I’m pretty sure will be unavoidable.

          1. All good questions!

            To be clear, my last four ‘graphs above involve transferring, and having active within a machine, a human mind. The Kurzweil Dream. That is what we do not have a clue about, because we do not have a clue about the nature of human consciousness.

            Memories are information, but consciousness is a process. The same difference as between your computer’s memory chips and CPU. I agree we’re zeroing in on the memory chips, but the CPU is a more difficult task (“the hard problem” as Chalmers terms it).

            Merely downloading memories (as in the Varley book) is a lesser task, although it raises interesting questions, along the lines of the ones you mentioned, about how those memories are put back in (the book hasn’t touched on this so far).

            Growing a clone from your DNA is almost possible today. We’ve done it with sheep. But that clone’s brain would be a tabula rasa. Your memories — your whole history — are imprinted in your brain, but not in your DNA.

            So the question arises how to install memories in a blank brain. No normal growth process would provide them. (Some believe RNA might be connected with memory and might allow memories or education to be “injected”.)

            In the book, you’re unconscious for the recording process. When you wake up, you’re either in the same body moments later or in a new body at some point in the future. (No machine consciousness in the book so far, either.)

            Absolutely agree about the likely legal, political and social quagmire! Just consider stem cells (or the older conflicts about abortion).

            An interesting question might be how to tell the difference between a “regular” mind and a philosophical zombie. (FTR: I’m among those who consider the concept of philosophical zombies incoherent.) We only know that we ourselves (on an individual basis) are not… we take the word of others that they also are not.

            Since a p.z. would report that it experiences red, how could we tell?

            I think for me, it would involve describing why a piece of art moves them and especially when they can create their own art which moves others. I believe art is unique to the intelligent conscious mind, so it will be one of the litmus tests I’d apply. (I presume they would be unable, and that’s central to my thinking the concept isn’t coherent.)

            Write a story or piece of music that connects with me, stirs and moves me, and then I’ll believe I’m dealing with a mind and not a mindless box of diodes.

    2. Good point on philosophical zombies. If we understand the architecture of consciousness, and have access to its internal processing, we might be able to ascertain whether a mind is a behavioral zombie (although neurological or soulless zombies seem untestable). Of course, many people insist that anything that can adequately mimic the behavior of consciousness must have a consciousness architecture embedded somewhere in its algorithms. (I’m agnostic on that one.)

      On memories and DNA, there have been studies showing that an acquired fear of a particular smell can be passed from a mother mouse to its offspring, although it’s a pretty limited effect and a pretty primal memory.
      https://selfawarepatterns.com/2014/07/29/learning-the-smell-of-fear-mothers-teach-babies-their-own-fears-via-odor-animal-study-shows/

      Art might be a good criterion for copied minds, if the original mind was artistically inclined. But if they were not articulate, they might struggle to explain their feelings toward that art. Showing appreciation for the same art that the original did might be a good indication, though. (Caution is called for, however, since stroke victims have been known to radically change their pre-stroke preferences.)

      For AIs in general though, I think an engineered mind could be self aware, but not have any appreciation for human art. Art, it seems to me, relies on shared backgrounds. It’s why art from radically different cultures often seems bizarre to us. AIs wouldn’t have that shared background with anyone, except possibly other AIs.

      That raises the interesting question of whether AIs might have their own art, something that we might find utterly incomprehensible.

      1. “Art, it seems to me, relies on shared backgrounds.”

        To some extent, but great art transcends the details and targets the heart of human experience (although one could say that is the shared experience). For example, many find ancient cave art oddly compelling. Ancient Greek and Shakespearean comedies are still funny today even though the only really shared experience is the human condition itself.

        In fact, comedy may be an even better litmus test. Recently I read about a study that posited humor as a possible driving force in the evolution of human intelligence. (There is a theory that the need to cope with large social groups was a primary driver in the development of intelligence, and humor has great power in helping groups get along. There is also the fact that many women list “sense of humor” as an important criterion in a mate.)

        The idea that an AI could “get” a joke, let alone make one, seems as preposterous to me as AI appreciation (let alone creation) of art.

        I’m definitely “from Missouri” on that one! :\

        1. You know, I’m thinking back to all the AIs I’ve seen in sci-fi, and I can’t recall many of them having a sense of humor. (Many were humorous, but not intentionally by the character: C3PO and Marvin come to mind.) I don’t think people expect AIs to have humor. The one exception that jumps out to me is Iain Banks’s Culture novels, where the AIs often show a lot of humor. (They had unique and entertaining personalities, were very powerful, but also very benevolent, a combination that Banks admitted might well be too good to be true.)

          1. That’s an excellent point! (We can add Star Trek’s Data to the list of unintentionally funny AIs. In fact, there’s an episode where a hologram of Joe Piscopo (IIRC) tries to instruct him on how to tell a joke. Unsuccessfully.)

            It wouldn’t surprise me if there were some intentionally funny AIs in Keith Laumer’s or Harry Harrison’s work (or maybe Alan Dean Foster’s), but I just stood looking at my library for several minutes and didn’t see any that stood out.

            I wouldn’t wonder that many do see humor as a dividing line between human and not-human. For most people, the implication that they lack a sense of humor is a pretty big insult.

            The more I think about it, the more I think that — if hard AI is possible at all (which as you know I find dubious) — it may not be possible for it to roll off the assembly line (so to speak) and be “turnkey” ready. It’s possible some analogue of “a lifetime of learning and experience” will be necessary (perhaps shortened via machine speed?).

            If so, that suggests each AI might end up slightly different, just as humans do. It could be that figuring out hard AI lands us in the same quandary as human cloning — we just end up making lots of individual new beings to deal with (again, I’m citing Star Trek’s Data, who was given “human” status in an early episode).

            Several authors (SF and non-fiction) have explored the idea that, if we do create truly intelligent conscious machines, what right would we have to bind them to our will? It would be slavery all over again.

    3. Hmmm. If we do have to educate AIs (which strikes me as a reasonable possibility), we shouldn’t have any trouble copying them afterward, once we have one that meets our needs. Unlike human minds, we won’t have a messy evolved brain (with no software/hardware distinction, data port, or convenient tools) to contend with. So there might be several “lines” of AI (similar to the Battlestar Galactica Cylons, if you ever watched that show).

      Of course, once copied, the minds would start diverging, unless we set them up to replicate memories between them. Even then, the order in which the memories were experienced seems like it might still lead to personality differences. (The series I recommended the other day, Ancillary Justice and Ancillary Sword, actually deals with this issue.)

      The question of what would make an AI a fellow being that we shouldn’t enslave is one I’ve been kicking around for a post that I’ll hopefully do sometime soon.

      1. You’re assuming a brain machine would be like a computer with debugging tools and data ports. It might not work like that. (Of course, we’re deep into speculation here, so who knows!)

        Consider the difference between manufacturing video and audio tapes versus manufacturing CDs and DVDs. The latter are made with content already on them — the information state is part of the manufactured thing.

        But tapes needed to be recorded. Manufacturers had large banks of machines that did high-speed copying, but the process still had to lay the information down on the blank matrix.

        Given that our brains start off blank and accumulate experience (and they’re the only example of a conscious thing we have), it’s not unreasonable to suppose brain machines might need to also start blank (like tapes).

        Ultimately, I suppose it depends on how the mind is encoded in the physical state of the brain and how prone that state is to manipulation. For instance, even if we understood the brain well enough to “read out” memories, installing those memories in another brain might be a challenge.
