I recently finished reading Max Tegmark’s latest book, ‘Our Mathematical Universe’, about his views on multiverses and the ultimate nature of reality. This is the third in a series of posts on the concepts and views he covers in the book.
The previous entries are:
Tegmark’s Level I Multiverse: infinite space
Tegmark’s Level II Multiverse: bubble universes
Tegmark postulates four levels of multiverse. This post is on what he calls the Level III Multiverse, the many worlds interpretation of quantum mechanics.
In the early twentieth century, one of the mysteries of science was the constant speed of light. The speed of light was constant no matter how it was measured. This was in contrast to the speed of sound, or the speed of just about anything else, which varied depending on the speed of the observer.
Albert Einstein accepted the experimental evidence of the constancy of the speed of light, and explored its implications. If the speed of light was always constant, then something else had to give. Something that factored into that speed had to vary, something like mass, length, and time. Exploring those implications led to the special theory of relativity.
For several decades now, one of the mysteries of science has been wave / particle duality. We have strong evidence that light behaves like a wave, and strong evidence that it behaves like a particle. We have similar evidence for electrons and just about any other subatomic particle, as well as atoms themselves and even large molecules under isolated conditions.
The shape of the wave is modeled by a mathematical object called the wave function. The particle will appear somewhere in that wave. There is no known way to predict where in the wave any individual particle will be found. All that can be known are the probabilities of it appearing at various locations within the wave. Bizarrely, once the position of the particle is observed, once it is measured, all trace of the overall wave instantly disappears, with only the particle remaining.
Just to be clear, this is freaky strange, and no one is certain why it is so. Reality at the quantum level appears to be wavelike, to the degree that the wave can physically interfere with itself when split, but suddenly, instantly, becomes particle-like when we look at it. As strange as it is, this has been confirmed for decades by extensive experimental data. It is reality.
There are a number of interpretations of what is happening. The oldest, and for a long time the most popular, is called the Copenhagen Interpretation. It is basically a minimalist interpretation that says that this is simply reality, and that when a particle’s position is measured, the wave function “collapses”. Prior to the collapse, the particle exists in what’s called a superposition. It exists in multiple locations at the same time, but once the position is known, the existence of the particle in all but one of those locations disappears.
There are several other interpretations. All of them must throw one or more aspects of common sense reality under the bus in order to make sense of the data.
In the 1950s, Hugh Everett came up with a new interpretation. Everett accepted the mathematics of the wave function, but was troubled by the lack of anything in those mathematics that predicted a wave function collapse. The only reason that the wave function collapse is thought to exist is the fact that we only observe the particle in one location once it is measured.
Everett asked, what happens if the wave function, in fact, never collapses? If the wave function predicts two locations for the particle, then the mathematics say the particle is in both locations. Of course, we don’t observe it to be in both locations. So then, what’s going on? Similar to when Einstein was contemplating the constant speed of light, something else has to give.
According to the mathematics and our sensory data, we should see the particle only in one of the locations, and we should also see it only in the other one. No, the second “only” in the previous sentence is not a typo. We appear to have two realities in which we observe the particle. Prior to the measurement, there was only one reality. After the measurement, there are two.
In multiverse parlance, the many worlds interpretation asserts that our universe is cloned every time an apparent wave function collapse happens. Given that this happens an uncountable number of times per second throughout the universe, and given the large range of possibilities for each particle’s position, the number of universes being created every second is staggering.
The randomness of the particle’s location is then an illusion, created by the fact that we only observe the location particular to our universe. But the wave function unfolds unabated, with the particle existing in each location in a different universe.
This means that there are uncountably many versions of you in these alternate universes, where each quantum result is manifested. In other words, every random event that could happen does happen in some universe, and uncountably many versions of you are living every conceivable version of your life.
In Tegmark’s framework, this is the Level III Multiverse. It is a superset of the Level I and II multiverses, although as formulated, there’s no particular reason that its existence, or non-existence, is dependent on the other ones. If all three levels exist, then Level III includes all the multiverses in the lower levels and reality continues to expand at an astounding rate.
Tegmark does note some similarities between the Level I and Level III multiverse. In both, there are an infinite number of you living every possible variation of your life. The result of every quantum possibility should be manifest in one of the Level I universes. Of course, if they were one and the same, it would mean that remote regions in the Level I multiverse were in some way quantum entangled with each other.
Tegmark also speculates about reconciling the Level II and III multiverse, but doesn’t currently see a way to do it.
Over time, support for the many worlds interpretation has grown in the particle physics community, although Copenhagen continues to hold a plurality in most polls. The question is, is there any way to test this idea? Brian Greene in ‘The Hidden Reality’ identified the possibility of the uncollapsed wave interfering with itself across universes, although he notes that observing this would be extremely difficult.
Tegmark proposes another one, although it’s not one that anyone is liable to volunteer for. The quantum suicide or subjective immortality thought experiment involves setting up a gun with a trigger set to fire if a random quantum event takes place, with a 50% chance of taking place in the first second. The experimenter then puts their head in front of the gun.
In 50% of the universes, the experimenter dies within the first second, but in the other 50%, they live. With each passing second, the probability of the experimenter still being alive halves. After a couple of minutes, the probability of the experimenter still being alive is infinitesimal. However, in at least some portion of the alternate universes, the experimenter lives on.
From the subjective point of view of the experimenter, the longer they live, the higher the probability of the many worlds interpretation being true. After a few hours, increasingly unlikely events (misfire, power outage, meteor strike, etc.) begin to happen to prevent their death. If an experimenter subjectively survived this ordeal for several hours, they could have a high degree of confidence in the many worlds interpretation. (Of course, in virtually all universes, they would leave behind grieving friends and family who would be less convinced.)
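The collapsing odds are easy to check numerically. A minimal sketch, assuming the 50%-per-second trigger described above:

```python
# Survival probability in the quantum suicide thought experiment:
# the gun fires with probability 0.5 each second, so after n seconds
# the chance of still being alive in any given branch is 0.5 ** n.
def survival_probability(seconds: int) -> float:
    return 0.5 ** seconds

print(survival_probability(10))    # after 10 seconds: ~0.001
print(survival_probability(120))   # after 2 minutes: ~7.5e-37
```

On the many worlds reading, the point is that this number, however tiny, never reaches zero: some branch always remains in which the experimenter survives.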
Tegmark then points out that, if either the many worlds interpretation or the infinite space scenario is true, then a version of each of us will, despite its improbability, live long enough to outlast all of humanity. In other words, if it is true, then subjectively you will live long enough to know it is true, at least assuming you recall reading this. Each of us may live to be the last human in our own improbable universe, knowing the truth of the multiverse.
In the next post, we will get into the main idea of Tegmark’s book, the mathematical universe hypothesis.
28 thoughts on “Tegmark’s Level III Multiverse: The many worlds interpretation of quantum mechanics”
Wow! Definitely the most riveting thought experiment I’ve heard of in a long time. This has been an awesome series.
Great, concise article once again distilling the essence of Tegmark’s chapter and communicating the most important and interesting ideas.
I think the Copenhagen Interpretation is losing popularity, and in particular it is not popular among scientists who actually care which interpretation makes sense. It’s still taught in textbooks and so scientists who don’t much care about interpretations of quantum mechanics tend to opt for it by default. At least that’s my understanding.
The Copenhagen interpretation, as popular as it has traditionally been, is now widely regarded as incoherent nonsense (again among those who bother to think about it).
There’s no reason to believe the wavefunction collapses because if it didn’t collapse we would still see exactly what we see today. It’s a useful heuristic to calculate what any given observer should see, but there is no motivation whatsoever to think that the wavefunction collapses objectively, no clear description of when it ought to collapse and no mechanism by which it collapses.
The biggest problem with it is that there’s no definition of what constitutes an “objective” observation, so it inevitably gets bound up in all kinds of spooky woo nonsense from the likes of Deepak Chopra about the mystical powers of consciousness.
The Many Worlds Interpretation is elegant, simple and if anything matches our observations even better than the Copenhagen interpretation. It’s hard to make sense of how a quantum computer works in the Copenhagen interpretation, but on the MWI it can be visualised as a computer collaborating with uncountable copies of itself in other universes to each do part of a calculation in parallel, combining the results. This may seem fanciful but it at least makes sense of the awesome potential of quantum computing. Tegmark argues that building such computers provides evidence for the MWI. I’m not sure all physicists would agree, but I do think that is an interesting perspective to take and one worth considering seriously.
I used to favor the de Broglie Bohm interpretation myself, but the more I’ve learned about QM, the more agnostic I’ve become about all of these interpretations. That said, I might feel differently if I understood the actual mathematics.
On quantum computing, Tegmark is the only one I’ve read so far to assert that it would be evidence for the MWI. Superpositions exist in many interpretations. But then, I have to admit that I’ve never managed to understand how holding data in a superposed state improves things if you lose half of it as soon as you try to access it. Maybe something about the way you access it or use it implies that the additional data never disappears?
The de Broglie interpretation makes more sense than the Copenhagen, but I still don’t like it as it posits the existence of an entity which is not required, namely the particle. This interpretation has both the particle and the wave, but in fact everything we observe the particle to do can be explained by the wave, so there’s no reason to believe in the particle.
It strikes me as an interpretation motivated chiefly by our intuition that matter ought to be made of little billiard balls, an intuition which has no real justification. That matter could be made out of something as apparently abstract as a wavefunction is just about unintuitive enough to strike me as probably being correct.
David Deutsch originally proposed quantum computing as evidence for the MWI, not Tegmark. I haven’t been able to find anyone who disagrees with this (although I’m sure some do), but then since quantum computing hasn’t really been proven to work, it’s not much reason to believe the MWI is true.
“I have to admit that I’ve never managed to understand how holding data in a superposed state improves things if you lose half of it as soon as you try to access it.”
Well, say you’re looking to solve a problem like bitcoin mining, which involves testing lots and lots of numbers until you find one with some special property. You could have all your infinite quantum clones test one number each in parallel, and then the computer would select from those the one that actually works. So you can throw away all the data you’re not interested in because the job is to find one very special piece of data.
Quantum computing is therefore not a general purpose solution to computing arbitrary problems but could help to greatly speed up certain kinds of jobs.
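Classically, the same kind of job has to proceed one candidate at a time. A toy sketch of the search being described; the “special property” here is a hypothetical stand-in for something like a hash target:

```python
# Classical brute-force search: test candidates sequentially until one
# has the special property. On the MWI picture of a quantum computer,
# the branches would test candidates in parallel, with only the hit
# surviving to be observed.
def has_special_property(n: int) -> bool:
    # Hypothetical test standing in for an expensive check.
    return n > 1 and n * n % 1009 == 1

def classical_search(limit: int):
    for candidate in range(limit):
        if has_special_property(candidate):
            return candidate
    return None

print(classical_search(2000))   # 1008, found only after ~1000 sequential tests
```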
“and then the computer would select from those the one that actually works.”
But my question is, how could you know which one worked without looking at it and causing the wave function collapse / decoherence / world cloning and losing all but one of them? Or is that the part that still needs to be worked out?
No, that works. You are left with only one pattern of bits, which is the answer you’re looking for. The bit that still needs to be worked out is the engineering problem of how to build a system that won’t decohere before you get the answer.
So, imagine you’re computing some function and you want to know what pattern of bits will give the greatest value. Let’s suppose you have two bits as an input.
For a classical computer, you compute:
f(0,0) = 2
f(0,1) = 3
f(1,0) = 0
f(1,1) = 4
You can see that 1,1 gives the greatest value, and you also retain knowledge of all the other values.
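In code, the classical version keeps the whole table. A trivial sketch using the example values above:

```python
# Classical evaluation: compute f at every input and retain all results,
# so we know the winning input and every other value besides.
f = {(0, 0): 2, (0, 1): 3, (1, 0): 0, (1, 1): 4}

best = max(f, key=f.get)   # the input giving the greatest value
print(best, f[best])       # (1, 1) 4
```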
In quantum computing, you lose all that knowledge and you only know that 1,1 is the answer you’re looking for. I can’t explain in detail how this is done, but in principle it’s because the system is set up so that only the solution with the greatest possible value can be observed when the output is measured. All other solutions cancel out by interference (or some analogous mechanism).
It’s not too different from what happens with slit experiments. You let the electrons pass through all the slits, which can be viewed as computing a (let’s suppose) very complex interference pattern. To compute this classically, you would have to compute all possible paths (an infinite number) and then compute how they would interfere with each other.
But the quantum system does this for you naturally. You allow it to decohere by placing a screen on which to capture the interference pattern. You have now collapsed the wave function to a classical state which represents the results of an infinite number of possible paths being traced out by electrons in superposition. You lose the information representing all the possible paths, but you keep the answer you’re interested in (the interference pattern), which is the only one possible given the ways the different universes interfere with each other.
So nature does naturally what requires an unbounded amount of classical computation to predict. Quantum computation is basically arranging nature in artificial ways so that these computationally expensive outcomes can be interpreted as solutions to problems.
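The cancellation being appealed to here can be seen in a toy amplitude calculation. This is a generic illustration of interference, not any particular algorithm: a Hadamard-style split applied twice sends the two paths to the second outcome with opposite signs, so they cancel:

```python
import math

# One qubit as a pair of amplitudes (a0, a1) for outcomes |0> and |1>.
def hadamard(state):
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

state = (1.0, 0.0)        # start in |0>
state = hadamard(state)   # split: both paths now exist in superposition
state = hadamard(state)   # recombine: the two routes to |1> cancel
probs = [abs(a) ** 2 for a in state]
print(probs)              # ~[1.0, 0.0]: only the |0> outcome survives
```

The intermediate superposition is lost on measurement, but the one outcome the interference leaves standing is the answer.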
Excellent explanation. Thanks! That does help. So, the trick is to have it go through the gates (or whatever they’re called in quantum computing) without decohering until the end. That explains why most experimental quantum computers these days operate a few degrees above absolute zero.
The Copenhagen interpretation as I was taught it at university is the doctrine that the wavefunction collapses when an ‘observer’ measures its properties. This precipitated a lot of pseudo-science mumbo jumbo about observers (humans) being special. It’s why Schrödinger devised his absurd cat experiment – to demonstrate that the Copenhagen interpretation is absurd.
The Copenhagen interpretation is disproved by common sense (the universe functioned perfectly well for billions of years without any observers around to measure it), naturalism (humans aren’t special in Physics) and by quantum computers.
Quantum computers do not require parallel universes. They work perfectly well if wavefunctions are allowed to collapse at the end point of the calculation.
I am agnostic about type III universes, but they do seem extraordinarily wasteful, and I see many reasons to object. There is nothing in the mathematics either to suggest collapse of a wavefunction or many universes. And how can such universes account for action at a distance? As I see it, the level III universe interpretation is a theory that lacks any explanatory or predictive power.
My big ‘head hurt’ about the idea is that every possible outcome requires a unique universe; these must have existed since the Big Bang and must all continue indefinitely; and since the universe is infinitely large (type I, and possibly type II), the number of events is infinity * infinity * infinity * infinity. That makes my head hurt, and I studied QM at university, so I am at home with the bizarre and improbable.
What I recall reading about the Copenhagen interpretation was that the philosophy behind it was to stick to the empirical observations and not assume anything beyond that. In other words, it supposedly was meant to be conservative and minimalist, the “shut up and calculate” interpretation. It was formulated in the golden age of logical positivism.
In that sense, the part about conscious observers was unfortunate (ironically going beyond those empirical observations) since it’s given generations of woomeisters ammunition.
I have very little problem with “shut up and calculate”, but the Copenhagen interpretation is not that. These days at least, the Copenhagen interpretation is synonymous with observation causing objective waveform collapse.
“is disproved by common sense”
Don’t rely on common sense. There’s no reason to believe that common sense is of any use when thinking about quantum phenomena. In particular, we can’t be sure that “the universe functioned perfectly well for billions of years without any observers around to measure it”. It could have been in a superposition until the first observer appeared to collapse the wavefunction and at that point it would have looked just as if it had always behaved classically.
“naturalism (humans aren’t special in Physics)”
It’s not quite disproved by naturalism unless we have a magical account of what constitutes an observation. An observation could simply be an interaction with some complex or large system. Humans don’t have to be special. But that ambiguity/vagueness is a deep problem with it, agreed.
” and by quantum computers.”
Really? I don’t see how, unless you also think that quantum computers mean parallel universes. But…
“Quantum computers do not require parallel universes.”
I’m not sure this is true. Quantum computers work by the interactions of numerous parallel superpositions of the same quantum system within the computer. Now, you perhaps don’t have to posit entire other universes, but you do need to at least posit parallel versions of that system, meaning that we have multiple instances of one macroscopic physical object occupying the same space. Parallel universes help to make sense of this.
” They work perfectly well if wavefunctions are allowed to collapse at the end point of the calculation.”
Talking of waveform collapse like this smacks of the Copenhagen interpretation, does it not? And yet you say that quantum computers disprove the Copenhagen interpretation?
“they do seem extraordinarily wasteful,”
What do you mean by wasteful? I just don’t get this. You’re implying a violation of Occam’s razor, but the number of concepts/principles proposed by the MWI is fewer than in any other interpretation.
“There is nothing in the mathematics either to suggest collapse of a wavefunction or many universes.”
There’s nothing to suggest objective collapse, but there is something to suggest many universes. The MWI comes directly out of the maths. If a particle in a superposition interacts with another particle, that second particle is now also in a superposition et cetera. At least until an observation is made, at which point we pretend the thing collapses.
But if we do away with this convenience and take the math seriously, we must regard all particle interactions as equivalent. So as soon as you observe a particle in a superposition, all the particles in your body and those you affect are in a superposition. This propagates outwards until the whole universe is in a superposition. A superposition of universes is an effective multiverse.
But you’re right, taking the math at face value, there is no multiverse, there is just one universe which consists of a wavefunction describing the superposition of all possible states of the universe. This is the real universe. The universe we perceive is just a collapsed projection of this, and there are an infinite number of alternative such projections.
So you could say there is only one universe containing one Schrodinger’s cat and it really is both alive and dead at the same time. However, if we were the cat, we might prefer to think of ourselves as living in one universe and our counterpart dying in another.
To all intents and purposes, from the perspective of observers within this massive wavefunction, there is an effective multiverse.
“And how can such universes account for action at a distance?”
Why should they? That’s a separate problem in my view. If action at a distance is required then action at a distance happens. I don’t see why it needs an explanation any more than why opposite charges attract. Perhaps it’s a brute fact of nature.
“that means that the number of events is infinity * infinity * infinity * infinity.”
You probably know there are different cardinalities of infinity, so I don’t think this is really a problem.
“There’s nothing to suggest objective collapse”
Well, except for our observations. Something happens at that measurement / interaction, whether it be a collapse, universe cloning, particle pilot-wave divorce, consistent hallucination of all experimenters, or whatever.
The MWI implies that nothing special happens when we make an observation. It’s just continuing the pattern of superpositions begetting superpositions into our brains and not imposing some magic barrier that suggests that once we see it it *really* collapses and the propagation of superpositions ceases. There is no reason to believe that there is such a magic barrier and every reason to think that this propagation continues as normal.
On the MWI, the universe doesn’t split just when we make a measurement. It was already split. The electron really did go through both slits at the same time. There is no objective difference between the electron going through both slits unobserved and going through both slits observed. There is only a difference from the point of view of the observer, which enters a state of superposition at the point of observation just like any other interaction. This is why it seems to us that the act of observation forces the electron to go through one slit only. It in fact always goes through both, but if we make an observation we can only see one at a time because we are ourselves placed in a superposition of contradicting observations corresponding to having clones in two universes.
There’s a missing ingredient in both wavefunction collapse and multiverse explanations. Suppose we have a quantum system in an indeterminate state. We make a measurement (OR the system interacts with another system) (OR some time passes) and the system is now in a determinate state.
We can say either 1) the wavefunction collapsed or 2) in another universe the system is in a different state, depending on our theoretical interpretation.
BUT, what caused this to happen? There is an ‘event’ here. What caused it? There is no cause.
The ‘event’ does not even have to be local, as demonstrated by entangled particles.
I think I answered this in my answer to SAP above. Nothing special happens when we make a measurement. That’s the beauty of the MWI. The quantum system remains in an indeterminate state and now we have become part of it by interacting with it.
Another way to explain it is that it doesn’t make sense to say that a system is in an objectively determinate or indeterminate state. Whether a state is indeterminate only makes sense from a particular subjective frame of reference.
Take Schrodinger’s cat again. From the point of view of the scientist, the cat is in an indeterminate state. It’s both alive and dead.
From the point of view of the cat, however, it is very much in a determinate state. From the cat’s point of view we might say that the universe has forked, and the living copy of the cat considers itself to be in a determinate state of having survived.
But the universe hasn’t forked from the point of view of the scientist. Instead, the part of the universe containing the box is in an indeterminate state, and it only forks when he opens it.
But from the point of view of his colleague down the hall, the part of the universe containing the lab in which the experiment takes place is in an indeterminate state until the scientist calls her and tells her the cat survived. To her, the point where the phone rings is where the universe forks.
The universe never really forks, or alternatively it’s forking constantly. Whichever way you look at it, it’s one big seething mess of superpositions causing superpositions endlessly. No one particular interaction causes a split any more significant than any other. Significant forkings/collapses exist only with respect to particular reference frames.
Your explanation attracts me, although I don’t really understand it. Is there really any forking, or was the universe created already infinitely forked? It seems to me that the idea of infinite distinct universes is a strange one. Are they countable?
Whether there is forking or not depends on what you like to call reality – the overall wavefunction or what you perceive as a classical universe, and on whether you prefer to think of it as two universes which diverge or one which splits. I think reality is fundamentally mathematical, and mathematically these are both the same. There is no fact of the matter, there is only whichever perspective you prefer. From an objective perspective, the universe/multiverse is just an ordinary quantum system in a superposition of states. That’s what’s actually happening. How you choose to label it and describe it is up to you.
I’ve made similar points on the Level I post, where I argue that if I have an identical clone some massive distance away with exactly the same configuration of atoms, then there is no meaningful sense in which we are distinct, to the point where I don’t much care if one of them blinks out of existence as long as the other continues to exist (apart from the grief this would cause, which could be solved by having that earth blink out of existence instead). I consider both copies to share my identity. Similarly, if two universes are completely identical, then I don’t think there is any meaningful sense in which they are distinct universes. So a split can be seen as a divergence and vice versa.
I don’t know if the many “collapsed” states of the universe are countable. It probably depends on whether the universe is fundamentally digital (i.e. at Planck scales) or continuous and on whether space is infinite or not. If not then I’m not sure even the Level I multiverse has countable universes. I’m not good enough at mathematics to be able to give a clearer answer than that.
DM, I understand the point you’re making. (I almost included something like it in the post, but couldn’t figure out a way to keep it brief.) But still, we perceive the superposition up until a certain point, then we don’t. Per the MWI, that point may be where our brains go into superposition (relative to what’s being observed), or it may be something else, but it is definitely a change, an event.
I’m agnostic about the MWI, but I think assuming that the universe works this way, like all QM interpretations, throws an aspect of commonly accepted reality under the bus. In the case of the MWI, it’s counterfactual definiteness, and that’s a pretty major assumption.
Sure there’s a change or an event. The beauty of the MWI is that this change is exactly the same kind of change that we detect all the time when we let superpositions interact. The change is of no greater significance, in fact it’s perfectly prosaic and no different from what we can empirically prove happens all the time.
“In the case of the MWI, it’s counterfactual definiteness, and that’s a pretty major assumption.”
I don’t think so. There is no reason to take this intuition seriously, for the same reason that there is no reason to take seriously the intuition that the sun goes around the earth or that the earth is flat or that there is no universal speed limit. If our intuitions were false, reality would still look exactly the same and we would still have these false intuitions. As such, I think the major assumption is in fact in taking this intuition seriously in the first place.
MWI assumes only that we should take our experimental observations seriously and not postulate extravagant new unfalsifiable hypotheses such as objective waveform collapse in order to rescue this intuition.
Reality at the fundamental level is going to be pretty unintuitive. It is so with relativity, it is so with all interpretations of QM and it is so with the MUH. I’m sure there’s lots more where that came from.
For the other intuitions you list, we have conclusive evidence that they’re false. At this point, for the MWI, we have an interpretation of evidence that throws out an intuition that you, Tegmark, and others are comfortable dispensing with (counterfactual definiteness) in order to preserve one you’re less comfortable dispensing with (determinism).
As I said above, every interpretation has to throw some aspect of commonly accepted reality under the bus. I can’t see a reason (yet) to prefer one over the other. I hope there’s evidence that shines light on this in my lifetime.
I disagree that the motivation for the MWI has anything to do with preserving determinism. I’m quite comfortable with indeterminism.
Rather, the motivation is that the MWI seems to me to be the only coherent model that doesn’t postulate entities with no experimental evidence (the parallel universes in the MWI are nothing more than the ordinary superpositions of QM taken at face value).
I think, like so many things in science, it’s a matter of judgment which explanation is more coherent or taking things at face value. I can see the logic for the MWI, but I can also see the logic for some of the other interpretations. It wouldn’t surprise me much if the eventual answer doesn’t match any of them.
I can’t really see the logic for the other interpretations, or rather I can’t see how they can be deemed more plausible than MWI. I agree that the truth could be stranger than any of them.
Copenhagen interpretation: postulates objective waveform collapse on observation, which is a physical process for which there is no evidence and no clear proposed mechanism.
Bohm/de Broglie: postulates a distinction between particles and pilot waves, again which has no evidence.
There is evidence for the MWI in that it’s a straightforward consequence of the physics we have already established. The only way it could be false is if there is some new physical principle we don’t yet know about. The equations of QM we already have both entail the MWI and correspond with observations. It would take a modification of QM (such as equations to govern objective waveform collapse) to show it to be false.
You must tire of going through this with me but I’d love it if somebody could give me a good reason to reconsider this position. How can any other interpretation compete with the simplicity and power of the MWI?
No worries. That’s a lot of the reason I blog.
As in many of these discussions, I fear it won’t resolve until there is more conclusive evidence. Until then, this is something of a philosophical argument, which rarely has conclusive winners or losers. All I can give you is my perspective that spreading superpositions seem just as incredible to me as disappearing superpositions.
One seems far more simple and powerful to you, and I can totally see why you feel that way. But I’m also very aware that the universe doesn’t care which one seems more elegant to us. I’m reminded of the current hand wringing over SUSY, which despite its aesthetically pleasing mathematics appears to be in trouble.
I just want to clarify that I am not arguing that MWI is true, I’m arguing that it is the most plausible and parsimonious interpretation proposed to date. It could be wrong, but I can’t see any reason to prefer any other interpretation over it given what is currently known.
I don’t find disappearing superpositions inherently incredible. I only know that there is no evidence that they disappear and no model of how this might work. Given that superpositions spread, it requires additional assumptions to explain how this process is limited. What then is the reason to believe that they do? Bald intuition simply doesn’t count in my view.
It’s like a disagreement between Aristotle and Newton, where Aristotle argues that objects in motion actually do come to rest eventually if we observe them long enough. OK, says Newton, but what other than your intuition motivates this and how does it work?
On SUSY I have no particular view.