Note: I answer the questions asked here in a later post.
I’ve written about the bizarre nature of quantum physics many times, providing a lightning primer back in May on three major interpretations: Copenhagen, pilot-wave, and many worlds. The many worlds interpretation (MWI) is often summarily dismissed, sometimes with visceral shudders or high doses of outrage. I understand the discomfort. When I first read about the interpretation in detail, it seemed like an over-the-top extravagance, an attempt to solve the measurement problem by throwing the multiverse at it.
But as I noted in the primer, the original version of the interpretation by Hugh Everett is actually quite austere in what it postulates. It simply removes Copenhagen’s wave function collapse postulate, and then follows the wave dynamics described by the Schrödinger equation. The result is a deterministic and local explanation of experimental observations, but also all the additional extravagance many find objectionable.
Over time, most of my original objections to the MWI have been resolved. But one continues to be a nagging source of doubt: the question of where all the energy for this branching reality comes from. The most common explanation is that it’s simply the original energy of the universe constantly being diluted. But that assumes that energy is infinitely divisible. That could be true, but it seems discordant with the founding discovery of quantum physics: that energy is discrete.
One way to avoid this issue is to postulate that each interaction does not in fact result in a new world, but that all the worlds are already there. This appears to be the approach of David Deutsch.
Deutsch is a legendary advocate for the MWI. I know many of you find Sean Carroll’s aggressive selling of it objectionable. If so, I advise staying away from Deutsch. Deutsch leans heavily and unapologetically into multiverse terminology. His view is that the MWI is the only explanation of quantum physics that makes any sense, and he is pretty scathing in his assessment of the alternatives, along with the philosophies behind them.
But it’s worth noting that Deutsch’s version has some striking differences from the original MWI. He sees the particle, not the wave, as the more fundamental reality. The wave-like dynamics that we observe, even when only one particle is sent through the experiment at a time, result from interference with other versions of the particle in parallel universes, universes which are exactly like ours except for the position of this one particle.
In this version of the MWI, the “splitting” is not one reality becoming many, but the divergence of parallel realities away from each other. We don’t run out of other universes to have interference with because there are an infinite number of them, and so an infinite number of each variation.
(In some ways, this version of the MWI is similar to another interpretation developed some years ago: the many interacting worlds interpretation. The material for that interpretation doesn’t mention Deutsch, so the resemblance might be superficial.)
In this version, the energy concern disappears. The original energy isn’t being diluted because the other universes already exist. I have no idea if Deutsch gravitated to this version because of the energy issue; it’s just a benefit I see of his approach.
Unfortunately it seems to raise a couple of new serious questions.
If there are an infinite number of parallel universes out there, what led to their existence? In the original MWI, the worlds can be seen as a consequence of taking quantum physics seriously. In this version, they become an additional postulate, somewhat undercutting the MWI’s claim to parsimony.
And what controls which universes interfere with each other? If there are an infinite number, it can’t be all of them that are still the same, since that number would itself be infinite. Infinite interference seems like it wouldn’t allow the wave interference patterns we observe, or indeed any dynamics at all, just a solid block of whatever the particle is. That means it must be some subset, implying there’s a higher dimensional topology of some sort between universes, which itself seems like a new thing to be explained.
So I’m not sure how much Deutsch’s modifications improve the situation, or how directly motivated the additional postulates are from the data. It seems like looser speculation than the tighter extrapolation of the original.
So for me, the energy concern remains. It’s worth noting that all of the interpretations have their own issues, so I don’t see it as invalidating the MWI, but it’s what currently prevents me from regarding it as the default.
But maybe I’m missing something?
142 thoughts on “David Deutsch’s version of many worlds”
The conservation of energy is also my main objection to the many worlds interpretation.
What if the total amount of energy in the world is zero? A quick Google search pulled up an article with this quote, attributed to Hawking:
> “The gravitational field has negative energy. In the case of a universe that is approximately uniform in space, one can show that this negative gravitational energy exactly cancels the positive energy represented by the matter. So the total energy of the universe is zero.”
So I don’t think asking “where is the energy coming from?” is necessarily a legit concern if the world you’re creating has net zero energy.
The issue is that each new branch starts out at a particular time and location and expands from there. So although the energy balance in the overall universe might be zero, I’m not sure it would be in any particular branch. It might be eventually, with eventually being infinitely far into the future.
I wasn’t familiar with Deutsch’s interpretation, so this was interesting. I had many of the same questions you came up with at the end. How do you decide which subset of the infinite universes are allowed to interact, for instance? And also, since different wave functions can be written by humans to ask different questions about the same universe, how the hell do we make sense of that? One group is measuring momentum, and the other position. Doesn’t that make this astonishingly complicated?
On the issue of energy, it just doesn’t bother me really. Haha. Because if there are parallel timelines they could each have their own energy, right? And we can’t explain where the energy for our own timeline comes from. It’s just there. So, who’s to say every parallel universe doesn’t have its own supply? On the one hand I understand the objection, but on another it seems like applying a classical constraint to a clearly much wilder situation, and is there even a basis for that?
Maybe each parallel universe could balance the energy flows within its track in a classical sense, right? Or no?
The fact that various wave functions could be used for the same phenomena bothers Jim Baggott as well. It’s one of the reasons he cites for leaning toward antireal interpretations. My biggest issue with the antireal ones isn’t their antirealness, but that they lean on that antirealness to justify an incomplete account, one that allows them to avoid metaphysical costs. As pragmatic tools for predicting observations, they may work well enough, but I find arguments that they’re the final answer unsatisfying.
I think the energy issue is only a concern if we see new branches constantly being created, per the standard MWI. If the branches / universes all exist already, and we’re just talking about interference between them, then it’s not an issue. (But of course it brings in the other issues we both see.)
I’m not sure on the energy balance thing. It might depend on whether spacetime is ultimately quantum.
[Mike, do you have a source for where Deutsch considers Rovelli’s relational QM?]
I’m personally inclined toward an epistemic view of the wave function, sorta like the function for determining the orbit of a planet. You can develop an equation for the center of mass of a planet, and use that to determine the orbit, but that doesn’t mean there really is a thing which is the center of mass. We don’t need to have new theories about how a center of mass comes into being.
My source of information is Deutsch’s books. (Which I have to admit I’ve only partially read so far, mainly just to get his take on the MWI.) I just did a search for “Rovelli” in both of them but didn’t get any hits.
But Deutsch is pretty dismissive of antireal interpretations. He sees them as simply hiding from the implications of quantum physics. He uses the example of dinosaur fossils. We don’t regard dinosaurs as an interpretation of the fossil record. The fossils don’t only come into existence when the paleontologists dig them up. They’re not an accounting of the paleontologist’s interaction with the strata.
Of course, someone could respond that Deutsch is papering over the special difficulties associated with quantum phenomena. His response would probably be that the special difficulties amount to metaphysical implications many simply don’t want to be true.
The problem I see with the center of mass comparison is we know exactly what causes such a center, and all its effects. We understand how it maps to the physical reality. There are no causal holes in our account. That’s not true with an antireal version of the wave function. For the comparison to be valid, we would have to have no knowledge of planets or other mass bearing objects. We might then argue whether a planet actually exists or is merely an accounting device.
Actually, we don’t know exactly what causes a center of mass, because we don’t know what causes mass. As Philip Goff is fond of saying, we don’t know what matter is, only what it does, and one of the things it does is mass-type things. And Goff uses this “causal hole” to insert consciousness. Goff wants to say what it *is* is consciousness. Similarly, the wave function describes how electrons will interact with other things, but not what causes that interaction. Deutsch wants to say interference from other universes causes some of it. I don’t mind people saying *maybe* that’s the way it is, if they want to look for ways it makes a difference, i.e., something they can test. But it seems wrong to me to say that’s the way it probably is, given that the assumptions are both wild and not specifically necessary to explain what happens.
I suspect the problem is that it is really hard to let go of intuitions which we develop at the medium scale (our size) when considering the really small (quantum size). Ladyman and Ross get at this in their book “Everything Must Go”. The evidence is that we still talk about particles when talking about electrons, etc. Particles are the smallest things people can see, but they are still things with a surface, an inside and an outside. Electrons and protons and photons are not particles. They don’t have a surface or an inside and outside. All we know about them is they have a probability of interacting with other things. When we shoot electrons at a barrier with two slits, some will interact with the barrier, others won’t. But for the ones that don’t, that doesn’t mean they had to go through one slit or the other. That’s particle thinking. It just means that that particular barrier is gonna weed out some, but not all, of the electrons. And it turns out that the ones that tend to go through are of a type which, when they interact, there is a wave-like pattern regarding “where” they interact which is very different from a particle-type pattern. All of the electrons have that pattern, but oriented at different angles, so they would tend to go past barriers with slits at those angles.
The bottom line is that I think it would be more fruitful to try and figure out what kind of a thing would act like that (strings?) than to assume they really are particles and then have to come up with weird ways to make particles act like that.
I can’t say I’m much of a fan of panpsychist reasoning about matter. The way matter behaves doesn’t really seem to leave room for volition. Even if we grant quantum randomness, that randomness still seems to follow very rigid rules.
Our intuitions are definitely part of the problem. But in this context, many of the people saying that use it as an excuse not to provide a full or coherent theory. Often the implication is that we need to redefine what a scientific theory is. It’s not a description of reality, it’s about how we interact with reality. That’s fine as far as it goes, but we should be honest about what we’re doing, bracketing issues we either don’t have solutions for, or solutions that we’re not willing to accept. (Ironically because they severely violate our intuitions.)
As I noted in the post, I’m not committed to MWI, but for alternatives, I want real solutions, a full causal account, a priori justifications for a posteriori observations. If someone wants to say it’s unreasonable to expect that from quantum mechanics, then I think the onus is on them to explain why, particularly given that we have a class of theories that appear to do it.
I haven’t read Ladyman and Ross, but it sounds like they’re attacking strawmen. Everyone who’s done their homework and uses the word “particle” in this context understands we’re not talking about dust motes, but something that at times displays properties of a localized point, and at others of a diffuse and expanding wave. It’s fine to speculate that maybe it’s some kind of eleven-dimensional object that somehow fulfills both those roles, as long as it provides a coherent account. I’m not aware of anything currently in that category that solves the measurement problem.
The problem is that a solution that actually tries to solve this by describing reality is going to be very counter-intuitive. QM will not let us by without shaking our conceptions of reality.
Two questions: first, I’ll assume you are one of the people who think electrons at times display the properties of a localized point, at other times the properties of a wave. What do you mean by displaying? I’ll assume you mean that these properties are displayed at all times, but how you measure them determines what type of property is being displayed. So for example, do you think sometimes the electron goes through one of the slits, and at others it goes through both? If I cover the ends of a baton with paint, twirl it over my head, and then throw the twirling baton at the wall, it will bounce off the wall leaving at least one point-like paint mark, possibly two. Is the baton then displaying the properties of a localized point, possibly two localized points?
Second question: what is your understanding of the “measurement problem”? I honestly don’t know what that refers to.
Final comment: you say you want a full causal account. Do you realize that you can never, in principle, get a full causal account? Sometimes you can get a better account. At one point water was a black box, but we opened the box and found oxygen and hydrogen atoms. Those were black boxes, until we opened them and found more black boxes (protons and neutrons). It will, by necessity, be black boxes all the way down until we reach a box we can’t open because there is not enough energy in the universe, and then we will be stuck with that black box, and we will never have a causal account of that black box. All we will have are the equations that describe what it does, and to ascribe some intrinsic nature to that box, trying to explain how it does what it does, is pointless folly.
By “display” above, I meant produces experimental results consistent with that form.
The properties are not displayed at all times. Prior to the measurement, there appears to be a wave dynamic. After the measurement, we have a localized entity that we refer to as the “particle.”
In the two slit experiment, it can be done where only one particle is fired at a time. When that happens, we still get the diffraction pattern. That implies that, when manifesting wave properties, the particle does go through both slits and interferes with itself on the other side. Of course, put a detector at either of the slits and the wave dynamic disappears, at least after the detector. Then the particle only goes through one slit, as a proper particle-like entity should.
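The difference the detector makes can be sketched numerically. This is just a toy illustration of mine: the wavelength, slit separation, and screen distance are arbitrary numbers, not from any actual experiment. The key point is that without which-path information the complex amplitudes from the two slits add before squaring, producing fringes; with a detector at a slit, the probabilities add instead and the fringes vanish.

```python
import cmath
import math

# Arbitrary illustrative parameters (not from a real experiment).
WAVELENGTH = 1.0
SLIT_SEPARATION = 5.0
SCREEN_DISTANCE = 100.0
K = 2 * math.pi / WAVELENGTH  # wavenumber

def amplitude(slit_y: float, screen_y: float) -> complex:
    """Complex amplitude for the path from one slit to a screen point."""
    path = math.hypot(SCREEN_DISTANCE, screen_y - slit_y)
    return cmath.exp(1j * K * path) / path

def intensity_no_detector(screen_y: float) -> float:
    # No which-path information: amplitudes add, then we square.
    a = amplitude(+SLIT_SEPARATION / 2, screen_y)
    b = amplitude(-SLIT_SEPARATION / 2, screen_y)
    return abs(a + b) ** 2

def intensity_with_detector(screen_y: float) -> float:
    # Which-path information recorded: probabilities add,
    # so the cross (interference) term disappears.
    a = amplitude(+SLIT_SEPARATION / 2, screen_y)
    b = amplitude(-SLIT_SEPARATION / 2, screen_y)
    return abs(a) ** 2 + abs(b) ** 2

# Sample the screen: the no-detector curve oscillates between bright
# and nearly dark fringes; the with-detector curve is smooth.
ys = [i * 0.5 for i in range(-80, 81)]
fringes = [intensity_no_detector(y) for y in ys]
smooth = [intensity_with_detector(y) for y in ys]
```

At the screen’s center the two paths are equal, so the no-detector intensity is exactly twice the with-detector value; away from center it oscillates down to nearly zero at the dark fringes.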
The problem with the baton analogy is that doesn’t match the results. The baton isn’t a wave and can’t go through two slits concurrently.
On the measurement problem, it’s the issue that quantum entities (particles, etc.) have wave-like dynamics until they are measured, then they behave particle-like, an apparent shift in the physics. The act of looking at it, or measuring it, changes the system, apparently causing the wave to collapse to a particle. (Unless we carefully hide the evidence, in which case nothing changes.) But if you’re completely unfamiliar with it, the best thing I can do is recommend a video.
When I say I want a full causal account, I don’t mean down every level of abstraction to base reality (if there even is such a thing). All I mean is I want to know why each step takes place without reference to brute force a posteriori facts. Such an account might still have self contained black boxes. Although I hope we open as many of those boxes as we can. And we should be open to the possibility that there may be many ways of opening them, even if a brute force option isn’t in the cards. No one thought the EPR paradox would ever be testable until John Bell found a way.
So you are missing the point of the baton analogy. I’m not saying it can go through two slits. I’m saying it has point like properties and spinning baton like properties. If the only way we can ever measure it is by paint on a wall, the baton will look a lot like a tennis ball. But in no sense does the baton collapse into a tennis ball when it hits the wall. You’ll be able to tell the difference, though, with a single slit experiment. If the slit is not oriented in the same direction as the spinning baton, the baton won’t go through. Also, if your baton gun randomly spins the baton clockwise or counterclockwise, depending on the velocity of spin and velocity out of the gun and distance to the wall, you will get a weird pattern: you will get two equal zones of point hits, oriented the same way as the slit. You can shoot the batons one at a time, and they will look like tennis ball hits, except they will have two clusters instead of one.
The point is, the batons do not collapse into tennis balls when they hit, but if you squint and look sideways, it might look like they do, because everything really small looks like a tennis ball, because all you can measure is point-like spots on a wall.
So again, we should be trying to figure out why electrons (and other things) interact the way they do, and saying they collapse into point particles is like saying batons collapse into tennis balls. It seems a pretty extreme explanation.
Well, if you can find an actual shape or other coherent entity that can fulfill all the observations, as Al-Khalili says in the video, there’s a Nobel waiting for you. Positing that there should be something like that is the easy part. Actually identifying it is another matter. As I noted to Eric, coming up with these interpretations in a way that is compatible with all the evidence is far harder than it looks.
If I can break people out of the intuition that particles play any part, that will be a good start.
In traditional MWI, there is no particle, just the wave. The particle dynamics appear due to the fragment of the wave that becomes entangled with the environment in the branch we’re in.
So mission accomplished! 🙂
> The way matter behaves doesn’t really seem to leave room for volition.
That’s not what panspychism says. There is no volition.
There’s a lot of versions of panpsychism. All they really have in common is the idea that consciousness can be explained as a brute contingent fact about the universe that we live in — that it just so happens that this universe can support consciousness, but that it is logically possible for a physically identical universe not to host consciousness at all (and instead to host p-zombies).
Then you get into the different ways this idea could work.
One of the most popular ways proposes that particles, as well as bearing mass and charge etc, bear the building blocks of qualia. This is sometimes described as particles having proto-consciousness, but I think that description does more harm than good. The idea is really just that it is possible for particles to come together in such a way as to create consciousness not in virtue of their functional properties but in virtue of these panpsychist properties they have (what I referred to as the building blocks of qualia). In a universe where particles did not bear panpsychist properties, there would be no consciousness even though everything would still behave the same way from an objective point of view.
Another way it could work (this is being touted by Philip Goff at the moment) is that the fields of physics (Higgs field, electron field, electromagnetic field etc) rather than particles bear these panpsychist properties, and these properties can create raw qualia in virtue of the interactions and fluctuations between the fields. These experiential qualities pervade the universe in what must be a chaotic mess, but in certain circumstances (i.e. brains, or perhaps other systems with high phi in the IIT sense) they can give rise to new subjects which put some order on the mess and experience them as consciousness.
Either way, panpsychism properly understood does not propose that particles have little minds and that they make decisions affecting behaviour. Particles are almost always supposed to behave just as standard physics predicts.
I’m not a panpsychist, but I don’t find it as crazy as some people think. I think the problem with panpsychism is only that we don’t need to take qualia seriously as real phenomena in need of an explanation. If a p-zombie would believe itself to be experiencing qualia, then I don’t think we have any reason to believe we are not p-zombies. That’s really the beginning and the end of the problems with the broad idea of panpsychism as far as I’m concerned.
I realize panpsychism is a variety of views, and I do try not to strawman them. The versions I find more plausible are the ones that simply take a certain perspective on what consciousness is and find it in the simplest things. It’s not a view I find productive. Whatever its subject matter, it’s not the one I’m interested in. It’s talking about a version of consciousness that seems to lack emotion, perception, attention, or memory. But it isn’t a view I can say is necessarily wrong.
But the stronger view, a view I often call pandualism, posits that there is that something extra, as you note, an extra fundamental force or something. That one I do think is wrong. But as you said, it’s not as easy to dismiss as many would claim. Still, I think it’s an inherently epiphenomenalist view, which again I don’t find productive.
But in the end, philosophers have made a hopeless definitional mess with consciousness, with the result that it lies in the eye of the beholder.
If I had to choose a single book which has influenced my life more than any other over the past 20 years it would be David Deutsch’s Fabric of Reality. It will not surprise you to hear I found the discovery of the thoughts of Frank Tipler an important and gripping part of the book. As in every corner of science, Deutsch and Tipler have their detractors. Which is amusing given that the state of our knowledge, much as it may have advanced in recent years, remains small indeed compared to what we do not know. Until and unless the MWI is proved or disproved I will retain my fondness for it. As for Tipler, his imagination is astounding. Any reader of science fiction will relish the idea of these powerful gods living in a gap of infinity at the very end of the universe. Again, in the absence of any evidence either way, I choose to believe such gods are a strong possibility. Somewhere, sometime, some place.
I haven’t gotten to those parts of the book yet. (I basically just read enough to get Deutsch’s version of the MWI.) I will note that I found Tipler’s paper on how locality is preserved in the MWI enlightening, and it seemed fairly grounded.
Skimming the parts of the book where Tipler is discussed, it seems like Deutsch is good with his straight physics, but sees his mixing it with theological musings as unfortunate. Apparently the Omega Point is tangled up with the big crunch, which was still viable cosmology in the 90s when the book was published. I wonder how he reconciled his views with dark energy.
The overall scientific assessment on Tipler is pretty harsh. The consensus seems to be that he started as a productive physicist but devolved. Even Deutsch is careful to only support his physics.
[For what it’s worth, Deutsch’s “Beginning of Infinity” had much more influence on me, giving me reason to be optimistic about the coming Singularity. That and his Constructor Theory, which provides a pretty good basis for understanding ontology, causation, all the way up to the psychule (consciousness).]
Loved that book too.
Does Deutsch give new equations for this, or is he just re-interpreting the Schrödinger equation? If the latter, I wouldn’t necessarily worry about it. Unless his explanation gets really awkward at some point, it isn’t likely to mislead, just provide another way to think about the same theory.
And if it’s the Schrödinger equation, the strength of interference between two “universes” is just given by the inner-product of their wave functions. I think.
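For what it’s worth, that inner-product claim can be illustrated with a toy discretization. Everything here (the Gaussian packets, the grid, the separations) is my own arbitrary choice, not anything from Deutsch: two “branches” whose wave functions still overlap have a large inner product and can interfere, while branches that have diverged far enough have an inner product that is effectively zero and evolve independently.

```python
import cmath
import math

def inner_product(psi1, psi2, dx):
    """Discrete approximation of <psi1|psi2> = integral of psi1* psi2 dx."""
    return sum(a.conjugate() * b for a, b in zip(psi1, psi2)) * dx

def gaussian_packet(xs, center, k):
    """Normalized Gaussian wave packet at `center` with plane-wave phase k*x."""
    sigma = 1.0
    norm = (1.0 / (math.pi * sigma ** 2)) ** 0.25
    return [norm * cmath.exp(-((x - center) ** 2) / (2 * sigma ** 2) + 1j * k * x)
            for x in xs]

# A one-dimensional grid wide enough to hold all the packets.
dx = 0.05
xs = [i * dx for i in range(-400, 401)]

psi_a = gaussian_packet(xs, center=0.0, k=1.0)
psi_b_near = gaussian_packet(xs, center=0.5, k=1.0)   # still overlapping
psi_b_far = gaussian_packet(xs, center=15.0, k=1.0)   # effectively diverged

overlap_near = abs(inner_product(psi_a, psi_b_near, dx))
overlap_far = abs(inner_product(psi_a, psi_b_far, dx))
```

The near pair’s overlap is close to 1 (strong potential interference), while the far pair’s is vanishingly small, which is the same mathematics that makes decohered branches unable to interfere.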
He doesn’t get into any of the mathematics or Dirac notation in the books, and he always presents his view as the Everett one, so I think he sees it as a description of that view. I don’t doubt it’s compatible with the Schrödinger equation. But it does feel like an add-on to it. (Not that I understand it enough to make that statement with any authority.)
I guess there’s some serendipity here in that Mike puts up a post on one of the “Many Worlds” QM interpretations, since I’ve spent the morning pondering Johnjoe McFadden’s ideas about how QM might be crucial to various biological processes. As Mike knows, I consider proposals of countless universes to potentially explain human ignorance regarding Heisenberg’s uncertainty principle to be ridiculous. To me this doesn’t even seem all that imaginative. So if things don’t make causal sense to us here, then just theorize that other universes exist which balance things out? And I guess with this proposal we’re even talking about “infinite universes”. But infinite? As in everything possible not only will happen, but must eventually happen an infinite number of times? Now that’s silly!
Anyway quantum mechanics is an extremely well verified element of science. The tricky part comes in when people interpret Heisenberg’s principle ontologically rather than epistemically, which is to say differently than all other established ideas in physics. Note that rather than make all sorts of fantastic proposals, my own QM interpretation simply states this: Either QM reflects magic, or we simply don’t grasp what’s causally happening.
What’s wrong with a naturalist conceding ignorance rather than instead proposing countless (or infinite) universes as an explanation?
Anyway here’s my post about a guy who I consider to be saying reasonable things regarding quantum mechanics. https://physicalethics.wordpress.com/2020/09/20/quantum-biology/
It’s worth noting the things people once considered ridiculous:
the spherical Earth
Actually, with those precedents, perhaps the best compliment someone can give a theory is to dismiss it, not for lack of evidence or logic, but simply because they find it ridiculous.
Of course it could be that if I had some evidence for many worlds, or logic, then I’d reconsider. Do you know of any?
What I do see as logical, however, is that just as many theorize gods to be responsible for everything which isn’t understood, some scientists tend to do the same sort of thing when they don’t understand what’s going on. As I said above, the problem here should be interpreting Heisenberg’s wonderful principle ontologically, unlike all other theories in physics. Instead I think we should simply interpret the HUP epistemically as is standard. Clearly it’s a useful idea, but once people begin thinking of it as “true”, and so decide that associated magic must be fought, that’s when actual magic should tend to be proposed. Shouldn’t “infinite universes” fit the bill of “magic”?
Just because you posted this doesn’t mean that I think you believe that there are all sorts of universes out there. I won’t think that unless or until you say you do.
Regardless, what’s wrong with my own QM interpretation? What’s wrong with saying that QM either reflects supernatural dynamics, or it’s natural though we idiot humans don’t grasp what’s causally going on? For example, couldn’t it be that there are more than four dimensions to existence, and that these other rarely perceived dimensions create the perceived QM variability? Or perhaps there are other options less extreme than proposing countless universes to account for our variability in measurement?
The standard argument for the MWI is that the Schrödinger equation has been verified through innumerable experiments for almost a century. So the dynamics it models are very real. What hasn’t been verified, not even once, is any objective wave function collapse. Not once. Ever. So, if we keep the part heavily supported by evidence and dispense with the part there is no evidence for, what happens?
Our first thought is that we don’t see what the Schrödinger equation seems to predict: particles as waves branching out. We only detect the localized particle. But what Everett pointed out is, if we keep following the Schrödinger equation, the branching spreads and becomes entangled with the atoms of the detector, which magnifies its effects, which also spread as a wave, becoming entangled with the laboratory, and eventually the scientist. The scientist is only aware of one outcome, because the scientist has branched as well. And so it continues, with each branch of the outcomes now causally isolated from the others, in effect, in their own separate world.
The Heisenberg Uncertainty principle makes sense when you realize we’re dealing with waves, where the idea of both a precise position and momentum isn’t coherent. So it is real. Sorry.
As to what I believe, I think the MWI is a real possibility. As I noted in the post, it’s taken a good amount of investigation to convince me. It’s far harder to dismiss on logical or empirical grounds than it seems.
I’m not committed to it, but I do see it as a promising theory.
On your interpretation, you haven’t provided one. You’ve thrown out some aspirations for one, and some vague speculation. Anyone can do that. But coming up with a real, rigorous, carefully thought out interpretation compatible with all the available evidence is very hard. In principle an amateur might be able to do it, but they’d have to know at least as much as a physics graduate student on their way to a PhD.
Not sure you should be focusing so much on Heisenberg’s Uncertainty Principle. The HUP is a core part of QM generally and has little to do specifically with the MWI; in particular, I don’t think the MWI is at all motivated by it. As Mike explained, the MWI is motivated simply by parsimony, by the fact that we have no reason to suppose that the wavefunction collapses, because the world would look just as it does even if it didn’t. You have to invent new physics for which there is no empirical evidence in order to avoid the MWI. The only thing motivating such invention is evolved human intuition, which is not equipped to deal with this sort of thing.
Also, I think it’s a mistake to regard the HUP as epistemic. That would imply that particles have hidden variables, which is ruled out unless you want to imagine that particles communicate instantly across arbitrary distances, and even backwards in time, so as to conspire to give the impression that there are no hidden variables. Instead, the HUP is a reflection of standard wave mechanics (Fourier transforms, etc.) and has analogues even in classical systems. I can’t remember the details, but I think Sean Carroll gives a pretty convincing account of this in Something Deeply Hidden.
This comment from here (https://news.ycombinator.com/item?id=21672193) gives what may be a good analogy:
> it’s impossible to have a wave whose position and direction are simultaneously well defined. If you throw a stone into a pond you will get a circular wave front with a well-defined position (where the stone landed) and a completely undefined direction. Likewise, you can get a wave that has a well-defined direction (as you do along a seashore, where the waves move perpendicular to the coastline) but to make that happen the wave has to be very wide. It’s impossible to make a wave (in open water) that is simultaneously well-localized and moving in a particular direction.
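The Fourier tradeoff behind all this can be checked numerically. Below is a toy sketch (assuming numpy is available; the grid sizes and packet widths are made-up illustration values): it builds a Gaussian wave packet, measures its spread in position, and measures the spread of its Fourier transform in wavenumber. Squeezing one always broadens the other, and for a Gaussian the product of the two spreads sits at the minimum value of 1/2.

```python
import numpy as np

def spreads(sigma_x, n=4096, width=40.0):
    """Position and wavenumber spreads of a Gaussian wave packet."""
    x = np.linspace(-width / 2, width / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma_x**2))          # Gaussian amplitude
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize

    prob_x = np.abs(psi)**2 * dx                    # position distribution
    sx = np.sqrt(np.sum(prob_x * x**2))             # std dev in x (mean is 0)

    phi = np.fft.fft(psi)                           # wavenumber-space amplitude
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)         # wavenumber grid
    prob_k = np.abs(phi)**2
    prob_k /= prob_k.sum()                          # wavenumber distribution
    sk = np.sqrt(np.sum(prob_k * k**2))             # std dev in k (mean is 0)
    return sx, sk

for s in (0.25, 0.5, 1.0):
    sx, sk = spreads(s)
    print(f"sigma_x = {sx:.3f}, sigma_k = {sk:.3f}, product = {sx * sk:.3f}")
```

Running it for several widths shows the product staying pinned near 0.5 no matter how the packet is squeezed; that tradeoff, a property of waves rather than of measurement disturbance, is what the HUP expresses.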
Mike and DM,
I’m not saying that I know a lot about this stuff, but sometimes “knowing what you know” rather than following the advice of various charismatic experts, can at least steer a person to less wrong positions. Let’s take this from the beginning and then you can tell me if you think I’m missing something.
Evidence suggests that matter functions both as a wave and a particle. This is to say that the more exactly we measure it in one regard, the more that we will be confounded in the other. This could either be because there is an associated void in causality (effectively “magic”), or rather because certainty does exist here, though we don’t understand what’s going on. There isn’t a third possibility as far as I know. Do you agree that things either function with perfect causality and so are determined, or rather with some level of magic?
Some may then say, “I know what must be going on here, and it is deterministic. Every time something ‘variable’ happens in a quantum sense, this must actually be the birth of a new universe that has a momentary effect upon ours and then splits off to forever be independent. Each option will deterministically be accounted for, since an extra universe becomes created.” I agree that this scenario isn’t impossible, but that’s not saying much, since gods are also possible. Furthermore, I’ve got to think that any theory which proposes that jillions of Erics arise each second to spawn jillions of fully unique universes would be quite the antithesis of a parsimonious explanation for human uncertainty in measurement. This is not a simple explanation!
Instead of accepting the HUP ontologically, unlike F=ma or any other theory in science, let’s now interpret it epistemically. This is to say, let’s interpret it as a potentially useful heuristic. Here we use the theory as a reasonable approximation but don’t presume it to be “exactly true”. Thus we decide that if there is an inherent variability to the function of reality, then causality fails and thus the HUP also demonstrates “magic”. That’s possible, and science itself becomes obsolete to the extent it’s true. Or alternatively we decide that there is no such variability, and so it doesn’t display magic, though we don’t quite grasp what’s going on. My own metaphysics puts me in that camp.
Are either of you able to coherently fault this assessment?
I think you’re basically arguing for the antireal interpretations here. I actually don’t have that much of an issue with those, except when people pretend like they’re the final answer. There’s nothing wrong with bracketing difficulties we can’t solve yet. It’s epistemically cautious. On the other hand, I think many are attracted to the antireal interpretations because, by being incomplete, they avoid the metaphysical costs of the interpretations that aim at realism, which seems like a less virtuous motivation.
Honestly, I’m unhappy that the division between these theories is described as real vs antireal. You can be an instrumentalist and still seek complete accounts. That’s the real difference between these categories: those that aim for a complete account, and those that don’t attempt it, that only attempt to model our interactions with reality rather than reality itself. Again, if we’re honest about what we’re doing, that’s fine. The problem is many present these incomplete accounts as final answers.
But the issue is that any interpretation that does aim at a complete account, is going to have metaphysical costs. It’s trivial to criticize them for those costs. But coming up with one that avoids them is another matter. MWI is deterministic and local. Yes, it has the cost of an ever expanding reality. But your other real options generally involve things like non-locality, objective collapses, or retrocausation. All of these alternatives to MWI would require revisions to known physics.
Quantum physics isn’t going to let us keep the classical world. We can bury our heads in the sand with antireal theories and pretend we’ve avoided the dilemma, but I’m not sure we’re doing anything but delaying the inevitable.
How well do you understand Bell’s inequality? I think you really need to absorb the lessons from it before you assume that it makes sense to take quantum mechanics as an epistemic framework for managing uncertainty about hidden variables. In short, it makes it really difficult to get a hidden-variables theory to work. As Mike says, you need to make all kinds of alternative outlandish assumptions. You have to pay the weirdness tax somewhere: if not with the MWI, then with something equally weird.
Second, there’s the issue of what constitutes magic. To me, supernatural magic (as opposed to stage magic!) can only refer to rules in operation that are vague and cannot be expressed mathematically or simulated. Magic often makes sense at an intuitive level, but if you try to examine it in detail, it’s clear that the rules are insufficiently precise. For instance, if there is a magic rule that the words “Open Sesame” will cause a large boulder to roll and reveal an entrance, then you could ask questions about how we judge those words to have been uttered. Is there a speech recognition algorithm embedded in the universe? Does the speaker’s accent matter? How much volume is required? How distant can the speaker be from the boulder? What if there’s a lot of interfering noise in the environment? With what force does the boulder move? And so on. The rule is vague.
Quantum mechanics is not vague. Even in nondeterministic interpretations, the probability distributions are clearly defined, and so can be simulated. This is not magic, and so I have no problem with it apart from my commitments for other reasons to the idea that everything that can happen does happen.
> would be quite an antithesis of a parsimonious explanation for human uncertainty in measurement.
In my view, parsimony should be about not making unwarranted assumptions rather than how many entities your model predicts. The idea that the wavefunction collapses is an unwarranted assumption.
It sounds like you don’t object to my position, though I’m not sure that I’ve illustrated it to you fully yet. I consider virtually all theories in science to be provisional, or to exist as potentially useful approximations. The only theory in science which one should ever know to be true, I think, emerged with the great René Descartes: “I think, therefore I am”. This is to say that you can be perfectly certain that reality does contain you, an experiencer of qualia, while all else must always remain provisional to you. Thus Heisenberg’s uncertainty principle should not be considered true, but rather a useful approximation like all others except for “Cogito, ergo sum”. So it should go without saying that I don’t consider my epistemic interpretation of QM to be a final answer. It’s provisional.
Secondly, the only interpretation that I take off the negotiation table is the existence of perfect causality itself. The reason that I demand this is not because of anything ever shown in physics, but because of my own metaphysics. If causality does fail, then nothing would exist to even potentially understand in that regard. Thus science should be rendered obsolete to the magnitude of such failure. So as a naturalist I demand that everything which occurs be causally “of this world”. Anyway, other than causality (which I presume entails determinism), I’m fine with the alteration or abandonment of everything else in physics, given adequate experimental corroboration.
In the end my problem with the many worlds interpretation lies squarely with its stated implications. I simply cannot square an explanation rendering QM deterministic, when the cost is oodles of new universes somehow materializing out of ours. In truth I’d be more comfortable with the regular magic of quantum mechanics referencing an ultimate void in causality. To me that at least seems like far less displayed magic.
I do take your point about Bell’s inequality. These experiments always show quantum uncertainty to prevail. Furthermore, many physicists seem not to mind this apparent void in causality. I’ve also noticed some get angry with me when I imply that their interpretation may effectively be termed “supernatural”. This has mainly been over at Massimo’s, though at Backreaction a bit as well. Apparently Mike agrees, since otherwise he’d have no reason to get into MWI speculation.
I think we may be using the “epistemology” term a bit differently. For me this essentially represents the beliefs of something conscious. Then an ontology would be what exists beyond any belief, or reality itself.
On an assured weirdness tax, I don’t consider myself smart enough to say that all explanations must seem weird to me. One thing that I don’t consider all that funky is the potential for extra dimensions to exist which we only perceive given quantum measurement. An “entanglement dimension” for example should have no problem causally affecting particles at opposite ends of the universe. Perhaps you or Mike can tell me why that’s hopeless speculation? As a strong naturalist however I do suspect that some such answer must exist.
On magic, perhaps I don’t go with your computer simulation definition because I’m not much of a computer guy? Anyway, if an “effect” occurs without a cause impelling it to, or a “cause” occurs that isn’t realized because no effect transpires, then wouldn’t you say that such things could usefully be referred to as “magic”? Here the future wouldn’t be determined in the end. Anyway for me there would need to be causal dynamics by which a rock moves when I utter “Open Sesame” or whatever. The right technology would suffice for example. Like you I don’t consider QM magic, but can’t say that I grasp what’s causally happening.
> In my view, parsimony should be about not making unwarranted assumptions rather than how many entities your model predicts.
The difficulty there is in deciding what is and isn’t “warranted”. At least people with different perspectives might agree that certain explanations are simpler than others. Newton’s equations should tend to be considered more parsimonious than Einstein’s, even if they’re less accurate.
“Secondly, the only interpretation that I take off the negotiation table is the existence of perfect causality itself.”
“I simply cannot square an explanation rendering QM deterministic, when the cost is oodles of new universes somehow materializing out of ours. In truth I’d be more comfortable with the regular magic of quantum mechanics referencing an ultimate void in causality.”
These statements seem contradictory. You demand a causal deterministic interpretation. When provided with one, you say you’d prefer a magical one. Your other option is pilot-wave theory, which is deterministic although not local. (There are a couple of others on the Wikipedia comparison chart, but I’m not familiar with them. One of them sounds like a special case of MWI. For that matter, Deutsch argues that pilot-wave is MWI in disguise.)
Your remaining option is to simply hold out for something less unsettling and fall back on antireal theories until then. Personally, when making that move, I’ve never seen the benefit of going beyond instrumental Copenhagen, but many prefer others.
It sounds like you’ve got me pegged reasonably well Mike, except let me clarify one thing. The reason that I don’t consider there to be any contradiction between the two statements from me that you noted, is because I see the many worlds interpretation as something which doesn’t use causal explanations to naturalize an otherwise magical situation, but rather uses more magic. As I understand it things are proposed to become determined here by means of full universes that spring from essentially nothing. That’s what I consider supernatural — full universes springing from essentially nothing.
The thing to understand is that the other universes are not postulates of the MWI, but consequences of it. They result from simply assuming that the dynamics modeled by the Schrödinger equation continue. Nothing else needs to be added. You actually have to add additional postulates, such as a wave function collapse, to prevent the other universes.
I’m not sure that classifying these other universes as “consequences” rather than “postulates” does much to address my concerns. In the end those universes are part of what’s being proposed, and the very thing which I consider ridiculous. As an extreme naturalist who is grounded upon the premise of epistemological solipsism, I simply do not permit myself to go along with such extravagant claims. I think therefore I am, and so go on to build my beliefs by means of progressively more complex evidence based models. But to then decide that a given example of human ignorance might adequately be overcome through the existence of infinite universes which affect ours ever so slightly and yet fundamentally each moment… I can’t even imagine what phenomenal evidence it would take for me to believe such a thing. Surely McFadden simply goes on with his work and smiles at this entire side circus.
Well, if there’s no logic or evidence that would ever move you, then it’s not productive for me to keep offering any.
Yes Mike, that does seem like a reasonable assessment to me. The point of our conversation as I see it, however, has been that perhaps one shouldn’t believe in ideas which require “many worlds” in order to be true.
The interpretation does not “propose” many worlds. It only proposes that there is no wave function collapse. If you accept that, then the rest follows.
Okay Mike, I’ll rephrase to remove the “propose” term for many worlds. It’s the “…then the rest follows” part that puts me off. Whether directly proposed or simply an implied result of what is proposed doesn’t matter all that much to me. Surely the implications of a given law, for example, matter even when those implications are not explicitly stated. I wonder if even Hugh Everett would have backed his own interpretation if he’d grasped that it meant countless (or infinite) additional “universes” would need to exist to affect ours? So this is quite like a worthy thought experiment, as in “If [some idea] happens to be the case, then [various implications] should exist.”
So are you saying you would have judged special and general relativity according to their consequences, which are bizarre, and rejected them on that basis? Many rejected Copernicanism because it implied we weren’t at the center of the universe, or that the stars were unimaginably distant. Many rejected Darwinism because of its implications. Were they right to do so?
You often say that scientific theories should be assessed without taking into account their moral consequences. But aren’t you committing a similar sin when you judge a theory because it has consequences you simply dislike?
Everett was still around when DeWitt and others started using the many worlds description. (He left academia to make a fortune in the defense industry, and lived until 1982.) He didn’t object to the terminology. It sounds like he actually adopted it. There probably are some variants (like Deutsch’s) he might have had issues with, but he understood much of it was just a flamboyant way to describe his theory.
So Everett liked the “many worlds” implication of his theory? Good to know. Is it flamboyant? Well to say the very least.
Anyway I’m of course not saying that if I existed before all sorts of now verified scientific theory, that I would have rejected the ones with implications that I disliked. I do realize this to be quite standard in general given that we’re all self interested products of our circumstances. It’s not blatantly my own modus operandi as far as I know however.
If it’s true that virtually infinite (or actually infinite) universes emerge from ours each instant (or perhaps already exist as full universes), and slightly kiss our universe to serve as the other half of any given QM dichotomy, I’m saying that I cannot grasp what evidence of this perfectly true truth could even conceptually be provided to me. Here we have bazillions of universes which don’t exist in any of our functional dimensions, such as time or space, but rather in some kind of “full universe dimensions” that effectively explain nothing more than perceived QM randomness. If these universes do exist, can you think of a way, even conceptually, to demonstrate their existence? Or as far as you can tell, must adherents always be doomed to rely entirely upon their faith?
This situation may be contrasted with a given theistic belief. If some preacher were to claim that God tells him how things are, at least this is something which might be testable. And if the person were to continually predict all sorts of events which science fails to, then at some point even I’d become a full believer!
It’s interesting that you bring up my skepticism for novel ideas in science, when our discussion of quantum biology over at my site has you as the skeptical party while I’m the liberal. https://physicalethics.wordpress.com/2020/09/20/quantum-biology/#comment-260
For testing the MWI, first we have to note that its dynamics are those of the Schrödinger equation, which is perhaps the most tested equation in science. The MWI argument is that the dynamics continue without an objective wave function collapse.
The MWI could be falsified by the discovery of any kind of actual objective collapse. Physicists are constantly holding larger and larger objects in superposition (they’re up to molecules with several thousand atoms). The larger they can go, the more likely it is that one of the non-collapse theories is correct.
For more positive evidence, to Lee Smolin’s point that Michael and I discussed, decoherence isn’t an entirely complete and absolute process. There should remain remnant interference between decohered branches (universes). It should be possible, in principle, to detect these someday. (I’ve seen it described as like trying to detect the effects of Jupiter’s gravity on the ISS, possible in principle, but extremely difficult.)
The problem with quantum biology, as I noted in that thread, is that I’m not aware of any logical account for how quantum superpositions are maintained in biological processes long enough to make a difference. It’d be different if there were evidence to force the matter.
Biology does take some quantum events and magnify them, such as when a photon strikes a photoreceptor on our retina, so it’s possible there are more of those kinds of interactions.
That the Schrödinger equation works in a practical sense is not in dispute here. It’s like Heisenberg’s uncertainty principle: both are places to begin from and potentially explain. So verifying either of them in larger and larger structures does nothing to support the premise that “many worlds” happen to be responsible for their effectiveness.
I see from your discussion with Michael that you said, “Multiple physicists have also said it’s possible, again in principle, to detect remnant interference between decohered branches, that is between universes, but not with current technology.” But this seems a bit like claiming that Christianity is theoretically falsifiable because with future technology Jesus Christ himself could objectively be detected. Until such theorized technology does become developed, in either case we’ll need to get along by means of faith rather than reason.
If I don’t know enough about physics to convince you that theorizing the existence of other universes to explain QM funkiness provides an unfalsifiable solution, then there is also the opinion of Sabine Hossenfelder. As I recall she gave Carroll’s book a favorable review by describing it as a good introduction to QM, but still assessed his “many worlds” position to be unfalsifiable.
In any case this discussion is not about you potentially convincing me that many worlds provides a valid potential explanation, since without theorized future universe-detecting technology, I don’t know what evidence you might use to convince me of this. Instead this is about me potentially convincing you that a butter-smooth salesman with a popular podcast may have gotten the better of you in this regard.
On quantum biology, clearly wherever things make no sense to us, such as how a photon of light might be converted into plant energy with near perfect efficiency, then it’s fitting to check whether quantum dynamics like tunneling, superposition, and entanglement, might be at play. And in the end you may be right that such influences will not exist long enough to be effective in biology. At this early stage however, these questions do remain for scientists to explore.
I think one thing you may be missing is that there is no fact of the matter as to whether Carroll is right or Deutsch is right. They are using different ways of explaining the same idea. This is what I’m proposing anyway; I’m not 100% sure they would agree.
Whether there is one world which splits in two or two identical worlds which diverge is in my view just a question of how you look at it. Either option is as reasonable as the other, and it comes down to personal taste which description you prefer. I suspect the same is true of whether the particle or the wavefunction is more fundamental. As long as the mathematics is the same, anything else is just a human interpretative gloss and not something that is either true or false. It’s just a way of thinking.
I’m not sure why conservation of energy should be a concern, because I see no a priori or logical reason to expect that energy be conserved in world-splitting scenarios. The only reason we have for thinking that energy must be conserved is our empirical observations of what happens within a particular world, from the point of view of a particular observer. But there is no observer who can see the energy in more than one world at a time, and so there has never been any empirical evidence to suggest that energy must be conserved across worlds. As such I don’t see any reason to demand that energy be conserved in world-splitting events. We can think of the energy being diluted if we wish, and it can be shown mathematically that energy is conserved if we measure it according to this metric, but again that’s just a way of looking at it. I wouldn’t take the dilution too literally as a physical process that is actually happening, and so I wouldn’t have any concerns about it being diluted infinitely.
You might think that parsimony demands that we should expect energy conservation to extend across worlds if it operates within a world. It is generally a good idea to assume that observed states of affairs continue much in the same manner beyond the observable horizon. It is for example reasonable to assume that the laws of physics are much the same just outside the observable universe as they are within it. As such why would we not expect energy conservation to operate across worlds if it operates within a world?
I’m not a physicist, so my reaction to this may not be 100% correct, but my feeling is something like this. Conservation of energy is not really a fundamental law of physics. It’s more like an emergent statistical rule arising out of the fundamental laws. The fundamental laws we have (the Schrödinger equation, etc.) predict that we should observe energy to be conserved within a universe, and also that the wavefunction evolves deterministically in the manner described by Everett as the MWI. As such we don’t need to add an additional assumption that energy conservation is violated in the MWI; the predictions of the MWI simply fall out of the fundamental laws of physics. So there is no violation of parsimony here.
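The point that conservation falls out of the fundamental dynamics can be illustrated in miniature. In this toy sketch (assuming numpy; the 4-level “Hamiltonian” and initial state are random stand-ins, not real physics), Schrödinger evolution is unitary, so both the total probability and the energy expectation value stay constant at every time without any extra postulate.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4-level Hermitian "Hamiltonian" and a random normalized state.
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (a + a.conj().T) / 2
psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)

# U(t) = exp(-iHt), built from the eigendecomposition of H.
evals, V = np.linalg.eigh(H)

def evolve(psi, t):
    """Apply the unitary time-evolution operator exp(-iHt) to psi."""
    return V @ (np.exp(-1j * evals * t) * (V.conj().T @ psi))

e0 = np.real(psi0.conj() @ H @ psi0)  # initial energy expectation
for t in (0.5, 1.0, 7.3):
    psi_t = evolve(psi0, t)
    norm = np.linalg.norm(psi_t)
    energy = np.real(psi_t.conj() @ H @ psi_t)
    print(f"t = {t}: norm = {norm:.6f}, <H> = {energy:.6f} (started at {e0:.6f})")
```

The conserved total is a property of the whole evolving state; how that total is partitioned among branches is the separate bookkeeping question being discussed here.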
Even with wavefunction collapse, there might be energy conservation concerns. In the double slit experiment, the particle appears to go through both slits at the same time, interfering with itself. The wavefunction only collapses when it hits the screen. But before the wavefunction collapses, it seems there were briefly two particles. Where did that additional energy come from or go to? It all balances out in the end, I guess, but for a brief period it is as if an additional particle appeared from nowhere. You could perhaps view the MWI as something like that: a temporary state of affairs that will all balance out in the end (heat death perhaps, when there will really only be one homogeneous world). We’re just living in the brief interlude as we pass through multiple slits.
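The double-slit bookkeeping above can also be simulated. This toy sketch (assuming numpy; the wavelength, slit separation, and screen geometry are made-up values) adds the complex amplitudes from two idealized point slits: the fringes redistribute probability across the screen, but the total matches the fringe-free sum of the two slits taken separately, so nothing extra appears overall.

```python
import numpy as np

# Toy double slit: two point sources a distance d apart, screen at distance L.
# (All values are made up for illustration.)
lam, d, L = 0.1, 10.0, 100.0
x = np.linspace(-50.0, 50.0, 200001)   # screen coordinate

# Exact path lengths from each slit to each screen point.
r1 = np.sqrt(L**2 + (x - d / 2)**2)
r2 = np.sqrt(L**2 + (x + d / 2)**2)

# Complex amplitudes (unit magnitude; 1/r falloff ignored for simplicity).
a1 = np.exp(2j * np.pi * r1 / lam)
a2 = np.exp(2j * np.pi * r2 / lam)

both_open = np.abs(a1 + a2)**2              # interference fringes
incoherent = np.abs(a1)**2 + np.abs(a2)**2  # classical sum, no fringes

# Constructive peaks reach 4x a single slit and troughs reach 0, yet the
# totals agree: interference redistributes probability, it creates none.
ratio = both_open.sum() / incoherent.sum()
print(f"peak = {both_open.max():.2f}, ratio of totals = {ratio:.3f}")
```

The “briefly two particles” intuition shows up here as the amplitude being present on both paths at once, while the overall accounting still balances.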
I can see that view, although I find it hard to see there being no fact of the matter. Rather it strikes me as different ontologies that are compatible with the mathematics. But I do know that Carroll sees various ways of speaking about the MWI, which he regards as merely different ways to account for it. So I suspect he might agree with you. Not sure about Deutsch, but it’s noteworthy that he never describes his view as anything other than the Everettian interpretation.
On conservation of energy not applying across multiple worlds, that’s a view I didn’t get around to discussing in the post. That maybe we shouldn’t worry about conservation. We already know there are issues with it in the ongoing expansion of the universe. And if cosmic inflation is true, then it definitely isn’t something that applies on cosmological scales. Maybe when the force carriers branch, we should just view it as the energy itself branching.
Or maybe we should just view energy conservation as an aspect of the emergent physics we observe in any one branch.
And you’re right that the wave function collapse has its own energy issues. We tend to focus on its indeterminism and non-locality, but the idea of a wave that might be spread out over vast distances instantly collapsing into a particle is one that has a long list of its own problems. As you noted, what happens to the energy inherent in the other branches?
Still, I remain uneasy about this issue. Maybe like the people who once doubted heliocentrism because they couldn’t understand how gravity worked within it, or why the stars show no movement in the sky due to Earth’s orbit, this is simply a concern that only arises because I’m still mired in the old paradigm.
I do like your idea that maybe it’s imbalances that will be settled in the end.
I guess my attitude is that regarding ontology as fundamental is mistaken. To me (and perhaps only to me) what exists is largely a choice — it depends on how we want to define existence or what we want to regard as existing. That doesn’t mean that objective reality doesn’t exist, but it does mean that there may be more than one ontology that serves to describe it, with no one of these ontologies being uniquely correct.
For an analogy, consider this Escher tiling:
Forgetting details such as eyes, you could define this tiling in terms of the fish shape, and regard the fish shape as fundamental, with the bird shape being only the spaces between the fish. Or you could do the reverse. Either way, you don’t need to postulate both shapes independently. It suffices for one shape to define the other. But whether you take the fish as fundamental or the bird as fundamental, it’s obviously wrong to think that either ontology is objectively correct. These ontologies are just two different ways of describing the same structure.
I think something similar is going on with Carroll/Deutsch.
I know where you’re coming from about existence. But I think we can say for sure that the interference exists relative to us, one way or another. The question is what brings it into existence. Are we talking about a wave in a quantum field? Or a particle affected by its doppelgangers in other universes? I suppose we could say that the field is composed of all the particles in all the universes, but that brings in the issues I noted in the post.
This is distinct from an issue such as whether the branches are all in the same universe or should be regarded as universes in and of themselves. That’s a definitional matter. But no matter which definition we go with, it doesn’t seem like it changes the underlying reality.
> Are we talking about a wave in a quantum field? Or a particle affected by its doppelgangers in other universes?
Are we talking about a rabbit? Or a duck?
Six of one, half dozen of the other. You can have it either way, depending on how you look at it. There is no true answer. At least I don’t see why there would have to be, anyway.
> I suppose we could say that the field is composed of all the particles in all the universes, but that brings in the issues I noted in the post.
I think you’re worried about two issues primarily:
1) Where do all the universes come from
2) Which particles in which universes interact?
I think you can perhaps answer both questions just by adopting the perspective that Deutsch’s interpretation is just another way of looking at Carroll’s.
1) They were always there. Where did they come from is just the same question as where did the multiverse/the wavefunction of Carroll’s MWI come from. I may have an answer (which you don’t accept), but to most people this is always going to be a mystery either way.
2) This is just the same question as when a particle in different branches of the multiverse interacts with itself in Carroll’s MWI. Interaction is detectable until decoherence. So you’ll only detect interactions between universes which are effectively identical apart from the systems you’re investigating. So in the double slit experiment, you have a universe for each path, and all the universes have to be identical apart from the particle. As soon as the differences start to propagate beyond the particles into the environment (and in particular when the differences reach the observer), decoherence has occurred and no more interaction phenomena will be observed.
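The claim that interference is detectable only until the environment records a difference can be made concrete with a two-qubit sketch (assuming numpy; the “environment” here is a single stand-in qubit, a drastic simplification). The interference visibility of the system turns out to equal the overlap between the two environment records: identical records give full interference, orthogonal records (complete decoherence) give none.

```python
import numpy as np

def visibility(overlap):
    """Interference visibility of a qubit whose two branches have been
    copied into environment records with inner product <E0|E1> = overlap."""
    e0 = np.array([1.0, 0.0])                          # record of branch 0
    e1 = np.array([overlap, np.sqrt(1 - overlap**2)])  # record of branch 1
    # Joint state (|0>|E0> + |1>|E1>) / sqrt(2), system index first.
    psi = (np.kron([1.0, 0.0], e0) + np.kron([0.0, 1.0], e1)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (s, e, s', e')
    rho_sys = np.trace(rho, axis1=1, axis2=3)            # trace out environment
    return 2 * abs(rho_sys[0, 1])                        # off-diagonal coherence

for ov in (1.0, 0.5, 0.0):
    print(f"<E0|E1> = {ov}: visibility = {visibility(ov):.3f}")
```

This mirrors the point in the comment: once the branch difference has propagated into the environment (records orthogonal), the off-diagonal terms vanish and no interaction phenomena between branches remain observable.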
> But no matter which definition we go with, it doesn’t seem like it changes the underlying reality.
No, it doesn’t change the underlying reality. These are both just ways of describing the underlying reality. The underlying reality itself is in a way ineffable. Well, not really, but as soon as you start talking about it you start making choices about how to represent it. And there is always going to be more than one way to represent it. Asking which representation is correct is getting confused, like asking which of 2 and II is actually the number two.
I understand the principle you’re describing. It’s one I usually take with consciousness. A big part of the issue is defining exactly what we’re talking about. And as noted above, there are aspects of this topic like that, such as how we define “universe” or “world”.
But my issue is this: whether there’s one version of me with my current state that then proliferates as the state changes, or an infinite number of versions of me with the current state that then diverge from each other, may be unknowable, but it still seems like an ontological difference. And I’m pretty leery of declaring things unknowable. Things that seem unknowable today may become knowable in the future.
On the question of interference, in straight Everettian physics, we have an accounting of the universes. We know where they come from. In Deutsch’s account, we have an infinite number of universes, including an infinite number with our current state, so we never run out of universes to provide interference. But as I indicated, not all of the infinity can interfere, since that would make any dynamics impossible. (Unless I’m missing something.) So only a subset of the universes can interfere with each other. The question is, what controls which subset? This is a question the Everettian view doesn’t encounter. (Again, unless I’m overlooking something.)
> but it does seem like an ontological difference.
Yes. But all the same both ontologies can be equally correct, in my view. My view is actually that it makes no difference whether there is one universe or an infinite number of identical universes. The only vantage point from which this could possibly make a difference is from outside the universe, and there is no such vantage point. Without such a vantage point, I’m claiming that not only is it unknowable, I’m claiming this is a distinction without a difference, that the two scenarios are in fact two ways of looking at the same state of affairs. As such, to me, there is no real difference between the idea of a single universe splitting and two universes diverging.
But I guess this is controversial. I’m just suggesting the idea for consideration, not trying to say you should accept it.
I understand. In truth, it takes judgment to determine whether two descriptions are the same ontologically or different. I don’t want to pretend that one stance is obvious. All I can say is what my judgment currently is. I might feel differently after thinking about it more.
Interestingly, Sean Carroll says much the same thing about whether the branching results in a new universe at a particular location which then spreads out from there, or a complete new universe billions of light years wide coming into being instantly. His stance is that you can account for it both ways. I can see his point from an instrumental stance, but taking an instrumental stance in an interpretation aiming for a full real account seems strange.
My credence in the MWI crucially depends on the ontology of the gradual emergence of each new universe. At least unless I could be made comfortable with the idea of them all already existing.
> but it does seem like an ontological difference.
Yes. But all the same both ontologies can be equally correct, in my view, because (I claim) ontologies are just ways of looking at things. My view is actually that it makes no difference whether there is one universe or an infinite number of identical universes. The only vantage point from which this could possibly make a difference is from outside the universe, and there is no such vantage point. Without such a vantage point, I’m claiming that not only is it unknowable, I’m claiming this is a distinction without a difference, that the two scenarios are in fact two ways of looking at the same state of affairs. As such, to me, there is no real difference between the idea of a single universe splitting and two universes diverging.
All the same, I guess we can imagine a similar scenario where there is such a vantage point. Take for example the vantage point of a universe simulation experimenter who runs 1000 identical universe simulations. But even here, if the simulations are truly identical, then I don’t think it matters whether we regard them as 1000 distinct universes or 1000 computers simulating the same singular universe.
But I guess this viewpoint is controversial. I’m just suggesting the idea for consideration, not trying to say you should accept it. So, assuming the MWI is correct, it’s not mandatory that we pick one of Carroll or Deutsch; we also have the option of accepting both equally.
> But as I indicated, not all of the infinity can interfere, since that would make any dynamics impossible.
I’m kind of at the limits of my knowledge here, but once more, I’m not seeing the difference here between the Carroll version and the Deutsch version. For interference, you need universes which are different with regard to some limited system (e.g. a single particle which is in a superposition of states) but otherwise identical. So it’s not that all universes will be interfering, right? There might be an infinite number of universes which match these constraints, but in the Carroll version there are a correspondingly infinite number of branches, so I don’t see why Deutsch has more of a problem than Carroll. Maybe *I’m* missing something!
Sorry for posting twice, thought I hadn’t posted the first time. A bit more content in the second one anyway.
> I can see his point from an instrumental stance, but taking an instrumental stance in an interpretation aiming for a full real account seems strange.
I don’t think he’s taking an instrumental stance. I suspect he agrees with me that it really doesn’t matter which way you account for it, that such accountings are just human interpretations of what is really going on. As long as the mathematics and predictions are the same, then there really is no difference at a fundamental level.
Remember that for Carroll, universe branchings aren’t really happening at a fundamental level. There is just a wavefunction evolving. Branchings are artifacts of particular points of view within the wavefunction. They only happen relative to an observer. The stories we tell about universes splitting or diverging are just tools to help us imagine what’s happening. From the point of view of the Schrodinger equation as a whole, there are no distinct worlds.
Thanks for clarifying. I was switching between machines and thought I’d just seen the notice twice.
And you made an interesting point about which universes cause interference. It would only be ones that were diverging on that one particle, which could conceivably be in the trillions, but still less than infinity. It does make me wonder, under Deutsch’s accounting, what causes the universes to diverge.
I agree that the tale of multiple universes / worlds is basically a human centered description of the universal wave function. I guess I can see the mapping from the MWI account of universes gradually coming into being, but I’m struggling to see how Deutsch’s or Carroll’s alternate accounts can be mapped to it. I can see how they might appear to be true within that universal wave function, but not how they can be ontologically true within it.
Just on a technical point, conservation of energy is not an emergent statistical rule, but a fundamental law of physics. It’s a consequence of time invariance of physical laws (all conservation laws are a consequence of some underlying symmetry). Incidentally, in some physical situations, such as curved spacetime in general relativity, time invariance doesn’t hold, and so energy is not conserved.
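For anyone who wants the gears behind that statement, the standard classical-mechanics sketch of the symmetry-to-conservation connection (a textbook derivation, nothing specific to quantum interpretations) runs as follows:

```latex
% Time-translation symmetry implies energy conservation (textbook sketch).
% For a Lagrangian L(q, \dot q, t), the total time derivative is
\[
\frac{dL}{dt}
  = \frac{\partial L}{\partial q}\,\dot q
  + \frac{\partial L}{\partial \dot q}\,\ddot q
  + \frac{\partial L}{\partial t}.
\]
% Substituting the Euler--Lagrange equation
% \partial L/\partial q = d/dt(\partial L/\partial \dot q) gives
\[
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\,\dot q - L\right)
  = -\frac{\partial L}{\partial t},
\]
% so when the laws are time-invariant (\partial L/\partial t = 0), the
% energy H = (\partial L/\partial \dot q)\dot q - L is conserved; when
% time invariance fails, as in curved spacetime, it need not be.
```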
But there’s no energy violation when a quantum particle travels through two slits. There is only ever one particle.
Thanks Steve. Your first paragraph goes over my head, but maybe DM or one of the others can weigh in.
On there only ever being one particle, the question is: if the particle never collapses, ever, is there an energy conservation issue? Is the energy of the particle spread out over the wave? If so, can we say it remains spread out if that wave never collapses?
On energy conservation, maybe this will help? https://en.wikipedia.org/wiki/Conservation_of_energy
As for the single particle question, I’m afraid I’m not really sure how MWI considers this. I suspect that its adherents would say that there is a single particle, and it has multiple manifestations in different universes/worlds/histories/whatever. I must say I’m not entirely convinced by MWI’s claim to parsimony, as there is nothing in the Schrodinger equation that points to the universe branching in this way (just as it says nothing about wavefunction collapse). It’s just an equation for a wave, and that’s the crux of the problem – we never observe these mysterious waves that Schrodinger says are the fabric of reality – only particles, and the Schrodinger equation is not an equation for particles.
So I suspect we are still missing something important. We know that QM is incompatible with general relativity, and we still don’t have a clue about dark energy or dark matter, so maybe we need to hold off with grand untestable declarations about the nature of the multiverse until we’ve nailed down a few of these easier problems 🙂
The MWI argument is that the Schrodinger equation does actually mandate the other universes. Remember that “universes” is a somewhat sensationalized label for what’s happening, which is the evolution of the wave function. The Schrodinger equation isn’t normally thought to lead to other branches of reality because it’s typically only followed until an observation, when the Copenhagen postulate of the wave function collapse is said to take place.
The MWI argument is that this is an illusion, that the wave function never collapses. The thing to do then is consider what the Schrodinger equation says happens next. Einstein and collaborators realized that it led to entanglement with other waves. Bohm and Everett realized this eventually translated into mass entanglement with the environment. Everett’s point is that we only perceive the result entangled with us, specifically the version of us in this particular permutation of us and the result, with the Schrodinger equation predicting all the other permutations are also there.
Definitely the MWI doesn’t reconcile QM and GR, although it seems to violate GR’s tenets less than other interpretations, since it preserves determinism and locality.
Doesn’t necessarily mean it’s right. But it’s not the flighty speculation it’s often portrayed to be.
I understand all that, Mike. What I meant by my statement is that the Schrodinger equation has multiple solutions. The declaration that these are other “universes” is the problematic part. Schrodinger’s equation predicts multiple solutions of the wavefunction. Nothing more. It doesn’t tell us that an entirely new universe is spun off every time a quantum event occurs. It is this idea of a new universe spontaneously coming into existence (or that infinite universes already exist, awaiting their moment to spring forth) that I think is at the root of your concerns about energy conservation. Think instead about a single universe, with multiple solutions to the wave equation. I believe that’s what Everett’s assertion is. The concept of the “multiverse” is a popular-science explanation that muddies the water just as much as it conveys an image of what’s really happening.
Ah, I see what you’re saying now. Yeah, it comes down to what we mean by words like “universe” and “world”. You’re right that Everett’s original contention was that the multiple solutions to the wave function all exist, but that we can see only one because the rest have decohered, have gone out of phase from each other. He saw it all happening in “the universe.”
DeWitt was the one who brought in multiverse terminology, referring to those other solutions and their propagation as “universes” or “worlds”. That’s been both a blessing and a curse to Everett’s interpretation. A blessing because we probably wouldn’t even be talking about it if DeWitt hadn’t done that. But a curse because it gave it a science fictiony feel that leads many to reject it just for that.
There are people who go a step further and say there’s actually an entire universe, tens of billions of light years wide that instantly comes into being. That’s the way the MWI is often portrayed. I don’t buy that version. And as I noted in the post, I also don’t buy Deutsch’s version of pre-existing universes.
But even if we just look at it as all solutions to the wave function continuing, I still have concerns about the energy. I’d love to know if I’m just missing something there.
Words and metaphors are a double-edged sword in physics. They can confuse and mislead just as much as they teach and inform. We should always look at the maths if we want to truly understand. (But the maths is hard!)
If you think the MWI hypothesis violates energy conservation, ask yourself these questions: how many universes are there before and after an interaction? How many particles? If you can convince yourself that no new universes or particles are spawned by a quantum event, then perhaps you can reassure yourself that no new energy has been created either.
Maybe another way to think of this is, how far spread out among solutions can the particle/wave be before we have an issue? Can matter/energy be diluted infinitely? Do we have to worry about a Planck energy level?
You have discussed three interpretations but what about that of Ulrich Mohrhoff? In ‘The World According to Quantum Mechanics’ he gives a fourth. At least I think he does. I don’t have the maths to follow it properly. He gives an interpretation that would be consistent with a non-dual Reality.
I’m not familiar with his interpretation, and a quick Google didn’t make it obvious. What would you say are its key tenets?
Is there any reason we should care which interpretation is right?
If there is, then we should be able to do something really amazing with the correct view – build a warp drive or visit an alternate world, for example.
None today, except for curiosity, but that’s frequently true with scientific questions. Many things that start out as esoteric phenomena with no practical value go on to become civilization changing knowledge. Of course, there’s no guarantee quantum foundations will ever provide that, but we won’t know unless it’s explored.
You can’t really explore unless it makes testable predictions. Are there testable predictions?
If not, it is no better than pseudo-science, even if it has a bunch of equations that seemingly justify it.
I covered this in a post back in May: https://selfawarepatterns.com/2020/05/23/the-spectrum-of-science-to-fantasy/
The TL;DR is that there are always aspects of scientific theories that are not testable. We have to look at what has been established in testing then extrapolate from there. As long as we don’t add any additional assumptions, it’s a prediction of the theory. And what is not testable today may become testable in the future. Copernicus had no way to test his theory, but seven decades later, Galileo did.
In the case of the MWI, all we’re doing is following the Schrodinger equation, perhaps the most heavily tested equation in science. Any detection of an objective collapse would falsify it, so arguably every time you read about a larger object held in superposition, it increases the likelihood of the MWI. And it is possible in principle to detect interference between decohered branches (aka universes), although not currently possible in practice.
The pseudoscience charge is, frankly, polemical nonsense. Pseudoscience is fake science. The major interpretations aren’t claiming evidence they don’t have, or pushing theories that contradict reliable science.
It’s worth noting that the inspiration for quantum computing came from people with an MWI mindset. Deutsch is actually one of those pioneers. Other interpretations can account for that type of computing, but it was easier to see the possibility under the MWI.
“The major interpretations aren’t claiming evidence they don’t have, or pushing theories that contradict reliable science”.
You could drive a big truck through the gap you create if science can be done as long as you don’t claim evidence you don’t have or claim something that contradicts science. What about an afterlife? There is nothing in science to contradict a theory of an afterlife. Or how about an entire hierarchy of ethereal worlds where spirits and ghosts reside? Any of those theories, you would ask for evidence if you didn’t dismiss them out of hand. And, of course, evidence might appear eventually for an afterlife or a hierarchy of worlds or for MWT. The idea of an afterlife sounds on its face implausible, but are you telling me that MWT doesn’t?
They can’t make the claim with any evidence because they don’t have any evidence, just theories.
I think you’re ignoring the differences of how these conceptions are arrived at. As I covered in the earlier post, we have:
1. reliable knowledge
2. extrapolation from 1, with few if any assumptions motivated to fit the data
3. loose speculation, typically with many assumptions motivated to fit some cherished notion
4. fantasy which contradicts 1, and if presented as science is pseudoscience
The basic mathematics of QM are 1. The major interpretations, including the one that says the basic mathematics are all there is, are in 2. I won’t pretend there aren’t some in 3. The other things you mention are deep in 3 or in 4.
If you dismiss 2, then forget about pretty much all the interesting theories we talk about.
I would think an afterlife would fit right in with MWI. It would almost be a requirement of the theory. No matter how many splits, there must always be some world where I am alive.
It is true that under the MWI, everything possible under the laws of physics happens in some of the worlds, including profoundly improbable events, such as you living until the heat death of the universe. That’s not really an afterlife so much as a side life, and there’s no guarantee all of those scenarios will be pleasant ones. As I noted in a post a while back, whether that’s actually you living until the end of the universe is a philosophical question.
Have you read “The Unsolved Puzzle” (https://www.amazon.com/gp/product/0956422268/ref=ppx_yo_dt_b_asin_title_o08_s00?ie=UTF8&psc=1)
It is quirky and poorly written but it has some points to make. I am curious as to your take.
I tend to think that the more outrageous interpretations are off, but we have been led down a garden path by the physics itself. QM is spectacularly useful and still inexplicable.
I haven’t read it. Just googled him and found this: https://www.msn.com/en-ae/news/techandscience/new-theory-of-quantum-mechanics-shows-matter-is-not-in-the-eye-of-the-observer/ar-BBYTUYN?li=BBqrVLO
I don’t know how accurate it is of his views. I think he’s right that observation, in and of itself, isn’t the answer. It’s the interaction. But we can have two particles interact and the only thing that happens is they become entangled with each other. We still have two waves, just now described by one wave function. It’s only when the information becomes widespread that the collapse appears to happen. Of course, another way of saying it becomes widespread is to say it has widespread causal effects in the environment, usually summed up as “measurement” or “observation”, although the wide scale effects don’t necessitate that there be an actual measurement or observation happening.
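That “two waves, one wave function” point can be sketched numerically (my own toy illustration with hypothetical two-level particles, not anything from the article): after a controlled-flip interaction, the joint state no longer factors into two separate single-particle waves.

```python
# Toy sketch with hypothetical two-level particles ("qubits"), my own
# illustration: after an interaction, two particles are described by
# one joint wave function that no longer factors into separate waves.

s = 2 ** -0.5  # 1/sqrt(2)

# Before: particle A in the superposition (|0> + |1>)/sqrt(2),
# particle B definitely in |0>. Keys are (A state, B state).
before = {(0, 0): s, (0, 1): 0.0, (1, 0): s, (1, 1): 0.0}

# Interaction: flip B exactly when A is 1 (a controlled-NOT style
# coupling). This is what entangles the pair.
after = {}
for (a, b), amp in before.items():
    after[(a, b ^ a)] = after.get((a, b ^ a), 0.0) + amp

# A two-particle state factors into independent waves exactly when
# amp(0,0)*amp(1,1) - amp(0,1)*amp(1,0) == 0.
def entangled(psi):
    g = lambda k: psi.get(k, 0.0)
    return abs(g((0, 0)) * g((1, 1)) - g((0, 1)) * g((1, 0))) > 1e-12

print(entangled(before))  # False: still two separate waves
print(entangled(after))   # True: one joint wave function now
```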
To some degree, we’re talking about decoherence. But decoherence, again in and of itself, doesn’t explain why multiple possibilities become one, at least outside of MWI scenarios. Anyway, it’s hard for me to judge based on just this article, and I’m reluctant to pay for a physical book (there doesn’t appear to be an ebook version).
On doubting the outrageous interpretations, I think it matters why it’s outrageous. If there’s a solid line of evidence and logic leading from the data to the outrageous conclusions, that’s different than someone seeing an opportunity to just wedge in their favorite extravagant concept.
We had a discussion about this when I posted on it:
I added this comment in my post after some further thought on it.
(Note: After reading a bit of Carlo Rovelli’s Reality is Not What It Seems, I think most of Kerr’s theory is in Rovelli’s relational quantum mechanics. Kerr does seem to bring in a “dimensional” component that I don’t see in Rovelli’s RQM, but the notion that interaction is what causes the wave to take on its dynamic properties is in the relational theory.)
Thanks. I had forgotten about that post. But I wonder if Kerr’s additions to RQM put it more in the real category than RQM.
So here’s a question, Mike:
If new universes “diverge” such that Universe A becomes Universe A1 and Universe A2 after a quantum event, where the entire universe in both divergent worlds is identical except for the divergence in one particular event, then did the universes come into being everywhere at once? And wouldn’t that violate relativity? I mean, this is probably not a good question, but IF what is being said is that there is an entire universe in which a particle I’m studying lands on the left side of the detector screen, and another entire universe in which it lands on the right, and they both come into being when the photon hits the screen, then IF the extents of each universe are duplicated simultaneously, I think that would violate relativity theory, because it would be a moment of “absolute now” everywhere in those universes at once, and not a “relative now” that propagates outwards at the speed of light. And if it does propagate outwards at the speed of light, well, what does that even mean! But that might lead to some interesting experimental opportunities one day.
A random thought.
The answer that makes sense for me, and the only one that gives the MWI credence for me, is that the new universe comes into being at the interaction. It spreads at the rate of interaction between the quantum particles. So the speed of light would be the maximum it could spread at. But if we keep a quantum system in sufficient isolation, it wouldn’t spread.
For example, Schrodinger’s cat would be in a superposition of being both alive and dead, two embryonic universes ready to go. But they wouldn’t spread until Schrodinger opened the box, whereupon Schrodinger would go into superposition. If he had Einstein on the phone, Einstein would go into superposition as soon as the information reached him, and so on into the universe.
(Note that the travails of quantum computing show that maintaining such an isolation for any substantial length of time is very difficult.)
So, my response is no, the new universe wouldn’t just pop all into existence all at once. But from the perspective of those of us in it, the result is the same as if it did. We can’t perceive the branching. Under MWI, the classical world is emergent from the underlying quantum reality, bubbles of froth at the top of an unimaginably strange and constantly exploding reality. The thing to remember, as Disagreeable Me pointed out above, is talk of universes is just a human centric description of the ever expanding universal wave function.
I don’t know if you’re familiar with the Wigner’s friend thought experiment. It’s like Schrodinger’s cat, but with multiple levels. Deutsch pointed out that it should be possible, in principle, to someday conduct a Wigner’s friend experiment. Multiple physicists have also said it’s possible, again in principle, to detect remnant interference between decohered branches, that is between universes, but not with current technology.
And as I noted to someone else in this thread, if there is any objective collapse discovered, it would falsify MWI. On the flip side, it seems like every announcement of yet a larger object held in superposition makes an objective collapse less likely, and the no collapse interpretations more likely.
” So the speed of light would be the maximum it could spread at”.
Speed of light isn’t a limitation for entanglement, is it? Why would it be for this?
I think you’re thinking about the speed of the wave function collapse, which has to happen faster than light. The MWI doesn’t have a wave function collapse, so it doesn’t apply. Under MWI, entanglement is a correlation set at the interaction (as Einstein originally thought it must). It’s not an issue under Bell’s theorem, because Bell implicitly assumes a single definite outcome for measurements, which is not what happens under the MWI.
I am marginally familiar with the Wigner’s friend thought experiment. I’ve read a popular physics piece or two…
I have to say that MWI defies the ability to mentally process. I know that separate universes are a human convention, and that it’s really just the wave function, but I can’t really make sense of that English statement. For instance, a wave function might have varying amplitudes for the probability of an electron hitting a screen in a double-slit experiment. If MWI is true, the electron hits everywhere there’s a non-zero probability, right? But we only see one because those other possibilities are in some other parallel reality. But is all that contained in the wave equation that predicted outcomes for the electron? My understanding is typically the wave equation that applies to a particular experiment just includes the elements in the experiment, not the equipment and people all around it…?
The second thing is that re: Schrodinger’s Cat, I am not 100% following your description of how the split between two seemingly objective realities travels. I mean, I do follow the logic of what you said, but I’m questioning if there would be a delay until the box was opened, because isn’t the outcome of the observation at any given time the result of whether a radioactive particle has decayed or not? So in my mind, wouldn’t there be a wave equation to define the probable time at which the particle decayed, so that in every instant of time, it either has or it hasn’t, and those are all part of the wave equation that is spreading? The particle could decay now, or in the next second, or in the second after that…
So to the point of your article, if all outcomes of all conceivable wave equations happen, there seemingly would have to be some topology for how they interact, because at some point if there was a Schrodinger’s Cat experiment on Mars and one on Earth, and they were accomplished non-simultaneously, then somehow one possible outcome of the first experiment has to align with one possible outcome of the second to form a seemingly objective reality right? Or do all those interactions produce permutations of possible interactions?
I have to say, this is quite hard to grapple with… It does seem crazy without some ground rules.
For Wigner’s friend, suppose I’m in a specially isolated laboratory with a particle whose spin has not yet been measured. You’re outside of the laboratory. Now, at a specified time (say 3:00), I do the measurement, but don’t communicate the result. At 3:01, the spin of the particle is now in a collapsed state for me. But for you sitting outside of the laboratory, I’m like Schrodinger’s cat: I’m in a superposition of possible measurement results. At 3:02, you open the door to the lab and inquire about the result, whereupon for you the result collapses to a definite state. However, suppose your room is itself isolated and there’s someone outside of it waiting for the result until 3:03.
When does the collapse happen? Is it an absolute definite event or a relative one? The answer depends on which interpretation you favor. If you were able to detect the superposition of my inner lab, it would tell you that I was in superposition, and strengthen all interpretations which posit a relative result (including MWI). In this sense, every time a larger object is held in superposition, we get closer to the Wigner’s friend scenario.
On varying amplitudes and probabilities, the usual answer is that the ratio of outcomes is equivalent to the probability, the amplitude squared. (There is a lot of hand wringing among physicists about what probabilities mean in the MWI. I’ve personally never been bothered by that. For me, it just means probabilities mean what they mean everywhere else, where it’s an expression of what we know.)
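A minimal sketch of the amplitude-squared rule, with illustrative amplitudes of my own choosing (not from any source):

```python
# Minimal sketch of the amplitude-squared rule, using made-up
# amplitudes (one real, one imaginary) chosen so the squared
# magnitudes sum to 1.
amps = {"up": 0.6, "down": 0.8j}

# Probability of seeing each outcome = |amplitude| squared.
probs = {outcome: abs(a) ** 2 for outcome, a in amps.items()}

print(probs)  # roughly 36% "up", 64% "down" across branches
```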
Under Copenhagen, there are quantum things and macroscopic things. So the distinction you make between the subject of the experiment and the equipment involved is honored. But a key to remember is that the equipment is composed of quantum objects. Everett asserts that there’s no good reason to assume something composed of quantum objects isn’t itself quantum. (Again, note point above about ever larger molecules held in superposition.)
On the Schrodinger’s cat scenario, you’re right that the usual description is a vast oversimplification. The radioactive particle is constantly evolving and so constantly branching, meaning that there are far more than just two branches in the box. In reality, every subatomic particle in the cat, the radioactive atom itself, and everything involved are also constantly branching. So there are bazillions of baby worlds in the box, ready to burst out as soon as it’s open. The box itself is also constantly branching, which is another complication, so in reality isolation only limits the world splitting relative to that taking place outside. It can’t prevent it entirely.
If different Schrodinger’s cat experiments were performed, they would both propagate. When the branches encountered each other, every combination would branch out from the interactions. The only limitation would be if entangled particles were involved, then the various branches would only interact with the ones that were compatible. In other words, if the same branch encounters itself, it will interact and create new compatible branches. But decohered versions don’t. (At least that’s my understanding.)
There’s no doubt that this stuff is monstrously counter intuitive. It’s the result of simply letting the Schrodinger equation play out with no collapse. It solves the measurement problem, but with an unimaginable and unperceivable amount of activity constantly going on. Its strength is that it provides a deterministic and local account of quantum physics that is compatible with cosmological models. The flaw is it seems absolutely absurd. But reality has already shown us it’s absurd. The only question now is the nature of its absurdity.
Thanks for the detailed reply, Mike. I appreciate learning more about this stuff.
On Wigner’s thought experiment, correct me if I’m wrong, but wasn’t it even hypothesized that the outcome could be seen differently by people outside the room who later take a measurement of the scene? (Okay I just did a quick web search. It appears to be the case that the researcher in the room can make a definitive measurement while at the same time the experiment can be observed to be in a state of superposition to an outside observer–not just because they haven’t looked yet, but through a separate experiment that Wigner makes to determine if there is superposition inside the first experiment or not. So on this point, Wigner and the primary investigator can disagree. Just interesting stuff. The article suggests that objective reality may not exist quite the way we intuit. Meaning, a fact to one person is not necessarily a fact to another. This is the one in the MIT Technology Review site on-line, titled, “A quantum experiment suggests there’s no such thing as objective reality.”
Second, I follow what you’re saying about the gazillion baby worlds in the box, waiting for it to open. I just don’t understand why they’re waiting for it to open. I suspect this is what you addressed by saying “The box itself is also constantly branching.” What I’m getting at is: what does it mean for the “box to open?” Do we need to take a “measurement” to get information out of the box? Not in MWI right? If so, that seems to put human consciousness back in the driver’s seat here, in a particular way doesn’t it?
If I recall correctly, Wigner’s take on his own thought experiment is that consciousness is what matters. I don’t recall the exact reasoning he used to reach that conclusion. But saying there’s no objective reality could also be someone’s takeaway. The MWI’s takeaway is that the disagreement comes from people being at different stages in the branching. It’s a thought experiment, so it’s really just exercising our intuitions. It only provides answers if we can someday convert it to a real experiment.
On why the baby worlds aren’t spreading, remember that we’re really talking about the wave function here. We have the same situation with a particle we hold in superposition. It may have various states in that superposition, each of which could branch out into the environment, but none of them do unless we allow the particle to interact with that environment.
In the case of the box, assuming we’re using some kind of technology that limits the spread of the quantum effect inside it, those effects can’t spread. The contents remain in a superposition from the perspective of the outside environment. (Note: this would be extremely difficult to actually do. It requires isolating the contents from all interactions with the environment, including electromagnetic fields, gravity, etc.) Only when it’s allowed to interact with that environment do its various branches get a chance to become entangled with that environment, to have causal effects throughout it. Once it does, the new worlds spread out from there.
On the question about measurement or observation, they, in and of themselves, have no special role in MWI. It’s about quantum interactions and entanglement. That happens everywhere constantly. It’s just that when we do a quantum experiment, a particular quantum result gets magnified by the equipment. But it’s not just the experiment generating new worlds, it’s everything, all the time, constantly. Consciousness is just along for the ride, like everything else in the emergent classical world.
Thanks for taking the time to follow up. We have been talking past one another just slightly, but I agree with all of your clarifications here. If we assume a completely isolated Schrödinger’s Cat experiment, then I agree the wave function is kind of bottled up.
But my further point, which I think you agreed with, is that quantum events with unpredictable timing–such as the decay of a radioactive nucleus–are occurring throughout the universe all the time, creating branches in the wave function. It’s impossible for me to follow how such a proliferation of branching results in something we can all agree upon. I just can’t mentally picture it, I guess.
As a further aside, I’m not sure a wave function of the whole universe is actually a valid talking point. Because–and this is taken from Lee Smolin’s latest book, which I listened to in the car last winter–we really can’t apply QM except to a particular system. In his mind, or I should say in my somewhat spotty memory of what he said, it’s not valid to talk about a wave function of the entire universe, because we can only apply QM to a particular system about which we are asking a specific question. That’s what QM is good at. We have to specify a system, which is always a subset of the whole, to perform an analysis. Curious if you have any thoughts on the notion of a “universal wave function,” Mike!
I haven’t read Smolin’s book (maybe something I should consider?), but if I recall he thinks both QM and GR are wrong, and that reconciling them will probably require overhauling both. If that’s right, then his view of both would be instrumental, that is, antireal. If so, I can see where he’s coming from. He expects QM to break down at some point, past which its predictions will no longer be accurate, although it depends on when it breaks down and the nature of that breakdown.
That is a vulnerability in the MWI (and any scientific extrapolation for that matter). It assumes the dynamics of the Schrodinger equation continue past current abilities to test. If anything interrupts those dynamics, then the predictions fizzle. One thing many thought might be that factor was gravity, although that hypothesis has been weakened lately: https://www.sciencemag.org/news/2020/09/one-quantum-physics-greatest-paradoxes-may-have-lost-its-leading-explanation
But if the domain of the Schrodinger equation is limited in ways not understood yet, then predictions made with it will fail. All we can say right now is there’s no direct indication of such a limitation. (Unless Smolin identified one?)
I had to make a roadtrip today and did a little refreshing on Smolin’s book (Einstein’s Unfinished Revolution). So I’m a little better prepared to explain what the heck I was only partially recalling. It was pretty interesting.
Okay, so first off, Smolin is a realist. It turns out that like many pairs of terms in history, it is starting to feel like anti-realism and realism are morphing. MWI is (in Smolin’s way of thinking) one of three “realist” approaches to QM. The others would be pilot wave theory and whatever the theories of a physical wave function collapse are called. Collapse theories?
Smolin worked a lot on loop quantum gravity and other attempts to unify QM and relativity, I believe, and now thinks a deeper theoretical insight into the universe is required to make progress. I think the treasured notion he is most ready to kiss good-bye is locality. The ideas in the last third of his book were pretty interesting, I thought. And I did enjoy his discussion of QM in general. The requisite synopsis of the last 120 years.
I’m not as good with audio books as I am with written ones, but the notion I was trying to explain earlier about there being no wave function for the universe as a whole came in a section in which he was describing Rovelli’s relational quantum mechanics. Smolin notes that the theory requires a slice somewhere in the universe be made to have the observer and the observed, and since that is the heart of the theory, you can’t apply it to the Universe as a whole. He likened it to Bohr’s requisite distinction between the classical measurement apparatus and the quantum system, only perhaps more specifically developed in language by Rovelli. So I’m not sure that means he would say there isn’t a wave function for the whole universe… but it seemed pretty clear he would say QM applied to the universe as a whole.
He had a couple of concerns about MWI I’ll try and summarize briefly:
1. You can never prove you’re in an Everettian world or not. Quantum experiments rely on Born’s Law to relate probabilities to the frequency of outcomes in repeated tests. Since MWI requires an infinite number of universes with all possible outcomes of repeated tests, no test can be used to prove we are or are not in an Everettian world, or even in a benevolent (Born’s Law applies) or malevolent (Born’s Law not applicable) branch. Our branch is benevolent so far. But if it ever proves different it doesn’t prove or disprove anything about MWI.
2. There is no source of probabilities in MWI. So the conventional notion that relates the square of the wave function’s amplitude to the frequency of results in particular experiments doesn’t apply. But more generally, there simply is no way to derive probabilities from the wave function and come out with Born’s Rule. He reviewed Deutsch’s argument that, using decision theory, it is rational to bet that Born’s Rule is valid, but for Smolin this requires accepting all the axioms of decision theory as well as some physical axioms, and it still doesn’t prove anything. It just proves it’s rational if your goal is to win a bet. As a realist, Smolin is dissatisfied with this.
3. The evolution of the wave function is reversible in time. But in conventional QM, a measurement is irreversible, and Born’s Rule makes sense only if a measurement is irreversible. How do we get a world that behaves irreversibly out of MWI, where the wave function is what is real? Smolin says that decoherence is always an approximate solution, and that complete decoherence is impossible: if we wait long enough it will be reversed, per the quantum Poincaré recurrence theorem. He also likened decoherence to a statistical process, like the diffusion of heat through a gas. The likelihood of recoherence may be very small over short times, so if we only want a useful, approximate, short-term description, decoherence is great. It works wonders for the design of quantum computers, for instance, but as a description of physics it is incomplete. When the full picture is considered, with recoherence, one returns to the reversibility one would expect from the wave function alone. But then one cannot argue that decoherence alone produces irreversibility. In short, and this is a little hard to follow, Smolin feels decoherence is insufficient to bring probabilities, and the irreversibility one gets from a conventional QM measurement, into MWI.
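Smolin’s analogy of decoherence to heat diffusing through a gas can be illustrated with a toy model. This is my own sketch (the Ehrenfest urn model, nothing from Smolin’s book): the dynamics let every state recur eventually, which is the flavor of Poincaré recurrence, yet the fully ordered starting state becomes effectively unreachable as the system grows.

```python
import random

# Toy sketch of "reversible in principle, irreversible in practice"
# (my own illustration via the Ehrenfest urn model). N balls sit in
# two urns; each step, one ball chosen at random hops to the other
# urn. Every state recurs eventually, but the fully ordered starting
# state (all balls in urn A) carries equilibrium weight ~2^-N, so for
# large N it is never revisited on any feasible timescale.
random.seed(1)

def full_returns(n_balls, n_steps):
    in_a = n_balls                       # start fully ordered
    returns = 0
    for _ in range(n_steps):
        if random.randrange(n_balls) < in_a:
            in_a -= 1                    # a ball in urn A was picked
        else:
            in_a += 1                    # a ball in urn B was picked
        if in_a == n_balls:
            returns += 1
    return returns

returns_small = full_returns(4, 100_000)
print(returns_small)   # the ordered state recurs thousands of times for N = 4
print(0.5 ** 50)       # for N = 50, its equilibrium weight is about 9e-16
```

The point of the sketch is only that nothing in the rules forbids “recoherence”; it just becomes so improbable for large systems that irreversibility emerges statistically.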
Sorry, Mike, in the fourth paragraph, I meant to say it seemed pretty clear Smolin did NOT think QM applied to the whole universe. At least in its conventional forms.
I appreciate the clarifications and details on Smolin’s arguments.
I didn’t mean to imply that I thought Smolin was an antirealist, only that if he sees GR and QM as wrong, then he’s essentially antirealist specifically toward those theories, that is, they work instrumentally but don’t reflect reality. (Or does he argue that they don’t work instrumentally?) It’s kind of like a realist can admit that Ptolemy’s model of the universe worked instrumentally until telescopes came along.
One of the things I find interesting about any controversial theory or philosophy is the arguments opponents make against it. The strength or weakness of those arguments often reveals a lot about the proposition itself. I’ve sometimes been converted to a position more by the weakness of the arguments against it than by the arguments for it. In the case of the MWI, I find most of the arguments against it underwhelming. (Although I’m not yet fully convinced of MWI.)
On Smolin’s points:
1. He seems to be arguing here that the MWI can’t be verified. But no scientific theory can be. Verificationism, a tenet of logical positivism, has long been recognized as an impossible standard. It’s better to use falsifiability and parsimony. In that sense, MWI can be falsified, at least in principle. Any discovery of an objective collapse would do it. And the ever larger objects held in superposition progressively increase its likelihood. Finally, it may be possible someday to detect remnant interference between decohered branches. (As Smolin himself admits, decoherence isn’t an absolute thing.)
Of course, the parsimony argument for MWI is very contentious. But we don’t assess general relativity’s parsimony by its implications, such as black holes or gravitational waves. Long before these things were observed, they were acknowledged consequences of the theory. No one ever used them against the theory on parsimonious grounds. The assessment of the parsimony of Everettian physics should be based on its postulates, not its consequences, at least unless those consequences can be shown to contradict evidence.
2. I really don’t understand the concern about probabilities. Probabilities in most domains are relative to an observer. Probabilities in collapse interpretations are absolute things. But in non-collapse interpretations, they move to their customary role of being relative to a particular observer. In the case of the MWI, the probability is the proportion of outcomes, so from the perspective of our subjective timeline, it remains the probability of which branch we’re on after the measurement.
3. The wave function is reversible. In principle, that does mean branches could recohere. But as you mention, it’s a profoundly low probability event. Of course, if the MWI is true, then all low probability events happen. They’re just profoundly rare, meaning we shouldn’t expect to observe them. It’s like saying it’s possible for cream and coffee to spontaneously separate from each other. It is possible, but statistically it’s so rare we should never expect to observe it.
Rovelli’s relational quantum mechanics is an antireal interpretation. Rovelli, in the SEP article, characterizes it as a “weakening of realism,” but for most writers that’s enough to put it in the antirealism camp. Based on what I know about it, I think it belongs there. Antireal interpretations avoid the metaphysical issues, but arguably only because they’re incomplete.
So locality is the principle Smolin’s ready to dump? Does he explore how non-local physics would work? That’s one of my beefs with people willing to give up those kinds of principles. It’s fine to do so, but then you take on the burden of explaining how physics works without them. How does action at a distance work? It seems like we would need new theories to deal with it.
Anyway, thanks for going through it again!
On your paragraph about Smolin being anti-realist with regards to relativity and QM, I understand what you’re saying better now. Smolin is a realist in the sense that he thinks QM in particular is not the most complete description of the universe that is possible, like Einstein I suppose. If his view that QM and relativity theory are incomplete makes him an instrumentalist by default, I can see your point.
I’m not a very good champion for Smolin, I must concede, Mike. So I think before you argue against my proxy contentions too strongly you should probably read some of his writing directly. He clearly has different inclinations than some/many other physicists, but I always enjoy his writing. But I do want to just reply to your paragraph on probabilities, because I can’t tell if you understood what I was saying, or if I even explained it well enough to be understood.
On probabilities, when you write that branches of MWI have higher or lower probability of occurring, and note that “probability is the proportion of outcomes,” then I think Smolin would disagree that this is what the theory states. He talks a lot about two basic rules of conventional QM, which would be the anti-real interpretations. Rule 1 is the wave equation. Rule 2 is the act of making a measurement and all that measurement entails. There is nothing probabilistic about any of this until we bring in Born’s Rule, which equates the square of the amplitude of the wave function to the probability of that outcome occurring in an experiment. Without this, there is no way to relate the wave function to experiments. There may be dissenting views–I’m sure there are–but this is my limited understanding.
Born’s Rule is not something we can prove. It is a notion a brilliant physicist wrote down that holds up empirically. But to my understanding we cannot derive it. But the important thing is that Born’s Rule is entirely bound up with Rule 2, which relates to measurement. So in an anti-real approach, the wave equation coupled to Born’s Rule gives probabilities of outcomes that we relate to the relative frequencies of observations in repeated tests. We don’t test QM by running an experiment once. We have to run it hundreds or thousands of times and then compare the relative frequencies of the differing outcomes to the probability distribution which is given by Born’s Rule.
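To make that concrete, here’s a minimal sketch (my own illustration, with made-up amplitudes): Born’s Rule turns the squared magnitudes of a state’s complex amplitudes into a probability distribution, and, as you say, we check it by comparing those numbers to relative frequencies over many runs.

```python
import random

# Hypothetical qubit state: complex amplitudes for outcomes 0 and 1.
amplitudes = [complex(0.6, 0.0), complex(0.0, 0.8)]

# Born's Rule: the probability of each outcome is the squared
# magnitude of its amplitude. For a normalized state these sum to 1.
probs = [abs(a) ** 2 for a in amplitudes]   # [0.36, 0.64]
assert abs(sum(probs) - 1.0) < 1e-9

# QM is tested statistically: simulate many runs and compare the
# relative frequency of outcome 1 to its Born probability.
random.seed(0)
trials = 100_000
count1 = sum(random.random() < probs[1] for _ in range(trials))
print(probs, count1 / trials)   # the frequency lands near 0.64
```

Of course the sampling step here is purely classical; the whole debate is over what, physically, licenses reading |amplitude|² as a sampling weight in the first place.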
MWI does away with measurement altogether, as being anything special. The thrust is that the wave function is not some mathematical tool to give us probabilities, but a physical reality. That’s right, isn’t it? And in MWI each and every possible outcome occurs. Everywhere the wave function goes, that reality obtains. Probability has now become meaningless in a sense.
I believe Smolin’s position is that ever since Everett’s original thesis, physicists interested in MWI (including Everett, in his thesis) have been hard at work trying to come up with a way to re-introduce probabilities from the wave function alone, without the additional step of measurement. And if I read Smolin correctly, no one has done so successfully, at least in a way that satisfies him. He said a number of groups have shown that if one assumes a certain correlation between the wave function’s amplitude and the likelihood of a particular outcome, one can build up a consistent approach. But it’s circular, because it assumes a correlation at the outset. This was the problem (again, in Smolin’s view, I suppose) with the original thesis approach.
Smolin is as much a philosopher of science as a physicist, I think. So he’s not content with approaches to reintroducing probability that sort of pave over the problem. Decoherence is great, he says, for day-to-day purposes and doing experiments. It’s not the whole story, though, and so can’t be a fundamental description of nature. Likewise, he devoted a chapter to discussing the Oxford Group’s work, with Deutsch at the heart of it I believe, to develop a philosophical approach to solving this, which ultimately argues it would be rational to assume one lived in an Everettian universe. But as I said above, this too is somewhat dependent upon philosophical axioms, and even in the end doesn’t permit one to conclude what is or isn’t so. So as a realist, this cannot be the end.
As to your final questions, yes, Smolin has been at work on new theories, probably for much of his career. I really shouldn’t try and explain what he’s up to, but the last quarter or so of his book is about some of the new ideas he is working with. I’ll just say I think he has been playing with a class of model universes that he and some colleagues have been exploring that derive various topologies of spacetime as emergent fields from underlying sequences of cause and effect. That’s about all I can personally say about it. I think he’s working very hard on this, and clearly is passionate about it. You’re quite welcome to have a beef with him, but I think he’s pretty frank that he’s not working in the mainstream, and fully accepts the burden of developing testable ideas. I do think he has some ideas for experiments, or at least concepts he hopes will be testable. I’m grateful for the various professional scientists who take the time to explain their views on these things for one.
I’m personally not opposed to some version of MWI. I’m not sure what it would be, but the notion of multiple realities doesn’t bother me too much. I do have reservations about there being an infinitude of them with every single, last, miniscule possibility being actualized to the fullest possible degree. I have no basis for thinking this way, but I’m not stoked about the profound number of hellish realities this would require conscious beings to experience. So for me there is likely some mechanism in the universe that makes various branches “fruitful” and other branches nothing but “empty rooms.”
I’m still struggling to see the problem with probabilities. I don’t see how the MWI renders the Born rule problematic. It just means something different under it than in collapse interpretations. But even Sean Carroll spends time on probabilities, so I may well be missing something.
I know Smolin is out of the mainstream on physics. I actually have a book he cowrote: The Singular Universe and the Reality of Time. I didn’t get very far into it. As I recall, he and his co-author argued that there is an absolute preferred frame of reference, contra general and special relativity. I don’t recall the other tenets, but I remember finding that one odd. It was several years ago. I think I stopped reading because I generally dislike reading people’s radical new theories. I prefer other scientists to find merit in them before I invest time.
The straight MWI does mean there would be hellish realities, as well as a lot of barren ones. There would also be ones where the laws of physics seem different but aren’t, such as one where entropy never increases–because that’s possible (albeit unimaginably improbable), and everything possible happens in the MWI.
If MWI turns out to be reality, it would be the Copernican principle taken to a previously inconceivable extent. Myself, I think if it is true, it would be true in the same way Copernicanism was true: it’s less wrong than many of the alternatives, but there remain many concepts we’re still missing, similar to how 16th century Copernicans remained hobbled by not understanding gravity or the full scale of the universe.
Regarding the probabilities, I think if you do some googling on Born’s Rule and MWI, and/or Born’s Rule and the Copenhagen Interpretation, you’ll see there is a reasonably healthy discussion ongoing about this. I guess it depends on how you like your physics, but the original application of Born’s Rule contains within it the inherent supposition that the wave function gives probability amplitudes for what will be seen when a measurement is made. It relates to the postulate surrounding measurements, which is that only one thing happens. When you take out measurement altogether, you take out the primary way in which the Born Rule gets into QM in the first place, and when you say the wave function is reality in MWI, you’ve lost the ability to say it’s a probability density that tells you what you might see.
To then add on a statement suggesting we can take Born’s Rule and state that it now means the probability of which branch we’re on following a measurement in MWI, is even more ad hoc it would seem. It could be perfectly correct. It just seems that some/many physicists are interested in understanding what allows that assumption to be added to the theory. I don’t think it’s just Smolin.
I have no skin in this game per se, but I can understand why there would be a reasonable debate on the topic. Carroll himself says this is critical to proponents of MWI on his blog, here, and goes into a fairly lengthy discussion of how he and a colleague have tried to address what clearly needs addressing.
Do all physicists agree with his proposal here? I don’t know… Two important elements in his derivation seem to be a) that things far away shouldn’t have any bearing on a quantum experiment. So he has to make an assumption, which to a layman like me makes sense pragmatically, that things far away can be removed from the universe in which this experiment happens. And b) he concludes by noting that this is about our belief in where we are and not really about the probability of being there. Meaning, his argument is a mix of physical arguments, arguments based on ignorance, and arguments based on what is rational to assume.
It’s just that not all physicists would rely on this toolkit, right? I think Carroll is brilliant and I like reading him as well. I guess I’m just trying to say there’s clearly a reasonable debate on the importance of this topic and its relevance to MWI. It’s a pretty convoluted topic. Carroll’s theoretical resolution to the application of Born’s Rule to MWI appeared just in the last 2-3 years, however, so I don’t know whether most physicists agree or not.
PS, the Twitter link to Carroll’s article contains this quote from Carroll: “Deriving the Born Rule (probability equals wave function squared) is one of the most important puzzles for anyone who believes in many-worlds.”
Thanks Michael. I am familiar with the controversy and have read a good deal about it. Despite that, I still struggle to see an issue.
I remember that Carroll post, although it’s been a while. He also discusses this in his book. He admits that there are a variety of thoughts about this. But it’s not clear that there are any ontological differences in these approaches. It comes down to one’s philosophy of probabilities. In other words, this is a philosophical problem, that is, one of how we define probability. Once we understand that, then we realize that the variety of approaches isn’t anything problematic.
Much of the hand wringing happens under a philosophical position toward probability called frequentism, that probabilities are an objective representation of how frequently something is going to happen. Under MWI, this becomes problematic because it all happens, so it becomes meaningless to discuss probabilities in that manner. And yet, they clearly remain useful in predicting outcomes, so what gives?
But if we take an epistemic philosophy toward probabilities, the issue disappears. The wording here is difficult. Above I used “relative to an observer”, but that’s not strictly right under MWI since the same observer will evolve into all the observers who will find all the outcomes true with 100% probability.
The way Carroll argues we need to think about it is as the observer immediately after decoherence but before the result is known. At that point, the observer doesn’t yet know which branch they are on, and the probabilities are probabilities from an epistemic view.
Do all physicists agree with this? I’d say no. And many, such as Smolin and Jim Baggott, continue to see the probability issue as a serious one for MWI. But to me, it seems to amount to, “We don’t know how to account for this, therefore it must be false,” type thinking.
Maybe I’ve never seen this as an issue because I’ve never thought of probability as an objective thing. To me, it’s always been more about what we know than some absolute measure. It might come from my college math teacher using gambling analogies to teach us about it.
I think this has probably run its course. I can’t even remember how we got here. Haha. I guess for me to close on this mini-discussion, it just boils down to the fact that I enjoy reading both (and I’m sure there are more than two) sides to these types of things. It sounds like you’ve reconciled in your mind the objections to MWI, at least as they relate to Born’s Law and probabilities, and I can appreciate that. I also can appreciate why some people are not yet satisfied.
When you note that a reluctance to buy-in amounts to “We don’t know how to account for this, therefore it must be false,” I think perhaps you are over-simplifying a complex topic. What’s kind of ironic–to me at least–is that this view borders on the “shut up and calculate” approach, but the revival in popularity of MWI has at its core a desire to push past that mindset and consider that there might truly be an underlying reality we can understand in a deeper way. If we then resort to the “shut up and calculate” mindset, what does MWI even gain us in your opinion?
A final note on probabilities that I would like to see if you agree with or not follows here. This truly isn’t an attempt to repeat our dialogue above, but an attempt to understand your take on probabilities generally. It seems to me that in QM not all probabilities are about limited knowledge. If mixed states are in a superposition until one set of correlated states or the other emerges, and Bell inequality tests show there’s nothing more to be known–i.e. the specific outcome of each individual test truly is random, in the sense that there is nothing more that could have been known prior to measurement to enhance prediction (as opposed to a coin toss, for which we could presumably find out everything there is to know about its mass distribution, shape, initial position and speed, rotational velocity, wind direction, ground topology, etc., and thus transform ignorance into prediction)–then this type of probability is different from the sort that is due to ignorance. This is in essence absolute randomness, no? Am I mistaken?
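For what it’s worth, the “nothing more to be known” point can be made quantitative. Below is a sketch (my own, using the standard textbook angles) of the CHSH form of Bell’s inequality: any local hidden-variable account, that is, any coin-toss-style ignorance story, must keep the combination S within ±2, while the quantum prediction for the singlet state, E(a, b) = −cos(a − b), reaches 2√2.

```python
import math

def E(a, b):
    # Quantum prediction for the correlation of spin measurements
    # on a singlet pair at detector angles a and b.
    return -math.cos(a - b)

# Standard angle choices that maximize the CHSH combination.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# Local hidden-variable theories require |S| <= 2 (the CHSH bound).
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2 * sqrt(2), about 2.83: the classical bound is violated
```

The gap between 2 and 2√2 is exactly the amount by which the observed correlations outrun what any “more could have been known” story allows.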
So it seems that Born’s Rule is being applied in a fundamentally different way in the Copenhagen Interpretation as compared to Carroll’s application of it to MWI. The former does not relate to our ignorance, while the latter does. It would seem to me those are two very different situations in terms of the underlying physics. If one takes the view that a probability distribution is due to ignorance, that inherently suggests there is more to the picture that could be understood.
At the end of the day, it strikes me that it’s a good thing to have a diversity of viewpoints and approaches. I doubt any point of view on this subject at the present time is actually “correct” in any absolute sense, anyway.
For me, it isn’t a “shut up and calculate” stance, so much as: I see ways to reconcile this without much heavy lifting, ways that seem to accord with what Carroll and others talk about, so I’m unable to see the obstacle others see. I’ve said before that I’m actually an instrumentalist, but I don’t like instrumentalism being used as a crutch by incomplete theories. One of the benefits I see in the MWI is that it’s potentially much more complete than the alternatives.
The meaning of probabilities and the Born rule is definitely different under the MWI than under Copenhagen. I’ll note that Carroll re-derives the Born rule under MWI in his book, probably in a way not that different than that blog post, although the book’s approach feels less technical. Unfortunately, it’s not something that’s easy to quickly summarize. One point I found interesting is that the Born rule is actually the Pythagorean theorem in action.
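On the Pythagorean point, here’s a tiny sketch of what I take Carroll to mean (my illustration, not his derivation): a quantum state is a unit-length vector, and the Pythagorean theorem in n dimensions says the squared lengths of its components along any orthonormal basis sum to 1, which is exactly the property a probability distribution needs.

```python
import math

# Hypothetical unnormalized amplitudes for a three-outcome system.
c = [complex(1, 1), complex(2, 0), complex(0, 3)]

# Normalize to a unit vector, as quantum states are.
norm = math.sqrt(sum(abs(x) ** 2 for x in c))
c = [x / norm for x in c]

# Pythagoras in Hilbert space: the squared component lengths sum to
# the squared length of the whole vector, i.e. to 1.
weights = [abs(x) ** 2 for x in c]
print(weights)   # [2/15, 4/15, 9/15], which sums to 1
assert abs(sum(weights) - 1.0) < 1e-9
```

None of this by itself says the squared lengths *are* probabilities; it just shows why they are the natural candidates.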
On Bell’s inequality theorem, it actually doesn’t apply to the MWI. That’s one of MWI’s benefits: resolving that particular paradox. Bell assumes definite results in one universe, which is not the case in MWI. In MWI, the correlations are established at the point of interaction, as Einstein thought they must be (although Einstein was probably not thinking all the possible combinations were established at that point). As I understand it, within each branch of the resulting wave function for the entangled particles, Bell’s inequality holds, but not across all the branches. Lev Vaidman, a proponent of MWI, actually asserts that this alone should be seen as proving the theory. (Obviously Vaidman, unlike Smolin, is unwilling to dispense with locality.)
So the MWI is fully deterministic and local. If true, it means there are no absolute probabilities involved. Just probabilities relative to a particular vantage point. It is related to ignorance, but only to the ignorance of that vantage point, not to some unknown aspect of the physics. (There are unknown aspects, but my understanding is this isn’t one of them.)
I agree a diversity of viewpoints and approaches is good. If nothing else, it gives experimentalists something to aim for.
Thanks, Mike. I was not familiar with Vaidman’s work, so that was an interesting find. For what it’s worth, in his own words he notes that the resolution to Bell’s Inequality that MWI allows is a different sort of nonlocality to the sort found in a single world interpretation. In his 2015 paper that is available online he says, “Although there is no action at a distance in the MWI, it still has nonlocality. The core of the nonlocality of the MWI is entanglement which is manifested in the connection between local Everett worlds of the observers.”
I do take your point on this being an interesting resolution to the conundrums of a Bell test in a single world, though. I think this means that no observer can perceive the universe as deterministic and local, even if this is actually the case in the ultimate sense. That’s pretty wild…
I’d forgotten about Vaidman’s statement in that paper. I think it comes about due to the way he thinks of a world within MWI. He wrote the SEP article on MWI, where he explains that the concept of a world is not a rigorously defined mathematical entity in the theory, just something we add on to map the theory to human experience. It’s not entirely clear, but I think he thinks of the entire world coming into being on a measurement (even though that’s not what the math says). If so, when he says there is no action at a distance but there is nonlocality, I think he means this version of a world. But again, it’s not in the mathematics.
I’m leery of Frank Tipler’s work, but this seems grounded. He provides a more completely local account. The crucial point is that any comparison between the results of measuring entangled particles under the MWI is itself a third measurement.
It is true that under the MWI, its locality and determinism can’t be cashed out, at least not with current technology.
Thanks, Mike. I much enjoyed this exchange and learning some new things.
As to Tipler’s paper that you linked, it appears to be a case in which he assumed a correlation, and then derived Born’s Rule. In the abstract he says, “Assuming the wave function is a world density amplitude, I derive the Born interpretation directly from Schrödinger’s equation.” I’ll leave it to the professionals to debate the relative merits of such an approach! It sounds like an example of what Smolin wrote about in his book, and doesn’t favor, which is assuming the outcome one derives.
That said, I don’t see why in MWI one couldn’t make an assumption like that. It’s different than deriving an equation from more fundamental or testable assumptions, but it’s certainly true at the same time that it’s a reasonable approach to matching the mathematical theory to the data we have. It’s interesting that I think various papers have been written that basically show if there’s this type of correlation, then Born’s Rule is really the only relationship that makes sense.
Thanks Michael. There are other papers on MWI locality cited by the SEP. One is by Deutsch which I haven’t attempted to parse since it appears very technical.
I enjoyed the conversation as well! I’m fascinated by the MWI. I don’t know whether it’s reality, but it appears to be a theory that generates a lot of emotion. I’m grateful to you for keeping an open mind!
If MWI is viewed as the world not really splitting but alternative worlds still existing in superposition, then how is this different from a timeless block universe concept?
On the other hand, why is there a preferred direction to the splits in our world? Why don’t we see as many omelets making eggs as eggs making omelets?
I think all versions of the MWI are compatible with the block universe. (For that matter, it seems like any deterministic interpretation is.) Although if spacetime ends up being quantum, it seems like the shape of the block would be affected.
I’m not sure on the preferred direction. I’ve read that processes under MWI should be reversible, but entropy may make it very unlikely. But I guess in principle, it’s possible for two different branches to merge and interact. Which means it happens, but it seems like one of those profoundly improbable things.
Well, I’m willing to be your token Copenhagen defender. There don’t seem to be too many of us here.
I suspect that what happened historically was that people started taking the Schrodinger equation too literally in the following way: it has the mathematical form of other equations that describe waves, especially so-called standing waves. So they assumed that the Schrodinger equation must be describing a wave too, and so, wow! now we can ascribe the other characteristics of waves to quantum mechanics, characteristics such as how waves can cancel and superimpose and travel. But that is like saying that because the formula relating a circle’s diameter and circumference, C = πd, has the same mathematical form as F = ma (both are of the form a = bc), the properties of a force must apply to a circle. Being mathematically shaped like F = ma does not make C = πd be about forces, and the Schrodinger equation having the same shape as an equation describing standing waves does not make it be about things moving like a wave.
But by literalizing the Schrodinger equation to be a wave, now MWI adds in all kinds of qualities to the equation that are not in the equation by itself. So first of all, the Copenhagen interpretation is about taking for real just the equation itself. Secondly, that means that the MWI is not the more parsimonious view (not that I believe in scientific parsimony) because the MWI is adding all these extra overtones. And thirdly, when we say that the Schrodinger equation is confirmed, it is just the equation itself that is confirmed, not this extra stuff. And that’s not even to get into the legitimacy of confusing standing waves with traveling waves.
So why would anyone add all these overtones to the Schrodinger equation? It is because, as you said, certain people started out wanting to make it look deterministic when it is not. Once they say it is a traveling wave, then they can add in causation and the like as the wave is said to travel.
But is there a way to see the equation in itself? Yes, famously Born suggested that the equation can be understood to be about describing probabilities. Yet that is what apparently offends those people wanting a deterministic interpretation of quantum mechanics, since probabilities are not deterministic. But let me suggest how Heisenberg describes it in his book on philosophy. He literally uses the word “potential.” The Schrodinger equation describes a potential of how one of many outcomes can become actualized upon a measurement (although we cannot know ahead of time which outcome will happen). That is like how a person has the potential to react with anger or forgiveness upon being punched in the face. It cannot be predicted deterministically, but we can describe the potential outcomes.
And I submit that potentials are very real things in the Universe and that that does satisfy a desire for a description of what is solidly real. A rock on a hill has the potential to roll, and that exists (as in thump the table) really. So it is inaccurate to call the Copenhagen interpretations anti-real. (There are other ways of taking the Copenhagen approach besides as per Heisenberg, but that is enough to address this point).
And I think that the MWI theorists are just screwing with people’s minds when they suggest that the other sciences somehow meet this criterion of providing deterministic a priori theories that are able to causally explain a posteriori observations. For instance, Newton himself stated that using his invention of forces to describe planetary action worked very well but that taking it literally was preposterous. And thermodynamics posits entities that are described only by their initial and final values, skipping over the intervening values where any cause-and-effect might be located.
So it is not only the Copenhagen interpretation that does not meet this desire for determinism but arguably that is the nature of much of science itself.
But the strongest argument for the Copenhagen interpretation is how chemistry, electronics, and biology are premised on needing randomness to be fundamental, not determinism. For instance, when the Schrodinger equation is used to describe a chemical bond, it is by showing how the random activity can overall have a shape because of the presence of the nuclei and other factors, and then predictions can be based on that. It has nothing to do with infinite waves as in MWI.
And it is in this kind of application the Schrodinger equation is confirmed over and over.
I actually don’t have that much of an issue with Copenhagen, at least as an instrumental theory. But I think Einstein was right, it’s not complete. (None of the theories typically labeled as “antireal” are.) In 1928, it was what they could come up with, and it allowed physics to move forward. It’s basically similar to what Newton had to do when he couldn’t describe what gravity actually was.
My issue with it isn’t the randomness. I mean, it would be an issue, but it’s not the biggest one. The issue for me is the idea that particles have no set properties until they’re measured, with the idea that “measurement” can only be spoken of using everyday language. Again, that’s fine for an instrumental theory. But as a purported description of reality, I find it incoherent.
I wonder if you could go into more detail on what you see MWI adding to the Schrodinger equation. As I understand it, that equation is the mathematics of MWI. Granted, all the talk of “worlds”, “universes”, and “multiverse” are flourishes added to those mathematics. But we could completely eschew all that and still describe the theory purely in terms of wave mechanics. It’s reportedly what Everett did in his original 1957 paper. Bryce DeWitt was the one who added the “many worlds” terminology to explain it.
The problem is, to avoid the multiple worlds, you have to add postulates, like the wave function collapse, or a separate particle and wave like in the pilot wave interpretation. And those postulates all bring in their own problems, such as non-locality, randomness, etc. That doesn’t mean MWI is right. There might well be some unknown factor that prevents the other scenarios from playing out, but we don’t have any evidence for it yet.
Even just regarding the wave itself as epistemic has issues. The waves interfere and interact with each other. It’s how particles become entangled. And the wave interferes with itself. That seems to indicate it’s real in some manner. There’s a joke in physics: if it walks like a duck, quacks like a duck, then it’s a photon. If the wave isn’t a wave, it sure behaves like one.
So I’m good with Copenhagen as a pragmatic recipe for dealing with the physics, but the idea that it’s an actual description of reality, I think, needs a lot more justification.
I’m saying that calling the equation a wave is what is being added by MWI. Copenhagen says it’s not a wave; the equation describes probabilities and has nothing to do with waves. So what is being added in the MWI is all this stuff about waves superimposing and canceling and traveling and of course changing from a wave to a particle. That’s all an invention of MWI. Instead, the equation describes the possible outcomes, and the measurement has the effect of making one of the outcomes become actual. This actualization of a potential need not have anything to do with waves.
So why invent that the Schrodinger equation describes a wave? It is because saying that it is describing a wave enables MWI to say the equation is about causation and determinism as events happen as per a moving wave (moving from one place on the wave to the next). The goal since Einstein is to find a way to restore determinism. But Copenhagen denies that there is a wave at all. Saying the equation “is” a wave because it has the mathematical form of some other waves (standing waves) is not enough to make it really be a wave.
And as for being complete, I think we have to look beyond physics to the other sciences and see how they have all incorporated the Schrodinger equation into their applications, but it is in a way that is about probabilities, not about waves.
To me, that is how science works toward completeness. If I add oxygen and hydrogen and get water, I’m not looking to say, “Hey, here is the final answer, really.” I’m looking to say, “Hey, look what I can do.” And that is where the Copenhagen interpretation shines over and over. My example was how describing a chemical bond depends on the Schrodinger equation by it telling us the probabilities of where the electron will be and how we can use that to create the shapes of orbitals, and that can be turned into predictions of outcomes of chemical reactions. Being able to do things like that throughout chemistry, electronics, and biology “demonstrates” how the Schrodinger equation is “fundamental” as measured by what we can do with it.
So the Copenhagen interpretation is for people who judge success by what can be demonstrated.
Then to sum up, the answer to what MWI is adding extraneously to the Schrodinger equation is all this stuff about it being a wave. Feynman, for instance, insisted in his book QED that the point of quantum mechanics is that quantum entities are particles, not waves. It is true that the double slit experiment can produce evidence that might be attributed to a wave, but Feynman (and Hawking agrees) has other explanations for that.
Good talking with you. It’s amazing what you do with your site.
Hey Deal, I’m a layman like a lot of others here, so I only have a superficial view of the various theories. I’m having trouble keeping track of which are realist and which are anti-realist. My current view is that the wave equation is epistemic, i.e., tells us what we can know or predict, as opposed to ontological, telling us what *is*. My understanding had been that both many worlds and Copenhagen were realist/ontological. Many worlds says the new worlds are real. And Copenhagen says there really is something that collapses, i.e., something that was spread out, but then collapses down to one place where it interacts.
But based on what you wrote here, it seems you’re saying that Copenhagen may be epistemic, saying that the wave equation describes potentials, things that could happen. So is there a real collapse in your view, or is that just a way of describing that there was a real interaction at a certain location, but nothing collapsed?
Hey, we’re both on Pacific time. I’m in Arizona. So here’s a late-night answer.
I tend to be cautious about labels like epistemic and realist because there are so many definitions, especially of the latter, that you can end up giving impressions that you do not intend. But given the definitions you are using, I would definitely say that Copenhagen is epistemic. No one is saying it is the ultimate answer, just that it is incredibly fruitful when incorporated into other applications. I can’t really say what MWI is claiming for itself. I would say that it is literalizing the math into saying the math is describing a wave when I don’t think the math is saying that.
As for there being an actual collapse, well, there is a physical event (the measurement) but it’s not a collapse of a wave (since there isn’t any wave). It’s like the Copenhagen people are throwing a die, and they have made some math to describe the possible outcomes. And then the measurement is like throwing the die to have one of the outcomes no longer be just a potential but be actualized. The act of measurement forces one of the possibilities to come about (although as with throwing the die, we cannot predict which result it will be). Instead of calling it a “collapse,” I wish they would say there is an “actualization” of one of the possibilities so that it is no longer a matter of just potentials.
From the Copenhagen view, it’s as if someone came along and said, “Hey, your die is really a wave. And since it’s a wave, that means your die has all these other properties of waves like superposition and traveling and even stopping being a wave.” And the Copenhagen view is, “What do you mean? That makes no sense at all. The equation is just a way to keep track of what is possible to happen. It has nothing to do with being a wave.”
I emphasize that seeing the Schrodinger equation as about potential is the Heisenberg manner of being Copenhagen, and there are other versions. But it is certainly the way I have always thought of it in chemistry.
It turns out that the real utility of the Schrodinger equation often lies not in any single result but in the picture that emerges from lots of results. It’s like using a pencil to stab a piece of paper over and over but finding a pattern in the dots on the paper, and the pattern is there because your arm can only move in certain ways, and so you describe the constraints on your arm. That is how the other sciences use the equation, and I would say that the results are real even though they come from an equation that is about potential. I hope that that makes sense. It’s hard to describe it in the abstract.
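The pencil-stab picture is easy to make concrete. Here’s a small sketch in plain Python; the five-position “orbital” amplitudes are made up purely for illustration, and the point is only that individual random outcomes accumulate into the pattern the equation describes.

```python
import math, random, collections

random.seed(1)

# Amplitudes over five positions (a made-up toy "orbital"); Born's rule
# turns them into probabilities.
amps = [0.1, 0.4, 0.7, 0.4, 0.1]
norm = math.sqrt(sum(a * a for a in amps))
probs = [(a / norm) ** 2 for a in amps]

def stab():
    """One 'stab' = one measurement outcome sampled from the probabilities.
    No single stab reveals the underlying shape."""
    r, acc = random.random(), 0.0
    for pos, p in enumerate(probs):
        acc += p
        if r < acc:
            return pos
    return len(probs) - 1

# Many stabs: the histogram of dots traces out the underlying pattern.
counts = collections.Counter(stab() for _ in range(10000))
for pos in sorted(counts):
    print(pos, counts[pos])
```

The histogram peaks where the squared amplitudes are largest, which is the sense in which the pattern, not any individual dot, is what the equation constrains.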
But suffice it to say that turning all of that into being about waves is to miss how the Schrodinger equation is actually used.
All of that said, however, just to complicate things, I guess I have to add that I personally would argue that what is fundamentally real about the Universe is that it is made of energy, and energy starts out by making the things it is in move randomly. I think that science is accommodating itself to that fundamental randomness quite well, but I don’t know if the word “real” should be applied to it or not. I think the Schrodinger equation being probabilistic is consistent with that view of fundamental randomness, but the equation itself is not a claim to that effect.
So bottom line, the Schrodinger equation is epistemic by your definition.
Or put another way, my personal view is that all of science is not about realism but about finding what can be demonstrated. So I don’t like to use the word “real” at all. But my personal view on that should not be confused with Copenhagen interpretations, which are a different issue.
There may be some value here in reviewing the history. Whether light is a wave or particle was heavily debated in the 17th and 18th centuries. It was apparently resolved in the early 19th century by Thomas Young performing an early version of the double slit experiment. When light was shone through the slits in a front screen, it formed an interference pattern on the back screen, the same kind of interference pattern seen in wave dynamics in other mediums (such as fluids). This seemed to prove that light was a wave.
Einstein would later point out that light’s particle properties were necessary to explain certain phenomena. And Louis de Broglie would point out that all matter had wave like properties. Schrodinger would work out his equation in the 1920s. It was soon shown to be equivalent to the matrix mechanics Heisenberg had previously developed.
So, the idea of waves seems to long predate the MWI (1950s), and even the Schrodinger equation. Schrodinger himself always saw it as modeling something physical. He actually seemed to presage MWI in a talk he gave in the early 50s.
The equation seems to be modeling something. It appears to have causal effects on the path of the particle. If we want to say it’s not a wave, it can be called something else, but whatever we call it, it’s going to have wave like properties.
I’m curious what other explanations Feynman and Hawking might have been referring to. Feynman diagrams seem oriented toward a wave ontology.
Thanks Deal! Love having these types of conversations.
[just wanted to point out your insistent particle intuition. “ The equation seems to be modeling something. It appears to have causal effects on the path of the particle. ”
“Path of the particle“ assumes a particle, with a path. But if there is no particle, there is no path. Maybe there’s just a set of interactions that may or may not happen, but if one happens, the others don’t happen. It’s just that the math makes the pattern of possible interactions look like a wave front.]
I used “particle” because that seems to be ontology Deal favors. But the point is that the dynamics, if they’re not waves, produce wave like effects. So there’s something wave-like in the process.
“But if there is no particle, there is no path. Maybe there’s just a set of interactions that may or may not happen, but if one happens, the others don’t happen. It’s just that the math makes the pattern of possible interactions look like a wave front.”
So, I think we can take that tack if we’re explicitly doing so instrumentally, and we admit we’re not reaching for a true explanation. But if that is meant to be an actual explanation, I can’t see how it’s coherent. If it’s all just interactions, then what connects the interactions together?
I agree about the history of particles versus waves. But I’m not sure if the Schrodinger equation in itself is about waves versus particles. When Einstein complained about quantum mechanics by saying that God does not throw dice, he was referring to how the equation says that we cannot even fully know where a quantum entity is located except to say that it might be here or there with a known probability. If it were either a particle in the Newtonian sense or a wave we could know where it is. Yet using the probabilities proves very practical, and these applications are what makes the equation fruitful. So the Copenhagen interpretation is that you cannot take the probabilities out of the equation by saying that the equation, instead of describing probabilities, describes a wave.
Feynman says that quantum mechanics is about particles but not in the Newtonian sense of what a particle is. He goes beyond talking about the double slit experiment and considers how light bouncing off a mirror acts differently depending on how big the surrounding mirror is. The reflected light can be either fuzzy (random) or clear (in a straight line). He describes it in terms of amplitudes, complex numbers whose squared magnitudes give the probabilities. The amplitudes are what carry over into the Feynman diagrams.
What is extra weird—but it is what Hawking agrees with in his book with Leonard Mlodinow The Grand Design (2010)—is that Feynman describes a quantum particle as traveling by every possible route all at once (which is neither a wave nor a classical particle).
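For what it’s worth, the amplitude rule being described here can be sketched in a few lines of Python. The two-route numbers are made up for illustration, not taken from Feynman’s book; the point is just the difference between adding amplitudes (indistinguishable alternatives) and adding probabilities (distinguishable ones).

```python
import cmath, math

def quantum_prob(amps):
    """Indistinguishable alternatives: add the complex amplitudes
    first, then square the magnitude of the sum."""
    return round(abs(sum(amps)) ** 2, 3)

def classical_prob(amps):
    """Distinguishable alternatives: square each amplitude's
    magnitude first, then add the probabilities."""
    return round(sum(abs(a) ** 2 for a in amps), 3)

half = 0.5 + 0j
# Two routes, either in phase or out of phase by pi.
in_phase = [half, half]
out_phase = [half, half * cmath.exp(1j * math.pi)]

print(classical_prob(in_phase))   # 0.5 -- "one route or the other"
print(quantum_prob(in_phase))     # 1.0 -- constructive interference
print(quantum_prob(out_phase))    # 0.0 -- destructive interference
```

Amplitude addition is what produces interference fringes; probability addition never can, which is why the order of the squaring matters.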
All I am really saying is that the Copenhagen view is that one can’t take the probabilities out of the Schrodinger equation (by saying it is really about a wave) and still have the equation be used as it is used successfully in science. Science has grown in the last century in a way that depends on there being these probabilities at a fundamental level.
I agree with you that that can never be a complete answer. Anything that calls itself a “model” does not even claim to be a complete answer.
I think you might be conflating the mathematics of QM (the Schrodinger equation) with the Copenhagen Interpretation. The mathematics are the same for MWI. Everything you can calculate under Copenhagen, you can calculate under MWI. Even under the MWI, we have to deal with probabilities. MWI’s determinism can’t be cashed out, although the interpretation explains why it can’t. It’s just that the probabilities aren’t fundamental under MWI.
It’s also not accurate to say all of science uses Copenhagen. Cosmologists generally prefer MWI. As do quantum computational theorists. In both cases, it’s just easier for them to reconcile and think about what’s happening under the MWI view than the Copenhagen one.
What most of the rest use, to the extent they do, are the mathematics. They don’t need to think about the foundations of what’s happening, so most of them don’t. If you ask them about it, they’ll probably describe some variant of Copenhagen, because that’s the default everyone is taught.
Certainly every scientific theory is provisional, always subject to change on new evidence. But if we stay too instrumental, too cautious, we never move on to the next model. Even Jim Baggott, who detests the MWI, admits that realist interpretations are much better than antireal ones at driving experimental research.
I’m actually not committed to the MWI. I usually end up arguing for it primarily because so many people tend to dismiss it out of hand. I think it holds promise, but it’s not without its problems, such as the energy one I discuss in the post, but all of the interpretations that aim for a real account have problems too.
I’ve been avoiding weighing in here because Mike and I have gone several rounds on this, and I wasn’t sure I could add anything new. I did want to say I very much agree with what Deal has written above.
One thing Mike and I agree on is that there seems to be an energy issue. I think it exists on multiple levels. It took a highly energetic event — the Big Bang — to create the “original” universe; one that took 13.8 BY to evolve to the current state. Under MWI, measuring a photon instantly creates duplicates of this evolved universe. Secondly, how does E=mc^2 continue to work when universes split? Thirdly, we can account for the energy of a measured particle affecting a measuring device and putting that device into a state. Under MWI that single particle puts the device into two states. What is the energy accounting there? So all in all I think the energy issue is a big one.
About “particles” — there’s no such thing. Ever. What there is are point-like interactions we model as “particles” just because they happen on such a small scale. But physical reality is wave-like all the time. “Particles” are wave packets — vibrations in the relevant quantum field.
About Heisenberg. The uncertainty relationship isn’t just about the quantum world. Consider a musical note. If that note exists for only the briefest possible moment in time, it’s essentially an instantaneous spike. As such, there are infinitely many frequencies it could be. We know exactly where the note is in time, but we know nothing about its frequency. OTOH, if the note is infinitely long, we know exactly what its frequency is, but can say nothing about when it occurred. Time and frequency are mutually exclusive in this sense: the system simply cannot have both with precision. This is an ontological property of any wave system.
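The musical-note tradeoff can be demonstrated numerically. Below is a sketch in plain Python (the record length, the 8-sample cutoff, and the 10% threshold are arbitrary illustration choices): a tone filling the whole record concentrates its spectrum in just two frequency bins, while the same tone cut short smears across many.

```python
import cmath, math

def spectrum(signal):
    """Magnitudes of the discrete Fourier transform (naive O(N^2) DFT)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def spectral_spread(signal):
    """Count frequency bins holding at least 10% of the peak magnitude."""
    mags = spectrum(signal)
    peak = max(mags)
    return sum(1 for m in mags if m >= 0.1 * peak)

n = 256
freq = 32  # cycles per record
tone = [math.sin(2 * math.pi * freq * t / n) for t in range(n)]

# Long note: the tone fills the whole record -> energy in very few bins.
long_note = tone

# Short note: the same tone, nonzero for only 8 samples -> energy smeared
# across many bins. Knowing *when* costs knowing *what pitch*.
short_note = [x if t < 8 else 0.0 for t, x in enumerate(tone)]

print(spectral_spread(long_note), spectral_spread(short_note))
# The short note occupies far more bins than the long one.
```

This is just the classical Fourier tradeoff; position and momentum in quantum mechanics obey the same mathematics because the wave function relates them through the same transform.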
WRT Schrödinger equation and MWI, I think it might be good to separate the two. The former is just a linear differential equation describing how a given wave system evolves; applying a given operator to the state yields the probabilities of the measurement outcomes associated with that operator.
Schrödinger equation aside, what MWI is saying is that a system in superposition, when “measured,” extends its superposition to the measuring device causing it to be in superposition. That superposition extends outward (instantly or at c, take your pick) to include the whole universe. Decoherence causes the split to separate.
I think a key question about MWI is whether a superposition of a macro object makes sense. Under the thinking that “everything is quantum” (including Wigner’s Friend) there is a mathematical sense to it, but I’m beginning to question the physicality of it.
I’m thinking one of the true mysteries of quantum is superposition (the other is non-locality). I suspect solving the measurement problem (which MWI entirely dodges) comes from fully understanding superposition. The thing about the two-slit experiment is that the “particle” doesn’t go through “both” slits. It also doesn’t go through one or the other or neither. None of the classical options are correct. What happens, as long as you don’t look, is there is a superposition of going through one and going through the other. Something strictly quantum and very, very weird.
We do agree that energy is an issue. For me, the issue is whether energy can be infinitely divisible. Perhaps in a pure wave ontology it can be. I’m also open to the possibility that my concern could be in the same category as those who wondered why the stars didn’t show parallax under Copernicus’ model.
I don’t think the energy of the particle interacting with the measuring device is a problem though. Remember, under MWI, it’s not just the one particle causing the device to go into superposition. Every particle within the device is also causing it to go into superposition, bazillions of times every second. The device is architected to magnify the effects of the particle, so the particle ends up having an outsize effect on the measuring device, which is true even in a single universe scenario, but it doesn’t have to supply all the energy for that effect.
I don’t have any credence in the versions of MWI where an entire universe instantly comes into being on each interaction. That’s not my understanding of what the math says. What I do understand it to say is that each branch of the original wave becomes entangled with every particle it interacts with, and that entanglement spreads outward from the point of the interaction at the speed of quantum interactions, which can be no faster than light, creating separate branches that cascade out into the universe, each an emergent classical world.
When I use the word “particle”, I’m really just using it as a placeholder term. I also use “quantum system” or “object”. But under straight Everettian physics, there are only waves. I still talk about “particles” because that’s the common placeholder term, but don’t intend any ontological assertion with it. (There are those who insist there is no wave, only particles. But that seems hard to reconcile with interference patterns, not to mention the success of QFT.)
On what MWI says happens in a measurement, I think this is a case where, like the island of blue eyed and brown eyed people, it pays to start with only a couple of particles (waves). When these two systems interact, they become entangled, so that they’re now modeled with a single wave function, including all the superpositions of the original particles and their possible combinations. If we add in a third particle/wave, it also becomes entangled. As we increase the number of particles involved, the entanglement spreads, until we end up with a situation where branches of the initial wave are now entangled with large numbers of particles.
The MWI postulate is that this dynamic never stops, that there is no wave function collapse which interrupts it. (Technically this is removing a postulate held by Copenhagen.) If so, then the only thing that makes all but one outcome disappear is decoherence, that interference from the other branches becomes effectively nonexistent, making them undetectable. Each branch effectively becomes causally isolated, its own emergent classical world.
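The entanglement step here can be sketched with a toy two-system example. The state labels and the CNOT-style coupling below are illustrative assumptions, not anything from the thread; the point is just that a unitary interaction, with no collapse applied, leaves the pair in two correlated branches.

```python
import math

# Single "particle" in an equal superposition of its two outcomes.
particle = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # amplitudes for |0>, |1>

# A second system (the "pointer") starting in its ready state |0>.
pointer = [1.0, 0.0]

# Joint state as a tensor product: amplitudes for |00>, |01>, |10>, |11>.
joint = [p * q for p in particle for q in pointer]

def interact(state):
    """A CNOT-style interaction: the pointer flips iff the particle is |1>.
    This is the 'measurement' coupling; no collapse is applied."""
    s = state[:]
    s[2], s[3] = s[3], s[2]  # swap |10> <-> |11>
    return s

entangled = interact(joint)
# Result: (|00> + |11>)/sqrt(2). Neither system has a definite state on
# its own anymore; the correlation lives in the joint wave function.
probs = [round(abs(a) ** 2, 3) for a in entangled]
print(probs)  # [0.5, 0.0, 0.0, 0.5]
```

Adding a third system and repeating the coupling spreads the entanglement further, which is the cascade described above.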
On whether macroscopic objects can be in a superposition, that’s the question. Copenhagen, along with many objective collapse theories, say no. If they can’t be, then we should expect experiments attempting to hold ever larger objects in superposition to eventually hit some limit.
I agree that superpositions (notably the disappearance of them) and locality are big issues with quantum physics. The benefit of the MWI is it seems to solve both. In it, superpositions never collapse, just decohere from each other. And the MWI only has local interactions. Entangled particle correlations are set at the initial interaction. My understanding is that the Bell inequality holds for any one branch, but not across all the branches.
Of course, the cost is a constantly exploding reality. (Or in Deutsch’s case, a preexisting infinity of universes.)
“For me, the issue is whether energy can be infinitely divisible. Perhaps in a pure wave ontology it can be.”
Quantum mechanics is a wave ontology in which atoms are stable because energy is not infinitely divisible. That energy comes in minimal quanta was what started the whole quantum thing.
As you go on to suggest, maybe we’re missing something about how this works, but the point is that MWI creates a requirement for either new physics or a new view of physics as we understand it when it comes to energy.
Regarding single particle detection, I agree the “device is architected to magnify the effects of the particle” — my point is the transition to a single state accounts for all the energy involved. I’m not sure that’s the same energy accounting as for putting the detector into a superposition (which is a physical process).
“But under straight Everettian physics, there are only waves.”
Under quantum physics, there are only waves. Pretty much everyone uses the term “particle” as a placeholder (whether they realize it or not). The wave nature of matter has been pretty well established in myriad experiments.
“On what MWI says happens in a measurement,”
Yes. That paragraph and the one following it basically restate what I said above, so we agree on what MWI claims. The only quibble I might have is with “emergent classical world” — even within the branches of MWI, the world is quantum.
“On whether macroscopic objects can be in a superposition, that’s the question. Copenhagen, along with many objective collapse theories, say no. If they can’t be, then we should expect experiments attempting to hold ever larger objects in superposition to eventually hit some limit.”
Very much a central question. In some sense, our “classical” world is one extreme end of that experimental spectrum. Individual particles are the other. Bose-Einstein condensates, many-qubit systems, and other large-scale quantum systems all move toward an imagined boundary.
It may boil down to what can be kept sufficiently isolated from the environment. (In Neal Stephenson’s D.O.D.O. they create isolation chambers large enough for several people to be in a quantum state.) Generally speaking, detectors aren’t isolated and therefore aren’t in superposed states (and, I’m arguing, cannot be put into them).
“The benefit of the MWI is it seems to solve both. In it, superpositions never collapse…”
I’m not sure it solves either. The thing about superposition isn’t that it collapses (or branches under MWI) — it’s that it exists at all. Superposition is very weird.
Consider measuring an electron’s spin on the Y-axis (vertical). The result is always either up or down — the electron is deflected one way or the other. Let’s say we get an UP electron. Further identical tests on that electron will always produce an UP measurement.
Now we test that electron on the X-axis (horizontal). That UP state is identical to a superposition of LEFT and RIGHT states, with coefficients that give 50% probability for either measurement. Let’s say we get a LEFT electron.
But being in a LEFT state is identical to being in a superposition of UP and DOWN states (with coefficients that give 50% probability to both), so now that we know we have a LEFT electron, we no longer know what the vertical spin is. If we do the vertical test again, we get UP or DOWN with equal probability.
Each measurement of a conjugate property gives us a definite state but throws the other one into superposition. That’s… really weird.
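The 50/50 numbers above are just the Born rule applied to those basis states. A minimal numpy sketch (labels UP/DOWN/LEFT/RIGHT and the sign convention are my own choices, not anything from the thread):

```python
import numpy as np

# Basis states for vertical (Y-axis, in the thread's labeling) spin.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Horizontal eigenstates are equal superpositions of UP and DOWN.
left = (up + down) / np.sqrt(2)
right = (up - down) / np.sqrt(2)

def prob(outcome, state):
    """Born rule: probability of getting `outcome` when measuring `state`."""
    return abs(np.dot(outcome, state)) ** 2

# A confirmed UP electron tested horizontally: 50/50 LEFT or RIGHT.
print(prob(left, up), prob(right, up))   # ~0.5 each

# Having found LEFT, a fresh vertical test is again 50/50.
print(prob(up, left), prob(down, left))  # ~0.5 each

# But repeating the same vertical test on UP is certain.
print(prob(up, up))                      # 1.0
```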
MWI not only doesn’t solve superposition, it relies on it.
Regarding locality, consider a Bell experiment with Alice and Bob separated by a space-like interval but with synchronized clocks so that Alice makes her measurement just before Bob (in their mutual frame). Importantly, Bob measures before information from Alice could reach him at c.
Under MWI, when Alice measures, she splits and, as far as I can see, necessarily splits Bob. Say Alice splits into UP-Alice and DOWN-Alice. The Bob in UP-Alice’s branch has to correlate and find spin DOWN. The Bob in DOWN-Alice’s branch has to find spin UP.
In such experiments, the two particles share the wave-function, so when it either collapses or splits, it necessarily does so everywhere. It may spread out from Alice and Bob, but it seems it does have to span them given the spatial size of the wave-function. This has some application with regard to starlight, which, again, makes MWI a more complicated proposition than it first seems. Consider the implications of all those quantum events everywhere creating ripples that expand through spacetime.
On energy, that’s my concern, whether the minimal quanta aspect applies here.
I’m pretty sure we are missing things with all this stuff. Consider Copernicus’ theory when published in 1543. It wasn’t like Copernicus had a modern view of cosmology. He moved the center of the universe from the Earth to the Sun, but kept orbits perfectly circular with the planets riding crystalline spheres, with the outermost sphere being the celestial one, with the stars being holes in it. There were lots of paradigm shifts that had to happen between Copernicus and Newton (not to mention the contemporary view). My suspicion is that if MWI is true, it’s true in the way Copernicanism was in the 16th century, in the sense of being less wrong than the alternatives.
By “emergent classical world” I only meant the world as we perceive and work with it, including with all of science “above” the quantum level. A “world” under Everettian physics (quantum physics without the wave function collapse) is just one of these emergent frameworks. But definitely it’s all emergent from the overall quantum reality.
I totally agree that spin is very weird. It seems like one of those things which work mathematically but where there’s no good metaphor to describe it. (The word “spin” being based on an incorrect understanding early in quantum theory.)
On the Alice and Bob scenario, I’m going to explain this according to my understanding (which I’ll admit is amateurish). There are numerous papers that address this. This particular explanation is inspired by a paper by Frank Tipler, whose work I’m usually leery of, but which seemed grounded in this case.
Let’s say Alice entangles a couple of particles on Earth and gives one to Bob, who flies off to Alpha Centauri. On a predetermined date, Alice measures her particle. Under MWI, she splits. However, there is no effect at Bob’s site. Let’s say, one day later, Bob measures his, and he splits. Again, with no effect at Alice’s site.
Now, Charles has flown out to a midpoint between Earth and Alpha Centauri. Both Alice and Bob transmit the results of their measurements to him. The question is, what happens when Charles compares the results? Remember, under MWI, it’s all quantum, so this is yet another quantum interaction.
Before answering, let’s back up a bit and consider what happened when Alice initially created the entanglement. She created a shared wave function between the particles, with all the combinations of superpositions they could hold. This didn’t cause any world splitting, because the particles were kept isolated. But, it’s important to understand that under MWI, the initial split has already happened. It happened when the first particle went into superposition. It then spread to the other particle when they were entangled. It could be said that there are already two baby worlds ready to go in the entangled particles. The only question is when they start spreading.
So what happened when Alice did her measurement? The split started spreading, but only at Alice’s site. When Bob did his, the same split started spreading from his site. When they reach Charles, what happens is that the branches that are entangled with each other interfere, causing yet more branching, but with each new branch now holding correlated measurements. The incompatible branches are already decohered and pass each other by.
I think the key thing to remember is that the correlation between the measurements was set at the initial entanglement, as was the initial branching that would hold the compatible correlations.
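A toy way to picture those “two baby worlds ready to go”: write the shared wave function as the usual spin-singlet and project onto one of Alice’s outcomes; each branch already carries the correlated result for Bob. A sketch under that assumption (the construction and labels are mine, not from any of the papers):

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Singlet: (|up,down> - |down,up>) / sqrt(2), as a 4-component vector
# (first factor = Alice's particle, second = Bob's).
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# "Branch" where Alice finds UP: project her particle onto |up>.
P_up_A = np.kron(np.outer(up, up), np.eye(2))
branch_up = P_up_A @ singlet

weight = np.linalg.norm(branch_up) ** 2  # Born weight of this branch
bob_state = branch_up.reshape(2, 2)[0]   # Bob's (unnormalized) amplitudes in it

print(weight)     # 0.5 — each branch carries half the weight
print(bob_state)  # proportional to DOWN: the correlation is already in the branch
```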
Here’s the Tipler paper: https://www.pnas.org/content/111/31/11281
And some additional comments and citations in the SEP article on MWI: https://plato.stanford.edu/entries/qm-manyworlds/#7
“I totally agree that spin is very weird.”
It’s not just spin; it’s any conjugate pair of properties that can be in superposition. Spin is definitely weird, but that’s a whole separate weirdness. It’s superposition itself that’s weird. It’s the consequences of superposition that violate Bell’s Inequality.
“[Alice] created a shared wave function between the particles, with all the combinations of superpositions they could hold. This didn’t cause any world splitting, because the particles were kept isolated. But, it’s important to understand that under MWI, the initial split has already happened.”
I’m distracted watching the helicopter on the WH lawn. When things settle down, let me read those links before I respond.
I will mention that I’m not sure what Charlie adds to things. There are only two possible sets of signals from Alice and Bob: Alice-UP & Bob-DOWN or Alice-DOWN & Bob-UP.
One can make the argument that Charlie splits when two superposed sets of information arrive, but time stamps can make it clear Alice and Bob both measured in close sequence. I think this demonstrates that Alice splits Bob when she splits the shared wave-function.
On being distracted, yeah, this was one of the rare times I actually watched the evening news.
Charles was added just to clarify that without the comparison, the claim of correlation is meaningless, similar to how it would be in a special relativity scenario. It could also be done by Alice or Bob transmitting the result to the other. I purposefully separated Alice and Bob’s measurements by a day to try to make clear that their splits happened at separate times.
To clarify, are you saying Alice’s and Bob’s measurements are separate and not correlated until compared? Put another way, are there four outcomes or two?
I think there are only two outcomes, with the initial split happening at the first particle, spreading to the second on the initial entanglement, then not spreading again until the measurement events.
Maybe we can make this a bit more explicit if we alter the scenario slightly. Say only Alice does a measurement, then transmits the results to Bob, who waits until he receives them before doing his own measurement. Bob splits on receiving the message, a version receiving the Alice-Up result, and another the Alice-Down. The Bob who received Alice-Up is now part of a branch that is only coherent with the Bob-Down branch of his particle. Likewise, the Bob who received the Alice-Down is part of a branch that is only coherent with the Bob-Up branch of his particle. So when each measures, they only see the version they’re coherent with.
Again, amateur here, so it’s possible I’m muffing this somewhere.
“I think there are only two outcomes,”
We agree on that. It’s one quantum system being measured the same way twice. The only possible outcomes are Alice(U)+Bob(D) and Alice(D)+Bob(U).
(It gets more interesting when Alice and Bob make different measurements — that’s where Bell’s Inequality kicks in.)
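That kick-in is easy to make concrete: for the singlet, the quantum correlation for measurement angles a and b is E(a,b) = -cos(a-b), and with the standard CHSH settings the combination exceeds the classical bound of 2. A quick check using textbook angles (nothing specific to our scenario):

```python
import numpy as np

def E(a, b):
    """Singlet correlation for spin measurements along angles a and b."""
    return -np.cos(a - b)

# Standard CHSH measurement settings (radians).
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

# Local hidden variables bound |S| at 2; quantum mechanics reaches 2*sqrt(2).
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2*sqrt(2) ≈ 2.828, above the classical bound of 2
```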
“…with the initial split happening at the first particle, spreading to the second on the initial entanglement, then not spreading again until the measurement events.”
This isn’t clear to me. The preparation involves creating entangled particles prior to any observation. A typical method with photons involves “down converting” — splitting a photon of frequency f into two photons, each with frequency f/2. I’m not sure how they create entangled electrons. For this experiment we could imagine a BEC that, once created, can be halved such that it remains in a single quantum state. Alice and Bob each get half.
So the initial split happens after the entanglement. And since we’re dealing with a single wave-function, it seems that split has to occur for Bob at the same time; otherwise the wave-function of Bob’s “particle” would be different from Alice’s, and that’s not allowed — it’s one wave-function.
Again, imagine Alice and Bob are separated by a space-like interval but synchronized so Bob takes a reading just after Alice does. They send their time-stamped results to Charlie who has to see them as correlated despite the apparent lack of causal connection.
“Bob splits on receiving the message, a version receiving the Alice-Up result, and another the Alice-Down. The Bob who received Alice-Up is now part of a branch that is only coherent with the Bob-Down branch of his particle.”
What specifically does the phrase “only coherent with” mean?
What happens if Bob doesn’t get a message from Alice? More importantly, what happens if Bob gets Alice’s message after he’s measured his particle? (Which is what would happen in the scenario I described.)
As far as I know, splitting the photon to create the entanglement doesn’t change anything once the entanglement is done. The main thing is, the isolated system is already in superposition with its different branches. The later measurements just allow those branches to spread.
I’m also not sure that the timing of the measurement matters, which was why I went fully unilateral above. But if they do them at the same time, and Charles is equidistant, then it just means both sets of branches reach Charles simultaneously. The ones coherent with each other interfere / interact, but the ones decohered from each other don’t.
By “only coherent with”, I just mean that the particles in that branch are in phase with a particular branch of the particle in question, and decohered from the other one.
If Bob never gets the message, then there’s never an opportunity to compare. (Although if he does do a measurement, chances are the branches will eventually meet / pass each other anyway. There are just too many different ways for a branch to propagate.)
If he gets it after he’s measured, then he’s already split from doing the measurement, with the B-Up branch being coherent with the branch carrying the A-Down result, and the B-Down branch being coherent with the A-Up result. Importantly, the A-Up branch and B-Up branch should be decohered from each other and not interact, as should the A-Down and B-Down ones.
“As far as I know, splitting the photon to create the entanglement doesn’t change anything once the entanglement is done.”
I’m sorry, I don’t follow. (I’m not sure it matters; we agree Alice and Bob can each have half an entangled system.) Splitting a photon is why the resulting two are entangled. There is a change in that the resulting pair each have half the frequency; I’m not sure what non-change you mean there.
“The main thing is, the isolated system is already in superposition with its different branches. The later measurements just allow those branches to spread.”
But what branches? The isolated system, prior to any measurement, is a superposition of any possible measurement you could make (including energy, position, momentum, spin, etc). When you say branches do you mean that infinite set of possible measurements?
“By ‘only coherent with’, I just mean that the particles in that branch are in phase with a particular branch of the particle in question, and decohered from the other one.”
What does that specifically mean in terms of Alice and Bob and their shared quantum system? What particles and what phase?
I think the salient point here is that Alice and Bob are dealing with the same wave-function. The description of Alice’s “particle” is necessarily the same as for Bob’s. If either makes a measurement, that wave-function has a known eigenstate, so if the other makes the same measurement, their result must correlate.
The alternative is that the wave-function itself is somehow limited to c, and I’m not sure that works. How does one part of an equation change while the other doesn’t? The very nature of a wave-function seems to suggest non-locality.
Yes, a branch is a possible result of a measurement. And those are all there at the point of entanglement.
On coherence, when Alice does her measurement, it splits her and her surroundings into two branches, one entangled with one outcome of the measurement, and another entangled with the other outcome. Importantly, these branches of Alice are now entangled with particular branches of the shared wave function of the entangled particles.
When her signal reaches Bob, if he hasn’t measured yet, it causes him to split into those same branches, each entangled with a particular branch of the wave function of the entangled particles. So each version of Bob and his equipment can now only interfere/interact with the corresponding branch of Y-axis spin of his particle, which was already entangled from when Alice did the initial entanglement.
If Bob has measured when the signal arrives, then he’s already split, but his branches are entangled with the same branches of the same wave function that Alice’s were. So the entangled versions will interfere/interact with each other, and the decohered versions will pass each other by.
If you want to dig deeper into the particular wave mechanics, we hit the limit of my understanding. I fear we’d have to get into the mathematical details of quantum coherence and decoherence, which I won’t pretend to grasp. (There’s material out there that goes into it in hideous detail.)
On the wave function, I think it’s important to remember again that Everettian physics is quantum physics without the wave function collapse. It seems like your points assume the collapse paradigm. Under such paradigms, the correlations are only set at the measurement, implying some kind of non-local connection. But under MWI, the correlations are set at the initial entanglement, all the valid combinations of correlations, each combination in their own branch.
“It seems like your points assume the collapse paradigm.”
No, not at all. I’m speaking strictly within the confines of MWI. Just as we use “particle” as short-hand for “point-like interaction among wave-packets”, under MWI “measurement” is short-hand for “measuring device entangles with measured system and goes into superposition of possible measurements.”
“Under such paradigms, the correlations are only set at the measurement, implying some kind of non-local connection.”
That’s not quite correct. (We’re talking Copenhagen and similar here.) The measured properties don’t exist until they are measured. But since the system is described by a single wave-function, measuring a given property puts the wave-function into a known eigenstate. That a wave-function with an extension in space changes everywhere simultaneously is what points to non-locality.
“But under MWI, the correlations are set at the initial entanglement, all the valid combinations of correlations, each combination in their own branch.”
It almost sounds like you’re saying MWI is a hidden variables theory? Or superdeterministic?
I think we’ve probably taken this as far as it can go, because we’d have to get into the math and concrete specifics. I think the key is that Alice and Bob have a shared wave-function — they are both looking at the same thing.
When Alice makes a “measurement” at that point she enters a superposition of outcomes and at that point the wave-function is in an eigenstate reflecting that outcome. Any further similar measurement of that wave-function must return a correlated eigenvalue.
Bob may be in an unknown state until he either hears from Alice or makes a measurement, but the entangled “particle” he has, I’m pretty sure, has to change (such that he could make a measurement) along with Alice’s measurement.
To be honest, this business of a spreading wave of reality that Bob is either coherent or not coherent with makes MWI a lot more complicated than Copenhagen. Which has been a key objection all along. It’s a simple idea, I grant you, but following what it implies seems to lead to a morass of complications.
I have seen some people assert that the other unseen worlds in MWI amount to hidden variables, but it’s not normally considered a hidden variable theory. It’s not superdeterministic either, since that’s usually considered in the context of definite outcomes, which don’t happen in the MWI, except relative to a particular branch.
“When Alice makes a “measurement” at that point she enters a superposition of outcomes and at that point the wave-function is in an eigenstate reflecting that outcome.”
“but the entangled “particle” he has, I’m pretty sure, has to change”
Again, these statements assume a collapse ontology, which doesn’t apply under MWI. Under MWI, nothing Alice does to her particle affects Bob’s, at least not instantly or faster than light.
Non-collapse theories are definitely more complex than collapse ones. They lean much more heavily on decoherence, which didn’t start to get worked out until David Bohm hammered out the details of the pilot-wave interpretation. MWI inherits that decoherence framework, but its full breadth apparently took decades to work out.
That said, I think the simplicity of collapse theories is an illusion. They generally ignore the implications of what collapsing realities or action at a distance does to physics. (Along which plane of simultaneity does the collapse take place?) I can see the argument for an epistemic collapse as an instrumental placeholder. But as an actual description of reality, it strikes me as incoherent.
“Again, these statements assume a collapse ontology,”
I’m not assuming collapse. Try this phrasing…
Alice (and Bob) have each chosen to use a (similar) device to interact with a quantum system. As a consequence of this interaction, Alice’s device entangles with the quantum system and, because of the characteristics of the chosen device, becomes superposed with the (let’s assume) two possible outcomes. Alice, looking at the device, becomes entangled and superposed, etc.
I think you agree with that so far, and I think you further agree that, as a result of that entanglement, the state of Alice’s wave-function and the state of the quantum system have necessarily changed due to the entanglement.
Therefore, if the wave-function of the quantum system has changed, and if Bob’s part of that system is described by that wave-function, isn’t his part of it also changed due to Alice’s interaction?
Are you suggesting the wave-function describing a quantum system is limited to c? That the wave-function has one form in one location and another in another?
“Along which plane of simultaneity does the collapse take place?”
The (apparent) non-local aspects of quantum reality clearly aren’t limited to SR, so I’m not sure the question applies.
FWIW, I’ve never found collapse as problematic as others. I do agree it’s a mystery to be solved, but I think a big part of it is the randomness — e.g. why does a photon interact with this electron and not that electron. If one accepts non-locality, then the randomness is the main mystery.
And I don’t have a problem with non-locality. I’m comfortable with the idea that reality has a way around spacetime so long as it doesn’t violate SR (and quantum non-locality doesn’t). I find non-locality a much easier pill to swallow.
Everything I’ve read indicates that the normal evolution of the wave function is limited to c. It’s a fully deterministic and local process. The only thing that isn’t limited to c, in collapse interpretations, is the collapse itself. (In pilot-wave theory, I understand the effects of the wave function on the particle are non-local, but that’s peculiar to that interpretation.)
As a pure wave ontology with no collapse, everything under MWI happens under c. As I understand it, this is one of the reasons cosmologists tend to be attracted to it.
On SR, consider if Alice did her measurement while Bob was traveling near c. Under collapse theories, when is the state of Bob’s particle actually set? Remember, their planes of simultaneity are now very different, and there is no universal now. Normally with SR scenarios, we can lean on the fact that all the interactions happen slower than c to reconcile everything. But that seems to go out the window with a non-local wave collapse.
I’d be willing to accept non-locality, would even be excited if it were reality, but I want a theory on how it works. It’s worth noting that any discovery of a way to utilize that non-locality would falsify MWI. (Although as we all know, it would also cause problems for relativity.)
I don’t know how far down into the weeds you want to get here. We need to get specific about what is meant by “collapse” and I do have a question about it under MWI.
One form of collapse is Carroll’s canonical experiment with a photon, a half-silvered mirror, and two detectors. It’s a single “measurement” on a system with (two) specific outcomes. The photon has been absorbed; under MWI, reality has branched and decohered.
Another form of collapse involves repeated measurements of the spin of electrons. If tested on the Y-axis, then the X-axis is a superposition of LEFT-RIGHT with a probability of 0.5 of getting either should you make that measurement. However, repeating the Y-axis test always has the same result as the first test — no branching occurs.
So my question involves Alice, who interacts such that she branches into Alice(UP)+Alice(DN). Both have copies of a wave-function each sees in a known state (UP and DN, respectively). If Alice(UP) repeats her original test, she gets an UP result each time, and no branching occurs. The same is true of Alice(DN).
They both also know the wave-function has the electron in a superposition of L-R. If either measured on the X-axis, the results are undetermined. But imagine the electron had been measured on the X-axis before Alice gets it (Alice knows about this). Say Alice is given a LEFT electron. That means it’s in a superposition of U-D, so her first measurement is undetermined. (If she measured on the X-axis, she’d always get LEFT and no branching would occur.)
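The pattern in those last few paragraphs — repeats of the same test are deterministic, while the conjugate test is 50/50 — can be simulated with simple branch-relative state updates. A toy sketch (my own construction, seeded RNG; not anyone’s official formalism):

```python
import numpy as np

rng = np.random.default_rng(0)

Z = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]              # UP, DOWN
X = [(Z[0] + Z[1]) / np.sqrt(2), (Z[0] - Z[1]) / np.sqrt(2)]  # LEFT, RIGHT (arbitrary signs)

def measure(state, basis):
    """Pick an outcome by Born weights; return (outcome index, post-measurement state)."""
    p = np.array([abs(np.dot(e, state)) ** 2 for e in basis])
    k = rng.choice(2, p=p / p.sum())
    return k, basis[k]

state = Z[0]                   # start with a confirmed UP electron
k1, state = measure(state, Z)  # repeat the vertical test: always outcome 0 (UP)
kx, state = measure(state, X)  # horizontal test: random LEFT or RIGHT
k2, state = measure(state, X)  # repeat the horizontal test: same outcome as before

print(k1, kx, k2)  # k1 is always 0, and k2 always equals kx
```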
My question is, under MWI, what do we call Alice’s interaction with the electron, such that in each branch she now knows something about its state, if not “collapse”? Her interaction with it at time t results in a branch with both outcomes. Isn’t that wave-function in a known eigenstate in both branches?
“It’s a fully deterministic and local process. The only thing that isn’t limited to c, in collapse interpretations, is the collapse itself.”
Which is why this matters. Under “SWI” (Single WI) collapse is definitely non-local, and I’m not convinced MWI gets around it. (FWIW, I think non-locality is unavoidable under any interpretation. Tests related to Bell’s Inequality seem to demonstrate that. It’s certainly effectively true.)
((FWIW2, local or non-local isn’t any argument against MWI. As I say, I think all interpretations are non-local (except some fringe ones), so I don’t see it as helping or hurting MWI’s case.))
Getting back to my question, when Alice interacts and branches, do you agree the wave-function has been changed for both? Both Alices could repeat their initial test and get the same result; they wouldn’t branch further. Doesn’t this demonstrate the wave-function of the quantum system has changed? What do you call that change?
And, crucially, why isn’t that change (instantly) the same for Bob’s part of the system?
“On SR, consider if Alice did her measurement while Bob was traveling near c.”
If you try some scenarios, I think you’ll find that it doesn’t matter. There’s a paper out there (I think you might have pointed me to it) about how, because of SR, both views, Alice measures first or Bob measures first, are valid views. Back when I read that paper I tried a few, and it was true. I certainly wasn’t exhaustive, so you might come up with a scenario where it matters.
On use of the word “collapse”, what I get from multiple sources is that it’s not the same as decoherence. (Baggott himself makes this point in his new book.) Collapse is all but one of the branches disappearing from reality.
Now, under decoherence, all but one appear to disappear but they’re still there, just out of sight. Decoherence in and of itself doesn’t address their fate. Which is why people who discuss decoherence are careful to stipulate that it doesn’t solve the measurement problem.
If we’re using a collapse interpretation that’s adopted decoherence, then the interpretation has to postulate that at some point the other branches disappear. MWI says this never happens, that there is just decoherence, with all the resulting consequences.
An MWI advocate like Carroll will often just use the word “decoherence” without mention of the collapse because they don’t think there is any collapse. And there are plenty of people who are loose with these terms and use “decohere” when they mean “collapse.” (I know I have historically.) But strictly speaking, they’re not the same thing.
“My question is, under MWI, what do we call Alice’s interaction with the electron, such that in each branch she now knows something about its state, if not “collapse”?”
We could call it an epistemic collapse relative to Alice as an observer, but it wouldn’t be an objective one. There’s a version of Alice looking at each result. The results and their causal effects are just decohered from each other.
“Getting back to my question, when Alice interacts and branches, do you agree the wave-function has been changed for both?”
Not immediately, no. Under MWI, the thing to remember is that for us to know about the correlation, the resulting branches from Alice and Bob have to eventually connect. When that happens, only compatible results will be coherent with each other and interact. The others will be decohered and pass each other by. The result is that, from a classical-world perspective, we get the results of the entanglement experiments that have been performed.
It’s generally recognized that Bell’s theorem doesn’t apply to the MWI, since one of Bell’s assumptions is violated: https://plato.stanford.edu/entries/bell-theorem/#SuppAssu
On the SR scenario, thanks! I’d forgotten about that conversation and paper. I remain unconvinced by the reversible causation, but I can see how it gives a collapse advocate an out.
I agree collapse isn’t decoherence. Never said otherwise.
“We could call it an epistemic collapse relative to Alice as an observer, but it wouldn’t be an objective one.”
Are you denying the wave-function of the electron is (ontologically) different for her (in each branch) than it was prior to the branch? Why does the first test cause a branch but repeating that test doesn’t if the wave-function hasn’t changed as a result of that first test?
Although later you seem to agree the wave-function has changed for her, so maybe I misunderstand?
“When that happens, only compatible results will be coherent with each other and interact. The others will be decohered and pass each other by.”
I gotta be honest: given all the ways the information could interact (or not), that proposition seems complicated to the point of impossibility.
It also doesn’t seem to take the wave-function very seriously if the same wave-function can be different for Alice and Bob despite it supposedly describing the same quantum system. (I’d like to see the math on that.)
It allows for reality-violating measurements that necessarily can never connect or they’d violate reality. Expecting decoherence to account for that… I just don’t see it. Why would certain branches meet and others “pass by”?
I don’t deny that Alice’s measurements change the wave function. The wave function evolves with time. I only deny that her actions instantly change Bob’s portion of it.
Some of the writers (Wallace and Vaidman) do make a point that the wave function does span both Alice and Bob’s particles, and describe that as a sort of state non-locality, but not one that enables any kind of action at a distance.
I don’t understand why you think this isn’t taking the wave function seriously. Your conception of it seems to be this thing that is somehow controlling the dynamics of the particles in an ongoing non-local fashion. I see it as conceptually one wave function, but only in the same sense that the galaxy is conceptually one entity, but not in any way that enables non-local actions (except in the case of collapse in other interpretations).
Decoherence can account for why we see one result but not the others. The same factors that deny us interaction with those other results are what cause decohered branches to pass each other by. For decoherence more generally: https://plato.stanford.edu/entries/qm-decoherence/
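For what it’s worth, the standard bookkeeping behind “we see one result but not the others” is the reduced density matrix: tracing out the environment wipes out the off-diagonal interference terms, leaving what looks like a classical mixture. A minimal two-qubit sketch (toy “environment,” labels mine):

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

def reduced(state):
    """Reduced density matrix of the system after tracing out a 2-level environment."""
    psi = state.reshape(2, 2)  # rows: system states; columns: environment states
    return psi @ psi.conj().T

# Before decoherence: system in superposition, environment not yet entangled.
pure = np.kron((up + down) / np.sqrt(2), up)
print(reduced(pure))       # off-diagonals 0.5: interference still possible

# After decoherence: each system state entangled with a distinct environment state.
decohered = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
print(reduced(decohered))  # diagonal 0.5s, off-diagonals 0: branches can't interfere
```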
Now, I will admit that the idea that the original measurement is still encoded in the branch by the time it reaches Bob seems hard to believe. I’m not clear on the microphysics of that myself. For Tipler’s discussion of it: https://arxiv.org/pdf/quant-ph/0003146.pdf
There are other papers that tackle this issue cited by the SEP article: https://plato.stanford.edu/entries/qm-manyworlds/#7
In the end, I can only recommend you do your own reading. Don’t trust my (almost certainly flawed) understanding of it. Usually these are not the points the opponents of MWI attack it on. I’m pretty sure they would if they thought these were vulnerabilities.
“I don’t understand why you think this isn’t taking the wave function seriously.”
Because taking it at face value — as describing the entangled system — means that system changes everywhere at once. That’s what entangled means.
“In the end, I can only recommend you do your own reading.”
Dude, I’ve been reading quantum physics topics since before quarks. (Currently I’m learning the math behind the Schrödinger equation! I’ve gotten a whole new appreciation for e-to-the-eye-pi!)
On the wave function, based on what I’ve read from several physicists, I suspect you’re missing something. (Not that I have any clue what it might be.)
I meant reading specifically about the stuff we discussed. Even the experts have blind spots and have to continue learning.
Which is exactly why it’s good for fresh eyes to question what might well be group-think!
FWIW, a related off-the-wall, probably wrong idea. I stuck it on a recent post with very few comments as an out-of-the-way place to record it until I can look into it more. It does detail the kind of scenario I’m talking about.