Matt O’Dowd is a first-class science communicator. In this latest video, he does an excellent job explaining decoherence, and why the MWI (many worlds interpretation) ends up being so tempting when you see it through.

Of course, this doesn’t mean MWI is the right interpretation, but it does demonstrate why many find it tempting. (At least once they get over the visceral reaction we all seem to initially have for it.)

But it’s worth noting that decoherence actually is compatible with pilot-wave and many other interpretations, and reportedly even some versions of Copenhagen. So don’t let MWI cause you to dismiss it!

42 thoughts on “An excellent explanation of quantum decoherence, and how it might lead to many worlds”

I have long thought explaining decoherence is key to understanding the quantum world. One theory I’ve heard suggests it’s due to gravity, so decoherence happens when an interaction involves something with enough mass to have enough gravity. It would explain why coherence is a small-scale thing.

FWIW, my thought watching that video is that the photon (or electron) wave-function does “collapse” when the particle interacts with the detection circuit. I think that interaction constitutes localizing the wave-function — we now know exactly where the particle is: at that electron it just excited. That flying photon (or whatever particle) has now been “measured”.

I suspect a new wave-function starts with that excited electron, but as O’Dowd explains, current flow in a wire is a very messy environment, and there’s no useful coherent wave-function there. (There are billions of them as each electron interacts with its neighbors passing the voltage and current along the wire.)

I am not at all sympathetic to the view that the scientist is in superposition until she opens the box and finds a pissed off cat or a sleeping one (animal rights prevents killing them anymore). And I’m even less sympathetic to the view that her friends are in superposition until they hear the results.

And you know I’m not sympathetic to the Sean Carroll version of the MWI. 😉

One question I have is whether it’s the wave function that collapses, or the wave itself, or whether those are both ways of saying the same thing. To me the wave collapsing makes sense. I understand the wave function to be simply a description of a wave which is spread out and without specific location. When the wave interacts, the wave’s energy and information are transferred to whatever it interacts with, and the wave function is then no longer a valid description of the wave (now particle). Maybe my view is overly simplistic, but it removes all mystery, or need for mysterious explanations, about what is going on.

Whether the wave is just a mathematical function or a physical thing depends on which interpretation you favor. Most Copenhagen variants see it as a mathematical function. MWI sees it as the primary physical reality. Pilot-wave sees both it and the particle as physical.

In terms of decoherence, the key thing to understand is that the collapse is epistemic, not ontological. There is no objective collapse, just the wave becoming enmeshed into the environment, spread out, fragmented, decohered.

But if decoherence is wrong and we still have a collapse, and it’s ontological, then that’s a pretty mysterious thing. If it’s only epistemic, then we’re still left explaining how the interference patterns form, which itself is mysterious. That’s the crux. Quantum physics is either mysterious or bizarre. It won’t let you pass with any “reasonable” view of reality.

“If it’s only epistemic, then we’re still left explaining how the interference patterns form, which itself is mysterious.” I felt that O’Dowd did explain exactly that. You’re right – he’s a great science communicator.

“I understand the wave function to be simply a description of a wave which is spread out and without specific location.”

It may help to consider that the wave-function is a mathematical way of saying how likely it is that a particle is found in any given place. It’s math that, given a point in space, returns a probability.
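A rough sketch of that idea (my own toy example, not anything from the video): a wave function assigns a complex amplitude to each point in space, and squaring that amplitude (the Born rule) gives the probability density of finding the particle there.

```python
import numpy as np

# Toy example: a Gaussian wave packet on a 1-D grid. The wave function
# assigns a complex amplitude to each point; |psi|^2 (the Born rule)
# gives the probability density of finding the particle there.
x = np.linspace(-10, 10, 1001)
dx = x[1] - x[0]

# A Gaussian packet centered at x = 0, with some momentum.
psi = np.exp(-x**2 / 4) * np.exp(1j * 2.0 * x)

prob = np.abs(psi)**2      # Born rule: probability density
prob /= prob.sum() * dx    # normalize so the total probability is 1

print(round((prob * dx).sum(), 6))  # 1.0 -- the particle is *somewhere*
```

The function itself is spread over the whole grid; only the squared magnitude has the probability reading.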

The big question is what it means ontologically. My amateur guess about it is similar to what you said. I focus on the quantum field view that a “particle” is a wavelet in a quantum field.

Suppose when a particle is in flight — not localized by any interaction — the energy spreads out through the field (at light speed). But because the energy is spread out, at any point it’s far below the quanta threshold of a single particle. So there’s no “particle” present anywhere.

At some point that energy interacts with something else, but interactions are particle-based, not field-based. So all the spread out energy is “sucked” into the interaction, the “particle” appears, and that’s the end of the wave-function for that particle.

The particle it interacted with (which is its own wave-function that collapses in the interaction) now has a new wave-function that, because it’s embedded in matter, decoheres quickly as it’s divided between all the local particles it interacts with.

Unfortunately there is still some mystery. “Collapse” (of the spread out energy) seems to happen instantly, just as entangled particles appear to affect each other instantly. No light speed limit. But since the limit can be seen as on information or causality, perhaps energy in the quantum field can “drain” faster than light into an interaction.

The other mystery is why the wave energy interacts with any given particle. Maybe the universe really is random on that point.

We still have to account for how interference of single particles occurs and what seems to be instantaneous collapse of the wave-function (and why it picks any given interaction — MWI answers that all do occur). I saw a headline, but didn’t read the article, about scientists actually measuring wave-function collapse. Maybe we just think the collapse is instantaneous.

OTOH, entanglement seems to be instantaneous, so there is still an example of apparent FTL interaction to explain, and I believe collapse and entanglement are the same sort of thing. Weirdly FTL.

I haven’t paid much attention to relational theories. I see relations as emerging from things, so I tend to see them as secondary. You can’t have a relation without two things to form that relation. I see the operator or function on that relation as derived.

There are two ways of discussing the collapse: epistemic and ontological. Epistemically, it’s when we are no longer able to observe the effects of a wave, and can only observe the effects of the particle. But the key question is the ontology. I think decoherence says that, ontologically, the wave becomes fragmented, spread out, entangled with its environment.

So interaction with the detection circuit begins the process, but as O’Dowd mentions, for a very simple detection system, say another quantum particle, it’s still conceivable that we might introduce an offset and get the coherence back. But as the effects of the wave become increasingly entangled with a complex detection device, or the environment overall, it quickly becomes inconceivable. Technically information is always conserved, but any ability we have to reconstruct the increasingly fragmented and spread-out effects falls apart. When that happens, all we have left is the localized particle.
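That fragmentation can be made concrete in a toy model (my own sketch, with an arbitrary coupling angle, not anything from the video): a qubit in superposition entangles with a growing number of environment qubits, and the off-diagonal “coherence” term of its reduced density matrix shrinks exponentially, never exactly zero, but quickly negligible.

```python
import numpy as np

# Toy decoherence model (a sketch under assumed dynamics): a system
# qubit starts in (|0> + |1>)/sqrt(2). Each environment qubit starts in
# |0>; when the system is |1>, the environment qubit is rotated to
# cos(theta)|0> + sin(theta)|1>, partially "recording" the system's
# state. Tracing out the environment leaves a coherence term
# proportional to the branch overlap <E1|E0> = cos(theta)^n.
def coherence(n_env, theta=0.4):
    e0 = np.array([1.0, 0.0])                      # env qubit, |0> branch
    e1 = np.array([np.cos(theta), np.sin(theta)])  # env qubit, |1> branch
    E0 = E1 = np.array([1.0])
    for _ in range(n_env):
        E0 = np.kron(E0, e0)
        E1 = np.kron(E1, e1)
    # Off-diagonal element of the system's reduced density matrix.
    return 0.5 * float(np.dot(E0, E1))

for n in (0, 5, 10, 20):
    print(n, coherence(n))  # shrinks toward zero as n grows
```

The coherence never reaches exactly zero (information is conserved), but for a macroscopic environment the number of “recording” particles is astronomical, which is the hopelessly-inconceivable recovery described above.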

I definitely know you’re not sympathetic to the MWI. But from how decoherence works, can you see the logic for it? That we have to postulate something new in the system (separate pilot-wave, quantum gravity, etc) to prevent it?

“Technically information is always conserved, but any ability we have to reconstruct the increasingly fragmented and spread out effects fall apart.”

I think that’s an important point. If I write a message on a piece of paper, burn the paper, mix up the ashes and toss them to the wind, the information contained in the message still exists in some quantum sense. But there is no way, perhaps even in principle, to recover that message.

“But from how decoherence works, can you see the logic for it?”

Actually, no. Because, firstly, we don’t really understand decoherence, so I don’t see how we can form arguments based on it. Secondly, it seems more to me that wave-functions collapse or damp out in decoherence rather than continue to play out in new universes. (Thirdly, I see MWI as ontologically absurd.)

I think the “nothing new here” bullet point of MWI is suspect and possibly even false. It might be false because it does require something new: an explanation of why we’re in any given branch. Why aren’t we in a branch where “impossible” (long odds) things happen regularly? Why are we in the branch where the odds usually pay off?

But it’s at least suspect because reality isn’t constrained by the idea that it has to be as simple as possible. Maybe the new thing — explaining decoherence — is necessary.

“Because, firstly, we don’t really understand decoherence, so I don’t see how we can form arguments based on it. ”

It’s definitely possible that decoherence may not be reality. And we don’t know what other factors may constrain the implications. But what about decoherence itself isn’t understood?

But I don’t see how that obviates the logic. To avoid MWI, it still seems to me that we still need something else to remove the other branches. Or reasons to dismiss decoherence entirely.

MWI, AIUI, assumes no decoherence. Instead, it assumes different branches carry on superposed possibilities. (In the detector O’Dowd described, under MWI, there is a branch for each detector pixel. I forget if he showed that in the earlier video or this one.)

Without MWI we just need to fully explain decoherence, and that seems more reasonable to me. Something causes the quantum world to damp out. It may well be connected with quantum gravity — the scales seem to match.

Hmmm, ok. It seems like you might be equating decoherence with wave function collapse. But decoherence is meant to explain the appearance of the collapse. Decoherence is usually regarded as an integral part of MWI. From the Wikipedia on decoherence:

Before an understanding of decoherence was developed, the Copenhagen interpretation of quantum mechanics treated wave-function collapse as a fundamental, a priori process. Decoherence provides an explanatory mechanism for the appearance of wave function collapse and was first developed by David Bohm in 1952, who applied it to Louis DeBroglie’s pilot-wave theory, producing Bohmian mechanics,[22][23] the first successful hidden-variables interpretation of quantum mechanics. Decoherence was then used by Hugh Everett in 1957 to form the core of his many-worlds interpretation.

Based on the discussion the other day, I actually suspect entanglement is decoherence from the outside (and decoherence is entanglement from the inside). They are two sides of the same coin. (Unless I’m utterly confused, which is possible.)

“Decoherence is usually regarded as an integral part of MWI.”

Sure, but it’s a distinct thing of its own that predates MWI. It went through a brief period of mini-popularity (I wanna say in the 90s? there were websites) exactly because, as you say, it seemed an explanation of WF collapse.

I spent my third year at Oxford specialising in quantum mechanics and got a first, so I guess at the time I could do the calculations (couldn’t now), but didn’t gain an intuitive understanding of what is going on, so videos like this help, on the assumption that the presenter does understand the maths behind his simplified explanations. It would be interesting to get back into this, although I’m wary that it’s only by going back to the maths, which gets intractable very quickly as systems get just a bit more complex, that you can be sure you’re not just speculating.

That said, to speculate anyway, perhaps everything just remains in wave function space; it’s just that our detection devices narrow down the probability distributions enough that, for all intents and purposes, the concept of a particle at a location is a good enough approximation to reason with (or do bulk physics with). It seems evident that all interactions actually need to happen over distributed space and time, not at a point location and instantaneously, in order to have enough energy to make a detection trigger a deterministic onward chain of consequences.

O’Dowd is a physicist, so I’m sure he understands the math. (Or like you, has understood it at some point.) But I certainly don’t. Although I know it well enough to grasp the idea that the Schrodinger equation describes the evolving wave, but not the wave function collapse. And that if we just let it play out, we get spreading rather than collapsing waves.

The key question is what is happening when “our detection devices narrow down the probability distributions”? Is the Schrodinger equation the full story? Or is there more? If we say there is, do we have any way to justify it?
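The “spreading rather than collapsing” point has a standard closed-form illustration for a free particle: under pure Schrödinger evolution, a Gaussian packet’s width only grows. (A sketch in natural units; the formula is the textbook free-packet result, not anything specific to the video.)

```python
import numpy as np

# Width of a free-particle Gaussian wave packet under pure Schrodinger
# evolution (textbook result, in units where hbar = m = 1):
#   sigma(t) = sigma0 * sqrt(1 + (t / (2 * sigma0**2))**2)
# Nothing in the equation ever shrinks it; left alone, the wave spreads.
def width(t, sigma0=1.0):
    return sigma0 * np.sqrt(1.0 + (t / (2.0 * sigma0**2))**2)

for t in (0.0, 1.0, 2.0, 5.0):
    print(t, width(t))  # monotonically increasing
```

Whatever localizes the packet on detection has to come from outside this evolution, which is exactly the interpretational question.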

“decoherence actually is compatible with pilot-wave and many other interpretations, and reportedly even some versions of Copenhagen.”

Well it had better be, if these “interpretations” are all supposed to be interpretations of the same quantum formalism! (I hear there are proofs, for some pairs of interpretations, that they give the same empirical predictions in all cases. Of course, the thing about proofs is: they have premises.)

In my view, decoherence is not in itself a big selling point for MWI; instead, it removes objections to MWI. Starting with the fact O’Dowd focuses on: using decoherence, you can explain why it seems to us that there is only one result of a quantum coin-flip, consistently with MWI, even though that single result is not the whole ontological truth.

But wait, there’s more! Decoherence math (if I understood the Wiki page on it correctly) implies that there is no “magic moment” when the universe splits. It just becomes more and more useful *for all practical purposes* to treat it that way, and this happens extremely quickly. Now how much would you pay – but wait, there’s more! Because it is always possible in principle to regard our universe as singular, if you identify “our universe” with the total universal wavefunction. It’s just that it’s convenient and natural to speak of the extremely weakly-interacting possible histories as “whole ‘nother worlds” because any probabilities associated with them are minuscule compared to more mundane uncertainties of life.

I see we grew up watching the same Ronco commercials on TV 🙂

That matches my understanding. There’s no magical moment when another whole universe gets created. There’s only our gradual (but rapid) loss of access to the other branches, whether because we can’t access it from the branch we now happen to be on, or some other reason.

Is there some prediction that MWI has made that has been proven right? Or even a prediction of any sort? Or something additional it explains – like gravity, for example – that the other interpretations don’t.

In the absence of something compelling, I think the default position should be a formula is just a formula. It abstractly describes something but isn’t a literal representation of reality.

All the successful interpretations make the same testable predictions. They do make diverging predictions, but those divergences are currently all untestable. Which is why physics is unable to break the logjam.

There have been cases in the history of science where someone stumbled on a useful equation, but cautioned everyone not to worry. It’s not like this crazy thing is true. It’s just a calculating convenience, an accounting mechanism. No need to panic. (See Copernicus and Max Planck.)

But the longer we go without evidence for something in addition to the Schrodinger equation, the more likely it is to be describing reality, with all the absurd seeming results that follow. And it should be noted that all the proposed postulates to avoid that absurdity are themselves absurd in their own way.

You always seem to be wanting predictions that can be tested for most things. A fair ask in my view. If whatever the view is can’t be tested, the simplest view is that it is just an equation and isn’t literally describing some deeper reality which, in the case of MWI, is pretty absurd.

The equations are accurate and describe something about reality but that doesn’t mean they should be taken literally as MWI seems to want to take them.

I definitely want unique predictions, and testable is even better. The lack of testability is why I remain agnostic toward quantum interpretations. I only spend time defending MWI because I think people are letting a visceral dislike of it cloud their assessment. (If you search the archives, I’ve often argued with MWI proponents, when they put it forth as the obviously true one.)

The equations are accurate. The MWI argument is to ask what other postulate besides the equation has been shown to be accurate? Add no other postulates and MWI seems to fall out. Doesn’t mean it’s reality, but I acknowledge the logic.

Relational quantum mechanics (RQM) is the only model out there that is not absurd. In fact, RQM is the most pragmatic approach in that it eschews the Copenhagen interpretation altogether, along with its wave function and magical wave function collapse. It’s a pragmatic ontology that makes no distinction between the quantum and classical worlds, thereby eliminating the mystery of things that cannot be seen. RQM merely states the facts.

Biases are difficult to overcome, especially a bogus intellectual construct that has embedded itself into the fabric of our culture. Wave function collapse is analogous to the Catholic doctrine of the trinity: it’s impossible to prove; nevertheless, we are told to trust the priesthood because they are the “experts”. Wyrd commented above: “I haven’t paid much attention to relational theories. I see relations as emerging from things, so I tend to see them as secondary.” Secondary or not, RQM is upfront right out of the gate. RQM makes no attempt to address causation, and neither does any other theory within the catalogue of theories contained within the library of physics. And in spite of that fact, we are all too quick to dismiss anything that does not conform to our own cherished world view. Therefore, I find Wyrd’s objection to RQM to be short sighted and incoherent. But that’s what makes our experience a subjective one, right?

What’s more RQM at least attempts to offer an understanding of gravity and to bridge the gap between the quantum and macro worlds. So it tries to offer more than yet another view of QM.

I think RQM’s absurdity is in its sparse realism, the idea that attributes only exist on interactions. Its response to what their state is between the interactions seems to amount to: “Shut up!” I also think it dodges the broader implications of its relativity. You can say that isn’t absurd. Since absurdity lies in the eye of the beholder, all I can say is I find it absurd in its own way.

The only reason you find sparse realism absurd, Mike, is because you do not understand the context of RQM; and that context is motion and form. There is “never” a time nor a condition where there are no interactions taking place between systems, because motion and form are a continuum. To state it another way: there are no states in between interactions, there are “only” the interactions of systems.

Its relativity is straightforward: As a unification theory, there is nothing special underwriting interactions at the quantum level that does not intrinsically happen at the classical level. Those universal interactions take place in their respective worlds, as well as across the imaginary boundaries we created between the quantum world and the classical world. The imaginary boundary separating the quantum and classical worlds is just like the imaginary boundary we created between mind and matter. It’s all in our heads and has absolutely nothing to do with the true nature of reality; that’s why it’s called subjectivity.

Could be Lee. My exposure to RQM has come from an interview of Seth Cottrell (when I first heard of it) and the Wikipedia and SEP articles. Can you (or anyone else) suggest a better source?

I always enjoy Matt O’Dowd’s videos. In this case, perhaps because I viewed this one outside of the overall sequence, I found it somewhat confusing. I think what confused me was when the graphic appeared of parallel double-slit experiments with brains and eyeballs, and in which he concluded with the laser guns, where he suggested it could theoretically be possible to generate “a pair of photons”, one from each laser gun presumably, and reconstruct an interference pattern. But this makes very little sense to me, because the craziest part of the double-slit experiment is not the interference pattern, but the fact that the interference pattern can be produced one particle at a time. This requires that individual photons interfere with themselves.

So, I think what he was saying is that at some level the one photon that entered the experiment became every possible outcome, but they decohered from one another, which means that as the observer in my particular branch I can no longer interact with the others. The wave function, however, is still propagating and possibly could interact with itself again should it ever come back into phase. And I guess that’s where he loses me. Because if that’s the case then all the computers and eyeballs and brains are in the same universe…? Or world?

Related tangentially, I just finished listening to Lee Smolin’s book Einstein’s Unfinished Revolution. The first two-thirds was a good review of all the various interpretations of quantum mechanics, which was interesting but not too much new material. The last third or quarter I found quite interesting, though. One interesting note is what he calls the cosmological fallacy, which is that we can’t take a theory like quantum mechanics, that applies only to limited systems, and presume it applies to the universe as a whole. O’Dowd happened to mention “the wave function of the universe” and it is said often I think, but in fact there is perhaps some reasonable disagreement about whether such a notion is even meaningful in the context of quantum mechanics.

Lastly, to clarify one point, the wave equation itself and its description of the evolution of a quantum system is not defined as giving the probability of measuring a particular outcome in every interpretation, right? Born’s Rule does this. And Born’s Rule is an independent hypothesis that says the probability of a particular measurement is equal to the square of the amplitude of the wave function. It was essentially a guess on Born’s part… There was no mathematical theory or physical theory behind it. He just thought it made sense, and he was right!
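Born’s rule is easy to state concretely. In this hypothetical two-path setup (my own toy numbers, not from the video or the book), the probability comes from squaring the *summed* complex amplitudes, which is where the interference cross term lives:

```python
import numpy as np

# Born's rule in miniature (hypothetical two-path amplitudes): the
# probability is the squared magnitude of the summed complex
# amplitudes, not the sum of the individual path probabilities.
def detection_probability(phase):
    a1 = 1.0 / np.sqrt(2)                  # amplitude via path 1
    a2 = np.exp(1j * phase) / np.sqrt(2)   # amplitude via path 2, phase-shifted
    return np.abs(a1 + a2)**2 / 2.0        # Born rule (normalized over phases)

print(detection_probability(0.0))    # ~1.0: in phase, constructive
print(detection_probability(np.pi))  # ~0.0: out of phase, destructive
```

Averaged over phase the result is 0.5, the same as classically adding the two path probabilities; interference just redistributes it between 0 and 1.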

“Because if that’s the case then all the computers and eyeballs and brains are in the same universe…? Or world?”

Here you’re looking at the core of the Everett interpretation, and why it could be seen as problematic to call it “many worlds”. You could look at all the computers and eyeballs as being in the same universe or world. But due to decoherence, they’ve lost any ability to interact with each other, much as radio signals at different frequencies and wavelengths don’t interact with each other. They are separate causal frameworks. For all intents and purposes, they can be seen as separate universes. But they exist in the same spacetime (unless spacetime itself is quantum, then things get more complicated).

It’s important to understand that if the various branches of the wave have any chance of interacting with each other, it has to happen soon after decoherence begins. Once macroscopic objects are involved, it simply becomes hopelessly inconceivable. Again, for all intents and purposes, they’re in separate universes. (If decoherence is the only thing that happens here. Removing the other branches requires additional postulates.)

Smolin’s book sounds interesting. I might have to check it out. But I don’t think the wave function of the universe necessarily falls under the cosmological fallacy. It doesn’t refer to the universe itself being a wave function, but that the sum total of everything within the universe amounts to one large wave function. (Although again, if spacetime is quantum, this picture becomes far blurrier.)

My understanding is that Born’s rule applies for all interpretations in an epistemic fashion. It predicts the probability of having various observations. But the ontology behind those observations is what varies.

Thanks for the clarifications, Mike. I realized in reading your reply here that when I think of MWI, I have a tendency to view it in terms of its original form, in which the worlds split apart from one another at the point when a measurement is made. So the process was discrete, in a sense. Since measurement is such an ambiguous process to define, and challenging in terms of both its philosophical and physical ramifications, I think decoherence is applied in an effort to make the process less “special” for lack of a better term. This way it’s got nothing to do with human observation or knowledge, and has its roots in physical processes that occur regardless of who is or isn’t watching. I don’t fully understand how it relates to MWI in this form, but will do some digging…

Michael, from what I understand, decoherence was its original form, although it wasn’t called that when Everett did his initial paper on it. Bryce DeWitt, who years later coined the term “many worlds”, probably emphasized it in the way you’re thinking. (And which Sean Carroll tends to emphasize; which I think is a mistake.)

That said, I’m saying that as someone who’s never read any of the original material by these people, just the popular science descriptions of them.

So I’ve never read anything on MWI except the popular science descriptions, either. I’m going largely on the few papers on decoherence by Zurek, who I mentioned last time we discussed decoherence, and the discussion of this topic that Smolin gave in his book, which is fresh on my mind.

My understanding is the original formulation of MWI was by Everett, and maybe it wasn’t called MWI at the time, but this is what I was referring to. This was in the 50’s I think, and I believe work on decoherence began in the 70’s. I also think DeWitt resurrected Everett’s work in the 70s, so your notion makes sense. Given the two events are contemporaneous, perhaps the work on decoherence enabled a fresh look at Everett’s original thesis.

Here’s the part that, as a rank amateur, I have trouble with. Decoherence as I understand it from reading Zurek is a process in which environmental interactions sort of “dampen” portions of the wave function that are not mathematically unchanged by environmental interaction. Those interactions essentially reinforce, or select for, those portions of the wave function that are unchanged by such interactions. So, what I’m guessing is that of all possible “outcomes” of the wave equation, only some remain viable in a classical sense. But it must be more than one. And somehow, we only see one of them, so perhaps other viable states–meaning states that would not change as a result of interacting with the environment–could be interpreted as “other worlds.” I’m still not sure of the idea behind why we see one and not the other, if multiple “outcomes” are resilient to, or selected by, environmental interactions. But perhaps that’s where MWI starts to make some sense.

So… perhaps the most up to date version of MWI is not that all possible outcomes of the wave function are real, but only a few, and there remains the mystery of why we interact with only one of those few…? This is sort of a refined approach it seems to me, in which the number of worlds is somewhat less than the original form/interpretation of MWI, but in which there are still numerous viable branches. If I’m reading this correctly, it remains unclear why we see one and not the other, but I’m quite sure I am missing something. 🙂

Decoherence provides an explanatory mechanism for the appearance of wave function collapse and was first developed by David Bohm in 1952, who applied it to Louis DeBroglie’s pilot-wave theory, producing Bohmian mechanics, the first successful hidden-variables interpretation of quantum mechanics. Decoherence was then used by Hugh Everett in 1957 to form the core of his many-worlds interpretation. However, decoherence was largely ignored for many years (with the exception of Zeh’s work), and not until the 1980s did decoherent-based explanations of the appearance of wave-function collapse become popular,…

The idea had a surge of popularity in the 1980s and 1990s, but does date back to much earlier QM.

Hmmm, something happened to the quote. It was basically that “Decoherence provides an explanatory mechanism for the appearance of wave function collapse and was first developed by David Bohm in 1952,” Everett used it in 1957.

Thanks, Wyrd! I’m curious how the idea has evolved over the years, and whether or not Everett’s original use is consistent with the ideas today or not. Do you know?

Sorry Mike. I should not have said are not in my italicized section. I should have said are. Perhaps you could correct that to avoid confusion… Thanks!

Wyrd, the quote looks right to me, and matches what I recall from the Wiki article. But that language is a bit misleading. As I understand it, decoherence as its own theory originates with Zeh c. 1970, but he developed it to elaborate on the core mechanisms of the pilot-wave and many worlds interpretations. It ended up being useful for many others, even some variants of Copenhagen.

Michael, I don’t know that the core interaction of MWI has changed since Everett’s first formulation. What Everett basically did was ask, what if there is no collapse? What if we just follow the Schrodinger equation, come what may?

Under most interpretations, the wave spreads and branches until something causes it to collapse into a particle. But what if on the interaction we take to be the collapse, instead the spreading and branching propagate to the other side of the interaction? So now we have two entangled systems.

But as the effects of the interaction continue to spread, the entanglement spreads ever farther out. The original wave is now hopelessly decohered, each fragment only able to interact with equivalent fragments from the other waves. Each fragment now forms its own branch.

The question is, why don’t we perceive this? The answer, according to Everett, is to remember that we ourselves are quantum systems. When the fragments reach us, they cause us, or rather the quantum particles of which we’re made, to undergo the same process. We fragment among several branches. On any one branch, a version of us looks at the fragment matched up with his and asks, “Why do I see only one?”
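Everett’s move can be sketched as straight linear algebra (my own minimal toy model: one qubit each for the system, the detector, and the observer, which is an enormous simplification):

```python
import numpy as np

# Minimal Everett sketch: a system qubit, a detector, and an observer,
# all treated as quantum systems. Pure Schrodinger evolution takes
# (|0> + |1>)/sqrt(2) times "ready" states into a sum of two fully
# correlated branches.
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def branch(outcome):
    # system (x) detector (x) observer, all registering the same outcome
    s = zero if outcome == 0 else one
    return np.kron(np.kron(s, s), s)

state = (branch(0) + branch(1)) / np.sqrt(2)

# The branches are orthogonal: no cross terms survive, so the version
# of the observer in one branch has no access to the other.
print(np.dot(branch(0), branch(1)))    # 0.0
print(round(np.dot(state, state), 6))  # 1.0 -- unitarity preserved
```

Each branch contains an observer correlated with exactly one outcome, which is why, on any branch, it looks like a single result occurred.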

The question comes down to, is there anything that causes the other branches not to exist? I don’t think this is a question we know the answer to, which is why MWI can’t be dismissed as a possibility, but also why it can’t be taken as settled.

I have long thought explaining decoherence is key to understanding the quantum world. One theory I’ve heard suggests it’s due to gravity, so decoherence happens when an interaction involves something with enough mass to have enough gravity. It would explain why coherence is a small-scale thing.

FWIW, my thought watching that video is that the photon (or electron) wave-function does “collapse” when the particle interacts with the detection circuit. I think that interaction constitutes localizing the wave-function — we now know exactly where the particle is: at that electron it just excited. That flying photon (or whatever particle) has now been “measured”.

I suspect a new wave-function starts with that excited electron, but as O’Dowd explains, current flow in a wire is a very messy environment, and there’s no useful coherent wave-function there. (There are billions of them as each electron interacts with its neighbors passing the voltage and current along the wire.)

I am not at all sympathetic to the view that the scientist is in superposition until she opens the box and finds a pissed off cat or a sleeping one (animal rights prevents killing them anymore). And I’m even less sympathetic to the view that her friends are in superposition until they hear the results.

And you know I’m not sympathetic to the Sean Carroll version of the MWI. 😉


One question I have is whether it’s the wave-function that collapses, or the wave itself, or whether those are both ways of saying the same thing. To me the wave collapsing makes sense. I understand the wave function to be simply a description of a wave, which is spread out and without specific location. When the wave interacts, its energy and information are transferred to whatever it interacts with, and the wave function is then no longer a valid description of the wave (now particle). Maybe my view is overly simplistic, but it removes all mystery, or the need for mysterious explanations about what is going on.


Whether the wave is just a mathematical function or a physical thing depends on which interpretation you favor. Most Copenhagen variants see it as a mathematical function. MWI sees it as the primary physical reality. Pilot-wave sees both it and the particle as physical.

In terms of decoherence, the key thing to understand is that the collapse is epistemic, not ontological. There is no objective collapse, just the wave becoming enmeshed into the environment, spread out, fragmented, decohered.

But if decoherence is wrong and we still have a collapse, and it’s ontological, then that’s a pretty mysterious thing. If it’s only epistemic, then we’re still left explaining how the interference patterns form, which itself is mysterious. That’s the crux. Quantum physics is either mysterious or bizarre. It won’t let you pass with any “reasonable” view of reality.
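To make the epistemic point concrete, here is a toy numpy sketch (my own illustration, not anything from the video): decoherence wipes out the off-diagonal interference terms of a density matrix while leaving the outcome probabilities on the diagonal untouched, which is why nothing non-unitary has to happen to what we actually observe.

```python
import numpy as np

# Toy illustration (mine, not from the video): a qubit in an equal
# superposition, written as a density matrix.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())              # [[0.5, 0.5], [0.5, 0.5]]

def decohere(rho, strength):
    """Scale the off-diagonal (interference) terms by (1 - strength).
    strength=1.0 models full decoherence."""
    out = rho.copy()
    out[0, 1] *= (1.0 - strength)
    out[1, 0] *= (1.0 - strength)
    return out

rho_after = decohere(rho, 1.0)

# The outcome probabilities (the diagonal) are identical before and after;
# only the capacity for interference is gone.
print(np.diag(rho).real)        # [0.5 0.5]
print(np.diag(rho_after).real)  # [0.5 0.5]
```

The probabilities never change; what changes is whether the two possibilities can still interfere.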


“If it’s only epistemic, then we’re still left explaining how the interference patterns form, which itself is mysterious.” I felt that O’Dowd did explain exactly that. You’re right – he’s a great science communicator.


I agree that he did, from the MWI perspective.


“I understand the wave function to be simply a description of a wave which is spread out and without specific location.”

It may help to consider that the wave-function is a mathematical way of saying how likely it is that a particle is found in any given place. It’s math that, given a point in space, returns a probability.
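As a concrete toy example of “math that, given a point in space, returns a probability” (entirely made-up numbers, just to illustrate the Born rule):

```python
import numpy as np

# Made-up toy numbers: a Gaussian wave-function on a 1-D grid.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                          # unnormalized amplitude
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize: total probability 1

prob_density = np.abs(psi)**2                    # Born rule: |psi(x)|^2
p_left = np.sum(prob_density[x < 0]) * dx        # chance of finding it at x < 0

print(np.sum(prob_density) * dx)   # 1.0, up to float rounding
print(p_left)                      # just under 0.5, by symmetry
```

The wave-function itself assigns an amplitude everywhere; squaring and summing it over a region is what turns that into a probability of detection there.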

The big question is what it means ontologically. My amateur guess about it is similar to what you said. I focus on the quantum field view that a “particle” is a wavelet in a quantum field.

Suppose when a particle is in flight — not localized by any interaction — the energy spreads out through the field (at light speed). But because the energy is spread out, at any point it’s far below the quanta threshold of a single particle. So there’s no “particle” present anywhere.

At some point that energy interacts with something else, but interactions are particle-based, not field-based. So all the spread out energy is “sucked” into the interaction, the “particle” appears, and that’s the end of the wave-function for that particle.

The particle it interacted with (which is its own wave-function that collapses in the interaction) now has a new wave-function that, because it’s embedded in matter, decoheres quickly as it’s divided between all the local particles it interacts with.

Unfortunately there is still some mystery. “Collapse” (of the spread out energy) seems to happen instantly, just as entangled particles appear to affect each other instantly. No light speed limit. But since the limit can be seen as on information or causality, perhaps energy in the quantum field can “drain” faster than light into an interaction.

The other mystery is why the wave energy interacts with any given particle. Maybe the universe really is random on that point.


I think I’m with you. I just see the wave function as a mathematical expression.

Still not sure I understand it completely but it seems to me relational quantum mechanics makes the most sense.


We still have to account for how interference of single particles occurs and what seems to be instantaneous collapse of the wave-function (and why it picks any given interaction — MWI answers that all do occur). I saw a headline, but didn’t read the article, about scientists actually measuring wave-function collapse. Maybe we just think the collapse is instantaneous.

OTOH, entanglement seems to be instantaneous, so there is still an example of apparent FTL interaction to explain, and I believe collapse and entanglement are the same sort of thing. Weirdly FTL.

I haven’t paid much attention to relational theories. I see relations as emerging from things, so I tend to see them as secondary. You can’t have a relation without two things to form that relation. I see the operator or function on that relation as derived.


There are two ways of discussing the collapse: epistemic and ontological. Epistemically, it’s when we are no longer able to observe the effects of a wave and can only observe the effects of the particle. But the key question is the ontology. I think decoherence says that the ontology is that the wave becomes fragmented, spread out, entangled with its environment.

So interaction with the detection circuit begins the process, but as O’Dowd mentions, for a very simple detection system, say another quantum particle, it’s still conceivable that we might introduce an offset and get the coherence back. But as the effects of the wave become increasingly entangled with a complex detection device, or the environment overall, it quickly becomes inconceivable. Technically information is always conserved, but any ability we have to reconstruct the increasingly fragmented and spread out effects falls apart. When that happens, all we have left is the localized particle.
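A back-of-the-envelope sketch of why it becomes inconceivable so fast (toy numbers of my own, not from O’Dowd): each environment particle that partially records the which-path information multiplies the surviving interference visibility by the overlap of its two record states, so the visibility dies off exponentially with the number of interactions.

```python
import numpy as np

# Toy model: suppose each environment particle that partially records the
# which-path information has record states whose overlap is 0.9. The
# surviving interference visibility is the product of all those overlaps.
overlap = 0.9
visibility = overlap ** np.arange(201)   # after 0..200 environment interactions

print(visibility[50])    # below 1% after ~50 interactions
print(visibility[200])   # below one part in a billion after 200
```

A macroscopic detector involves vastly more than 200 particles, which is why "introducing an offset" to recohere everything stops being even conceivable almost immediately.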

I definitely know you’re not sympathetic to the MWI. But from how decoherence works, can you see the logic for it? That we have to postulate something new in the system (separate pilot-wave, quantum gravity, etc) to prevent it?


“Technically information is always conserved, but any ability we have to reconstruct the increasingly fragmented and spread out effects falls apart.”

I think that’s an important point. If I write a message on a piece of paper, burn the paper, mix up the ashes and toss them to the wind, the information contained in the message still exists in some quantum sense. But there is no way, perhaps even in principle, to recover that message.

“But from how decoherence works, can you see the logic for it?”

Actually, no. Because, firstly, we don’t really understand decoherence, so I don’t see how we can form arguments based on it. Secondly, it seems more to me that wave-functions collapse or damp out in decoherence rather than continue to play out in new universes. (Thirdly, I see MWI as ontologically absurd.)

I think the “nothing new here” bullet point of MWI is suspect and possibly even false. It might be false because it does require something new: an explanation of why we’re in any given branch. Why aren’t we in a branch where “impossible” (long odds) things happen regularly? Why are we in the branch where the odds usually pay off?

But it’s at least suspect because reality isn’t constrained by the idea that it has to be as simple as possible. Maybe the new thing — explaining decoherence — is necessary.


“Because, firstly, we don’t really understand decoherence, so I don’t see how we can form arguments based on it. ”

It’s definitely possible that decoherence may not be reality. And we don’t know what other factors may constrain the implications. But what about decoherence itself isn’t understood?


Your first two sentences are part of the answer to your question. We don’t understand the details.


But I don’t see how that obviates the logic. To avoid MWI, it still seems to me that we still need something else to remove the other branches. Or reasons to dismiss decoherence entirely.


MWI, AIUI, assumes no decoherence. Instead, it assumes different branches carry on superposed possibilities. (In the detector O’Dowd described, under MWI, there is a branch for each detector pixel. I forget if he showed that in the earlier video or this one.)

Without MWI we just need to fully explain decoherence, and that seems more reasonable to me.

Something causes the quantum world to damp out. It may well be connected with quantum gravity — the scales seem to match.

Entanglement… now, that’s spooky. 😉


Hmmm, ok. It seems like you might be equating decoherence with wave function collapse. But decoherence is meant to explain the appearance of the collapse. Decoherence is usually regarded as an integral part of MWI. From the Wikipedia on decoherence: “Decoherence provides an explanatory mechanism for the appearance of wave function collapse and was first developed by David Bohm in 1952.”

Based on the discussion the other day, I actually suspect entanglement is decoherence from the outside (and decoherence entanglement from the inside). They are two sides of the same coin. (Unless I’m utterly confused, which is possible.)


“Decoherence is usually regarded as an integral part of MWI.”

Sure, but it’s a distinct thing of its own that predates MWI. It went through a brief period of mini-popularity (I wanna say in the 90s? there were websites) exactly because, as you say, it seemed an explanation of WF collapse.


I spent my third year at Oxford specialising in quantum mechanics and got a first, so I guess at the time I could do the calculations (couldn’t now), but didn’t gain an intuitive understanding of what is going on, so videos like this help, on the assumption that the presenter does understand the maths behind his simplified explanations. It would be interesting to get back into this, although I’m wary that it’s only by going back to the maths, which gets intractable very quickly as systems get just a bit more complex, that you can be sure you’re not just speculating.

That said, to speculate anyway, perhaps everything just remains in wave function space; it’s just that our detection devices narrow down the probability distributions enough that, for all intents and purposes, the concept of a particle at a location is a good enough approximation to reason with (or do bulk physics with). It seems evident that all interactions actually need to happen over distributed space and time, not at a point location and instantaneously, in order to have enough energy to make a detection trigger a deterministic onward chain of consequences.


O’Dowd is a physicist, so I’m sure he understands the math. (Or like you, has understood it at some point.) But I certainly don’t. Although I know it well enough to grasp the idea that the Schrodinger equation describes the evolving wave, but not the wave function collapse. And that if we just let it play out, we get spreading rather than collapsing waves.

The key question is what is happening when “our detection devices narrow down the probability distributions”? Is the Schrodinger equation the full story? Or is there more? If we say there is, do we have any way to justify it?


“decoherence actually is compatible with pilot-wave and many other interpretations, and reportedly even some versions of Copenhagen.”

Well it had better be, if these “interpretations” are all supposed to be interpretations of the same quantum formalism! (I hear there are proofs, for some pairs of interpretations, that they give the same empirical predictions in all cases. Of course, the thing about proofs is: they have premises.)

In my view, decoherence is not in itself a big selling point for MWI; instead, it removes objections to MWI. Starting with the fact O’Dowd focuses on: using decoherence, you can explain why it seems to us that there is only one result of a quantum coin-flip, consistently with MWI, even though that single result is not the whole ontological truth.

But wait, there’s more! Decoherence math (if I understood the Wiki page on it correctly) implies that there is no “magic moment” when the universe splits. It just becomes more and more useful *for all practical purposes* to treat it that way, and this happens extremely quickly. Now how much would you pay – but wait, there’s more! Because it is always possible in principle to regard our universe as singular, if you identify “our universe” with the total universal wavefunction. It’s just that it’s convenient and natural to speak of the extremely weakly-interacting possible histories as “whole ’nother worlds” because any probabilities associated with them are minuscule compared to more mundane uncertainties of life.


I see we grew up watching the same Ronco commercials on TV 🙂

That matches my understanding. There’s no magical moment when another whole universe gets created. There’s only our gradual (but rapid) loss of access to the other branches, whether because we can’t access it from the branch we now happen to be on, or some other reason.


Is there some prediction that MWI has made that has been proven right? Or even a prediction of any sort? Or something additional it explains – like gravity, for example – that the other interpretations don’t.

In the absence of something compelling, I think the default position should be a formula is just a formula. It abstractly describes something but isn’t a literal representation of reality.


All the successful interpretations make the same testable predictions. They do make diverging predictions, but they’re currently all untestable. Which is why physics is unable to break the logjam.

There have been cases in the history of science where someone stumbled on a useful equation, but cautioned everyone not to worry. It’s not like this crazy thing is true. It’s just a calculating convenience, an accounting mechanism. No need to panic. (See Copernicus and Max Planck.)

But the longer we go without evidence for something in addition to the Schrodinger equation, the more likely it is to be describing reality, with all the absurd seeming results that follow. And it should be noted that all the proposed postulates to avoid that absurdity are themselves absurd in their own way.


You always seem to be wanting predictions that can be tested for most things. A fair ask in my view. If whatever the view is can’t be tested, the simplest view is that it is just an equation and isn’t literally describing some deeper reality which, in the case of MWI, is pretty absurd.

The equations are accurate and describe something about reality but that doesn’t mean they should be taken literally as MWI seems to want to take them.


I definitely want unique predictions, and testable is even better. The lack of testability is why I remain agnostic toward quantum interpretations. I only spend time defending MWI because I think people are letting a visceral dislike of it cloud their assessment. (If you search the archives, I’ve often argued with MWI proponents, when they put it forth as the obviously true one.)

The equations are accurate. The MWI argument is to ask what other postulate besides the equation has been shown to be accurate? Add no other postulates and MWI seems to fall out. Doesn’t mean it’s reality, but I acknowledge the logic.


Relational quantum mechanics (RQM) is the only model out there that is not absurd. In fact, RQM is the most pragmatic approach in that it eschews the Copenhagen interpretation altogether, along with its wave function and magical wave function collapse. It’s a pragmatic ontology that makes no distinction between the quantum and classical worlds, thereby eliminating the mystery of things that cannot be seen. RQM merely states the facts.

Biases are difficult to overcome, especially a bogus intellectual construct that has embedded itself into the fabric of our culture. Wave function collapse is analogous to the Catholic doctrine of the Trinity: it’s impossible to prove; nevertheless, we are told to trust the priesthood because they are the “experts”. Wyrd commented above: “I haven’t paid much attention to relational theories. I see relations as emerging from things, so I tend to see them as secondary.” Secondary or not, RQM is upfront right out of the gate. RQM makes no attempt to address causation, and neither does any other theory within the catalogue of theories contained within the library of physics. And in spite of that fact, we are all too quick to dismiss anything that does not conform to our own cherished world view. Therefore, I find Wyrd’s objection to RQM to be short-sighted and incoherent. But that’s what makes our experience a subjective one, right?

Peace


What’s more, RQM at least attempts to offer an understanding of gravity and to bridge the gap between the quantum and macro worlds. So it tries to offer more than yet another view of QM.


I think RQM’s absurdity is in its sparse realism, the idea that attributes only exist on interactions. Its response to what their state is between the interactions seems to amount to: “Shut up!” I also think it dodges the broader implications of its relativity. You can say that isn’t absurd. Since absurdity lies in the eye of the beholder, all I can say is I find it absurd in its own way.


The only reason you find sparse realism absurd, Mike, is because you do not understand the context of RQM; and that context is motion and form. There is “never” a time nor a condition where there are no interactions taking place between systems, because motion and form are a continuum. To state it another way: there are no states in between interactions, there are “only” the interactions of systems.

Its relativity is straightforward: as a unification theory, there is nothing special underwriting interactions at the quantum level that does not intrinsically happen at the classical level. Those universal interactions take place in their respective worlds as well as across the imaginary boundaries we created between the quantum world and the classical world. The imaginary boundary separating the quantum and classical worlds is just like the imaginary boundary we created between mind and matter. It’s all in our heads and has absolutely nothing to do with the true nature of reality; that’s why it’s called subjectivity.

Peace


Could be, Lee. My exposure to RQM has come from an interview of Seth Cottrell (when I first heard of it) and the Wikipedia and SEP articles. Can you (or anyone else) suggest a better source?


I always enjoy Matt O’Dowd’s videos. In this case, perhaps because I viewed this one outside of the overall sequence, I found it somewhat confusing. I think what confused me was when the graphic appeared of parallel double-slit experiments with brains and eyeballs, and in which he concluded with the laser guns, where he suggested it could theoretically be possible to generate “a pair of photons”, one from each laser gun presumably, and reconstruct an interference pattern. But this makes very little sense to me, because the craziest part of the double-slit experiment is not the interference pattern, but the fact that the interference pattern can be produced one particle at a time. This requires that individual photons interfere with themselves.

So, I think what he was saying is that at some level the one photon that entered the experiment became every possible outcome, but they decohered from one another, which means that as the observer in my particular branch I can no longer interact with the others. The wave function, however, is still propagating and possibly could interact with itself again should it ever come back into phase. And I guess that’s where he loses me. Because if that’s the case then all the computers and eyeballs and brains are in the same universe…? Or world?

Related tangentially, I just finished listening to Lee Smolin’s book Einstein’s Unfinished Revolution. The first two-thirds was a good review of all the various interpretations of quantum mechanics, which was interesting but not too much new material. The last third or quarter I found quite interesting, though. One interesting note is what he calls the cosmological fallacy, which is that we can’t take a theory like quantum mechanics, that applies only to limited systems, and presume it applies to the universe as a whole. O’Dowd happened to mention “the wave function of the universe” and it is said often I think, but in fact there is perhaps some reasonable disagreement about whether such a notion is even meaningful in the context of quantum mechanics.

Lastly, to clarify one point, the wave equation itself and its description of the evolution of a quantum system is not defined as giving the probability of measuring a particular outcome in every interpretation, right? Born’s Rule does this. And Born’s Rule is an independent hypothesis that says the probability of a particular measurement is equal to the square of the amplitude of the wave function. It was essentially a guess on Born’s part… There was no mathematical theory or physical theory behind it. He just thought it made sense, and he was right!

Michael


“Because if that’s the case then all the computers and eyeballs and brains are in the same universe…? Or world?”

Here you’re looking at the core of the Everett interpretation, and why it could be seen as problematic to call it “many worlds”. You could look at all the computers and eyeballs as being in the same universe or world. But due to decoherence, they’ve lost any ability to interact with each other, much as radio signals at different frequencies and wavelengths don’t interact with each other. They are separate causal frameworks. For all intents and purposes, they can be seen as separate universes. But they exist in the same spacetime (unless spacetime itself is quantum, then things get more complicated).

It’s important to understand that if the various branches of the wave have any chance of interacting with each other, it has to happen soon after decoherence begins. Once macroscopic objects are involved, it simply becomes hopelessly inconceivable. Again, for all intents and purposes, they’re in separate universes. (If decoherence is the only thing that happens here. Removing the other branches requires additional postulates.)

Smolin’s book sounds interesting. I might have to check it out. But I don’t think the wave function of the universe necessarily falls under the cosmological fallacy. It doesn’t refer to the universe itself being a wave function, but that the sum total of everything within the universe amounts to one large wave function. (Although again, if spacetime is quantum, this picture becomes far blurrier.)

My understanding is that Born’s rule applies for all interpretations in an epistemic fashion. It predicts the probability of having various observations. But the ontology behind those observations is what varies.


Thanks for the clarifications, Mike. I realized in reading your reply here that when I think of MWI, I have a tendency to view it in terms of its original form, in which the worlds split apart from one another at the point when a measurement is made. So the process was discrete, in a sense. Since measurement is such an ambiguous process to define, and challenging in terms of both its philosophical and physical ramifications, I think decoherence is applied in an effort to make the process less “special” for lack of a better term. This way it’s got nothing to do with human observation or knowledge, and has its roots in physical processes that occur regardless of who is or isn’t watching. I don’t fully understand how it relates to MWI in this form, but will do some digging…


Michael, from what I understand, decoherence was its original form, although it wasn’t called that when Everett did his initial paper on it. Bryce DeWitt, who years later coined the term “many worlds” probably emphasized it in the way you’re thinking. (And which Sean Carroll tends to emphasize; which I think is a mistake.)

That said, I’m saying that as someone who’s never read any of the original material by these people, just the popular science descriptions of them.


So I’ve never read anything on MWI except the popular science descriptions, either. I’m going largely on the few papers on decoherence by Zurek, who I mentioned last time we discussed decoherence, and the discussion of this topic that Smolin gave in his book, which is fresh on my mind.

My understanding is the original formulation of MWI was by Everett, and maybe it wasn’t called MWI at the time, but this is what I was referring to. This was in the 50’s I think, and I believe work on decoherence began in the 70’s. I also think DeWitt resurrected Everett’s work in the 70s, so your notion makes sense. Given the two events are contemporaneous, perhaps the work on decoherence enabled a fresh look at Everett’s original thesis.

Here’s the part that, as a rank amateur, I have trouble with. Decoherence as I understand it from reading Zurek is a process in which environmental interactions sort of “dampen” portions of the wave function that are not mathematically unchanged by environmental interaction. Those interactions essentially reinforce, or select for, those portions of the wave function that are unchanged by such interactions. So, what I’m guessing is that of all possible “outcomes” of the wave equation, only some remain viable in a classical sense. But it must be more than one. And somehow, we only see one of them, so perhaps other viable states – meaning states that would not change as a result of interacting with the environment – could be interpreted as “other worlds.” I’m still not sure of the idea behind why we see one and not the other, if multiple “outcomes” are resilient to, or selected by, environmental interactions. But perhaps that’s where MWI starts to make some sense.

So… perhaps the most up to date version of MWI is not that all possible outcomes of the wave function are real, but only a few, and there remains the mystery of why we interact with only one of those few…? This is sort of a refined approach it seems to me, in which the number of worlds is somewhat less than the original form/interpretation of MWI, but in which there are still numerous viable branches. If I’m reading this correctly, it remains unclear why we see one and not the other, but I’m quite sure I am missing something. 🙂
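If it helps, here is a minimal numpy sketch of the selection idea as I read it (a toy model of my own, not Zurek’s actual math): a CNOT-style system-environment coupling leaves the basis states |0⟩ and |1⟩ unentangled (they act as the “pointer states”), while a superposition of them becomes fully entangled with the environment.

```python
import numpy as np

# A CNOT-style coupling: the environment qubit flips iff the system is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def purity_after_interaction(system_state):
    """Couple the system to an environment qubit in |0>, trace out the
    environment, and return the purity of the system's reduced state.
    Purity 1.0 means still pure (unentangled); 0.5 means maximally entangled."""
    env0 = np.array([1.0, 0.0])
    joint = CNOT @ np.kron(system_state, env0)
    rho = np.outer(joint, joint)
    # Partial trace over the environment qubit (basis order |s e>).
    rho_sys = np.array([[rho[0, 0] + rho[1, 1], rho[0, 2] + rho[1, 3]],
                        [rho[2, 0] + rho[3, 1], rho[2, 2] + rho[3, 3]]])
    return np.trace(rho_sys @ rho_sys).real

print(purity_after_interaction(np.array([1.0, 0.0])))               # 1.0: |0> passes through intact
print(purity_after_interaction(np.array([1.0, 1.0]) / np.sqrt(2)))  # ~0.5: the superposition entangles
```

Only the states the interaction leaves alone keep a coherent identity of their own; superpositions of them get spread into the environment, which (as I understand it) is the “selection” being described.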

Michael


From the Wikipedia article on decoherence:

The idea had a surge of popularity in the 1980s and 1990s, but does date back to much earlier QM.


Hmmm, something happened to the quote. It was basically that “Decoherence provides an explanatory mechanism for the appearance of wave function collapse and was first developed by David Bohm in 1952.” Everett used it in 1957.

Thanks, Wyrd! I’m curious how the idea has evolved over the years, and whether or not Everett’s original use is consistent with the ideas today or not. Do you know?


Sorry, can’t say I do. I’ve never had much interest in the MWI.


Sorry Mike. I should not have said “are not” in my italicized section. I should have said “are”. Perhaps you could correct that to avoid confusion… Thanks!

Wyrd, the quote looks right to me, and matches what I recall from the Wiki article. But that language is a bit misleading. As I understand it, decoherence as its own theory originates with Zeh c. 1970, but he developed it to elaborate on the core mechanisms of the pilot-wave and many worlds interpretations. It ended up being useful for many others, even some variants of Copenhagen.

Michael, I don’t know that the core interaction of MWI has changed since Everett’s first formulation. What Everett basically did was ask, what if there is no collapse? What if we just follow the Schrodinger equation, come what may?

Under most interpretations, the wave spreads and branches until something causes it to collapse into a particle. But what if on the interaction we take to be the collapse, instead the spreading and branching propagate to the other side of the interaction? So now we have two entangled systems.

But as the effects of the interaction continue to spread, the entanglement spreads ever farther out. The original wave is now hopelessly decohered, each fragment only able to interact with equivalent fragments from the other waves. Each fragment now forms its own branch.

The question is, why don’t we perceive this? The answer, according to Everett, is to remember that we ourselves are quantum systems. When the fragments reach us, they cause us, or rather the quantum particles of which we’re made, to undergo the same process. We fragment among several branches. On any one branch, a version of us looks at the fragment matched up with his and asks, “Why do I see only one?”

The question comes down to, is there anything that causes the other branches not to exist? I don’t think this is a question we know the answer to, which is why MWI can’t be dismissed as a possibility, but also why it can’t be taken as settled.


“Wyrd, the quote looks right to me, and matches what I recall from the Wiki article.”

It should. It’s the same quote you used above. 😉


So it is. No wonder it seemed so familiar!
