The entanglements and many worlds of Schrödinger’s cat

I recently had a conversation with someone, spurred by the last post, that led to yet another description of the Everett many-worlds interpretation of quantum mechanics, which I think is worth putting in a post. It approaches the interpretation from a different angle than I’ve used before.

As mentioned last time, the central mystery of quantum mechanics is that quantum particles move like ever spreading waves, but when they hit something and leave a mark, they do so like little localized balls. Prior to making the mark, that is, prior to a measurement, we say that the particle is in a superposition of multiple states, with those states perhaps involving different positions, momenta, spin directions, or other properties. This is modeled mathematically by the quantum wave function. In Copenhagen and similar collapse interpretations, measurement causes a wave function collapse. All the possible outcomes are instantly reduced to one.
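
A minimal sketch of this bookkeeping, in Python with toy numbers of my own (nothing here is from the post): a two-state superposition is just a pair of complex amplitudes, and a collapse-style measurement samples one outcome with the Born-rule probabilities, discarding the rest.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A toy superposition of "spin up" and "spin down":
# two complex amplitudes whose squared magnitudes sum to 1.
state = np.array([1, 1j]) / np.sqrt(2)

# Born rule: outcome probabilities are the squared magnitudes.
probs = np.abs(state) ** 2          # [0.5, 0.5]

# A collapse-style measurement: one outcome survives, the rest vanish.
outcome = rng.choice(["up", "down"], p=probs)
print(outcome)
```

In a collapse interpretation the discarded amplitude is simply gone; in Everett's picture, both lines of the program's possible histories play out.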

An interesting aspect of this is that if two particles interact, we can often use properties measured from one of the particles to know things about the other particle, kind of like if I have a physical copy of today’s New York Times, I know what everyone else’s copy says.

What makes the quantum version interesting is that such particles, after the interaction, now share a common wave function, that is, they now exist in the same combined superposition, one where every possible combination of their affected properties is an element in the overall superposition. In collapse interpretations, the value of a particle’s spin, for instance, isn’t set until the measurement. In the case of two particles entangled on their spin, when one is measured and collapses to a definite answer, the value of the other particle is also set, apparently instantly.
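
The "combined superposition" can be sketched the same way (Python, with an illustrative maximally entangled state of my choosing): the two particles share one joint list of amplitudes, not one list each.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Joint state of two spins in the basis (uu, ud, du, dd):
# the entangled state (|uu> + |dd>) / sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# One shared wave function gives probabilities for joint outcomes.
probs = np.abs(bell) ** 2           # [0.5, 0, 0, 0.5]

# Sample many joint measurements: each spin alone is a fair coin flip,
# but the two results always agree.
outcomes = rng.choice(["uu", "ud", "du", "dd"], size=1000, p=probs)
```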

The thing about entanglement is that there’s nothing in the mathematics constraining this phenomenon to only pairs of particles. It’s routinely observed in much larger collections. For example, all the elementary particles in an atom are entangled with each other to varying degrees, and entanglement is a central feature of quantum computing. According to the mathematics, it can happen among thousands, millions, or even 10³⁰ particles.

Like perhaps the number of particles in a domestic cat.

This is what led to Erwin Schrödinger’s famous (infamous?) thought experiment. Put a cat in a box, with a radioactive element that has a 50% chance of decaying within a certain time period, along with a device that will detect when the decay has happened and release a poison that will kill the cat. If Schrödinger closes the box and waits the specified time, he has a 50% chance of discovering a live cat, and a 50% chance of discovering a dead one. But it’s more complicated than that.

According to his own equation, the radioactive element goes into a superposition of having decayed and not having decayed. Due to their interactions, this means the element becomes entangled with the atoms in the detector, and those atoms become entangled with the atoms in the poison, which become entangled with the atoms in the cat. In other words, according to quantum theory, the cat is in a superposition of being both alive and dead at the same time.

Following the most common interpretations of quantum mechanics in circulation when Schrödinger came up with this scenario, the cat only takes a definite state when Schrödinger opens the box and observes the result. Schrödinger’s point was that this is clearly absurd and shows that quantum theory has a serious problem.

But there’s a major assumption in Schrödinger’s scenario. Don’t feel bad if you don’t see it. Physicists missed it for decades. The assumption is that quantum physics doesn’t apply to Schrödinger himself, or observers in general, that the mathematical structure of the theory stops working with the observation.

On the face of it, this seems reasonable. We don’t observe cats in superpositions, nor scientists doing quantum measurements. But as Hugh Everett pointed out in 1957, this may be sloppy thinking. It assumes that we ourselves are not quantum systems, and fails to consider what we would see if we were.

So, according to the math, what happens when Schrödinger opens the box? The atoms of the radioactive element, detector, poison, cat, and anything else in the box become entangled with the atoms in Schrödinger’s body. In other words, Schrödinger himself goes into a superposition of seeing a live cat and of seeing a dead one. To be clear, each version of Schrödinger only sees the cat in one state, but every state is observed by a version of Schrödinger.

If Schrödinger himself is in an isolated room and calls up his friend outside, Wigner, and says, “Wigner, here are the results of the experiment,” the phone system and Wigner become entangled with the element-detector-poison-cat-Schrödinger system, with a version of Wigner hearing the cat lived, and a version hearing the cat died.

If Wigner broadcasts the results to the world, a version of the world hears about the cat having lived, and a version of the world hears about it dying. The causal effects of the cat living propagate in tandem with the causal effects of the cat dying, entangling the particles in the entire world. We now have a world where the cat lived, and a world where the cat died.

(Technically, due to electromagnetic interactions, the whole planet would have been entangled much faster, but let’s not be pedantic unless it’s relevant.)

Under this interpretation, there is never a wave function collapse, although there is the appearance of one for an observer on any particular branch of the wave function when they lose access to all the other outcomes. This happens through a process called decoherence. One side of this is interaction with the environment breaking up the wave of the system being measured. But the other side is the causal effects of that system propagating into the environment, with the environment and system becoming entangled with each other.

Is this reality? It depends on how well the wave function itself represents reality and how complete it is. As noted in the last post, I do think it represents reality at least at some level. But I think the question of its completeness remains open. I’m keeping an eye on the experiments that stress the core formalism.

Key questions: will we find evidence for a physical collapse of some type? Or for other additional variables that ensure only one outcome? Or will the universe spring something completely unexpected on us, as it’s done before? Only time will tell.

Unless of course I’m missing something?

71 thoughts on “The entanglements and many worlds of Schrödinger’s cat”

1. Re “As mentioned last time, the central mystery of quantum mechanics is that quantum particles move like ever spreading waves, but when they hit something and leave a mark, they do so like little localized balls. Prior to making the mark, that is, prior to a measurement, we say that the particle is in a superposition of multiple states, with those states perhaps involving different positions, momenta, spin directions, or other properties. This is modeled mathematically by the quantum wave function. In Copenhagen and similar collapse interpretations, measurement causes a wave function collapse. All the possible outcomes are instantly reduced to one.”

Again, there is a confusion between a description of reality and reality itself. Wave functions are descriptions of . . . what exactly, we do not know. Wave mechanics had a competitor theory called matrix mechanics (I had to learn about both), but finally it was shown that the two theoretical treatments were equivalent, so we didn’t need two. But in matrix mechanics we didn’t have waves, so what did the matrices represent?

In both cases, they represented a way to link quantum behaviors to those we had already encountered. But is this a reasonable expectation? It was certainly worth a try, but should human scale behaviors exhibit rules that apply to atomic scale behaviors? Is there any reason to believe that?

So, we have these wave functions, which are an attempt to describe something. But for atomic electrons, for example, the wave function has to be squared to derive the probability of an electron’s position. So, the wave function is the square root of a probability? (WTF?)

So, when we address a situation as you describe above, we have a multiplicity of possibilities, represented by a multiplicity of wave functions, but we cannot distinguish between them, so we are confused. Then when the interaction occurs (not an observation, as that is not a thing; we observe by interacting), the result is not any of those descriptions, and we say that those descriptions collapsed. But since descriptions aren’t things, what has collapsed, other than our mistaken picture of the possibilities in advance?


1. “Again, there is a confusion between a description of reality and reality itself. Wave functions are a descriptions of . . . what exactly we do not know.”

Yes! I think this is a very important point to keep in mind! The Schrödinger equation deals with what I recently heard described as “the flow of probability currents” (the relativistic version looks a lot like the equation for electrical current flow — the one stating current flow into a region matches current flow out of it; current is conserved and so is probability).
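
For what it’s worth, that conservation statement can be written down: with $\rho=|\Psi|^{2}$ as the probability density, the Schrödinger equation implies a continuity equation of exactly the same form as charge conservation (the symbols here are the standard textbook ones):

$\displaystyle\frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{j}=0,\qquad\mathbf{j}=\frac{\hbar}{m}\,\mathrm{Im}\left(\Psi^{*}\nabla\Psi\right)$

Probability flowing out of any region is exactly balanced by the change of probability inside it, just like current.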

The very intriguing question, assuming the MWI is correct, is — given an almost infinite number of identical worlds (let alone all the very different ones) — how does matter physically coincide? What could be going on there?


2. Again 🙂 I’d point out that we have the wave function in the first place for a reason, to account for the observed interference effects. I’m totally open to the possibility that the reality that produces those effects is something different than what the wave function models so accurately, but I need to see that alternative laid out, not just vaguely referenced. (I can deny any theory I dislike by vaguely promising there’s a less shocking alternative somewhere somehow.) Until then, I think it makes sense to assume the wave function is modeling a structure that’s real at some level of reality, and explore the implications.

In terms of probability and the wave function, it’s even weirder than that. The square is of complex numbers. For a long time people thought the complex numbers were a mathematical convenience, that if we really worked hard enough, we could have the same results without them. But it’s looking increasingly evident that the complex numbers aren’t optional. We can try to hide from the implications of things like this, or embrace them and try to figure out what it’s saying about reality.
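
A tiny illustration of why the complex amplitudes matter (Python; the two-path numbers are mine): probabilities come from squaring the *sum* of amplitudes, so phases can cancel in a way ordinary probabilities never could.

```python
import numpy as np

# Two paths to the same detector, each with a complex amplitude.
a = (1 / np.sqrt(2)) * np.exp(0j)            # path A
b = (1 / np.sqrt(2)) * np.exp(1j * np.pi)    # path B, opposite phase

# Each path alone: a 50% chance.
p_a, p_b = abs(a) ** 2, abs(b) ** 2

# Both paths open: amplitudes add first, THEN we square.
p_both = abs(a + b) ** 2   # essentially zero: destructive interference
```

Classical probabilities would give 0.5 + 0.5; the complex phases are what let the combined result vanish.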


2. We’ve discussed this so often I’m not sure there’s any new ground for us (unless you want to get into the weeds). For the record, in the paper that caused all the fuss, Everett wrote: “Alternative 2: To limit the applicability of quantum mechanics by asserting that the quantum mechanical description fails when applied to observers, or to measuring apparatus, or more generally to systems approaching macroscopic size.” He goes on to discard the alternative because, “For what n might a group of n particles be construed as forming a measuring device so that the quantum description fails?”

But, as I’m sure you know, I do not. I think the lone voice of a single quantum particle (such as the radioactive atom in the cat box) is almost instantly (on the Planck time scale) swamped out by the trillions of other voices around it (and all the photons streaming through it). Our daily experience tells us classical reality emerges from quantum behavior. The question (I believe) is where and how. I, too, am watching experiments seeking the Heisenberg Cut. I’m impressed, not by the gradually increasing size, but by the extreme isolation from the environment these require. I think that says it all: the environment “collapses” quantum behavior; classical behavior emerges.

What might be an interesting discussion point is that there are two ways of formulating the MWI, and one of them doesn’t have the energy/mass splitting issue, which is nice. The first, and I think more common, one is what I think of as the Sean Carroll version, based on lectures where he uses his beam splitter device to, he tells the audience, create two of him: one that jumps left and one that jumps right, depending on the splitter’s result. The other I believe is due to David Deutsch, and label accordingly.

In the Carroll version, we’d use the Schrödinger equation to model something similar to a particle tunneling through a wall. We’d see the wave of probability approach the wall and most of it bounces back while a small part continues on through the wall. Now there are two humps of probable location for the particle. Under the (Carroll) MWI reality just branched. This version needs to account for how energy/mass is split without affecting gravity (because gravity is due to mass).

Also note that what’s happening here is not a superposition. The single wavefunction just now has two locations where it’s probable for the particle to be. It’s like atomic orbitals — one electron, but it is likely to be found in multiple locations (those lobes seen in the picture of your previous post). The superposition comes from the separate humps interacting with the rest of reality. If a particle tunnels out of a box, reality branches and the version of the particle inside the box interacts with that environment while the one outside affects things outside. Things get complex at this point.

In the Deutsch version, AIUI, the beam splitter is represented by a superposition of two worlds, one with a particle that goes through the mirror, one with a particle that reflects. There’s no actual “branching” (except effectively) — the wavefunctions for those two worlds are identical up to that point but then diverge. This formulation has no energy/mass problem.

All versions need to recover the Born rule, and as I mentioned last time, I think all versions have an issue with matter coincidence. I don’t believe the Pauli Exclusion Principle works to explain it. The quantum states that account for it are specific to the properties of fermions — it emerges naturally from the math used to describe them. I don’t believe being part of a larger system frees them from that.


1. Definitely we’ve been over these points many times. So some of this will by necessity be a repeat, although maybe new information will come out.

On the lone voice issue, I think you’re right, for the vast majority of particle interactions the causal effects are limited and washed out in the sea of other interactions. But if we’ve arranged a large population of those particles that are designed to magnify the causal effects of one particle, then that particle can have wide ranging effects. It can’t do it by itself. It requires the right conditions (a detector in a lab, or perhaps a biological system that magnifies the effects of a change in DNA). Consider that the only way we can ever know about what happened with one particle is if its causal effects are somehow magnified so that they impinge on our sensory systems.

On Deutsch vs Carroll’s description of many-worlds, I don’t know if you remember, but I did a post on these a while back, along with the description of a gradually splitting world, describing how they are all actually different descriptions of the same ontology. But it’s hard to explain. It took me several months of thinking about it to understand how they’re the same, for it to click into place. (It’s worth noting that Deutsch himself doesn’t see his description as ontologically distinct.)

In this post, I was actually careful not to talk about when worlds split. So let’s consider it under the different descriptions. Under the gradually splitting world description, the world splits as the entanglement spreads, and a world is an element in that entanglement network. Under Deutsch’s description, there were always multiple worlds, so the entanglement spreading represents a spreading divergence between groups of worlds. Under Carroll’s description, what was once one world, is now described as two diverging worlds.

Another way to think of it is as the universal wave function having a large number of slices. (Imagine a one dimensional horizontal plot of a wave function being sliced into tiny vertical sections.) Groups of those slices move in tandem, until some measurement-like event causes portions of that group to start diverging from each other. We can say a world is the group of slices that are moving in tandem, and so that a world splits when its constituent slices start diverging. Or we can equate a world with just one of the slices, with interference from ever fewer slices as more and more diverge from each other.

Ok, that’s my shot at trying to explain it this time. As I said, it’s hard to explain, so no one attempt is likely to work.

I think it would be nice if the Born rule could be derived in many-worlds, but it’s never struck me as mandatory. If the Born rule has to be accepted as a postulate, that still leaves it more parsimonious than any other interpretation. (I think Lev Vaidman has actually bitten this bullet, but most Everettians are good with one of the derivations out there.)

On the Pauli Exclusion Principle, if there is a universal wave function, then simply being on different branches is being in different states for the quantum system that is the universe. Just because all the states have been used in some subsystem, like an atom, doesn’t mean they’ve all been used for the universe as a whole. Unless I’m missing something?


1. I’ve been looking into the specific mechanics of single particle detection, so I’ve been reading about photomultiplier tubes, single-photon avalanche diodes (SPADs), and photographic film. There are a number of concrete situations in terms of how the effect of a single particle is magnified into something we detect.

Some cases are, in a sense, bigger and easier to grasp. We can detect flying electrons when they hit a fluorescent screen, exciting other electrons to emit photons (and then we’re back to detecting those). The first spin experiments used silver atoms that accumulated to the point of microscope visibility on a glass plate.

One thing in common is that the putative superposition at the level of the detector is always between happening and not happening. It either “sees” the particle or doesn’t. (Canonical wavefunction collapse — the probability of the particle interacting here and now either suddenly becomes 1.0 or 0.0, because it either did or didn’t.)

We characterize the situation by a superposition of larger objects, cats, detectors, but that begs the question. It assumes superposition of classical objects is meaningful in the first place (an unproven assumption). Although never observed, the cat is assumed in superposition because it must be so (let alone Wigner). The thing I’ve found is that, when one looks closely at the interactions of detecting single particles, it seems apparent that the quantum state is very quickly lost in the cascading interactions of the first few dozen particles of the detector. These interactions quickly turn into something classical. There doesn’t seem to be room for a superposition with another detector inches away. The energy of the incoming particle causes an interaction that cascades to something detectable. It seems that simple.

A photomultiplier tube, for instance. A photon hits a metal plate. Einstein got his Nobel prize for the photoelectric effect. The energy frees an electron from the metal. It flies towards and hits a more positive dynode where more electrons are released due to secondary emission. Those fly towards a second yet more positive dynode, more electrons are released, and more stages of dynodes amplify the current to the point it can be detected. A good question is at what point does this become classical?
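
To put back-of-envelope numbers on that cascade (Python; the yield and stage count are assumptions of mine at typical orders of magnitude, not the specs of any real tube):

```python
# Each dynode multiplies the electron count by its secondary-emission
# yield, so n stages turn one photoelectron into delta**n electrons.
delta = 4      # assumed secondary electrons per incident electron
stages = 10    # assumed number of dynodes

gain = delta ** stages
print(gain)    # 4**10 = 1048576 electrons from a single photon
```

The exponential growth is the point: a single-particle event becomes a classically detectable current within a handful of stages.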

Can you give me a link to which post you mean? I’d like to read it again.

I’m not sure what you mean by “slices” of the wavefunction. The only sub-components a wavefunction has are the component wavefunctions of a superposition. (Also, crucially, are we talking about the wavefunction math or its putative physical existence?) We can take time slices in time-dependent wavefunctions, but I don’t think that’s what you mean. What sort of “cross-section” do you mean?

I’m with you in not being much bothered by the Born Rule. At most it seems an oddity that low-probability events always happen, but, meh, whatever. I’ve met those who find it a deal-breaker, and it does somewhat belie the idea that the MWI needs no new math. To be a full theory, it does have to recover our experience of the Born Rule. Under Copenhagen, the randomness is axiomatic, but the MWI denies that randomness and thus must account for our experience of probability.

Pauli Exclusion is based on just the quantum states of the electrons involved. There’s no overall superposition wavefunction state that — in terms of the PEP — allows filling more positions. We’re back to the mysteries of superposition and the Heisenberg Cut. How meaningful is superposition of macro objects?


1. I don’t have anything intelligent to say about the detection methods. I hope you’re planning a post on them. It sounds interesting.

Sorry, meant to provide a link in the previous comment.

The nature of splitting worlds in the Everett interpretation

You also might find Disagreeable Me’s comments on this interesting. He tried to convince me several months earlier that it was the same ontology. His way of explaining it might click better with you.

David Deutsch’s version of many worlds

On the slices, don’t overthink it. Just imagine that there’s some minimal unit that the universal wave function can’t be further divided into. (It may not be reality, but it works for me as a crutch.) I think that gets to what Deutsch calls a world. (He actually uses the word “universe”, but it’s the same concept.)

Getting to your point, Carroll’s version of a world is a state of the universal wave function with a discrete amplitude. Carroll’s worlds have varying amplitudes, or “thickness”. A typical Carroll-world contains vast multitudes of Deutsch-worlds. When a Carroll-world “splits”, it’s identical to groups of Deutsch-worlds diverging from each other. The key is that both are descriptions of the same underlying ontology, the universal wave function evolving into ever more states.

I have no idea if that helps or just muddies things more.

Certainly if there is no universal wave function, then we do run out of room for additional states. But then if there’s no universal wave function, we’re done with Everett anyway. As you noted, how big can the quantum world get? It seems to me that we’ll never be able to verify the universal wave function, only fail to falsify it with systems of ever larger size.


1. I do plan to post about single “particle” detection, yes. It generally involves a detector element with a lot of energy in a kind of coiled spring state. The single particle causes enough of the right kind of disturbance to release the spring. The classical detection energy comes from the device. The particle just sets it free. The mousetrap principle.

I’ll look at the post more carefully when I have more time. I see we already chatted there. At first blush I disagree the Carroll and Deutsch ontologies are the same (other than, perhaps, effectively). My impression of the Deutsch formulation is that all possible worlds existed all along in superposition, the universal wavefunction starts and always contains them all. Each evolves over time according to its own wavefunction. Under Carroll’s version, a wavefunction does split so the universal one has more components over time.

On the slices I’m afraid we have a disconnect, which is why I asked about math versus some putative reality. Mathematically, a wavefunction is something like:

$\displaystyle\Psi(x,t)=\eta\,e^{i(kx-\omega t)}$

Which is the wavefunction for a free particle in one dimension. More complicated systems have more complicated wavefunctions, but they are all mathematical functions, Ψ(r,t), that take a position, r (which is [x,y,z] in 3D space), and a time, t, and return a complex number (an amplitude) representing the quantum state at that location and time. I don’t know how to apply “slice” to that. Or rather, I can think of many ways to divide it up, but I don’t follow how any of them help or clarify.

The most obvious way to slice is the sub-component wavefunctions:

$\displaystyle|\Psi\rangle=|\psi_{1}\rangle+|\psi_{2}\rangle+|\psi_{3}\rangle+\ldots$

Under MWI, those would be many worlds that comprise the universal Ψ.

Besides this, other than slicing in time or space, I know of no other way to sub-divide the wavefunction. If you mean “wavefunction” as a placeholder for a putative reality, that’s a separate discussion.

As with SUSY and dark matter particles, at what point does failing close the window? These experiments require extreme isolation from the environment. Why is it such a challenge to accept that the quantum state is fragile? Why such faith in a quantum world that so far shows no signs of appearing? (I don’t mean you personally, I mean that as a generic question.) In Neal Stephenson’s D.O.D.O., a chamber surrounded by a thick jacket of liquid hydrogen in a Bose-Einstein state allows macro quantum effects, such as humans in superposition. It’s basically a working Schrödinger’s Cat Box. The idea is that isolation inside a quantum system allows quantum effects at any level. (The obvious flaw is the chamber itself, and any humans in it, would decohere any superposition, but cool SF idea.)

One thought I’ve had is that it doesn’t take much mass for the de Broglie wavelength of an object to drop to Planck levels. I’ve been wondering what that might do to wave mechanics.
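
To put rough numbers on that thought (Python; the objects and speeds are examples I picked, and the Planck length value is approximate):

```python
# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34             # Planck constant, J*s
planck_length = 1.6e-35   # meters, approximate

def de_broglie(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

# An electron at ~1e6 m/s: a comfortably atomic-scale wavelength.
electron = de_broglie(9.11e-31, 1e6)   # ~7.3e-10 m

# A 1 kg ball at 50 m/s: the wavelength is already near the Planck length.
ball = de_broglie(1.0, 50.0)           # ~1.3e-35 m
```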


2. As I noted above, I didn’t expect to succeed this time. It took me a while to get it. I’ll just say a couple more things. Deutsch does not envision separate universal wave functions. His view maps back to one universal wave function, just as Carroll’s does. The thing to do is think about how that would work. Remember he has interference effects between worlds. And consider this. If we have two identical worlds, each with their own energy, what makes them two rather than one world with the combined energy? It’s not like there are borders, particularly if there is interference between them.

On the comparison with SUSY, I think you might have misread what I wrote. “We can only fail to falsify it” means that it becomes more plausible the longer we don’t falsify it. The longer the formalism isn’t falsified, the bigger the systems we preserve quantum effects in, the stronger it looks. Of course, if we do falsify it by discovering an objective collapse or additional variables that significantly change the picture, then it’s a different story.


3. Think you may have misunderstood me? I said “My impression of the Deutsch formulation is that all possible worlds existed all along in superposition, the universal wavefunction starts and always contains them all.” The difference between them is what happens to the world wavefunctions that comprise it. You’re right about his belief about interference; it’s one of the things I’ve planned to look into. Don’t know anything about it, yet, though, so no comment on that aspect.

“And consider this. If we have two identical worlds, each with their own energy, what makes them two rather than one world with the combined energy?”

You’re willing to set aside the Pauli Exclusion Principle because the particles involved are tagged as in different worlds (different component wavefunctions). Why doesn’t that apply to the worlds themselves?

“It’s not like there are borders, particularly if there is interference between them.”

But then why doesn’t the Pauli Exclusion Principle blow everything apart?

I read you fine about failing. Consider what is becoming more plausible in the continued failing to falsify. That quantum effects are real at the small, isolated scale? Kinda know that already. That cats can be in superposition? No evidence so far. Not even for cat’s whiskers! 😺

As we’ve agreed many times, it all depends on the reality of that Heisenberg Cut.


4. Sorry, I did misread you. I was responding to: “Each evolves over time according to its own wavefunction.” But as you noted, the sentences before that imply your use of the word “wavefunction” here doesn’t refer to the universal one. Sorry, my bad.

I think the Pauli Exclusion Principle doesn’t apply to the worlds because they’re each in different states, even Deutsch-worlds. Otherwise isolated quantum systems within those worlds wouldn’t have their own wave function with interference between their own states. But that raises another way to describe the difference between Carroll-worlds and Deutsch-worlds. Carroll-worlds are macroscopically distinct while Deutsch-worlds are only necessarily microscopically distinct.

If the Pauli Exclusion Principle prevented interference, we wouldn’t have quantum effects, but I may not be understanding your question about why it doesn’t blow everything up.

I don’t see the logic about quantum effects being only in small systems. Experiments are reproducing them in ever larger systems. We can focus on the fact that these have to be extremely isolated for us to measure those effects, but I don’t see how the data is saying small. Certainly we’re a long way from a cat, but scientists are entangling macroscopic objects. If we’re going to hit some “cut” or boundary, it seems like it will happen soon.


5. “I think the Pauli Exclusion Principle doesn’t apply to the worlds because they’re each in different states,”

What you’re suggesting requires that fermions have an additional property — which world — that we’ve never noticed. A tag that tells which larger quantum state they’re in. When it comes to the PEP, the phrase “different states” refers very specifically to the concrete properties of fermions we’ve detected: mass, charge, spin, iso-spin, etc. The mathematics from which the PEP arises includes only those properties. There is no extra which-world property in this picture that would allow coincidence. (A picture, by the way, that explains the Periodic Table.)
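
That point about the math can be shown in miniature (Python; the two-site toy basis is my own illustration): the two-fermion amplitude is antisymmetrized, and it vanishes identically when the two single-particle states coincide, using nothing but the ordinary fermion properties.

```python
import numpy as np

# Antisymmetrized two-fermion amplitude (a 2x2 Slater determinant):
# phi1(x1) * phi2(x2) - phi1(x2) * phi2(x1)
def two_fermion_amplitude(phi1, phi2, x1, x2):
    return phi1[x1] * phi2[x2] - phi1[x2] * phi2[x1]

phi_a = np.array([1.0, 0.0])   # "state a" in a toy two-site basis
phi_b = np.array([0.0, 1.0])   # "state b"

distinct = two_fermion_amplitude(phi_a, phi_b, 0, 1)   # nonzero: allowed
same = two_fermion_amplitude(phi_a, phi_a, 0, 1)       # exactly 0: excluded
```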

The PEP isn’t related to interference. I’m not sure where you got that? “Everything blowing up” was a reference to what would happen if somehow matter did actually coincide — the old “transporting to inside of a wall” thing. Big boom.

“I don’t see the logic about quantum effects being only in small systems.”

When every observation we’ve ever made demonstrates exactly that? We’re not just a long way from a cat, Mike, we’re a long way from a cat’s whisker. I’d say we’ve been hitting the Cut all along, we just don’t know exactly where it is because we don’t yet understand its mechanism.

But whatever. We all have our hopes regarding unproven facts. Ours just has to differ on this one. May I ask, what’s the end game? Do we eventually have actual quantum cats in superposition? Neal Stephenson’s isolated chamber? What do you think these larger experiments would do?


6. In terms of a “world tag”, I’ve actually tried to make it clear that a world isn’t a natural kind, an ontological thing, but a human convention. The word “world” can refer to multiple things. The ontology of the Everett interpretation is the universal wave function.

On the exclusion principle, at this point, I’d suggest finding a physicist to ask for an authoritative answer. From my reading, I’m satisfied there isn’t an issue, but I’m an amateur and any answer I give will be amateurish. I think similar to the energy issue, it’s just something we have to wrap our minds around.

What’s the endgame? Heck if I know, but the further scientists are able to push the boundaries of quantum effects, the higher the chance that we live in a fully quantum universe. Of course, that chance could become zero tomorrow if they actually hit some threshold.

7. These definitely are deep mathematical waters and getting beyond what “quantum state” means WRT the PEP does require swimming in them. (I keep telling you the water’s fine! 🏊🏼‍♂️)

By “world tag” I just meant a (not present as far as we know) property of fermions that gives them the extra degree of freedom necessary to physically coincide. The PEP, as we understand it, embodies all the degrees of freedom we know about. (And, yes, in these discussions, “world”=ψ and “universe”=Ψ — the latter being a superposition of all possible “worlds.” I’m under the impression those are givens; did I say something that made you think I’m not on that page?)

As you know, “no black swans” is almost impossible to prove. As you also know, the best such theories can do is gain some credence over time. I do appreciate the position. Some lab may produce a black swan any day — a meaningful macro superposition or interference. I’d be willing to bet it won’t happen (because it can’t), but I acknowledge that if it continues to not happen it won’t prove anything. (Likewise, not finding SUSY or DM particles doesn’t mean they definitely don’t exist.) It’s either a discovery that changes everything, or a long game of never knowing for sure. We’ve placed our bets! 💰🦢

BTW: I mentioned the de Broglie wavelength. Matter-wave interference experiments often involve a grating of some kind (rather than just two slits). With larger objects, the required grating spacing becomes molecular or atomic. (And, as I mentioned before, the de Broglie wavelength of large objects approaches the Planck scale!)

8. On mathematical waters, for me, it’s a matter of time and energy. Too much investment before I’d trust any resulting conclusions over what I’m getting from the experts.

“I’m under the impression those are givens; did I say something that made you think I’m not on that page?”
Just setting the record straight that I’m on that page too.

I’m not placing bets. I can see the formalism stubbornly continuing to pass tests, but I can equally see something surprising us. I do doubt that any new variables would match the existing guesses, or make QM less disruptive to our view of reality.

2. Why do you want to exclude poor little old me (Pauly)?

Actually, I don’t see why one particle can’t be in many “worlds”. As Mike says, a world is just an accounting device. It might help if we rename it the Many States Of Affairs interpretation. And the states of affairs are global. Everything in this room might be the same regardless of whether, say, Russia invaded Ukraine a minute ago. So there is no puzzle about how the configuration of electrons in my keyboard is compatible with multiple States Of Affairs.

1. The Many States of Affairs interpretation. I like it. It reminds me of the name it was initially published under: the Relative State Formulation.

Interestingly enough, Everett’s working title was the Correlation Interpretation, which today we might think of as the Entanglement Interpretation.

2. Dude! It doesn’t exclude you, it’s in honor of you!

“Actually, I don’t see why one particle can’t be in many ‘worlds’.”

Unless I’m misunderstanding you, that’s the problem — it does!

A “multiple state of affairs” is a nice phrase, but what does it mean mathematically? Doesn’t an “accounting device” need a physical reification? What physical properties allow that accounting?

Let’s explore this scenario: You get a phone call from Wigner about a cat. This “branches” (whatever that means) you into two versions, one that’s happy, one that’s sad. Shortly after, you both get a call from Wigner’s friend about a completely different cat in a different country. Now there are four of you, happy-happy, happy-sad, sad-happy, and sad-sad. That last guy is so sad he turns on a lot of bright lights and heaters to cheer himself up. His walls get warm. Conversely the first guy is so happy he opens his windows in celebration and his walls are cool. Guy #2 was so dismayed by the second call he punched a hole in the drywall.

Four walls in four different states, but presumably the same until they branched. Do we agree that four sets of particles are involved now? If not, how is the difference expressed?

In one case the particles are no longer there, but some distance away. In other cases (and these are the ones that bother me), they coincide, but are roughly the same particle. (In this example there were three different energy states.)

The problem I have is that “being in a different state” isn’t magic sauce one slathers on a fermion to make it invisible to the PEP. In the context of the PEP, “being in a different state” has a specific mathematical meaning. (One that accounts for the Periodic Table, so good work, Pauli!) If different quantum states allow an end-run around the PEP, why doesn’t matter change in every quantum system we make? Yet matter remains matter as we know it.

(Keep in mind, with all the other branching going on all the time, there are way, way more than just four of you.)

1. Suppose that we have a simple elegant way to identify “the same particle” in one state-of-affairs and in another. (From what I’ve heard, there isn’t always a clean way to have “the same particle” over time, but whatever.) So now you have one particle in a superposition of states-of-affairs.

I am just failing to see how this violates PEP. Even if it were in a superposition of states in the narrow sense of “states” involved in PEP — which it needn’t be — would *that* be a problem?

1. Well, if the many worlds are all meant to be physically real, then it’s hard to see how the intense electrodynamics of charge as well as the PEP aren’t an issue. Objects in superposition do interfere with each other (hence interference patterns), so why don’t we see the effects of myriad overlapping worlds? Where’s the interference pattern?

If the claim is they’re “decohered” then they aren’t in superposition, and the PEP and EM charge effects should take full effect. (I read once that if 1% of the electrons in your body were removed the resulting explosion would equal billions of atom bombs. Coulomb effects are way more powerful than people realize. A bit of static charge allows a balloon to defy the gravity of a planet.)

In superposition, where’s the interference? Decohered, where’s the bang?

2. Maybe I misunderstand what “superposition” is supposed to mean, but on my understanding of the word, decohered wave-function-lets (aka “worlds”) are still in superposition. They’re just absurdly close to orthogonal, so the interactions are absurdly close to zero.

When I first heard a serious explanation of MWI (not just in a sci-fi flick), I said, “Oh yeah? Then how come you never run into half a physicist running down the hall yelling ‘Eureka, my quantum experiment worked!’?”

Answer: you do, but his interaction with you is too small for either of you to notice, even with your finest instruments.

3. I’ve found that “decoherence” and “entanglement” are two of the more broadly used words in this field. People mean different things by them, so perhaps we should exchange definitions, because I’m having difficulty understanding what you’re saying in your first paragraph. WRT the latter paragraphs: 😀 😀

I’ll go first:

Superposition: Wavefunctions are linear, so if A and B are both valid wavefunctions, then A+B is also a valid wavefunction. Combining wavefunctions is superposing them. The surface of a lake is a superposition of all the wave energy sources affecting the water. Quantum superposition is weird because one thing is in two (or more) states. Spin, for example, is often characterized as a superposition in some basis (up/down, left/right, diagonal/anti-diagonal). Further, in quantum superpositions, the component coefficients must be normalized such that their squared magnitudes sum to one, because a quantum superposition is potentiality, but for it to actually interact with something it has to be singular. The external interaction requires it be a whole thing, not a ghost of might-be.
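A toy numeric sketch of that normalization constraint (the amplitudes here are made-up values, purely for illustration):

```python
import numpy as np

# Hypothetical amplitudes for a toy superposition a|up> + b|down>.
a, b = 3 + 0j, 4j

# Normalize so the squared magnitudes of the coefficients sum to one.
state = np.array([a, b])
state = state / np.linalg.norm(state)

# Born-rule probabilities for each measurement outcome.
probabilities = np.abs(state) ** 2
print(probabilities)         # squared magnitudes: 0.36 and 0.64
print(probabilities.sum())   # sums to 1.0
```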

I don’t know how to fit the term “decohered” into the above picture, so I’m puzzled by your use of it and of its use in general with regard to the MWI. It’s said to account for why other worlds are invisible to us, but I can’t make sense of that. Normally objects that are decohered relative to each other — such as all the physical objects in our daily experience — are entirely concrete, so I don’t see why the PEP doesn’t (violently) apply.

Decoherence: The loss of phase information such that desired interference effects don’t act as expected. This is catastrophic in quantum computing, and much of the engineering is devoted to maintaining coherence of qubits as long as possible. But a decohered system still is a quantum system and still interferes — it’s just that its phase is randomized.

In the two-slit experiment, imagine dust particles in the air interacting with the photons and altering their phase. This causes a “loss of coherence”, and the interference pattern blurs into a blob. But each individual photon still takes two paths and interferes with itself as usual. Each photon creates its own one-dot “interference pattern” — it’s just that each photon’s is shifted by a random phase, so the overall interference pattern disappears.
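That phase-randomization story can be sketched numerically (the phase model and numbers are illustrative toys, not a real optics calculation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 400)   # screen positions (arbitrary units)
phase = 20 * x                # path-difference phase across the screen

# Coherent case: every photon self-interferes with the same phase.
coherent = np.abs(1 + np.exp(1j * phase)) ** 2

# Dust-scattering case: each photon still self-interferes, but carries a
# random extra phase, which shifts its one-dot pattern on the screen.
shots = [np.abs(1 + np.exp(1j * (phase + rng.uniform(0, 2 * np.pi)))) ** 2
         for _ in range(2000)]
averaged = np.mean(shots, axis=0)

print(coherent.max() - coherent.min())   # deep fringes (roughly 0 to 4)
print(averaged.max() - averaged.min())   # fringes nearly washed out
```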

In a QC, interactions with the environment cause phase shifts in the qubit, which randomizes it, making it useless.

Within a wavefunction the term doesn’t make sense to me, so you’ll have to explain what you mean and define your terms. One result of the environment decohering a quantum system in superposition is forcing it to “pick a lane” — the superposition collapses.

With regard to orthogonality: You’re thinking of projection, where being orthogonal means the projection is zero. But that interaction is part of what happens when we “measure” something — that projection is the notorious “collapse.” I’m not sure it’s a ticket for lack of physical interaction — if anything, it might be the opposite. Like basis vectors, any orthogonal vectors span some section of the space and allow generation of signal. Sine waves, for example, are characterized by two orthogonal vectors that “wave” out of phase with each other. But maybe you mean these things in a way different from the ones I understand?

4. Superpositions as linear combinations: agrees with what I thought. But “for it [a superposition] to actually interact with something it has to be singular” gets tricky under MWI. On MWI, this is only true if we (re?)define “singular” to mean “the only thing happening in OUR world.” Now, that could be a very useful notion of “singular”, but then not much about MWI follows from the observation that, say, spins are singular when measured.

Decoherence: I was forgetting the nuance of this term. The kind of decoherence that most interests me is *rampant* entanglement with the environment. If an originally coherent quantum system has become entangled, perhaps through chain reactions, with an enormous number of environmental particles, then two randomly selected, antecedently possible outcomes of this process are likely to be described by orthogonal vectors, to some good approximation. (Explanation of this claim attempted on request, but I’m hoping you find it obvious.) According to this, orthogonal vectors can be used as a basis for breaking down a complex dynamics problem into sub-problems.

Unless I’m getting something terribly wrong (which is highly possible since I’m a QM noob), you could almost say that each antecedently possible outcome (of this rampant entanglement) is its own little “world”. That is, if you just follow the unitary wavefunction evolution and suppose there’s no (objectively real) collapse.

5. You’re right that, under the MWI, the need for a superposition “pick a lane” doesn’t exist (because “collapse” doesn’t exist). I actually see this as one more problem for the MWI because it makes a hash of reality as we know it. (And I’m not referring to the many worlds as that problem.)

For us to see anything (under Copenhagen), photon matter waves “collapse” in the cells of our retina, starting a chain of (mostly?) classical events that ends in our brain. Under the MWI, however, a photon in superposition with not being there at all causes a superposition in our retina (between being stimulated and not), and this sets off a superposition in our brain of having seen and not seen. This spreads to our acting on what we saw or not, and the putative superposition keeps spreading. So everything we experience, despite all the things our science tells us, isn’t really experienced but is just spreading superpositions. It’s more that we enter a superposition of having experienced it or not.

I don’t believe reality works like that, but that’s just me.

“If an originally coherent quantum system has become entangled, perhaps through chain reactions, with an enormous number of environmental particles, then two randomly selected, antecedently possible outcomes of this process are likely to be described by orthogonal vectors, to some good approximation.”

Explanation requested because, sorry, I can’t make head or tail of it. Can you use a specific process and trace through what you mean?

“According to this, orthogonal vectors can be used as a basis for breaking down a complex dynamics problem into sub-problems.”

They are. It goes to what I said above about orthogonal vectors spanning a space. The term “span” here has formal mathematical meaning. As your link gets into, orthogonal vectors allow a basis, and that basis can define any vector in the space.

“Orthogonal” casually means “at right angles to” but in higher dimensional spaces than three, the concept of a “right angle” changes meaning. Given three vectors in 3D space, each at a right angle to the other two, no fourth vector can be so placed. QM can have spaces with infinite dimensions, so with higher dimensions we resort to the inner product, which gives us the projection of one vector onto another. If that projection is zero, we consider the vectors orthogonal. (Note that in the Bloch Sphere being orthogonal means being 180° to each other, so the concept of orthogonal is fluid. It’s based only on inner product, not any actual angles.)
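A quick sketch of that inner-product notion of orthogonality, using the standard qubit states and their conventional Bloch-sphere labels:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)     # |0>: Bloch north pole
down = np.array([0, 1], dtype=complex)   # |1>: Bloch south pole, 180 deg away
right = (up + down) / np.sqrt(2)         # |+>: on the Bloch equator, 90 deg away

def inner(a, b):
    return np.vdot(a, b)   # inner product; conjugates the first argument

print(abs(inner(up, down)))    # 0: orthogonal, despite being "antipodal" on the sphere
print(abs(inner(up, right)))   # ~0.707: not orthogonal, despite the sphere's "right angle"
```

The point is exactly the one above: orthogonality is defined by the inner product being zero, not by any literal geometric angle.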

What’s important in QM is that different bases can be incompatible with each other, and this is where Heisenberg Uncertainty actually comes from. For example, position is an orthonormal basis (“orthonormal” means mutually orthogonal vectors of length one). So is momentum. Crucially, in the Hilbert space, these two bases are incompatible: their operators don’t commute (the bases are related by a Fourier transform). That’s why we can’t know both with arbitrary precision simultaneously. We have to pick a basis when we measure — are we looking for the location or the momentum? Similarly, in spin experiments, we have to pick the spin axis we measure, and the different axes’ operators don’t commute. (This is also the source of the “preferred basis problem” in the MWI, which again raises the question of how we actually ever experience anything under the MWI without picking some basis.)

Anyway, I still don’t know how to fit what you’re saying into any of my understanding, but I’m not sure that’s on you but on the MWI. The more I research it, the more incoherent it seems. I really need to sit down with an expert theoretical physicist. I keep meaning to see if there’s anyone at the U of M that will talk to me.

6. “It’s more that we enter a superposition of having experienced it or not.”

I’m of two minds about this. 😉 Sean Carroll would say that these two branches contain different people. But! — me talking now –since neither has a better claim to be me, I have ceased to exist. You could talk that way if you want to, but I prefer to use fuzzy logic regarding which future people are “me”. So yes, I (to a high degree of being me) will experience many different things, without all these experiences being integrated with one another. Life is complex! I say deal with it.

Regarding dot products. As the number of dimensions approaches infinity, the odds of two vectors of such dimensionality being orthogonal approaches unity. I think it was Frank Wilczek who wrote that in Physics Today. When I read that I suddenly realized – holy crap, I actually understood that sentence!

Why that’s true: imagine two 2D unit vectors of random orientation thrown on the floor. Their dot product is probably less than 1. Draw the projection along either vector’s direction, then assign each vector a random Z component in [0, 1] and renormalize them. Throw them on the metafloor. Repeat ad nauseam. Each throw adds another chance for the dot product to be really, really small, and the overall effect is a “weakest link in the chain” type effect.

Why this is relevant: The more particles have been affected directly or indirectly by our initial photon (or whatever), the more dimensions it takes to characterize the process.
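The shrinking overlap can be checked numerically with random unit vectors (the dimensions and trial counts here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_abs_dot(dim, trials=500):
    """Average |dot product| between pairs of random unit vectors in `dim` dimensions."""
    v = rng.normal(size=(trials, dim))
    w = rng.normal(size=(trials, dim))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    return np.mean(np.abs(np.sum(v * w, axis=1)))

# The typical overlap shrinks roughly like 1/sqrt(dim).
for dim in (2, 10, 100, 10_000):
    print(dim, mean_abs_dot(dim))
```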

7. Heh. I’ve never had the identity issues some report with regard to various multiverse theories. My apparent autobiographical past is whatever it is; I’m the person I am today. If there are other copies of me elsewhere, I’m sure they don’t care about (or believe in) me any more than I do them. It’s just never bothered me one way or the other. (Perhaps from reading so much SF involving body and/or mind switches or AIs or various other forms of being.)

Yes, very true about large-dimension vectors. If you’ve ever been exposed to “semantic vectors”, they depend on the approximate orthogonality of large random vectors. Take a vector with 500 real components, each picked randomly, and it will be largely orthogonal to any other random vector you make. Attach these to semantic concepts, and you can do a kind of math with them.

But the dimensionality of the wavefunction is fixed from beginning to end. The number of dimensions is tied to the number of particles being described. It’s a starting condition. Maybe a question to ask is what role you see these vectors playing in your scenario. Generally, a wavefunction is something that takes a spatial position (and possibly a time) and returns a state vector representing the quantum state at that location (and possibly time). We can then take the inner product of that projecting on some basis vector, square that value, and end up with a probability of observing the particle at that place (and possibly time). So, we have vectors that form various bases for measurement, and the vector that’s the current system state.

8. I think the excitement occurs here:
“We can then take the inner product of that projecting on some basis vector, square that value, …”
I’m suggesting that the basis vector in question is some other antecedently possible outcome of the rampant entanglement process. So, the photon hits your eye, vs it flies out the window and hits a tree, to elaborate your earlier example. What is the dot-product of those two outcomes? Basically nada.

Square that value and it’s very nada. On MWI you end up first with a measure of how substantial that result is (the result where you see evidence of the same photon hitting both the tree and your eye). “How thick the world-branch is” is a common phrase. From there, in turn, does come a probability, if those probability derivations work. We can temporarily shelve the issue of whether the Born rule can in fact be justified on MWI.

I don’t know what “the basis problem” of MWI is supposed to be. I see the above reasoning as a solution, not a problem. I suppose the “problem” must be something quite different.

9. Okay, I see what you’re saying. The inner product between any two outcomes is always zero. Possible measurements, e.g. spin-up versus spin-down, are always orthogonal. Finding the photon in the tree versus your eye are orthogonal because different locations are orthogonal. Here we’re using the position operator, and in that basis, all positions are orthogonal.

[Details: Imagine a one-dimensional situation where a particle might be anywhere on the X-axis. There is a state vector in which every possible X location is a component of that vector. The wavefunction gives values to all those components (which, in fact, form a smooth continuum). We get the probability of finding the particle at any given X by applying the location operator, which gives us the square of the inner product of the wavefunction and the location. The basis for the location operator is an infinite set of vectors matching the X-axis, but in which all components are 0.0, except for one, which is 1.0 — the particle is here. These vectors are necessarily orthogonal to each other because none of the 1.0 values match any of the others, so the inner product between any two is zero. There is something similar behind the bases for momentum, spin, etc.]
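A discretized sketch of that bracketed picture, with a finite grid standing in for the continuum (the wave packet is a made-up example):

```python
import numpy as np

# Discretized 1D position basis: basis vector i is 1.0 at site i, 0.0 elsewhere.
N = 100
basis = np.eye(N)   # each row is one position basis vector

# A made-up wave packet over the grid, normalized.
x = np.linspace(-5, 5, N)
psi = np.exp(-x**2)
psi = psi / np.linalg.norm(psi)

# Distinct position basis vectors are orthogonal: their 1.0 entries never line up.
print(np.dot(basis[10], basis[42]))   # 0.0

# Born rule: squared inner product with each basis vector gives a probability.
probs = np.abs(basis @ psi) ** 2
print(probs.sum())                    # 1.0
```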

The putative amplitude of world branches is supposed to recover the Born Rule — it’s a matter of how likely we are to find ourselves in one branch versus another. One has to accept a reality in which, no matter how unlikely an outcome is, it still always happens, though few of us will find ourselves in such branches. That means, in terms of the cat-in-the-box, not only are there copies of cats that died every second during the experiment (because the sample could decay at any time, and a cat dead one minute is different from a cat dead thirty), there are branches where all sorts of other things happened. Cars crashed into the lab. So did meteors. Massive power and equipment failures of every possible nature. To borrow your phrase, “antecedently possible” — if any antecedent leads to an outcome, under the MWI it happens. Under Copenhagen, probability behaves as we observe it to behave — the usual thing usually happens, but sometimes an outside chance comes in. (I need to be convinced the universe has it in itself to be physically multiple, so I bet on the latter notion. To me, Occam suggests it’s singular.)

The preferred basis problem is that different possible measurements (say position and momentum) have incompatible bases. The wavefunction can be described equally well as a superposition in one basis or another, but not both at the same time. So, the question is: which basis does reality pick for the superposition of world wavefunctions?

We’ve come a long way from my issue about fermions coinciding. I’m not sure if it’s any clearer why that’s a problem, but the discussion is plenty interesting regardless.

10. My reading of what MWI says is that “the photon was found in your eye” *gradually* becomes true to degree ~ 1 (in terms of being a many-step causal process) yet quickly (taking << 1 sec). Rather than saying there was a tiny chance we could find "evidence of a photon hitting your eye and evidence of it hitting a tree", I should have said "evidence of the kind of phenomena we could expect if these events somehow interacted". But the example isn't appropriate because in ordinary circumstances you'd never detect an interference pattern between a person and a tree even if there was one. So instead, let's say that Everett predicts that there is no system-size limit where interference effects should disappear altogether. Of course the ability to test this depends on your instruments. And AFAIK some other interpretations might make the same prediction.

For the preferred basis problem, I think the "local" way of viewing Everett helps. That is, rather than regarding the universe as having instantaneously split when you detect spin up (and Alice therefore detects spin down), view the split as propagating from the detection "event". So, the local universe-patch decides the basis. For that matter, so did your instrument decide the basis of which axis the spin was measured.

I suppose you're going to chicken-and-egg problematize this and ask what bases does this classical object called an "instrument" have. I dunno, but for now that looks like a puzzle, not necessarily a problem.

11. Now we seem to be veering off into “causal” MWI versus “instant” MWI. I don’t see what this has to do with anything we’ve discussed, but okay. Sean Carroll has said one can take their pick of whether the entire universe branches at once or whether the branch extends in a causal manner limited to light speed. He feels they amount to the same thing.

Which is fine, but I think causal MWI requires accepting quantum non-locality (or something that explains the results of Bell’s experiments). Imagine the canonical Alex and Blair spin experiment. When both measure the same axis, their results are correlated. Under causal MWI, Alex and Blair both branch, but somehow the branches with Alex-up and Blair-down have to meet and likewise Alex-down and Blair-up. What property of reality prevents Alex-up plus Blair-up and Alex-down plus Blair-down from meeting?

To take this back to the original topic, it’s somewhat like the question of what makes fermions invisible to each other. If Alex and Blair are causally separated, what keeps the branches straight? If reality instantly branches, there’s no issue. Also no issue in the MWI “parallel worlds” formulation with no branching (just parallel world wavefunctions that are identical until they diverge).

Regardless, this doesn’t have anything to do with the preferred basis problem. Here’s a simple example: Spin can be characterized as a superposition of Up+Down or Left+Right or Diagonal+Antidiagonal, but these bases are mutually exclusive descriptions: characterizing the superposition requires picking one. On a larger scale, what is the basis for the universal wavefunction? It’s a superposition of world wavefunctions, but what is the basis that defines them?
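A numeric sketch of that mutual exclusivity: one and the same spin state decomposed in two different bases (the state and the left/right labels are illustrative):

```python
import numpy as np

s = np.sqrt(0.5)
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # the Z (up/down) basis
left, right = (up + down) * s, (up - down) * s          # the X (left/right) basis

psi = 0.6 * up + 0.8 * down   # one and the same state (made-up coefficients)

# Decomposed in the Z basis:
print(np.dot(up, psi), np.dot(down, psi))      # coefficients 0.6 and 0.8
# Decomposed in the X basis:
print(np.dot(left, psi), np.dot(right, psi))   # roughly 0.99 and -0.14

# Either decomposition is complete on its own: squared coefficients sum to 1 in both.
print(np.dot(up, psi)**2 + np.dot(down, psi)**2)
print(np.dot(left, psi)**2 + np.dot(right, psi)**2)
```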

12. “What property of reality prevents Alex-up plus Blair-up … from meeting?” I guess it would have to be, as Alyssa Ney says in that article Mike linked to, that locality-as-separability is violated in 4D spacetime. It is only respected in the high-dimensional Hilbert space.

When exactly do we need to know the basis that defines each world wavefunction? How about waiting until relevant experimental results are in, and then deciding that the basis must have been one of {B1, B2, … BN}?

13. It seems that way to me, too. The MWI still has inseparable states and, thus, quantum non-locality. But that implies the “branching” is instantaneous everywhere.

It’s a good question. It’s the preferred basis question, and (as with many things MWI) there doesn’t seem to be a clear notion of the necessary mechanics involved to allow a good answer. One answer is: from the start. The universal wavefunction as a superposition of world wavefunctions implies some basis from the start, one that serves as the reference for all those world wavefunctions.

You see, under Copenhagen, the answer would be “upon measurement” — the superposition is just potentiality and any basis you use to describe that is as good as (but remember mutually exclusive from) any other basis. Measuring picks a basis. But under the MWI, the superpositions supposedly represent “real” worlds so it’s a very good question as to why one basis and not some other with no measurement to pick them out.

The simple example is, if some unmeasured particle is a superposition of spins, are the world branches up/down or left/right or any other orthogonal angles? Under the MWI two worlds are “real” but which two?

14. But is this qualitatively different from the uncertainty we have in any old physics, say classical? What makes a particle start out by flying in one direction, rather than another? After it interacts with your measuring devices, you can say a lot about its initial trajectory, but not before. If we want an epistemic why – why should we say this rather than that about the particle – you just have to wait for data. If we want a metaphysical why, ultimately there isn’t one, for many details, in classical physics. Nor in Copenhagen, as far as I can tell.

15. “But is this qualitatively different from the uncertainty we have in any old physics, say classical?”

On several counts. The quantum superposition of states doesn’t exist in classical mechanics, so questions of basis don’t mean anything. The apparent indefiniteness of QM doesn’t exist in CM — all classical properties are definite. And classical properties always commute; that they don’t in QM is one of its distinguishing features. (I believe composite classical systems also combine by Cartesian product rather than the tensor product QM uses.)

So, yes. 🙂

“After it interacts with your measuring devices, you can say a lot about its initial trajectory, but not before.”

Artillery, ballistic missiles, and a great deal of the space program, depend on our ability to prepare and control a classical trajectory. This works because classical properties commute. We can know with great precision both the location and momentum of a projectile and we have fully deterministic equations to work with.
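The non-commutation side of that contrast can be made concrete with the standard Pauli spin operators (a minimal sketch; numpy is used just for the matrix algebra):

```python
import numpy as np

# Pauli spin operators for measurements along the Z and X axes.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Classical observables would commute; these don't.
commutator = sz @ sx - sx @ sz
print(commutator)                   # nonzero matrix: measurement order matters
print(np.allclose(commutator, 0))   # False
```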

16. Of course there are differences, but not all of them relate to uncertainty. Some (all?) Everettians are going to say that quantum properties (the universal wavefunction) are also definite. And the Schrodinger equation is deterministic. Meanwhile, in the classical world, someone can shoot down your ballistic missile with an ABM, which you don’t control. Or a meteorite could strike it.

There is definitely a quantitative difference in uncertainty. The Everettian (or any quantum) view of the world is more complex than the classical one. And while both kinds of physics have parts of the universe that recede too fast for us ever to reach, Everettian QM also has unreachable “worlds”. But both have enormous amounts of stuff we don’t yet know, and plenty of stuff we’ll never know.

17. Oh, okay, if you want to focus on just uncertainty, we’re still dealing with two very different things. Classical uncertainty (a meteor taking out your missile, for instance) is just stuff you don’t know — but which physics doesn’t prevent you from knowing (radar could spot the meteorite; the missile could dodge it).

Quantum uncertainty, the Heisenberg Uncertainty, is a mathematical consequence of properties not commuting in QM. It comes, as I discussed above, from the incompatibility of, for instance, the position and momentum bases. Here physics itself prevents knowledge.

I think we’re best served trying to understand the (one) world we know, inhabit, and can test. Faith in other realms we can never reach, in my view, amounts to a religious belief, an abiding faith based on an idea with not one shred of physical evidence supporting it. Just faith that a quantum universe is a thing when all the evidence suggests otherwise.

18. I consider relativity part of classical physics. The speed of light limit definitely puts some relevant information out of our reach. A massive light pulse could be incoming, and you’ll never know before it’s too late.

QFT is QM+relativity, but I take your point. Still, we’re talking about two completely different things. Quantum uncertainty is about the lack of precision in measuring something inside our causal sphere. Things outside our light cone aren’t uncertain so much as unknown. Either those things become known once the light reaches us, or, because of the expansion of the universe, they never do and have no causal relationship with us.

20. Sure, but I’m suggesting that “the basis problem” is itself a bunch of unknowns. That is, you find out about bases through interactions. You shouldn’t look to theory to tell you the present configuration of any part of the universe. It is plenty good enough to have ways of getting information that narrows the possibilities.

21. On second thought, the overall effect is worse than a “weakest link in the chain”. But even a weakest-link effect would suffice for the conclusion.

22. I’m not sure you understand the basis problem. It isn’t an issue of “unknowns” but of mutual exclusion. It has nothing to do with what we can figure out or know, but how nature (per the MWI) is supposed to work. (The basis problem is only an issue in the MWI. It doesn’t exist otherwise.)

I have no idea what you mean by “weak link.”

23. I’ve been following this, and I have to admit my own grip on the preferred basis problem isn’t solid. (From what I’ve read, there’s some disagreement about what it is even among physicists.) But my understanding is it pertains to why worlds split (or diverge) according to the measured basis.

Often this is described as being solved by decoherence. (Some people contest this, but it seems like every solution in this space is contested.) My way of thinking about it (which is really just a rephrasing of decoherence) is measurement amplifies the causal effects of the measured basis into the macroscopic environment. It’s those causal effects which lead to the divergence (splitting).

I think what Paul is saying is that until the interactions involved in measurement, there’s no issue. The situation is identical to any other interpretation.

Unless of course my grip on this is even worse than I thought.

24. “Unless of course my grip on this is even worse than I thought.”

There is something of an “elephant in the room” here that I want to address up front. Two important points. Firstly, Dunning-Kruger syndrome is not about intelligence. In fact, it’s possible the highly intelligent are most prone to it. Competence in some areas easily leads to a false self-perception of competence in others. Secondly, QM without the math is Dunning-Kruger territory, because the only real truths about QM are in the math. QM is infamously non-intuitive, so no verbal description can ever do it justice. If all one has are words, one’s grip is necessarily worse than one can imagine.

One’s reasons for not learning the math can be entirely legit and reasonable, but regardless of the reason, without the math, no real understanding is possible. (And even with it, no one really understands what the math means.) And, FWIW, per D-K syndrome, one really isn’t aware of the gaps in knowledge, but that gap is readily apparent to others who are familiar with it.

Not that I’m claiming expertise with the QM math — that’s a work in progress — but I do know enough to know where I can speak with confidence and to see where I need to learn more. The deal with D-K syndrome is that one must gain enough understanding to recognize the difference. In QM, that requires learning the math. (The truth is, QM math isn’t really that hard.)

“But my understanding is [the preferred basis problem] pertains to why worlds split (or diverge) according to the measured basis.”

There’s a lot more to it. For one thing, there is no measurement in the MWI, so the term “measured basis” doesn’t have the meaning we’d normally give it. The word “why” and the next sentence invoking the buzzword “decoherence” suggest you may be conflating branching (or diverging) with the preferred basis problem (which is distinct).

Before I dig into this, can you give me a reference to the disagreement among physicists as to the nature of the preferred basis problem? It seems well-defined to me. I do know that some proponents of the MWI dismiss it, but it was one of the few actual MWI topics discussed at that recent “Shoulders of Everett” event. That Don Page guy gave a talk about it having to do with consciousness, which seems a huge step backwards.

At risk of repeating myself, the simplest example illustrating the problem is to consider the quantum spin of a “particle.” (Some of this will connect back to that first lecture from Professor Adams in that MIT 8.04 course.)

Quantum spin can always be characterized by a superposition. Say we’ve measured the Z-axis (which is the basis {|0⟩,|1⟩}) and gotten |0⟩ (spin-up on Z-axis). We know that a second measurement on the Z-axis necessarily always gets the same result, |0⟩, so we might characterize the spin:

$\displaystyle|\Psi\rangle=|0\rangle$

Which is valid, but spin is a quantum property, so it’s equally valid to describe it as the superposition:

$\displaystyle|\Psi\rangle=\frac{1}{\sqrt{2}}\left(|X_+\rangle+|X_-\rangle\right)$

Which is the X-axis basis. We can, in fact, describe it as a superposition in any basis we like, although for an arbitrary axis we have to include the appropriate (generally unequal) coefficients:

$\displaystyle|\Psi\rangle=\cos\theta|A\rangle+\sin\theta|B\rangle$

Note that this general form covers both superpositions above: θ=0° gives the first case and θ=45° gives the second. (The X-axis is physically at 90° from the Z-axis, but the coefficient angle in the formula is half the angle between the axes, and the same equal coefficients apply for any axis orthogonal to the Z-axis.) So even that first example is a superposition, just with coefficients of 1.0 and 0.0.

Note also that, due to having measured the spin, all these superpositions are in reference to that known quantum state. (Known, but I emphasize, still a quantum superposition.)

OTOH, if the spin is unknown, never having been measured, then all superpositions look like the second one (with equal probabilities of getting spin-up or spin-down should we measure on that basis).
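The cos θ / sin θ decomposition above can be verified numerically (a toy NumPy sketch; the particular parameterization of the basis vectors |A⟩ and |B⟩ is my own, chosen for illustration):

```python
import numpy as np

theta = np.deg2rad(30)  # any angle works; 45 degrees recovers the X-axis case

# A basis {|A>, |B>} written out in the Z basis {|0>, |1>}:
A = np.array([np.cos(theta), np.sin(theta)])
B = np.array([np.sin(theta), -np.cos(theta)])

# The superposition cos(theta)|A> + sin(theta)|B> ...
psi = np.cos(theta) * A + np.sin(theta) * B

# ... reconstructs the measured state |0> exactly, for every theta.
print(np.allclose(psi, [1, 0]))  # True
```

So the same definite state |0⟩ really is a valid superposition in every one of these bases at once, just with different coefficients.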

We saw this play out in the experiments Professor Adams discussed in his first lecture. Although the “hardness” measurement gave us a known state, that state was still a superposition of “color” (and still a superposition of “hardness,” with coefficients of 1.0 and 0.0 along one arm and 0.0 and 1.0 along the other). The only final state occurs when a measured particle is destructively localized (i.e. detected).

The point of all this is that (per Copenhagen) all bases are equally valid — but mutually exclusive — descriptions. The superpositions are not, themselves, in superposition with each other. We always have to pick one.

Expanding this to a world, the location of “particles” is a basis — the position basis — and it is mutually exclusive to the momentum basis (hence the Heisenberg Uncertainty Principle). So one example of the preferred basis problem is the question of why the (quantum MWI) world looks localized to us. Why is that preferred (by nature) to the momentum basis?

“Often this is described as being solved by decoherence.”

As I have said many times now, that’s an over-used buzzword that’s slathered over the MWI as if it explains something, but I’ve never seen any account of how it’s supposed to work. If anything, decoherence would seem to put us where this conversation started: questioning why decohered coincident matter (which cannot be in superposition) isn’t subject to the Pauli Exclusion Principle.

I would urge you to stop using the word. (“You keep using that word. I do not think it means what you think it means.”)

“…measurement amplifies the causal effects of the measured basis into the macroscopic environment.”

But how? There is no measurement in the MWI. Without “collapse” how can anything cause anything? A “world” is just an evolving wavefunction. Causality would seem to depend on localization (“measurement”).

“I think what Paul is saying, is that until the interactions involved in measurement, there’s no issue.”

But there is. Equally valid bases are mutually exclusive pre-“measurement” superpositions the MWI claims are real. Under the MWI, we’re in different wavefunctions (worlds) in superposition.

But a superposition of what basis? That’s the question.

25. When I was a teenager, I had learned 6502 assembly language on the Atari 400 and thought I was hot stuff. In particular, I knew about the 8k memory region that the ROM in a game cartridge inserted into the computer’s cartridge slot was mapped to. When one of my friends told me that a new game cartridge had 16k, I told them they had to be wrong. There was only an 8k region to work with. I rejected what he said because I took myself to be the expert, and he wasn’t. When we went to the store and looked at the box the game came in, there it was in plain text: “16k.” My friend knew less about the Atari platform than I did, but he was right and I was wrong. (Turns out there was this thing called bank switching.)

That was Dunning-Kruger. I thought I had more expertise than I did. The comment I made above wasn’t, because I was upfront about the limitations in my knowledge. If you think understanding your limitations is Dunning-Kruger, you need to go read about it again. You might have Dunning-Kruger about Dunning-Kruger. 🙂

You, on the other hand, have learned some QM101 math, and you seem like my teenage self, to the extent that when someone presents anything to you that doesn’t accord with your understanding, you assume it must be wrong and reject it, because you see yourself as the expert. Even when what you’re being told is sourced from real experts.

In other words my friend, before you fling D-K accusations, you need to take a hard look in the mirror. Sorry, but you’ll need a PhD in physics (at least) before I take your statements about what the math means over the statements of world experts on this subject.

“Before I dig into this, can you give me a reference to the disagreement among physicists as to the nature of the preferred basis problem?”

I’ve seen it mentioned in different places, but over an extended period and so didn’t note them. Here are a couple I was immediately able to pull up.
https://physics.stackexchange.com/questions/65177/is-the-preferred-basis-problem-solved/66258 (look at the first answer)

https://arxiv.org/abs/1008.3708

I do understand the idea of bases being mutually exclusive. My comment above was made with that knowledge.

So, the phrase “there is no measurement in the MWI”. Obviously scientists in labs do measurements, so you must mean something else by it? It is true that there’s no collapse in many-worlds, but measurement does happen, and the choices and equipment used make a difference.

Consider, if someone measures the spin of a particle, how does that information get registered in their brain? The measuring apparatus, whatever it is, has to amplify the causal effects of whatever is being measured, so that those effects propagate into the environment, and eventually impinge on the senses of the experimenter. There has to be interactions between the thing being measured, the measuring system, and between the measuring system and the experimenter.

Anytime there are interactions in quantum systems there is entanglement. Which is why, under Everett, it makes sense to speak about the measured system becoming entangled with the environment. The other effect caused by these interactions is the loss of coherence, thus “decoherence”. When an Everettian talks about decoherence, they’re usually talking about entanglement with the environment, that is, the causal effects of the measured outcomes of the basis propagating into the environment.

Importantly, it’s these causal effects from each outcome of the measured basis, these waves of entanglement, that cause divergence in the environment along the lines of that basis. If a different basis were measured, the divergence would happen along that basis instead.
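The entanglement step being described can be sketched in a few lines of linear algebra (a toy model with one “particle” qubit and a one-qubit “environment”; the CNOT gate here is my stand-in for whatever interaction a real apparatus implements):

```python
import numpy as np

# A "particle" in an equal superposition plus a one-qubit "environment".
particle = np.array([1, 1], dtype=complex) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)
env = np.array([1, 0], dtype=complex)                    # environment in |0>

# Before the interaction, the joint state is a simple product (no entanglement).
joint = np.kron(particle, env)

# A CNOT-style interaction copies the particle's basis value into the environment.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
after = cnot @ joint  # (|00> + |11>)/sqrt(2): one term per outcome
```

The result no longer factors into particle times environment: each term carries one outcome of the interaction basis into the environment. Had the interaction coupled to a different basis, the terms would line up along that basis instead, which is the basis-dependence being discussed.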

On the other hand, if nothing is measured, then as you often argue, the particle has no causal effects, at least no more than any other particle. Which is why the superpositions of an unmeasured basis aren’t an issue. And why decoherence is typically cited as the solution to this issue.

Just so you know I’m not making this stuff up:
https://en.wikipedia.org/wiki/Many-worlds_interpretation#The_preferred_basis_problem
https://plato.stanford.edu/entries/qm-manyworlds/#PrefBasi
Wallace also discusses this in his book. (Can’t recall if Carroll does in his.)

If you don’t like my explanation of decoherence, then similar to the Pauli exclusion issue, I recommend seeking out more authoritative ones. But don’t assume because you don’t currently understand it that it must be invalid.

26. Wow. Way to make it personal.

I see your 6502 story as more about your over-confidence than D-K syndrome. As you said, you knew a lot about the 6502 and its Atari implementation. But, of course, no one knows everything, and your considerable knowledge simply misled you, a mistake we all make. D-K syndrome is when “people with low ability at a task overestimate their ability.” [Wikipedia] Without the component of “low ability” it’s not D-K, it’s just over-confidence.

You’ve (once again) misunderstood what I said. In fact, I said the opposite.

“…you seem like my teenage self, to the extent when someone presents anything to you that doesn’t accord with your understanding, you assume it must be wrong and reject it, because you see yourself as the expert.”

I was explicit: “Not that I’m claiming expertise with the QM math — that’s a work in progress — but I do know enough to know where I can speak with confidence and to see where I need to learn more.” I don’t assume anything. I look for evidence and detailed explanations.

“Even when what you’re being told is sourced from real experts.”

We’re down to dueling experts? Where do you think I learned the things I learned? From people like Lee Smolin, Roger Penrose, and others. Not to mention the professors at MIT who teach QM and a handful of other experts who teach QM through YouTube videos.

Do you think I’m making this stuff up? You said yourself that the MWI is filled with contentious views. I respect your experts (which is why I read Carroll’s book and will get around to Deutsch and Wallace). I have the impression you respect the experts you agree with and dismiss those you don’t.

I’ll give you two examples. In your last post stolzyblog linked to a review about the Chalmers book. As I read that review, I thought to myself, “I wonder how Mike will find a way to dismiss this and not engage.” You didn’t disappoint. Quite some time ago I pointed you to a paper by Adrian Kent, Against Many-Worlds Interpretations. I’m pretty sure you didn’t read it.

(Kent is a British theoretical physicist, Professor of Quantum Physics at the University of Cambridge, member of the Centre for Quantum Information and Foundations, and Distinguished Visiting Research Chair at the Perimeter Institute for Theoretical Physics. Expert enough for you?)

“Sorry, but you’ll need a PhD in physics (at least) before I take your statements about what the math means over the statements of world experts on this subject.”

If all it takes is a PhD to impress you, you must not know that many PhDs. I worked with corporate PhD scientists and engineers for 15 of my 34 years. They’re human, and I wouldn’t let the letters alone determine much. (Kind of a classical “appeal to authority” logic error there.)

It’s, again, a matter of your PhDs versus mine. That’s a losing game. It’s what they say that matters. This is where the lack of math background gets in the way. Without it, one has to take these experts on faith.

Your links are interesting, but did you read much more than the phrase you searched for? The first one goes on at length quoting work by Adrian Kent and Fay Dowker. That’s why I was reminded of that Adrian Kent paper. I’ll easily concede the point there is disagreement on the precise nature of the preferred basis problem. (And yet, all the statements of it you linked to describe it the same way.) I found his comment interesting but would have to bone up on consistent histories to say much about it. His penultimate paragraph says much of what I’ve been saying here about the preferred basis problem. He also makes clear this is an open problem in the MWI.

The arxiv paper’s abstract says, “It is shown that, perhaps contrary to common belief, in realistic situations decoherence is not sufficient to solve the preferred-decomposition problem.”

“So, the phrase ‘there is no measurement in the MWI’. Obviously scientists in labs do measurements, so you must mean something else by it?”

Measurement=collapse. But in the MWI, as you said, there is no collapse, so measurement is a problematic concept.

“Consider, if someone measures the spin of a particle, how does that information get registered in their brain?”

According to Everett (section IV, Observation), the process doesn’t follow causality as we apparently see it. The detector goes into a superposition, and that superposition spreads to other systems, including the brain of the scientist. As he describes it in his paper, there is first an observer:

$\displaystyle\Psi^{O}_{[A,B,\ldots,C]}$

O for observer. He means the brackets to enclose time-ordered memory configurations. When the observer observes system S, he has initially:

$\displaystyle\Psi^{S+O}=\Phi_{i}\Psi^{O}_{[\ldots]}$

Evolving to:

$\displaystyle\Psi^{S+O'}=\Phi_{i}\Psi^{O}_{i[\ldots,{a}_{i}]}$

“where a_i characterizes the state φ_i. (It might stand for a recording of the eigenvalue, for example.) That is, our requirement is that the system state, if it is an eigenstate, shall be unchanged, and that the observer state shall change so as to describe an observer that is ‘aware’ of which eigenfunction it is, i.e., some property is recorded in the memory of the observer which characterizes φ_i such as the eigenvalue.” [Everett]
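The step Everett then leans on is linearity: if the system starts in a superposition rather than an eigenstate, the same evolution carries the observer into a joint superposition. In the notation above (my paraphrase of section IV, with $c_i$ the superposition coefficients):

$\displaystyle\Psi^{S+O}=\left(\sum_{i}c_{i}\Phi_{i}\right)\Psi^{O}_{[\ldots]}\;\longrightarrow\;\sum_{i}c_{i}\,\Phi_{i}\Psi^{O}_{i[\ldots,a_{i}]}$

Each term describes an observer who recorded one particular outcome $a_i$; nothing in the formalism singles out any one term.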

As far as “decoherence” and “entanglement” we’re back to Inigo Montoya. Sean Carroll provided nothing. I’m looking forward to what Deutsch and Wallace have to say. Maybe they can provide a detailed account.

Your Wiki and SEP links, in their first paragraph say pretty much the same thing I’ve been saying. They go on to say people, such as Wallace, claim the problem is solved through decoherence (although that arxiv paper you linked says not). My problem, as I’ve said repeatedly, is that decoherence as we understand it experimentally and in QM math, seems to have the opposite effect.

But, as you concluded, discussing this with you is fruitless. At least now I know I need a PhD to rate with you.

27. MWI owes us some explanation of how a basis emerges in an experiment, such as when Alice or Bob subject their particles to fields of various orientations. But, absent (something approaching) a proof that a basis can’t emerge, I don’t see what I would call a problem. Just a puzzle and some unknowns.

3. “ kind of like if I have a physical copy of today’s New York Times, I know what everyone else’s copy says”

I would say EXACTLY like that. I’m with Rovelli in that there is not a physical collapse, just an epistemic one. The wave function is just a mathematical description of all the possibilities given that you don’t know specifics. Just like there is a function that describes all of the possible articles in the New York Times, given that you don’t know anything yet. That function says words like “the” and “a” start out with a high probability of appearing. Words like “Seattle” and “James” have a lower probability, and words like “gakusei” and “schuler” yet lower probability, and words like “wjn8kksj” and “rwdfspppsjy” have almost zero probability (almost).

But if you tell Wyrd one of the words is “Trump”, the probability of “January” goes way up, along with lots of others, but only for Wyrd. And if I know you told Wyrd five of the words, but I don’t know which ones, I can include that in my description of the probabilities. And then if I interact with Wyrd and he tells me one of the words, that affects the probabilities of the four other words from my perspective, as well as the probabilities in the original article.

You could say that all of those articles actually exist, until you find out which ones don’t, at which point those cease to exist. Just pointing out that the mathematics of probabilities doesn’t necessarily have to tell you about reality.

1. You rang, sir?

I haven’t taken the New York Times in decades, and speaking of the name of the devil, let’s not. Like ever again. It’s just a term used in Bridge. M’kay? 😉

My only comment is that experiments seem to indicate reality may be more like Wheeler’s 20 Questions game. Where the party hasn’t picked an item but answers yes/no randomly while building a consistent picture from previous answers (a bit like you described in your comment). It gives the strong and consistent impression that quantum reality isn’t concrete until you ask it a specific question. All things not ruled out are possible until some no answer cuts off that branch of possible answers.

The correlations we observe in Bell’s experiments require either that reality be this malleable or that some form of hidden variables exist (which you and Rovelli are betting on) — that reality actually is fully concrete and only apparently random. (I’ll note that superdeterminism is a flavor of hidden variables theory.)

2. My question for anyone taking the epistemic stance on the wave function is, where do the interference effects come from? What is interfering with what? Without these interference effects, we wouldn’t have a wave function. It seems bizarre to say that it’s the probabilities that are interfering with each other. At that point, we’re just using the word “probability” to mean something other than the common meaning.

As I noted to Steve, I’m open to an alternative here. But I need to see something concrete. Not just a statement that maybe the wave function doesn’t reflect reality. The best I’ve come across are references to toy models that exist somewhere, but I haven’t found a good description of them. (Often the very existence of these models seems to be implied as good enough to ignore the issue.)

1. “Where do the interference effects come from?”

Where do the interference effects in a water wave come from? It’s the dynamics of whatever is underneath. But whatever it is for photons, etc., it ain’t particles. Thus, strings, maybe? The wave doesn’t have to be a “thing”. It just describes certain kinds of dynamics.

[that’s all I got]

1. I appreciate the effort. I’m actually agnostic on whether it’s particles. The wave might be waves of all the different versions of the particle. Or a particle could be seen as an infinitesimal fragment of the wave. Same ontology. But it could be waves of strings instead. Or something weirder.

(Like tiny little turtles. 🙂 )

2. As you suggest, with water, sound, and light treated classically, something is physically waving — energy passes through some medium in waves. Interference comes from energy canceling or adding. But QM puzzles on several levels. The medium appears to be probability, whatever that means. And they’re probability amplitudes that we square to get the probability of finding the particle. And the interference comes from the complex number math used to add the probabilities. The two-slit pattern is real. Explaining it wins you an instant Nobel. 🏆
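The “add amplitudes, then square” rule is easy to see numerically (a toy two-path sketch; the equal weights and the opposite phases are made-up numbers chosen to show full cancellation):

```python
import numpy as np

# Two paths to the same detector, each with a complex probability amplitude.
amp_slit1 = np.sqrt(0.5) * np.exp(1j * 0.0)     # amplitude via slit 1
amp_slit2 = np.sqrt(0.5) * np.exp(1j * np.pi)   # amplitude via slit 2, opposite phase

# If plain probabilities added, the detector would fire with certainty.
classical = abs(amp_slit1) ** 2 + abs(amp_slit2) ** 2

# Quantum rule: add the amplitudes first, then square.
quantum = abs(amp_slit1 + amp_slit2) ** 2

print(classical, quantum)  # ~1.0 vs ~0.0: total destructive interference
```

Ordinary probabilities are non-negative, so they can never cancel like this; it’s the complex amplitudes that make dark fringes possible.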

4. Mike,

Prepare yourself for what may well be a stupid question. You wrote, “In other words, Schrödinger himself goes into superposition, of seeing a live cat and of seeing a dead one. To be clear, each version of Schrödinger only sees the cat in one state, but every state is observed by a version of Schrödinger.”

My question is, why does this thing called “Schrödinger” travel as a unit? Or is it just that everything duplicates when the wave function splits (the first type of description)?

I just sort of wondered while reading this, why do subjects in these conversations navigate the wave functions in blocks like that, as if they are integral wholes? Is it because any interaction of an atom at, say, the tip of my finger, or between an electron in my lung and an atom from the air, will then quickly, at the speed of light say, propagate this entanglement through the body like the world’s most amazing Othello move (bad reference to a board game)?

And I only just thought of this while writing, but let’s say hypothetically we discover in the future that the body is able to maintain something like a quantum computer within itself, or that certain senses rely on tiny molecules that are isolated from the environment for brief periods of time? Sounds crazy, maybe, but if we just allow ourselves to hypothesize that the body has some mechanism that acts like a firewall with regards to the propagation of entanglement, then what happens when there is a “split” in the worlds?

Michael

1. Michael,
Not a stupid question at all. I took a lot of liberties in the way I described this scenario for pedagogical purposes. In truth, as soon as the box is opened, it’s not just Schrödinger who goes into superposition, but everything around him, including the atoms in the air, the furniture, the floor, walls, etc. I hinted at this in my brief parenthetical statement, but probably should have called more attention to it. So Schrödinger doesn’t travel as a unit. Rather he, as a unit, goes along for the ride.

Likewise, assuming the room he’s in has some advanced technology that does succeed in isolating him from the world sufficiently, as soon as interactions between the room and the outside world happen, the entanglement rushes out. So Wigner also goes along for the ride. The world is actually entangled long before he has a chance to announce anything about the cat.

I’m not sure what would happen in the scenario you describe. Entanglement is, basically, correlation that results from causal interactions. It’s only the fact that quantum phenomena involve superpositions that makes them weird. So my initial thought is, in an otherwise quantum universe, such a firewall would isolate us from the universe. I don’t think that would work.

But I suppose it could eliminate all but one of the states we would go into. In that sense, maybe, from our perspective, it would be like the von Neumann-Wigner interpretation, where consciousness causes the collapse. Except in this case it would be biological humans causing the collapse. But it would leave a lot of other branches where humans were missing. It would be a very strange universe, but one indistinguishable from us causing the collapse.

This actually reminds me of a conversation I read about between Everett and another scientist. The scientist asked him if he saw awareness in just one branch, or in every branch of the wave function. Everett of course said every branch. I wonder what the questioner was thinking in that exchange.

1. Thanks, Mike. This is tricky stuff! It kind of leads to more questions for me. At some level I assume/thought entanglement proceeds through local interactions. Meaning, two particles in a gas “bump into” one another, and because momentum is conserved—but because there is no single and exact position and velocity of the two particles and/or perhaps various possible ways in which they could interact—then the outcome for each individual particle is bound up in the other. If we measure the momentum of one, then we’ll know the momentum of the other.
(Now it strikes me I gave a bad example, but I think I’m close. Because of our fascination with the combined one-two punch of entanglement and complementarity, which makes something like the Bell tests possible, entanglement all by its lonesome takes me a little thought to reach. But I think it’s as simple as interactions like this. No?)

Okay, so the cat in the box is isolated, and all the particles on the box side of the isolation go into an entangled state. Because the Schrödinger test also involves entanglement and complementarity, as the entanglement of the radioactive particle(s) at the heart of this experiment “diffuses” through the system and into the environment, every additional particle that interacts with a particle that can trace its interaction history to the radioactive elements in the box goes into a superposition—there is a version of it in the “particle decayed” world branch and a version of it in the “particle didn’t decay” branch.

But I think if we take the cat out of this, and the mechanism of it dying, it’s the same thing. There’s an entanglement network where the particle decays and one where it doesn’t. What gets the cat caught up in the madness is the device that creates a further constraint or relationship between the result of the radioactive particle’s behavior and some mechanism that kills the cat. If the cat were in the box with the radioactive particle but the kill mechanism didn’t exist, then the cat would simply be alive in both entanglement networks, instead of dead in one and alive in the other. Correct?
(So I see why Schrödinger goes along for the ride.)

But it gets impossibly complex really fast right, because if I put sixteen little radioactive particles in the room, and they are not isolated, then the entanglement networks arising need to contain every permutation of possible decay or not, correct?

So, QUESTION, if an event as simple as a collision produces entanglement, does it mean that every single collision involves multiple branches of the wave function? I am thinking it does?

Now, to get to what you said here: “So my initial thought is, in an otherwise quantum universe, such a firewall would isolate us from the universe. I don’t think that would work.” Agreed. That’s why I was curious, but after I wrote this I thought, isn’t that what a quantum computer is trying to do? Isolate from all other entanglement networks for some period of real time? And if that’s so, when it becomes re-entangled with the world, does it re-entangle with all entanglement networks that are available to it? It all becomes sort of overwhelming… But it’s very interesting.

1. Michael,
I think your first three paragraphs are right. Or I should say they match my understanding. 🙂

On putting sixteen radioactive particles in the box, a while back my answer probably would have been close to yours. But consider: if there are no detectors, then whether the elements decay or not has no (or very few) causal effects throughout the rest of the box. In other words, there’s no detector to magnify their causal effects. That means there’s nothing to cause a macroscopic divergence within the box, so no distinctive states. There are microscopic differences, but those exist anyway in all the quantum jiggling always going on, which usually cancels out.

So we could view the contents of the box as being in innumerable distinctive microscopic states that all amount to the same macroscopic one. In that case, does it make sense to say the cat is in multiple states?

So to your question about one collision, there are multiple ways to view it. One is that it, by itself, doesn’t cause any branching. In this view, we need something to magnify the causal effects of that collision leading to macroscopic divergences. Or, we could view it as causing branching, but the resulting branches would all be macroscopically identical. Consider, are numerous identical branches really different branches? Or we could view all those branches as already there and moving in tandem, until something happens to make them diverge. I know this is maddening when trying to wrap your brain around it (it was for me), but all of those descriptions apply to the same ontology. It’s why many Everettians say a world isn’t a precise concept.

Interesting point about quantum computers, although I didn’t take that to be quite what you were describing. But definitely quantum computers isolate their qubits from the environment. Of course, eventually they have to produce a result, which will require a measurement, when they become entangled with that environment. When they do, it’ll be just like two particles entangling, but on a much larger scale. The number of states in the quantum processor and the environment together will become equal to the number in each multiplied together. (Although that’s oversimplified, because QCs use interference to promote the right answer, so in practice the causal effects of many of its states may get canceled out and never entangle.)
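The multiplication of state counts is just the tensor product doing its bookkeeping (a made-up toy with a 2-qubit “processor” and a 3-qubit “environment”; real devices are vastly larger):

```python
import numpy as np

# A 2-qubit "processor" (4 basis states) in an equal superposition.
processor = np.ones(4, dtype=complex) / 2.0

# A 3-qubit "environment" (8 basis states) starting in |000>.
environment = np.zeros(8, dtype=complex)
environment[0] = 1.0

# Entangling interactions act on the joint space, whose dimension multiplies.
joint = np.kron(processor, environment)
print(len(joint))  # 32 == 4 * 8
```

This is why even modest numbers of entangled qubits produce astronomically large joint state spaces.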

Ok, hopefully I haven’t just muddied the waters. This matches my current understanding and is a bit different than what I would have written a year or two ago, and might be different a year from now.

1. There are microscopic differences, but those exist anyway in all the quantum jiggling always going on, which usually cancel each other out.

I don’t think this is correct. Chaos is the norm, although it can take a long time (minutes, days, weeks) for effects to spread. The weather is pretty damn important in human history.
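The sensitivity being described here can be illustrated with a standard toy model, the logistic map (a generic sketch of chaotic divergence, not a weather model): two trajectories that start a trillionth apart become macroscopically different within a few dozen steps.

```python
def logistic(x, r=4.0):
    """One iteration of the logistic map, which is chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two trajectories whose starting points differ by one part in 10**12.
x, y = 0.2, 0.2 + 1e-12
max_sep = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# The gap roughly doubles each step, so a 1e-12 difference reaches
# order one within about 40 iterations.
print(max_sep > 0.01)  # True
```

The same qualitative behavior, a microscopic difference amplified to a macroscopic one, is what "effects spreading" over minutes, days, or weeks looks like in a chaotic system.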

2. Possibly, if we let the cat sit in the box long enough, although I don’t know that it would always happen. But I think it would count as a natural measurement event, similar to the possibility of a quantum interaction leading to a transcription error in DNA that turns into a mutation.

3. Fair enough. I am just saying that “natural measurement events” are extremely common.

4. That’s an assertion I commonly see, but is there empirical data demonstrating it? Certainly it seems like it has to happen sometimes, but is there any evidence for its frequency?

5. All the good fluid mechanics software has this feature. Of course, that software models Newtonian physics, and I can't rule out that a quantum correction would make the chaos go away – but I really doubt it. Also, David Wallace talks about chaotic dynamics in his Emergent Multiverse book. Remember, I quoted it when we were discussing Wyrd Smythe's puzzle about how Everettian mass/energy distributions affect gravity.

6. I remember that remark in Wallace. I think I looked for citations or some discussion of evidence, but didn’t find any.

It's certainly possible that part of complex-system dynamics involves quantum effects bleeding through. I've said as much myself many times. But classical physics exists for a reason. I wonder just how profligate such bleeding can really be, given that it took us as long as it did to notice quantum effects at all. And I suspect any quantum contributions are swamped by classical factors. But I'm totally willing to learn otherwise if there is data on it.

5. This is really well explained! You know, the first time I heard about Schrödinger's cat, my immediate response was "Can't the cat observe itself?" I guess it can't if it's dead, but you get what I mean. But in this, the cat "observing" itself as dead or alive is just another step in the wave function spreading out from the original radioactive isotope. This makes sense to me in a way that other explanations of quantum many worlds did not.

1. Thanks! That’s what I was hoping for.

I suspect Schrödinger used a cat to generate exactly those kinds of questions. The prevailing interpretation was that observation collapsed the wave function. But whose observation? Someone, it might have been Schrödinger himself, asked whether observation by a bacterium was sufficient or whether a PhD was required. (To be fair, not all variants of Copenhagen keyed on observation. Bohr's position was that interaction with any macroscopic system was sufficient, although he refused to define "macroscopic"; presumably a cat would have qualified.)

2. Not just the cat, but the apparatus, too. And the air in the box. Reality observes itself. I think that’s how the classical world emerges.
