With quantum physics, we have a situation where a quantum object, such as a photon, electron, atom, or other similar-scale entity, acts like a wave, spreading out in a superposition, until we look at it (by measuring it in some manner), at which point it behaves like a particle. This is known as the measurement problem.
Now, some people try to get epistemic about this. Maybe the wave isn’t real, just a representation of our probabilistic knowledge. The issue, shown in the double-slit experiment, is that the wave interferes with itself, something those who want to relegate the wave to completely non-real status have to contend with.
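The interference point can be made concrete with a little arithmetic. If the wave were only a bookkeeping of probabilities, the contributions from the two slits would simply add; in quantum mechanics it is complex amplitudes that add, and only then are they squared into probabilities. A minimal sketch (the specific amplitudes and phases are illustrative, not from any particular experiment):

```python
import cmath

def intensity(phase_diff):
    # Complex amplitudes from the two slits at one point on the screen;
    # the relative phase depends on the path-length difference.
    a = 1 / 2**0.5                            # slit 1
    b = cmath.exp(1j * phase_diff) / 2**0.5   # slit 2
    return abs(a + b)**2                      # amplitudes add FIRST, then square

# If the wave were purely epistemic, probabilities would add directly,
# giving the same brightness everywhere:
classical = abs(1 / 2**0.5)**2 + abs(1 / 2**0.5)**2   # always 1.0

print(intensity(0))          # in phase: ~2.0, a bright fringe
print(intensity(cmath.pi))   # out of phase: ~0.0, a dark fringe
print(classical)             # 1.0, no fringes at all
```

The dark fringes are the problem for a purely epistemic reading: probabilities can't cancel each other out, but amplitudes can.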
An important point is that if the wave is very spread out, say light years, and any part of it is measured, the whole thing collapses to a particle, apparently faster than light. This appears to violate relativity (and hence causality), which was Albert Einstein’s chief beef with quantum physics, and the impetus behind the EPR paradox, which explored what we now call entanglement.
Now, we have an equation, the Schrödinger equation, that models the evolution of the wave. Its accuracy has been established in innumerable experiments. But when we actually look at the wave, that is, attempt to take a measurement, we find a particle, which subsequently behaves like a particle. The math appears to stop working, except as a probabilistic prediction of where we’ll find the particle. This is often called the wave function collapse.
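The split between the two kinds of math can be sketched in a few lines. Evolution of the wave is deterministic (a unitary transformation); measurement is where the equation retreats to mere probabilities via the Born rule. This is a toy two-state system, with the matrix standing in for Schrödinger evolution over some time step (the specific unitary is an illustrative choice, not derived from any particular Hamiltonian):

```python
import random

# A toy unitary (the Hadamard transform) standing in for
# Schrodinger evolution over one time step.
U = [[1 / 2**0.5,  1 / 2**0.5],
     [1 / 2**0.5, -1 / 2**0.5]]

def evolve(u, psi):
    # Deterministic: the equation says exactly what the wave does next.
    return [sum(u[i][j] * psi[j] for j in range(2)) for i in range(2)]

def measure(psi):
    # Probabilistic: the Born rule gives only odds, |amplitude|^2,
    # of where we'll find the particle.
    p0 = abs(psi[0])**2
    return 0 if random.random() < p0 else 1

psi = evolve(U, [1, 0])             # a superposition of the two states
probs = [abs(a)**2 for a in psi]    # [0.5, 0.5]: a 50/50 chance either way
```

Before the measurement, the math tells us everything; at the measurement, it tells us only the odds. That discontinuity is the collapse.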
The Copenhagen interpretation handles this by saying that quantum physics only applies to small isolated systems. As soon as something macroscopic is involved, such as a measuring device, the rules change. Kept to a minimal instrumental version, this interpretation is, I think, underrated. Bare bones Copenhagen doesn’t attempt to explain reality, only describe our interactions with it. It could be seen as an admission that the metaphors of our normal scale existence are simply inadequate for the quantum realm.
Of course, people can’t resist going further. Copenhagen is actually more a family of interpretations, some of which involve speculation about consciousness causing the collapse. Reality doesn’t congeal until we actually look at it. I think the challenges of quantum computing rule this out, where engineers have to go to extreme efforts to preserve the wave to get the benefits of that type of computation. They’d probably be very happy if all they had to do was prevent any conscious mind from knowing the state of the system. But it’s an idea many people delight in, so it persists.
The pilot-wave interpretation, often referred to as de Broglie–Bohm theory, posits that there is both a particle and a wave the entire time. The wave guides the particle. When we look, that is, measure, the wave becomes entangled with the environment and loses its coherence, and so the particle is now free to behave like a particle. This idea actually predates Copenhagen, although it wasn’t refined until the 1950s.
Pilot-wave initially looks promising. We preserve determinism. But we don’t preserve locality. Looking at the wave, anywhere in its extent, still causes the whole thing to decohere and free up the particle, even if the particle is light years away. So, Einstein wasn’t happy with this solution, since relativity still appears to be threatened.
Hugh Everett III looked at the above situation and asked, what if the math doesn’t in fact stop working when we look? Our observations seem to indicate that it does. But that’s failing to account for the fact that macroscopic systems, including us, are collections of quantum objects.
As it turns out, the Schrödinger equation does predict what will happen. The wave will become entangled with the waves of the quantum objects comprising the measuring device. It will become entangled with the environment, just as pilot-wave predicted, but unlike pilot-wave, Everett dispenses with the particle.
Crucially, rather than collapsing, the superposition of the wave will spread, just as it seems to do before we look. Why does it appear to collapse? Because it has spread to us. We have gone into superposition. Every branch of that superposition will now continue to spread out into the universe. But the branches are all decohered from each other, each no longer able to interfere with the others. They are essentially causally isolated.
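Why entanglement kills the interference can be shown with a toy model: a two-state system whose branches get "tagged" by orthogonal states of an environment qubit. Once tagged, recombining the branches no longer produces interference. This is a minimal sketch, with the basis ordering and states chosen for illustration:

```python
# Basis order for system (x) environment: |0 e0>, |0 e1>, |1 e0>, |1 e1>.
ISQ2 = 1 / 2**0.5

def interference_visibility(state):
    # Try to recombine the two system branches (a Hadamard on the system
    # qubit) and return P(system = 0).
    # 1.0 means the branches fully interfere; 0.5 means they no longer can.
    out = [ISQ2 * (state[0] + state[2]),   # |0 e0>
           ISQ2 * (state[1] + state[3]),   # |0 e1>
           ISQ2 * (state[0] - state[2]),   # |1 e0>
           ISQ2 * (state[1] - state[3])]   # |1 e1>
    return abs(out[0])**2 + abs(out[1])**2

coherent  = [ISQ2, 0, ISQ2, 0]   # (|0> + |1>)/sqrt(2), environment untouched
entangled = [ISQ2, 0, 0, ISQ2]   # (|0 e0> + |1 e1>)/sqrt(2), branches tagged

print(interference_visibility(coherent))    # 1.0: branches interfere
print(interference_visibility(entangled))   # 0.5: decohered, effectively separate
```

Nothing in the entangled state has collapsed; both branches are still there in the math. They have simply become mutually invisible, which is the sense in which they are causally isolated.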
So each of those branches could be romantically described as being in its own separate “world”, resulting in many worlds, the many worlds interpretation.
The appearance of the collapse, under the many worlds interpretation, is because we are now on one branch of the wave function, observing the small fragment of the original wave that became entangled with this branch of the environment. Under this interpretation, there is a different version of us in each of the other branches, seeing a different part of the wave, which we now refer to as a “particle”.
Which of these interpretations is true? Copenhagen, pilot-wave, many worlds, or some other interpretation? They all make the same observable predictions. (The ones that don’t were discarded long ago.) It’s the predictions they make beyond our ability to observe that distinguish them from each other.
We could ask which has the fewest assumptions. Most people (often grudgingly) will admit that many worlds has the most elegant math. (Evoking comparisons with Copernicus’ heliocentric model in relation to Ptolemy’s ancient geocentric one.) And it does preserve realism, locality and determinism, just not one unique reality. Whether that amounts to fewer assumptions than the others is a matter of intense debate.
Each interpretation has a cost, often downplayed by that interpretation’s proponents, but the costs are always there. Quantum physics forces us to give up something: realism, locality, determinism, one unique reality, or some other cherished notion. As things stand right now, you can choose the interpretation that least threatens your intuitions, but you can’t pretend there isn’t a cost.
Unless of course I’m missing something.