Last week, Sabine Hossenfelder published a video and post arguing for a little-considered possibility in quantum mechanics: superdeterminism. It was interesting, if at times a bit of a rant against strawmen.
In 1935, Einstein and collaborators published the famous EPR paradox paper, pointing out that entangled particles would collapse into correlated states when measured, even if separated by vast distances. This appeared to violate locality, a bedrock principle of relativity, since it seemed to require faster-than-light communication between the particles. Einstein argued that there had to be hidden variables that determined the states from the time the particles became entangled. In other words, quantum mechanics was incomplete.
Three decades later, John Stewart Bell discovered a way to test this proposition. Bell showed that when entangled spins are measured along various combinations of axes, the correlation statistics come out differently if Einstein was right than if quantum mechanics is. In the years that followed, the tests were conducted. The results were compatible with quantum mechanics. Einstein was wrong. This result is usually taken to show that no local hidden variable theory can reproduce the predictions of quantum mechanics. (Non-local hidden variable theories weren’t impacted, although they have other issues.)
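To make the difference concrete (this is my own illustration, not anything from Hossenfelder’s material): in the CHSH version of Bell’s test, each side measures along one of two angles, and a particular combination of correlations, usually called S, is bounded by 2 for any local hidden variable theory, while quantum mechanics predicts values up to 2√2. A minimal sketch of the quantum prediction for the spin singlet, whose correlation along angles a and b is E(a, b) = -cos(a - b):

```python
import numpy as np

# Singlet-state correlation between spin measurements at angles a and b.
def E(a, b):
    return -np.cos(a - b)

# Angle choices that maximize the quantum violation of CHSH.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

# CHSH combination: any local hidden variable theory gives |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # ≈ 2.828, i.e. 2*sqrt(2), beating the classical bound
```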
However, Bell admitted there was still a loophole in his theorem. If the experimenter’s choice of measurement settings and the state of the particle were somehow forced to be correlated (violating the assumption of statistical independence), a local hidden variable theory could still reproduce the results. Einstein couldn’t be right, but perhaps some prior common cause in the history of the experimental setup and the particle was keeping things in sync.
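To see how the loophole works, here’s a toy model of my own devising (nothing like Hossenfelder’s actual proposal): the hidden variable fixes both outcomes in advance, but its distribution depends on the settings that will be used, which is exactly what violating statistical independence means. With that one concession, the model reproduces the quantum correlations while staying local and deterministic at measurement time:

```python
import numpy as np

rng = np.random.default_rng(42)

def E_toy(a, b, n=200_000):
    # The hidden variable presets both outcomes, but it is sampled
    # conditioned on the settings (a, b): the statistical-independence
    # violation that superdeterminism exploits.
    same = rng.random(n) < np.sin((a - b) / 2) ** 2
    x = rng.choice([-1, 1], size=n)      # preset outcome on side A
    y = np.where(same, x, -x)            # preset outcome on side B
    return np.mean(x * y)                # matches E(a, b) = -cos(a - b)

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E_toy(a1, b1) - E_toy(a1, b2) + E_toy(a2, b1) + E_toy(a2, b2)
print(abs(S))  # ≈ 2.83: the "impossible" correlation from a local model
```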
Over the decades, various experimenters have pushed back the boundary on when these prior causes might have operated. One experiment a few years ago used light from quasars billions of light-years away to set the measurement choices, ensuring the causes of the experimental setup were as far into the past as possible. If a prior cause is producing the correlation, it now has to be found in the early universe.
I think this is why most physicists consider superdeterminism unlikely to bear fruit. It’s not an issue with determinism per se, but with the unlikely correlations the entire universe would have to enforce.
However, Hossenfelder argues that the causal history isn’t relevant. The only thing that matters is the state of the measuring device at the time of measurement. What the particle does, she asserts, depends on how it’s measured. At first glance, this doesn’t seem that different from conventional views of quantum mechanics, except for the assertion that it’s all deterministic.
How can this be? Going through her preprint paper on this, it appears that Hossenfelder is relying on a type of retrocausation, although she objects to that term because it implies information going backward in time, which isn’t what’s being argued. She prefers the term “future-input dependence”.
If I’m understanding correctly, the point is that fundamental physical theories are time symmetric; they don’t prescribe the order of cause and effect. The exception is the second law of thermodynamics, which states that entropy always increases, and which is what usually gives us the classical ordering of cause before effect. But the second law is emergent, a statistical property of large populations of particles. It doesn’t constrain individual fundamental interactions. The idea, I think, is that the future state of these particles could impinge on their past states.
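As a simple illustration of that time symmetry (again my own, not from the paper): run a deterministic dynamical law forward, flip the velocities, and run the same law forward again, and the system retraces its history. Nothing in the dynamics itself picks out a direction; only entropy, absent in this tiny system, does. A minimal sketch with a harmonic oscillator:

```python
import numpy as np

# Time-reversal symmetry of fundamental dynamics, illustrated with a
# harmonic oscillator (x'' = -x) and a symplectic leapfrog integrator.
def leapfrog(x, v, dt, steps):
    for _ in range(steps):
        v += -x * dt / 2   # half kick
        x += v * dt        # drift
        v += -x * dt / 2   # half kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, dt=0.01, steps=5000)

# Flip the velocity and run the same law forward again: the system
# retraces its path back to the starting state.
x2, v2 = leapfrog(x1, -v1, dt=0.01, steps=5000)
print(x2, -v2)  # ≈ (1.0, 0.0), the initial condition recovered
```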
It’s an interesting proposition, but my initial take is that the experimental setup to measure a particle is a macroscopic system with a vast population of particles, one where the second law should definitely apply, and with it the normal directionality of cause and effect. There’s a discussion of causality in the paper that may address this, but if so it was over my head. I suppose it’s conceivable that the future-input dependence could funnel through one or a small number of particles, perhaps the ones that initially interact with the particle to be measured. These initial particles might be in their state due to the state of the overall measuring apparatus.
I can’t say I find this line of reasoning very convincing, but I’m not a physicist and could be missing important facts. I’d be curious to see what other physicists think.
I think a bigger issue for me is a fork Hossenfelder takes earlier in her reasoning, albeit a common one: denying wave function realism. I’ve always had an issue with that move, because it seems to ignore the main reason we have the wave function in the first place: the observed interference effects. If the wave function isn’t modeling something real, where are those effects coming from? What is interfering with what? Or what is giving us the impression of that interference?
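For what I mean by interference effects, the standard two-slit arithmetic makes the point (a generic textbook illustration, not drawn from Hossenfelder): the probabilities of the separate paths don’t simply add. The squared magnitude of the summed amplitudes picks up a cross term, and that cross term is the fringe pattern the wave function exists to model.

```python
import numpy as np

# Two slits, with a relative phase that varies across the screen
# position theta (toy units).
theta = np.linspace(-np.pi, np.pi, 7)
psi1 = np.exp(1j * 0)           # amplitude via slit 1
psi2 = np.exp(1j * theta)       # amplitude via slit 2, phase-shifted

separate = np.abs(psi1)**2 + np.abs(psi2)**2   # probabilities added, no fringes
combined = np.abs(psi1 + psi2)**2              # amplitudes added, cross term appears
print(combined - separate)  # = 2*cos(theta): the interference fringes
```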
I can understand why many make this move, because accepting wave function realism puts you in a quandary. It means the wave function collapse, if it happens, has to be a physical event, which seems to bring in a huge pile of mysteries and the paradoxes Einstein objected to. Or we have to accept that the collapse itself isn’t real, which puts us in either pilot-wave or many-worlds territory. And once there we have to choose between explicitly non-local dynamics or an ontology many consider hopelessly extravagant.
But as Hossenfelder herself states, it’s a mistake to reject a proposition just because we don’t like the philosophical implications. In her view, that means accepting what “all those experiments have been screaming us into the face for half a century”: that superdeterminism is reality. Of course, others will contend that she herself is only tempted to take the hidden variable path to avoid the philosophical implications she dislikes: indeterminacy, non-locality, many-worlds, etc.
Still, superdeterminism doesn’t seem like an option we can completely rule out. Hossenfelder is calling for more measurements of the very small and very cold to stress-test the standard quantum formalism. Even if all they do is confirm quantum mechanics, that seems worth it. And if they do find cracks, well, who knows what those might reveal. Although I suspect the hope that it will make things less weird is likely to be forlorn.
As with all discussions about quantum mechanics, if it doesn’t leave your conception of reality shaken, you haven’t understood what’s at stake.
What do you think about superdeterminism? A promising path, or a misguided attempt to wrench things back into classical physics?