Last week the Nobel Prize in Physics was awarded to Alain Aspect, John F. Clauser, and Anton Zeilinger for their work testing quantum entanglement, essentially validating that quantum mechanics is correct about the phenomenon and eliminating, or at least profoundly minimizing, any possible loopholes.
Of course this set off a lot of physicists discussing entanglement, and led to the inevitable arguments about what it means. One early effort I liked was a Twitter thread by Adam Becker; another was a Big Think article by Adam Frank. Frank’s overall thesis is that quantum physics forces weird choices on us.
It’s often said in the popular press that quantum entanglement forces us to give up locality, to accept “spooky action at a distance”. That is one option. But as Becker points out in his thread, and Sabine Hossenfelder in her own Twitter comments, there are others. Rather than give up local dynamics, we could accept superdeterminism, the idea that the seemingly unlikely correlations are causally set in the early universe, or some form of retrocausality, where the experimental setup has effects on, or at least constrains, the earlier evolution of the particles that will be measured.
Or, rather than add anything to the theory to explain what’s happening, we could simply accept the mathematical structure of quantum theory for what it is, along with all the consequences. Because doing this results in the infamous many-worlds scenario, most people reject it out of hand.
But I think when weighing these options (and others), we should remember how we got here. When Albert Einstein and his collaborators first identified entanglement in 1935 in the famous (infamous?) EPR paradox paper, the purpose wasn’t really just to recognize a feature of quantum mechanics. It was to identify something so absurd and ridiculous that it could not possibly be true, and so demonstrate that quantum theory had to be incomplete. Erwin Schrödinger’s famous cat thought experiment followed shortly afterward with the same sentiment.
Einstein and his co-authors reportedly took heat in the scientific community for speculating about something untestable and metaphysical. But a couple of decades later, John Stewart Bell figured out a way to test whether Einstein or quantum theory was correct. The Nobel is going to the experimentalists who, over the decades, made that test happen with increasing precision and completeness.
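For anyone curious what Bell’s test actually amounts to quantitatively, here’s a minimal sketch in Python. It’s my own illustration, not anything from the prize-winning experiments: it compares the CHSH form of Bell’s inequality under quantum mechanics with a deliberately toy local hidden-variable model (the model and function names are just for illustration). The point is that no local model can push the CHSH quantity S past 2, while quantum mechanics predicts 2√2.

```python
import numpy as np

# Quantum mechanics' prediction for the correlation between spin
# measurements at angles a and b on an entangled singlet pair.
def quantum_correlation(a, b):
    return -np.cos(a - b)

# A toy local hidden-variable model (purely illustrative): each pair
# carries a shared random angle lam set at the source, and each detector
# deterministically outputs +/-1 from its own setting and lam alone.
rng = np.random.default_rng(0)
def lhv_correlation(a, b, n=200_000):
    lam = rng.uniform(0.0, 2.0 * np.pi, n)
    A = np.sign(np.cos(a - lam))
    B = -np.sign(np.cos(b - lam))
    return np.mean(A * B)

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Bell's theorem: any local hidden-variable model gives |S| <= 2.
def chsh(E, a, ap, b, bp):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Detector settings that maximize the quantum violation.
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

print(abs(chsh(quantum_correlation, a, ap, b, bp)))  # 2*sqrt(2) ~ 2.83
print(abs(chsh(lhv_correlation, a, ap, b, bp)))      # ~ 2.0, at the bound
```

The experiments being honored measured essentially this kind of quantity with entangled photons, found the quantum value, and progressively closed the loopholes a skeptic might use to rescue locality.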
Richard Feynman reportedly dinged philosophy because no philosopher even conceived of the possibility of quantum weirdness before the data forced it on us. But it’s worth noting that neither did theoretical physicists, and some, like Einstein, never accepted that quantum theory is the whole story.
All of which, it seems to me, is something we should consider when evaluating scientific or philosophical possibilities. It’s very easy to reject propositions we dislike, ones that just don’t accord with our preconceptions of how reality works. But if there is a logical chain of reasoning for the proposition, and we can’t identify where in that chain things are going off track, then we should remember all the similar rejections made throughout the history of science that turned out to be wrong.
Of course, that doesn’t mean we should accept the proposition as true without evidence. It’s an uncomfortable fact of life that there are a lot of scientific and philosophical propositions which we can’t yet dismiss, but also can’t take as reliable knowledge. And it’s very easy to fool ourselves that a proposition we want to be true is the best explanation, when there may be simpler or less exotic options.
I don’t know any way to avoid the two extremes here, other than to force ourselves to explain the logic for propositions we think are true, and find specific logical issues with the ones we don’t. And be willing to change our mind when the reasoning or evidence warrants it.
But maybe I’m missing something?