Those inconvenient quantum interference patterns

Are quantum states and the overall wave function real? Or merely a useful prediction tool?

The mystery of quantum mechanics is that quantum objects, like electrons and photons, seem to move like waves until they’re measured, at which point they appear as localized particles. This is known as the measurement problem.

The wave function is a mathematical tool for modeling, well, something related to this situation. Exactly what that something is has been a matter of long-standing debate. Erwin Schrödinger developed the wave function after being inspired by Louis de Broglie’s hypothesis that matter has a wave-like nature, similar to light. Schrödinger’s original intention was to model the electron itself.

But Max Born took his equation and discovered that squaring the amplitude of the wave at any particular location (strictly, taking its squared modulus, since the amplitude is a complex number) gave the probability of finding the particle in that spot. That move converted Schrödinger’s equation from a description of physical waves into a straight calculation tool. It might seem like a natural one. No one ever actually measures a quantum wave function, only particles. And for systems of multiple particles the wave function lives in a high-dimensional abstract configuration space, which makes its relation to reality unclear.
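As a concrete sketch of Born’s recipe (a minimal Python illustration with made-up amplitudes, nothing from any real system): take the complex amplitude the wave function assigns to each location, and the probability of detection there is its squared modulus, normalized so the options sum to one.

```python
import numpy as np

# Hypothetical complex amplitudes psi(x) at four detector positions (made-up numbers).
amplitudes = np.array([0.1 + 0.3j, -0.5 + 0.2j, 0.4 - 0.4j, 0.2 + 0.0j])

# Born's rule: probability of finding the particle at each spot is |psi|^2, normalized.
probs = np.abs(amplitudes) ** 2
probs = probs / probs.sum()

print(probs)        # four probabilities that sum to 1
print(probs.sum())  # 1.0
```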

Still, Schrödinger wasn’t happy about the move, and continued arguing for some version of wave function realism. Which began the long standing debate between seeing the wave function and its quantum states as modeling something real, or just calculating the probabilities of future measurements.

In a recent conversation, someone compared my reasoning on this topic with my reasoning on consciousness, where I largely see the limitations of introspection as dissolving the hard problem of consciousness and the need for exotic solutions to it. They wondered why I don’t make a similar move for quantum mechanics and just go epistemic, a stance its proponents see as dissolving the measurement problem.

I do occasionally review the arguments to see if I’ve overlooked anything about the epistemic view. Certainly it would appear to remove the need for things like a physical collapse, non-local causality, a quantum multiverse, or other metaphysical “costs”. The only “collapse” would be an informational one, an update in our state of knowledge. Definitely a grounded option worth taking if feasible.

But my block on this remains the whole reason we talk about wave-particle duality in the first place: the wave interference patterns revealed in the double-slit experiment or the Mach–Zehnder interferometer. Crucially, in these experiments the apparatus can be set to send one particle at a time, with the landing location of each particle recorded, and the interference pattern still accumulates.

In other words, each individual particle (which can be a photon, an electron, or even a large molecule) seems to interfere with itself. The only way that seems possible is if the particle goes through both slits at the same time. The question for people who assert the epistemic view is: how can they account for this evidence?
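To see how stark that is, here’s a minimal simulation (standard textbook far-field formulas, made-up slit dimensions and units, not any particular experiment). Each draw detects exactly one particle, with its landing spot sampled from the Born probabilities, yet the fringes still build up in the histogram; set which_way=True, which drops the interference term, and they disappear.

```python
import numpy as np

def detection_probs(x, wavelength=0.5, slit_sep=50.0, screen_dist=1000.0,
                    which_way=False):
    """Idealized far-field two-slit pattern (single-slit envelope ignored)."""
    # Phase difference between the two paths for screen position x.
    delta = 2 * np.pi * slit_sep * x / (wavelength * screen_dist)
    if which_way:
        # Path known: probabilities add, no interference term.
        p = np.ones_like(x)
    else:
        # Path unknown: amplitudes add, |1 + e^{i*delta}|^2 = 2 + 2*cos(delta).
        p = 2 + 2 * np.cos(delta)
    return p / p.sum()

rng = np.random.default_rng(0)
x = np.linspace(-50, 50, 201)              # screen positions (made-up units)
probs = detection_probs(x)                 # interference case; try which_way=True
hits = rng.choice(x, size=10000, p=probs)  # one particle detected per draw
counts, _ = np.histogram(hits, bins=50)
print(counts)                              # the fringes show up in the counts
```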

A frequent response over the years has been Robert Spekkens’ toy model, a hidden-variable model illustrating a possible underlying physical reality that the wave function could describe statistically without being an accurate picture of it. The argument is that this alternate model, and similar efforts, can successfully account for interference effects.

Hidden variable theories, which either expand the structure of quantum theory or propose alternative structures, are constrained by various no-go theorems, the most famous being Bell’s theorem, which requires that they be causally non-local. This seems to complicate any efforts to reconcile them with special relativity, something that was done with straight quantum theory by 1930. An alternative theory with a smaller scope of usefulness doesn’t strike me as likely to be the more accurate description of reality.

But the nail in the coffin for the toy model and similar approaches is the PBR theorem by Matthew Pusey, Jonathan Barrett, and Terry Rudolph. In short, this theorem demonstrates that pure quantum states, if they are referencing any objective reality at all, must correspond to something real; a model in which distinct quantum states merely reflect different states of knowledge about the same underlying reality would lead to predictions incompatible with quantum theory.

The PBR theorem does have a couple of assumptions. One is that the preparations of two separate quantum systems can be independent of each other. This seems similar to the “free will” assumption in Bell’s theorem (which is actually about the independence of measurement choices). It seems like this assumption can be violated in the same way the Bell one can, with some version of retrocausality. So superdeterminism remains an option, albeit a long-shot one in most physicists’ eyes.

But the second assumption is more basic: that the wave function is referring to something physical and objective, even if it’s not an accurate description of it. When I was an instrumentalist toward quantum mechanics, this was largely my view. I never doubted that there was some physical reality there, just not the straightforward one described by quantum states with all their bonkers implications.

The second assumption can be violated by going explicitly anti-realist, and simply asserting that there is nothing objective happening at all, that the measurement outcomes just happen, that they are fundamental interactions, brute facts of the world. In this view, the wave function is nothing but a prediction related to future measurements. Since the outcome is something fundamental, there’s nothing left to investigate. We just need to get used to it and stop asking questions.

Of course, there are a lot of people who are willing and eager to bite this bullet. It’s one of the postulates of neo-Copenhagen interpretations like RQM (relational quantum mechanics) and QBism (quantum Bayesianism). It’s worth noting that these stances come with their own metaphysical “costs”, whether it be the semi-idealism of QBism’s participatory reality, or the sparse “flash” ontology of RQM.

However, while the hidden objective reality view might have at least aspired to provide an answer to my interference question, the anti-real stance seems to outright ignore it, or assert that the question is meaningless. When you hear the “shut up and calculate” phrase, this is the attitude it comes from.

I find the incuriosity inherent in this position difficult to understand. My interference question remains. But I also now want to know why the wave function is so useful, particularly for something as complex as quantum computing. If there’s nothing going on prior to the measurement outcome, then why are there detectable patterns at all? It seems like the “no miracles” argument for scientific realism, or at least structural realism, applies here.

So, at least for now, I remain in the quantum state realist camp.

But maybe I’m missing something. Are there explanations for the interference effects in the epistemic view I’m overlooking? Or reasons to just dismiss the concern?


58 thoughts on “Those inconvenient quantum interference patterns”

  1. Re “Which began the long standing debate between seeing the wave function and its quantum states as modeling something real, or just calculating the probabilities of future measurements.”

    Early QM had a competitor to wave mechanics, matrix mechanics, which was considered mathematically equivalent to wave mechanics and so was dropped. But in the matrix mechanics version there are no waves, real or imagined, no? Which to me means that the “waves” aren’t necessary.

    As to what the wave functions describe, is it not clear? They are the square roots of position probabilities . . . :o).


    1. Wave mechanics and matrix mechanics (as well as Feynman’s path integrals) are mathematically equivalent, which to me means any topologies in one should show up in the other, although not necessarily in an obvious manner. Heisenberg didn’t have waves in mind when he developed his approach, but that doesn’t mean they wouldn’t have eventually emerged.
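      As a toy illustration of that equivalence (using the modern Schrödinger and Heisenberg pictures, which descend from the two formulations; a made-up qubit example, nothing historical): whether you evolve the state or the observables, the predicted expectation values come out identical.

```python
import numpy as np

# A qubit cartoon of the equivalence: evolve the state (Schrodinger/wave-mechanics
# style) or evolve the observable (Heisenberg/matrix-mechanics style).
# Made-up Hamiltonian H = sigma_x, hbar = 1, arbitrary time t.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

t = 1.3
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * sx   # exp(-i*sx*t), exact since sx^2 = I

psi0 = np.array([1, 0], dtype=complex)            # start in "spin up"

psi_t = U @ psi0                                  # evolve the state
schrodinger = np.vdot(psi_t, sz @ psi_t).real

sz_t = U.conj().T @ sz @ U                        # evolve the observable instead
heisenberg = np.vdot(psi0, sz_t @ psi0).real

print(np.isclose(schrodinger, heisenberg))        # True: identical predictions
```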

      In any case, the experimental data issue for me remains: how to explain the interference effects. If there’s no wave (or something wave-like) happening, then what’s the alternate explanation?


      1. I think it is important to remember that at this level all “observations” and “detections” involve an interaction with the particles in question, which affects the outcome.

        Clearly what we consider to be “purely” wave-like behavior and “purely” particle-like behavior are not clearly distinguished at the quantum level. Since so much at the level of the very, very small is quite different from the large and very different from the very, very large, I should guess that this is not surprising.

        At the galactic level, gravity dominates. At the quantum level, gravity is a non-player. So why should the distinction between waves and particles as we experience it hold at the level of the very, very small?


        1. Definitely on the interaction. That’s an important factor in understanding quantum decoherence.

          But here’s another piece of evidence that should disturb psi-epistemics. We can do the interaction, then reverse the effect, in other words erase the evidence, before the information about the path of the particle gets out into the world. When we do, the interference pattern returns.
          https://en.wikipedia.org/wiki/Quantum_eraser_experiment

          Right, the effects of gravity are not understood yet. They may have a dramatic effect on our understanding of what’s going on, if we can ever reconcile quantum theory with general relativity.


          1. Mike, I suggest you need to be much more circumspect when describing the quantum eraser experiment. You do realize that no evidence is “erased” and the interference pattern is not “restored”. Instead, a subset of evidence is ignored, and the remaining evidence contains the interference pattern.

            It’s possible I misunderstand the set-up, but if I don’t, here’s what could be happening:
            Each electron does the interference-thing as it goes thru the slits. Then it interacts with one of the polarizing filters (which one determined randomly by hidden variables) which shifts the final interaction at the screen either left or right. If you take an interference pattern, but shift half of the hits a little left and the other half a little right, guess what the total pattern is?
            Now if you take the entangled particle, but send that thru a filter which only sees the particles that would get shifted left in the double-slit side, then you use that data to go back to the first data set and select those hits from the electrons that got shifted left instead of right. When you just look at those electrons, you see the interference pattern.
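            A quick numeric check of the shifted-pattern point (idealized fringes, made-up numbers): two interference patterns offset by half a fringe add up to a flat distribution, even though each subset on its own still shows fringes — which is why selecting one subset brings the pattern back.

```python
import numpy as np

x = np.linspace(-10, 10, 400)          # positions on the screen (arbitrary units)
left = 1 + np.cos(x)                   # fringes from the subset shifted one way
right = 1 + np.cos(x + np.pi)          # the same fringes shifted by half a period
total = left + right                   # flat: the fringes wash out in the full data set

print(np.allclose(total, 2.0))         # True
```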

            Again, let me know if something here is wrong.

            *


          2. James, you’re right. I oversimplified the description. This is what I get for not rereading the article past the intro before linking to it. The experiment I was remembering is different from this one. (In that one, I think they just rotated the polarity at one of the slits and then rotated it back.) I think you’re right about this one preserving a subset of the evidence. Unfortunately that Wikipedia description is confusing. There’s a slightly better one at https://laser.physics.sunysb.edu/_amarch/eraser/index.html

            I’m afraid I’m not following your hidden variable description. But I do think you have the right idea in the final paragraph about thinking about this in wave dynamics. My interpretation, in terms of normal wave function mechanics, is that the initial scenario, before the polarizers are put in front of the slits, is just the standard double slit scenario.

            In the second scenario, the circular polarizers in front of the slits act depending on what the detector for the entangled particle detects. Which causes the portions of the wave in each slit to have different circular polarizations, and so different phases, causing decoherence, and the loss of the interference pattern.

            In the “eraser” scenario, the detector for the entangled particle will now only detect diagonally polarized photons, which means only diagonal ones are considered at the circular polarizers in front of the slits, and they (I think) either do nothing, or treat the wave portion for each slit the same, keeping them in phase, and so preserving interference.

            All of this produces the same result as the “which way” story, but without sounding nearly as mystical. (Which in my mind, is always the benefit of the wave story, at least until after decoherence.) Another way of describing the “which way” scenario is when the causal effects of the quantum system in question get amplified into the environment. If something prevents that causal propagation (and associated backreaction on the system itself), then quantum effects remain.


          3. A quick note about the (much simpler) eraser experiment I had read about before. It involved the standard double slit experiment.

            1. The initial sequence with the interference pattern is just the standard double slit experiment.

            2. Then polarizing filters are put in front of each slit, a vertical filter for one slit, and a horizontal filter for the other. This causes the interference pattern to disappear. We now have “which way” information available since the polarity of the photon gives it away.

            3. The eraser stage is adding a diagonal filter after each slit, which lets a fraction of the light through on both sides, but since both now have diagonal polarity, there’s no “which way” information available. The interference pattern returns.

            This scenario seems more reasonable for the “erasing the evidence” phrase, since the “which way” info exists for a brief instant in the slits between the filters.

            As I noted above, we can also describe this as the branches of the wave in 2 no longer having a definite phase relationship with each other, and so being decohered. And in 3, the light that makes it through is again in phase, and so can constructively interfere.
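            Here’s a minimal sketch of that description in Jones-vector terms (idealized polarizers and made-up phases, not a full optics model): with orthogonal polarization tags on the two slits the cross term vanishes, so the pattern is flat, and projecting both paths onto the diagonal brings the cross term, and the fringes, back.

```python
import numpy as np

V = np.array([1.0, 0.0])               # vertical polarization
H = np.array([0.0, 1.0])               # horizontal polarization
D = (H + V) / np.sqrt(2)               # diagonal axis of the "eraser" filter

def intensity(delta, erase=False):
    """Screen intensity for a path-phase difference delta between the two slits."""
    a1 = np.exp(1j * 0.0) * V          # slit 1 amplitude, tagged vertical
    a2 = np.exp(1j * delta) * H        # slit 2 amplitude, tagged horizontal
    if erase:
        # The diagonal filter keeps only the component along D from each path.
        a1 = np.vdot(D, a1) * D
        a2 = np.vdot(D, a2) * D
    total = a1 + a2
    return np.vdot(total, total).real

deltas = np.linspace(0, 2 * np.pi, 5)
print([round(intensity(d), 2) for d in deltas])              # flat: [2.0, 2.0, 2.0, 2.0, 2.0]
print([round(intensity(d, erase=True), 2) for d in deltas])  # fringes: [2.0, 1.0, 0.0, 1.0, 2.0]
```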


          4. [hope it’s not too late to reply here, but …]

            The way I see this new experiment is you start with a single population of polarized particles. Before a particle goes thru the slits, it interacts with one or the other (vertical/horizontal) filter. Interacting with the filter changes the polarization of the particle (pretty sure this is known). So two populations of particles go thru the slits, shifted in polarization, and so two interference patterns are created. But the patterns are shifted, resulting in what appears to be a no-interference pattern. When the particles go thru the second filter, their polarizations are again shifted, such that now they are synchronized and produce the interference pattern.

            Anything obviously wrong with this interpretation?

            *


          5. [never too late!]

            That wave-based description sounds pretty similar to the one I give in my last paragraph. One point, easy to miss, is that since these are filters, what’s actually happening is that photons are being removed from the population. Which, the more I think about it, makes the whole thing seem a bit bait-and-switch. I was already convinced the delayed-choice eraser was contrived, but now I’m thinking the straight eraser one may be as well.

            The closest thing to a true eraser might be an experiment that was able to recohere a very limited degree of decoherence (essentially reversed the entanglement that had propagated up to that point). https://www.nature.com/articles/srep15330

            In principle, if you can stop the “which way” information from getting out (in other words, stop the causal effects from propagating), then you can do an “erase”, but the practical difficulties are enormous.


          6. I feel the need to point out that these filters not only filter out some particles, but they also alter the polarity of the ones they let thru. That’s why if you have two filters at 90 degree offset, no light gets thru, but if you put a filter with 45 degree offset in between the first two, suddenly light comes thru again.
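            The arithmetic, for what it’s worth (ideal polarizers, Malus’s law, starting from light already polarized by a first filter):

```python
import numpy as np

def malus(intensity, angle_deg):
    """Ideal polarizer: transmitted intensity = I * cos^2(angle to the current polarization)."""
    return intensity * np.cos(np.radians(angle_deg)) ** 2

I0 = 1.0  # light already polarized by a first filter (call its axis 0 degrees)

# A second filter at 90 degrees: essentially nothing gets through.
print(round(malus(I0, 90), 6))             # 0.0

# Put a 45-degree filter in between: 45 degrees, then another 45 to reach 90.
print(round(malus(malus(I0, 45), 45), 6))  # 0.25
```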


          7. Good point, and that is the purpose of the second filter, although even the pre-slit ones will let light through with polarity less than 90 degrees off.

            One thing you said above I meant to comment on before, that there would be two interference patterns. That might be true if the light from each went through another pair of slits (or similar apparatus). As is, each portion would be coherent with itself (which is what I took you to mean), just not with each other. The second 45 degree filter would reestablish their coherence for the light that makes it through.

            I suppose that does imply there is erasing going on, so maybe it’s not as bait and switch as I was thinking.


  2. “The only way that seems possible is if the particle goes through both slits at the same time”.

    When one particle is sent in the double-slit experiment, is only one particle detected at a time?

    If so, I don’t see the mystery. We would expect a wave to be at various locations so the pattern of particles reflects a statistical pattern of where we are able to do a measurement of the wave.


    1. Only one particle hits the back screen at a time. But the interference pattern still emerges over time as the number of particles sent increases. (From what I understand, this experiment takes weeks to complete.)

      On the other hand, if a detector is set up at one of the slits, detecting the particle prior to the opportunity for interference, then the pattern on the back screen changes to what we’d expect for classical particles.

      So the mystery is, what is interfering with what for each particle that’s sent in alone?

      (Or with more positivist caution, what causes the pattern on the back screen which matches the patterns historically seen with waves in other contexts?)


      1. “if a detector is set up”

        In the terms I would understand it: once you’ve measured it, you’ve measured it, so it would behave classically. Before that, the particle can be found in undetermined locations on (or with) the wave, which would make a statistical pattern if plotted.


          1. The origin is the wave. The wave is gone once it is detected.

            If we have a fair coin and flip it again and again, the distribution will approach 50-50. We wouldn’t ask for the origin of the 50-50 distribution because it is simply inherent in the coin and flipping.


          2. Right, the question is whether or not the wave is a real physical force.

            On the coin flip, we would expect a 50-50 distribution pattern. But suppose instead we got a 75-25 pattern, or something else. At that point we might start doubting that the coin is really fair, or whether something else was going on.

            Suppose we randomly toss the coin into a dark room, and do that repeatedly with additional coins, one at a time. We might expect a more or less random distribution on where the coins land. But when we look at the distribution, we see 80% of the coins end up on the south side of the room. Again, wouldn’t we expect something was going on?

            Now suppose the distribution pattern had an interference diffraction pattern in it. But we’re assured that everything is still perfectly random; nothing to see here. Shouldn’t we suspect there’s something happening with the underlying dynamics? And wouldn’t the distribution pattern be telling us something about it?


          3. The wave is a real physical force in my view. This may be completely incorrect but I almost think of it as two phases like water and ice. The same material different states.

            Again, wouldn’t we expect something was going on?

            Yes, with coins. Nature is more complicated. It’s not random but it’s not determined either. Probably those patterns are critical to how the universe works as it does, but where they ultimately came from gets into the realm of God or a God substitute.


          4. Sounds like you’re coming down on a physical collapse of some type. The good news is that’s testable, and experimentalists are gradually probing the scales on which we might expect to see it.

            I’m leery of God or ultimate unknowability answers. It feels like giving up. Newton reportedly had to fall back on it when he couldn’t figure out orbital perturbations. Pierre-Simon Laplace later figured out how he could explain it without that hypothesis. At best, it seems like a poetic way of just saying, “I have no idea…yet.”


  3. Until I’m convinced otherwise, and why I’m a Copenhagen fan: particles are nature and only come into being when consciousness (the observer) measures them. I’m a fan of Bell’s theorem and the idea that only as conscious observers do we conjure particles into their existence. This is discussed at 33:00 in the video ‘The Secrets of Quantum Physics 1 of 2, Einstein’s Nightmare’.
    Also I point you to my article:
    ‘Consciousness and Quantum Mechanics – Is it Too Early to Rule out the Copenhagen Classic Interpretation?’ https://observationblogger.com/2019/01/10/consciousness-and-quantum-mechanics-is-it-too-early-to-rule-out-the-copenhagen-classic-interpretation/


    1. Copenhagen is actually a broad family of interpretations. The Niels Bohr version didn’t involve consciousness. But John von Neumann and Eugene Wigner speculated about it. (Which is why these days, the consciousness version is often called the von Neumann-Wigner interpretation.) von Neumann, noting that there was nothing in the math predicting a wave function collapse, wondered if quantum states might persist until they interacted with the mind.

      Most scientists moved on once decoherence theory was developed. Although QBism seems to skirt the issue with its participatory realism. And a lot of contemporary idealists continue to like the idea.

      Interestingly, if the Everett interpretation is true, then relative to any individual mind it will look like that mind causes the collapse. Of course, per RQM, you could say it looks like that for any physical system.


  4. Mike, you’ve obviously spent more time looking at the issues/theorems involved here, so I’m hoping you can sort me into a camp.

    My current understanding is that there could be hidden, non-local, variables. The non-local part means that we should stop talking about particles, because particles have a location, and so, are not “non-local”. So electrons are not particles. In the two-slit experiment it is wrong to say they go thru one slit or the other. Instead, they go past the barrier (or not). Because, whatever they are, they behave in wave-like ways, we can see wave-like effects. However, when they interact, they interact with one thing at a time, which is why they appear “particle-like”.

    Looking quickly at the “toy-model” link, I see something which shows how mathematical models of information can produce some wave-like effects, but does not produce non-locality. But neither do I see anything that precludes non-locality. Did I miss something?

    Looking at the PBR theorem, I see this:

    “the interpretation of the quantum wavefunction … can be categorized as either ψ-ontic if “every complete physical state or ontic state in the theory is consistent with only one pure quantum state” and ψ-epistemic “if there exist ontic states that are consistent with more than one pure quantum state.””

    I interpret this as:
    1. Ontic = the real (ontic) state determines the quantum state, or
    2. Epistemic = the real (ontic) state could generate the multiple quantum states.

    What I don’t see here is
    3. (Epistemic-ish?) = multiple real (ontic) states can generate a given quantum state,
    which seems consistent w/ hidden non-local variables.
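    In code terms, here’s how I’m reading that definition (a toy sketch with made-up numbers, not any actual physical model):

```python
# Toy illustration of the psi-ontic vs psi-epistemic definitions (made-up numbers):
# model each pure quantum state as a probability distribution over ontic states.

def is_psi_epistemic(model):
    """Epistemic if some ontic state gets nonzero weight from more than one pure state."""
    owner = {}
    for quantum_state, dist in model.items():
        for ontic_state, p in dist.items():
            if p > 0:
                if ontic_state in owner and owner[ontic_state] != quantum_state:
                    return True
                owner[ontic_state] = quantum_state
    return False

# Model A: the ontic state pins down the quantum state -> psi-ontic.
model_a = {"psi1": {"l1": 0.5, "l2": 0.5}, "psi2": {"l3": 0.5, "l4": 0.5}}

# Model B: ontic state l2 is consistent with both quantum states -> psi-epistemic.
model_b = {"psi1": {"l1": 0.5, "l2": 0.5}, "psi2": {"l2": 0.5, "l3": 0.5}}

print(is_psi_epistemic(model_a))  # False
print(is_psi_epistemic(model_b))  # True
```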

    So (heh), what am I missing?

    *


    1. James, as usual you ask good questions. I love it.

      Bell’s non-locality stipulation actually doesn’t necessarily mandate non-locality in the sense of waves, but just in the sense of action at a distance. And it’s worth noting that John Bell’s favorite interpretation was de Broglie-Bohm pilot-wave theory, which has both a guiding (pilot) wave and a particle. It’s just that the particle’s states can be affected non-locally. I think it’s why hidden-variable theories break the reconciliation between QM and special relativity.

      So Bell’s theorem doesn’t rule out Spekkens’ toy model. (I doubt he would have bothered if it had.) What *does* rule out the toy model is the PBR theorem. So Bell mandates that any hidden variable model must have action at a distance. But the PBR one says that it must include the pure quantum states of the wave function. If it doesn’t, then it has to make different predictions than straight quantum theory.
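      For a feel of what the Bell constraint amounts to in numbers, here’s the standard CHSH arithmetic (textbook singlet-state correlations with the usual angle choices, nothing specific to Spekkens’ model): local hidden-variable models are capped at |S| ≤ 2, while quantum theory predicts 2√2.

```python
import numpy as np

def E(a_deg, b_deg):
    """Singlet-state correlation for spin measurements along angles a and b (degrees)."""
    return -np.cos(np.radians(a_deg - b_deg))

# Standard CHSH angle choices.
a, a2, b, b2 = 0, 90, 45, 135
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(round(abs(S), 3))   # 2.828, i.e. 2*sqrt(2)
print(abs(S) > 2)         # True: beyond the local hidden-variable bound of 2
```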

      On your three categories, right, but if 3 is the case, doesn’t that mean that we’re actually in 1, but with a more complex underlying reality? The PBR theorem, as I understand it, only mandates that the structures be real, not that they be fundamental. They can always still be emergent from some underlying reality. (As a structural realist, I wouldn’t be surprised if we someday find out they are.) And we already know that all quantum states in any equation we write down are coarse-grained. They can always be subdivided further, so we really already know 3 is true within 1. (Unless I’m missing something on this end.)

      Which is why, aside from retrocausality, the only option left for epistemics is to deny any ontic state at all, along with the metaphysical assumptions necessary for it. Which of course QBism and RQM already do. But which also, I think, ignore troublesome evidence, like the interference patterns.

      BTW, here’s another paper that thoroughly reviews the overall subject. I meant to link to it in the post, but the link got deleted when I removed a section. I won’t claim to have read the whole thing, but you might find sections of it interesting.
      http://quanta.ws/ojs/index.php/quanta/article/view/22


  5. For me, the whole debate about the wave being real or not hinges on what one understands by “real”.

    Consider an analogy… Fourier decomposition represents any wave-form as a (possibly infinite) sum of sine waves. Are these component waves real or not? To make the question less abstract, let’s look at planetary orbits (2D wave-forms). We know Kepler’s laws and their derivation from Newton. But the same orbits can be Fourier decomposed into circular orbits with a series of ever decreasing (circular) epicycles. These epicycles certainly describe something real — namely deviation of an elliptical orbit from a perfect circle. But are such heliocentric epicycles real? To my mind the correct answer is: it depends on what you mean by “real”. In other words: who cares?

    And that is at bottom my view of Schrodinger’s wave. It describes something real — sure. But its relation to that something is unclear and the something being so described is (taking Bohr’s neo-Kantian stance here) at bottom unknowable.

    One way to look at it is that the wave describes evolution of probabilities. Are probabilities real? That’s a deep philosophical question with no clear answers. And quite possibly with no answer other than a definitional one.


    1. For me “real” is about causal effects and coherence with other theories we regard as real. But there can be many different descriptions, models, approximating that reality. Some may be more useful than others. And they can be replaced in the future with other models that get to a closer approximation.

      So my view is similar to yours. The math is describing something real, at some level of organization. I’m fairly agnostic on whether what it’s describing is fundamental or something emergent from an unknown underlying reality. Or what portions could be substance vs relations. But that’s enough to put us outside of the anti-real camp, particularly in the wake of the PBR and related theorems.

      The biggest issue I have with talk of probabilities is that any example of probability we can provide outside of QM is going to be about something ontic, like a coin toss, cards in a deck, weather patterns, etc. But talk of them in relation to QM seems to be in service of something that’s asserted to refer to nothing ontic. I struggle to see how that’s coherent, except maybe in some kind of idealist or semi-idealist framework.


      1. I’d say that interpreted as a probability wave it is very much about something ontic — i.e. about the likelihood of particular quantum outcomes; not different, in principle, from a die toss. I am quite comfortable with such likelihoods propagating as a wave. Maybe it’s my ancient training in statistics. 🙂

        BTW (not sure I am on the right thread here), MW’s problem is that it may be falsifiable, but it is not verifiable, because it makes no predictions specific only to it. If it were falsified, all other QM interpretations would get falsified too. This does not apply the other way around. E.g. Penrose’s gravitational collapse being falsified would not affect QM as such and thus would not affect other interpretations.


        1. Right, but a die toss isn’t just a result, it’s all the mechanics of the die being shaken and tossed, and then of it landing, rolling, etc on a surface. So the probabilities associated with it map to underlying ontics. And I’ve never known the probability of various die toss outcomes to interfere with each other.

          To say that quantum measurement outcomes just happen, with no ontic dynamics prior to the result, is itself an ontological assertion. (Carlo Rovelli, to his credit in his SEP article, owns up to how radical it is.) I might see it as plausible if the result were utterly random, with no discernible patterns at all, but we have a wave function because there are such patterns.

          No worries on threads, I’m not strict about thread topics.

          On being falsifiable but not verifiable, I think that applies to just about any scientific theory. It’s why the logical positivist verification principle is now seen as problematic (at least by most people), particularly since the principle itself isn’t verifiable. Most scientific theories involve inductive generalizations which are ultimately never verifiable. Karl Popper’s falsifiability criterion, although not the last word in the demarcation issue, is seen as much more plausible by most scientists.

          On not being uniquely falsifiable, that used to be a concern I had too. (I once left a comment on Sean Carroll’s blog to that effect.) Until I realized that the simpler theory (in terms of rules or principles) is never uniquely falsifiable. Falsifying it always falsifies other more complex theories that are a superset of it. That shouldn’t be seen as a problem for the simpler theory, but a strength, one arising from its parsimony. Everett is pure wave mechanics. Verifying that anything else is required (guiding equation, physical collapse, etc.) falsifies it.

          At least, unless I’m missing something on either of these points. 🙂


          1. Hm… I thought I’d replied to this one, but it would seem not. So…

            I am of course aware that theories cannot be verified in any absolute sense (though I agree with Lakatos, contra Popper, that in practice falsification is also not as decisive as all that). But we do verify theories in the Bayesian sense. If a theory makes verifiable predictions which its rivals do not and the predictions are subsequently confirmed, we prefer that theory to its rivals without thereby assuming that this proves the theory to be correct. That’s why we keep testing our theories again and again. My point is simply that MW by definition makes no predictions which are not made by other interpretations of QM. Thus, unlike others, it cannot be Bayes-verified. OTOH falsifying it would bring all of them down, which isn’t necessarily the case with others.

            Re probability waves interfering… Serves me right for appealing to an epistemic probability analogy of tossing a die. 🙂 QM probabilities are not epistemic, of course, so it is better to think of them in dispositional terms. Nevertheless, I thought I could come up with an epistemic-probability based analogy to clarify why I am fairly comfortable with probability waves, but I cannot think of any which are not too contrived. This I actually find surprising, because my intuition has no problem with probability waves not being ontic *in themselves*. Given your comment about you possibly finding complete randomness more acceptable, I just wonder whether we have different mental views of waves.

            It may save me a lot of typing if at this point I just ask whether you know of Huygens Principle and if so, does it automatically come to mind when you think of waves? If that’s where we differ, it could also, perhaps, explain our differences re Feynman’s path integral approach.


          2. Right, if a theory is a subset of a more well accepted theory, then there’s nothing we can do to uniquely verify it in that sense, except to falsify the other theories. It seems like that can be done by pushing the leaner theory’s boundaries, by attempting to crowd out the additional predictions, if possible. That’s why I find the experiments isolating quantum effects in ever larger systems interesting, like this new proposed method: https://physicsworld.com/a/protocol-could-make-it-easier-to-test-the-quantum-nature-of-large-objects/

            Of course, knocking out physical collapse theories still leaves other options, but I fear we’ll never be able to eliminate every alternative. Even well accepted theories typically have more complex alternatives. At some point we do have to rely on parsimony. And yes, I know many people think that goes against Everett. But typically it’s a heuristic for assumptions, not predictions. It seems like we actually want the highest ratio of accurate predictions to assumptions we can get. Sometimes that means reducing things on the assumption side.

            I just read a couple of quick summaries of Huygens Principle (on Britannica and Wikipedia). It does fit my intuition of how waves propagate. Should path integrals change the way I think about it?


          3. Exactly. Even if you knock out all known competitors to MW, it does not mean there isn’t one which does work, but we haven’t thought of it yet.

            Re Huygens… I don’t want to overstate my case — this line may not lead anywhere. I am just trying to work out what it is that apparently primes my intuition to shrug off probability waves as no big deal. Just thinking “aloud”, really. Hope you don’t mind.

            The Principle is generally formulated on the lines of a decomposition into “primitive” perfectly symmetrical waves originating at every point. But the lesson I take from it is that waves are ontologically redundant, because the moment such a primitive wave reaches the nearest neighbour (or neighbourhood) it simply becomes a part of that neighbour’s primitive wave. In other words all that actually happens is that there is a periodic oscillation of variable amplitude at every point and neighbouring amplitudes modify each other. Waves, wave-fronts and interference are secondary, phenomenological entities, not primary ontological ones.

            So I think of a probability “wave” in terms of amplitudes varying locally and influencing immediately adjacent amplitudes — both of these being pretty intuitive for probabilities. If such interactions result in (and are describable as) interfering waves, that’s fine. But it is a description of phenomena, not of ontology.

            Path integral comes into it because I see a very close affinity of this picture to Feynman’s approach — manifest in his explanation of light propagating along paths of shortest time, as outlined in his QED lectures. The behaviour of light emerges from (rotational) oscillations *everywhere* of “something” — effectively of phases of the electromagnetic field. Basically, light takes all possible paths, but in the wave description they all get cancelled by interference, except for the shortest-time one. And that is, as I understand it, a simple application of the path integral.
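            Here’s a cartoon of that cancellation (a toy sum over paths with made-up geometry and wavenumber, not QED): every path through a point y on an intermediate line contributes a unit phasor, and only the bundle of paths near the shortest-time path survives the summation.

```python
import numpy as np

# Cartoon sum over paths: a photon goes from a source to a detector via every point y
# on an intermediate line; each path contributes a unit phasor exp(i * k * path_length).
# Wavenumber and geometry are made up.
k = 200.0
d = 1.0                                   # source-to-line = line-to-detector distance
y = np.linspace(-1.0, 1.0, 20001)
path_length = 2 * np.sqrt(d**2 + y**2)    # symmetric source/detector placement
phasors = np.exp(1j * k * path_length)

# Paths near the stationary (shortest-time) path add nearly in phase;
# an equally wide bundle of paths far from it largely cancels out.
near = np.abs(phasors[np.abs(y) < 0.1].sum())
far = np.abs(phasors[(y > 0.5) & (y < 0.7)].sum())
print(near, far)   # the near bundle comes out roughly an order of magnitude larger (or more)
```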

            Hope this makes some sort of sense. A caveat, though… I don’t claim the above to be any sort of gospel — it’s just how I make sense of what little I have learned by fairly unsystematic reading. My background in mathematics, classical physics and statistics does help some, but I wouldn’t want to overstate my expertise.


            Hey no worries. Your background is a lot closer to this stuff than mine, which is basically programming and other IT technologies, and is increasingly in the remote past since I’ve been in management now for several years. My knowledge of QM comes from reading several layperson level books on it, part of one on quantum computing, and then using the fragmented knowledge from those to make my way as best I can through physics papers and SEP articles.

            I love having these discussions because I’m not confident in my conclusions, and want to know the arguments against them, or for other conclusions. In that sense, thinking “aloud” is definitely welcome. That’s basically what I’m about to do.

            On the waves and interference, I think a lot of antirealists would say they’re not even phenomena, just a calculation tool. For me, the issue is logical coherence. We see different patterns depending on the conditions set up by the experimental apparatus (turning a detector at one of the slits on or off, etc). An antirealist might say we’re just adjusting our expectations of the measurement results. I might agree that we are, except for the “just” part. We’re not acting directly on the results (or at least we don’t appear to be). That implies something is changing in the dynamics between the apparatus manipulations and the result, something that’s part of the causal chain.

            I think this gets at something that bothers me, both in this area and with the consciousness debate. What we’re talking about is mysterious and complex. Many people want to say that this complex thing is just fundamental. That there’s nothing to study here. We just need to accept it as a fundamental fact of the world. For complex phenomena, I can’t see how it’s the right move, particularly when we appear to have after effects which change depending on the conditions we set up. Again, if nothing we did made any difference, I might see their stance as more plausible, but as it is, it feels like a major gap that they just want to ignore.

            Maybe I’m just too much of a reductionist, but I don’t know how else we learn if we’re not willing to at least attempt to pierce the veil, to see the details underneath, including of what appears to be a wave function collapse. Right now, the best information we have on a reduction is the mathematical structure of the theory. And we have the PBR theorem indicating that if quantum states refer to anything real, then they must be a description of reality, at least at some level of organization. Antirealists aren’t bothered, because they don’t consider quantum states as describing anything ontic at all.

            Which brings us back to measurement results that just…appear. I can’t see how that makes causal, mathematical, or any other kind of logical sense, at least outside of some kind of idealist or related commitments. Yet there are very intelligent people who adhere to this view. So either I’m missing something fundamental, or they are.

            Sorry, hope that wasn’t too rambling (or ranty).


          5. Well, my maths/physics/stats background… that was long ago and what I retain mostly are just habits of mind. Since then I spent thirty years as a systems programmer, for which, it transpired, I was much better suited. I sympathise re being promoted away from the “coal face”. I had a similar pressure, but put my foot down and refused promotions until management caved in and allowed me to split my time roughly 50/50 between programming and management, regardless of the gradual inflation of my official titles.

            But back to the topic at hand. Let’s step back (temporarily or otherwise) to a more philosophical level, in order to avoid misunderstandings.

            Firstly, I have no truck with anti-realism, while acknowledging the strength of some of the anti-realist arguments. Secondly, I see no need for you to feel apologetic about reductionist tendencies. There is nothing wrong with reductionism, except for the term “reductionism” itself. I reckon it should be called “augmentism” 🙂 to emphasise that rather than “A is nothing but B” a proper reductionist stance says “A is also B”.

            This is where philosophy comes in. We can view phenomena on distinct reductive levels, each of which is valid in its own right and has its own (emergent) ontology and its own (emergent) causal frameworks. This leads to the notion that perhaps all causation is nothing more than (is also!) a descriptive story appropriate to any given reductive level, with a different account being possible at the underlying reductive level. The payoff of this kind of thinking is that we are entitled to choose the reductive level which offers the simplest causal story. Hence my stepping from the ontology of waves to the level implied by Huygens, which (for me) dissolves the puzzlement of interacting probability waves.

            As you may guess from all this, I agree entirely that declaring things fundamental largely amounts to just giving up. I do dislike panpsychism, unless it is understood simply as the inarguable fact that the possibility of consciousness must be implicit in the way the universe is. Where we seem to differ is that for me MW has the same feel of just giving up on the very real problem of measurement. I am much more comfortable with viewing such intractable problems in Davidsonian terms — as a consequence of a lack of translatability between discourses of distinct reductive levels. As far as I am concerned, predicate dualism takes this into account for consciousness. I have no idea how to construct a similar argument for the measurement problem, but that’s the way my intuition points me.

            BTW, for me, QM has another puzzle, which is at least as deep as the measurement problem: the Pauli Exclusion Principle. I don’t understand why it is not debated at the same level as the measurement problem. I cannot help wondering whether the two are linked somehow.


          6. I used to wonder why so many people from computing professions were attracted to these kinds of discussions, until I realized that it’s more that people who like these subjects tend to be attracted to making a living in that type of job. If we’d gone into academia, we’d likely get it out of our system professionally, but since we don’t have that channel, it becomes a hobby.

            Sorry for the antirealism rant. I should have remembered it wasn’t your position. It’s just on my mind recently, because, related to quantum states, it’s not a view I can put on, not even temporarily, which bothers me.

            I’m not catching how reducing interference effects down to amplitudes at various points dissolves the issue. (Maybe I need to reread your discussion a few more times.) But I should clarify that the main thing for me is that those effects appear to be affected by conditions of the experimental apparatus (or hardware architecture for quantum computing), and appear to affect the final measurement outcomes.

            On the “nothing but”, right. I think that’s why a lot of people dislike reductionism, because it seems to trivialize concepts they value. I once had someone ask why I wanted to dismantle the beauty of consciousness with all this reductive talk of functionality. My response was to channel whoever it was that said the beauty of the rainbow isn’t lost by understanding how it comes about.

            Along those lines, while I agree that prematurely declaring something fundamental is giving up, it’s interesting to think about what it’s giving up on. For someone like Einstein, declaring the measurement outcome fundamental was giving up on a local deterministic account, which he complained to Max Born about. Born’s reply is one I hear a lot in these discussions, that he saw fundamental randomness as a feature rather than a problem. So from Born’s perspective, he wasn’t giving up on anything.

            Einstein later softened on the deterministic stance, but the 1935 EPR paper demonstrated he wasn’t willing to let go of locality. Even as late as the early 1950s, while he appreciated David Bohm’s pilot-wave efforts, he still saw it giving up on local dynamics. I sometimes wonder what he might have thought if he’d lived a few more years and seen Everett’s deterministic theory with its local dynamics.

            Which brings me to my question for you. If Everett’s theory feels like it’s giving up, what would you say it’s giving up on?

            The Pauli exclusion principle is weird (as is just about anything at this level) but I’ve never thought about it specifically as a problem. Maybe because we know it can be overcome in gravitational collapse with white dwarfs, neutron stars, and black holes. But I might well be showing my ignorance here, since I haven’t read much about it.


          7. I think you got the interests/job causal direction right. 🙂 But I am pretty sure I would not have enjoyed academia — not as it developed over past few decades.

            Re switching from wave-front interference to the level implied by Huygens… Remember that I was trying to work out why I (a) feel comfortable with Schrodinger’s wave being (possibly) a wave of probabilities, while (b) being unable to come up with an epistemic probability model, which would help one to accept this. Epistemic models for probability varying in time or of distinct probabilities influencing each other — these are not hard to construct. But probability wave-fronts generating a pattern through interference — a model for that had me stumped. Which eventually made me realise that the Huygens Principle is fully compatible with (a) and (b) above, and is perfectly sufficient for wave interference — and thus is likely to be the intuition pump priming my gut feeling in this matter.

            My point about MW feeling like giving up was not so much about ontology but about the obvious fact that accepting it, removes motivation for seeking other explanations for what happens in measurements. And since MW makes no novel predictions, any hope of falsifying it lies with such alternative explanations — without them, it becomes truly unfalsifiable. Now, I know you reckon that experiments pushing superpositions higher up mass scale might help, but I really cannot see how. They may eliminate some other models (e.g. gravity induced collapse), but I cannot see how they can invalidate all of other models without invalidating MW too, because they effectively all share the basic QM predictive apparatus.

            Re Pauli exclusion… You have probably only met the standard “bowdlerised” version which says that no two electrons in an atom can be in the same quantum state, which is why matter has bulk. This always bothered me — why is the principle limited to an atom? The answer is: it isn’t. It is true for any quantum system. Such as the universe, for example. Tweak any electron (or any fermion) and *all* electrons (or corresponding fermions) everywhere in the universe have to adjust — and adjust *now*. Note that no entanglement is assumed — so what does it even mean? Or should we consider them all entangled? In which case, how do we justify the use of statistics in QM? Statistics (and even just probability) rests on the foundation of the Law of Large Numbers. How does that work if everything is entangled? After all, LoLN is based on the assumption of *independence*. Of course, you still can’t use it to transfer information, so the armed truce between QM and Relativity still holds, but…
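            Just to pin down the mechanics of the principle itself (a two-particle toy sketch, leaving aside the global puzzles above): the antisymmetrized state of two fermions simply vanishes if you try to put both of them in the same single-particle state.

```python
import numpy as np

def antisymmetrize(phi, chi):
    """Two-fermion state (|phi>|chi> - |chi>|phi>) / sqrt(2), as a flattened vector."""
    state = np.kron(phi, chi) - np.kron(chi, phi)
    return state / np.sqrt(2)

up = np.array([1.0, 0.0])      # toy single-particle states (say, spin up and spin down)
down = np.array([0.0, 1.0])

print(np.linalg.norm(antisymmetrize(up, down)))  # 1.0 -- a perfectly good two-fermion state
print(np.linalg.norm(antisymmetrize(up, up)))    # 0.0 -- no such state: exclusion
```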


          8. Same for me on academia. I occasionally contemplated it when I was younger, but grad school purged it from my system.

            Thanks for the probability clarification. I’ll stop beating on that angle. At least for now. 🙂

            On the giving up point, I think that’s a danger anytime the general consensus for any theory gets close to 100%. I don’t think that’s going to happen anytime soon with any of these interpretations. And even though, I have to admit, my credence in Everett is inching up, I hope people keep trying to break quantum theory, discover its limits, or stress its structure in any way they can. For instance, I doubt superdeterminism is viable, but I hope the people pushing for it can get the experiments they want.

            Jim Baggott, who has a general sympathy for epistemic interpretations, in his Quantum Reality book admits that it’s the realist ones that tend to generate experimental work and theoretical progress. For instance, John Bell, as a Bohmian, developed his theorem to demonstrate that the non-locality in Bohmian mechanics wasn’t just a weakness of that theory, which led to the entanglement experiments. And I suspect the people preserving quantum effects in ever larger systems are coming at it from a realist angle, either to validate one of the objective collapse models, or falsify them. I doubt this impetus is coming from neo-Copenhagen views like RQM or QBism.

            All I can say is I do think those larger systems make a difference. Although really the big difference for me was when quantum computing started being real (even if overhyped). I was much softer on quantum state realism until then. And once driven into realism, my options seemed to shrink to objective collapse, pilot-wave, or continuous wave mechanics. Objective collapse seems increasingly on the ropes empirically, and while pilot-wave attracted me when I first read about QM, I now understand why its issues with QFT turn most physicists off.

            Of course, a new experiment could make the news tomorrow and change everything.

            Sounds like I might need to do some reading on the Pauli exclusion principle. Thanks! Definitely I’ve only gotten the quick and dirty version up to now. The effects you describe actually sound kind of like the guidance mechanism in Bohmian mechanics.


          9. I agree that being too satisfied with any theory is bad — which is why we keep testing them. Trouble is MW does not add anything to testability of the basic QM apparatus (in whatever formulation). And I’ll shut up on that now. 🙂

            I should, perhaps, clarify that I don’t suggest QM probabilities being necessarily epistemic. I was looking for epistemic analogues simply because our intuitions concerning chance and probabilities are shaped entirely by our macro-world with its adequate determinism, in which probabilities are epistemic. So in order to grasp other than just mathematically any peculiarities of ontic probabilities (i.e. real ones, to whatever value of “real”) we need epistemically based intuition pumps.

            And BTW, we may not have to give up localism (no, I haven’t gone bonkers! — read on! :-)). One way of reading the very persuasive non-localist arguments is to conclude that QM tells us that spatial distances are not fundamental, but are an emergent feature of the world — in which case everything remains local. That, of course, poses the opposite problem: an explanation would be required of why information cannot propagate instantaneously across emergent distances. But that would be a very different kind of issue.

            I agree that pilot-wave (Bohm et al.) alternative is unappealing. As you note, it lacks the QFT super-structure, though I am told there has been some limited progress on that of late. It also has other issues. E.g. its version of non-locality can be downright perverse.

            Copenhagen-like versions don’t particularly appeal either. But I do think you should look at Feynman’s path integral approach, which is entirely realist and (I do agree with Ian Stewart here) its implied ontology does not seem compatible with the ontology implied by MW. Plus I have another reason for liking it — it is based on the principle of least action, which permeates all of non-quantum physics. It would be surprising if that universality did not apply to QM. So I would urge you to have a look at “The Quantum Universe: Everything That Can Happen Does Happen” by Brian Cox and Jeff Forshaw. It is aimed at non-technical readers and succeeds in that quite brilliantly, I think. While not making this explicitly clear, it explains QM from the ground up very much in Feynman’s terms.


          10. Sorry, didn’t mean to imply again that you’re in the epistemic camp. It’s just an example of what I think actually is giving up in the way you expressed concern about.

            I actually have no problem dispensing with localism, or any other principle, if the evidence makes it the most viable option. My biggest issue with many interpretations is that they dispense with physical principles like locality, determinism, or metaphysical realism without establishing that necessity, and then act like anyone who asks questions about it is being unreasonable. If a theory or interpretation makes a big metaphysical claim, it should explore it in depth, including possible theories about how it’s supposed to work. In that sense, I think ideas of spacetime being emergent from quantum entanglement or whatever are definitely worth exploring.

            The one thing I have a near 100% credence on with quantum physics, is that the interpretations that seem principally aimed at preserving the comfortable view of the world as we know it are just modern versions of the Tychonic system. Or worse, theistic evolution.

            Have to admit I’ve overlooked Cox’s book until now. Thanks. I’ll check it out.


          11. Picked up Cox’s book and couldn’t resist looking up what he has to say on the measurement problem. It quickly becomes apparent the Feynman approach has a different mental model and vocabulary, yet still leads to the same issues, including interference between the possible paths and what happens with measurement. And I found this point striking, which doesn’t seem to agree with Stewart’s interpretation.

            The approach to quantum mechanics that we have been discussing, which rejects the idea that Nature goes about choosing a particular version of reality every time someone (or something) ‘makes a measurement’, forms the basis of what is often referred to as the ‘many worlds’ interpretation. It is very appealing because it is the logical consequence of taking the laws that govern the behaviour of elementary particles seriously enough to use them to describe all phenomena.

            Cox, Brian; Forshaw, Jeff. The Quantum Universe: (And Why Anything That Can Happen, Does) (p. 188). Hachette Books. Kindle Edition.

            That said, they don’t spend a lot of time on it. I’m definitely going to read the rest of the book though. I’m getting intrigued by the alternate approach used here. Thanks again for recommending it!


          12. My fault. I should have made explicitly clear that the path integral approach does not offer a resolution of the measurement problem. It would be an outright winner if it did – for me, anyway.

            I confess I’d forgotten that they name-check MW in approving terms. Your spotting it sharpens my overall unease about ontology. In this version we first have everything that can happen actually happening with no probabilistic bias, which through interference results in verifiably correct probabilities (or even certainties — like light following geodesics, for example). Then, if MW is correct, everything that can happen happens again, this time biased as per those probabilities, and this time it all de-coheres into separate “worlds”.

            You may say: all the worse for path integral! Yet the difficulty remains: Schrödinger and Feynman produce two distinct formulations — both realist, but with different ontological commitments (that’s before subsequent MW splitting or probability collapse). Matrix Mechanics, being purely instrumental by design, has no commitments to anything other than measurement outcomes, while MW adds a further ontological layer to all three formulations. I can just see anti-realists smirking! 🙂

            I guess in the end, it all depends on one’s intuition. As already noted, mine is strongly biased in favour of the least action principle and against ontological proliferation. Blame my education, I suppose. In any case, as you say, Feynman’s formulation has its own way of speaking about QM and that in itself is interesting. I understand that as mathematical tools and computing power evolve, it is becoming more and more widely used, so it is well worth being aware of.


          13. I’m about a quarter of the way through so far. I like the clock analogy they came up with for phases and vectors, although they’ve been using it for a while, making me wonder when they plan to cash it out. And I just read about Leibniz’s “action”, and the odd way Dirac tied it in with QM. They also note that Feynman was able to derive Schrödinger’s equation from it.

            I’m not seeing any different ontological commitments at this point. They still talk in terms of waves, amplitudes, interference, and probabilities. What does appear different is how the evolution is calculated, but they’re careful to clarify it will provide identical results at all points. If anything, the early chapters seem to minimize ontological commitments, although that might be to coax the reader through the strangeness.

            They do say that explaining the interference effects in the double slit experiment means accepting that the particle is in multiple places at the same time. So they pass my acid test. 🙂
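
            To check I’m following the clock picture, I put together a little numerical toy of the two-slit case (my own sketch, not anything from the book) that just adds one unit “clock hand” per path and squares the result. The parameters (wavelength, slit separation, screen distance) are arbitrary, purely for illustration:

            ```python
            import numpy as np

            # Toy two-path ("two-slit") interference in the clock/phasor picture.
            # Assumes a free particle at a fixed wavelength, straight-line paths
            # through each slit, and a phase of k * path_length per path, a common
            # simplification of exp(i*S/hbar) for illustrations like this.

            wavelength = 1.0
            k = 2 * np.pi / wavelength
            slit_separation = 5.0     # distance between the two slits
            screen_distance = 100.0   # distance from the slits to the screen

            xs = np.linspace(-30.0, 30.0, 601)   # positions along the screen

            # Straight-line path lengths from each slit to each screen position
            d1 = np.hypot(screen_distance, xs - slit_separation / 2)
            d2 = np.hypot(screen_distance, xs + slit_separation / 2)

            # Add the two unit "clock hands", then square for the probability
            amplitude = np.exp(1j * k * d1) + np.exp(1j * k * d2)
            probability = np.abs(amplitude) ** 2

            # Bright fringes where the hands align (values near 4),
            # dark fringes where they point opposite ways (values near 0)
            print(probability.max(), probability.min())
            ```

            Nothing deep, just the clock hands adding and cancelling, but it helped the analogy click for me.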

            It seems like intuitions cause a lot of trouble in science (and philosophy). They can be useful as a starting point, but become risky when used to assess conclusions.


          14. I probably should wait until you are further into the book — at about a quarter of the way in, they’ve only just set up the basics before getting going, and for you all that would be familiar stuff.

            So just one quick point, which will probably come up later too… It looks to me like we use the term “ontological commitment” in different ways — worth bearing that possibility in mind. Specifically, yes, the Schrödinger wave equation can be constructed starting from Feynman’s premises. In principle, that’s what one would expect insofar as the path integral is “simply” a third formulation of QM, provably equivalent to Matrix Mechanics and to Schrödinger’s equation. That does not mean that all three have the same ontological commitments — e.g. MM, being instrumentalist by design, has none other than measurement results.


          15. I have to admit I haven’t been using “ontological commitment” in any rigorous fashion. I think I’ve been taking it as the ontology implied in the descriptions and the mathematics they surface. I’m at 40% and the implied wave/particle ontology still seems pretty familiar. There was an early emphasis on particle paths, which I imagine will get fleshed out eventually.

            I do know Heisenberg didn’t even try for an ontological account, and was impatient with any efforts in that direction, a sentiment that seemed to run strong in the Copenhagen school. He reportedly thought the idea of matter waves was nonsense, at least before the Davisson–Germer experiment. But just because he didn’t develop his math with it in mind doesn’t mean the implications aren’t there.


          16. Sorry… Looks like we’ve been rather at cross-purposes. I rather assumed you’d be familiar with Quine’s use of the term in philosophy of science. SEP, predictably, has a very thorough piece on it. Wikipedia is also good, but kicks off with the ontological commitments of language rather than of scientific theories — there is, of course, a profound connection, as Quine himself noted. Think of such commitments as the minimal ontology required for a theory to be formulated in the first place. The theory’s consequences (ontological or otherwise) are not included; i.e. what has to be assumed, but not what is derived.

            It’s like wave-fronts, which follow from Huygens’ Principle but do not feature in its ontological commitments — you may remember me making that point.


          17. No worries. Just skimmed the early sections of the SEP article. It’s interesting that it uses electrons as an example. There was an epic Twitter debate a few months ago between Sabine Hossenfelder and Philip Goff about the existence of electrons. Hossenfelder is an instrumentalist and argued that nothing about scientific theories requires their actual existence.

            It seems obvious there’s widespread disagreement about what the minimum ontological commitments of quantum theory are. I wonder if Quine’s criteria can actually adjudicate any of it.

            I need to read more about it, but I’m leery of the “ontological cost” concept. It seems like something that could easily amount to rationalized bias.

            BTW, read the section in the Cox / Forshaw book on the Pauli exclusion principle and its universe-spanning aspects. They do explicitly tie it in with entanglement, even mentioning Einstein’s “spooky action at a distance” concerns. But it’s layered in analogy (like much of the book), and they don’t really explain their assertions, so it’s hard to know what to think. I checked a number of my other quantum books, including ones I’d expect to mention something like this, and none of them even allude to it. I don’t doubt their description is accurate in some sense, but it smells misleading.


          18. Calling Sabine H an instrumentalist is a bit of a stretch. She is more a neo-Kantian – a position with which I have a lot of sympathy. (I recall her explaining on her blog (before it migrated to Patreon) that “real” was not a bivalent attribute and why some theoretical entities are to be counted “more real” than others.) To a neo-Kantian, the concept of e.g. an electron enables us to construct models which certainly describe something real about the behaviour of matter, and in that sense it is more real than a unicorn. But the question whether it is “really real” is simply not answerable.

            I am unclear in which way the book’s take on Pauli exclusion might be misleading. Once alerted to the fact that it does not apply only within atoms, one can see that even Wikipedia doesn’t disagree. E.g. the last paragraph of its introduction says “So, if hypothetically two fermions were in the same state—for example, in the same atom in the same orbital with the same spin […]”. Note the “for example”.

            Re ontological commitments… Quine understood scientific theory as a form of language (or more precisely discourse). One of my favourite quotes from him is:… Sorry I’ve run out of time. Will continue tomorrow.


          19. Hossenfelder actually self-labels as an instrumentalist.

            I’m an instrumentalist; I am looking for a mathematical prescription that reproduces observations, one of which is that the outcome of a measurement is a detector eigenstate.

            https://backreaction.blogspot.com/2022/02/an-update-on-status-of-superdeterminism.html

            I should note that I don’t consider being an instrumentalist a bad thing. I had (reluctant) sympathies with it before I discovered structural realism.

            I think the key phrase in that Wikipedia example is what follows: “for example, in the same atom in the same orbital with the same spin”. I have no particular issue with the interchangeability of particles, and I do see that in other materials. What I can’t find is any reference to the universe-spanning non-local effects they imply take place. Yes, entangled particles collapse into correlated states on measurement no matter how far they are from each other, but I didn’t get the idea that was what they were talking about (despite the allusion to entanglement). It’s possible I misread them.

            No worries on time. Looking forward to that Quine quote!


          20. I didn’t know Sabine so identified herself. Still, her views on what is real are (or were) neo-Kantian (or ESR-like, if you prefer) rather than instrumental.

            Re Pauli exclusion… Turns out you are right, the assertion in the book is misleading — the non-local effect enables exclusion rather than being its consequence. Our discussion prodded me to do a thorough web search (should have done so long ago!) and it looks to me like the best response to the controversy stirred by Brian Cox was by Jon Butterworth (UCL professor of physics): https://www.theguardian.com/science/life-and-physics/2012/feb/28/1

            Handily, it also finally provides me with an explanation of why in practice (though not in principle) the non-local effect is distance-bound, thereby alleviating my puzzlement over the effect on the Law of Large Numbers.

            Back to Quine… He had the knack of devising pithy, memorable slogans encapsulating his views. Here’s the quote I struggled to lay my hands on, having misremembered where it was from:

            “I am not suggesting a dependence of being upon language. What is under consideration is not the ontological state of affairs, but the ontological commitments of a discourse. What there is does not in general depend on one’s use of language, but what one says there is does.”

            Quine thought (and I very much agree) that this applies to scientific theories. Specifically, I would say, to all discussions of QM with its distinct formalisations and interpretations. The distinction between the ontology and the ontological commitments of a theory is a handy philosophical tool, even though, as is well known, Quine’s criteria fail to fully nail down ontological commitments. For one thing, under-determination rears its head. A classic example is Poincaré’s conventionalist parable of hyperbolic space being reinterpretable as Euclidean space with some distorting fields.


          21. I keep meaning to read about neo-Kantianism, since I often see it mentioned as an alternative classification for historical figures, like Niels Bohr, whose positions seemed instrumentalist, at least some of the time.

            Good idea to search for specific responses to the book’s statements. Wish I’d thought of it. I can’t say I follow Butterworth’s discussion, at least not on initial reading, but it’s good to know this was more involved than the book implied. Thanks!

            I still have a few chapters left in the book, which might be the most interesting ones. Hope to finish sometime this week.

            Thanks for the Quine quote. QM seems like a unique challenge for ontological commitment, because the only thing everyone agrees on is the math and observables. I wonder if it makes sense to just accept that the interpretations are actually themselves different theories, each with their own ontological commitments.


          22. I’d say one needs to distinguish QM formulations (Matrix Mechanics, Schrödinger’s equation and Path Integral) from QM interpretations. I think of the latter as groping towards a successor theory rather than as theories in their own right.

            But QM is by no means a unique challenge where ontological commitments are concerned. It is not unusual for a theory to be axiomatisable in more than one way, with distinct OCs. E.g. Sklar, in his “Space, Time and Spacetime”, notes two distinct ways of axiomatising Special Relativity. And in fact the Newton/Leibniz argument about the substantival/relational nature of space was all about the OCs of Newtonian physics. That argument could have, though did not, culminate in the construction of what we now know as Galilean or neo-Newtonian spacetime. I bet the history of science would have been somewhat different! 🙂


          23. I actually don’t see the formalisms as separate theories, since they always converge on the same answers. But the interpretations have different postulates and predictions, albeit with most of the variances currently untestable. Some of the realist ones actually are mathematically distinct: objective collapse and pilot-wave theories add new variables. But even the anti-realist ones have different ontological assumptions (participatory reality, flash ontology, etc.).

            I hadn’t heard that about classical physics. Interesting. Relativity is weird, but it doesn’t seem to cause the same existential outrage as QM.


          24. Ah, I didn’t mean to imply that different formulations amount to different theories. But despite being formally equivalent, they do have different ontological commitments. OTOH different interpretations are, strictly speaking, not theories but mere hypotheses.

            Yes, Relativity is also weird, but there we have two powerful (and often misleading) intuition pumps/models: Minkowski’s formalisation for SR and the rubber sheet analogy for GR. This makes them seem less strange.


          25. Are the formalisms by themselves enough to have ontological commitments? Schrödinger, when developing his equation, did it with a mental model of physical waves, and that’s how he sold it. Max Born redefined it into only a probability engine, to Schrödinger’s disgust. It seems like the realist and anti-realist camps see different commitments in the same math.


          26. Formalisms certainly can have ontological commitments. Schrödinger’s formulation is committed to waves (of something) — much to the disgust of Heisenberg, whose Matrix Mechanics has no such commitment. And Schrödinger was unable to find anything intrinsic to his formulation to substantiate his objections to Born, because that formulation-as-such actually says nothing about the nature or origins of his wave.


          27. I was going to consider this conversation wrapped, but I came across an English translation of Born’s paper, and found his take in the introduction more ontological than the later anti-realist views. It actually sounds like he was initially thinking in pilot-wave terms. Thought you might find it interesting if you hadn’t seen it before.

            One would do better to postpone these thoughts, when coupled directly to quantum mechanics, until the place of the electromagnetic field in the formalism has been established. However, from the complete analogy between light quanta and electrons, one might consider formulating the laws of electron motion in a similar manner. This is closely related to regarding the de Broglie-Schrödinger waves as “ghost fields,” or better yet, “guiding fields.”

            I would then like to pursue the following idea heuristically: The guiding field, which is represented by a scalar function ψ of the coordinates of all particles that are involved and time, propagates according to Schrödinger’s differential equation. However, impulse and energy will be carried along as when corpuscles (i.e., electrons) are actually flying around. The paths of these corpuscles are determined only to the extent that they are constrained by the law of energy and impulse; moreover, only a probability that a certain path will be followed will be determined by the function ψ. One can perhaps summarize this, somewhat paradoxically, as: The motion of the particle follows the laws of probability, but the probability itself propagates in accord with causal laws (1).

            (English translation: QMCollisions2_1926.pdf)

            The last sentence often gets quoted by itself, but the fuller context makes it more interesting.


          28. The comparison with pilot-wave is obviously rough, since pilot-wave aims for determinism and this is the paper where Born basically says it’s time to give up on that. But the particle maintaining an existence between measurements and being guided by a ghost wave seems un-Copenhagen-like. That said, this paper predates the Copenhagen interpretation and so should be seen as a snapshot in time. Just historically interesting.


          29. Indeed. He wants to have it both ways — have both particles and probabilities. Born clearly assumes the probability to be ontic, rather than merely epistemic, as the pilot-wave approach would have it. Interestingly, he also acknowledges as a “paradox” that this would require the wave to be deterministic, yet particle behaviour probabilistic — which for ontic probabilities would imply that the wave affects particles but is itself completely unaffected by them. Remarkably enough, this is in fact one of the philosophical objections to the Bohmian interpretation, despite its probabilities being merely epistemic. So the same problem arises in either case.


