Avoiding the structural gaps

A long-standing debate in quantum physics is whether the wave function is real. A quick reminder: quantum entities appear to move like waves, with portions interfering with each other. These waves are modeled by the wave function. But once measured, quantum objects manifest as localized points or field excitations. The wave function can’t predict the measurement outcome, only probabilities for what the result will be.

A popular move here is to decide the wave function isn’t real, that it’s just a mathematical contrivance. Doing so seems to sidestep a lot of uncomfortable implications. But it leaves us trying to explain the statistical outcomes of measurements that show patterns from portions of the wave interfering with itself. Those effects, along with entanglement, are heavily used in quantum computing. If the wave function isn’t modeling something real, then its usefulness in technology starts to look like a magic incantation.

Of course, accepting wave function realism leaves us with something that seems to operate in a higher dimensional “configuration space.” And we end up having to choose between unsettling options, like an objective wave function collapse on measurement, a pilot wave guiding the particle in a non-local manner, or just accepting pure wave mechanics despite its implications.

Valia Allori has an article at IAI arguing against quantum wave function realism. (Warning: you might hit a paywall.) The main thrust of her argument, as I understand it, is that we shouldn’t allow ourselves to be lured farther away from the manifest image of the world (the world as it intuitively appears to us) when there are viable alternatives.

Her argument is in opposition to Alyssa Ney’s argument for wave function realism, which touts as one of its benefits that it reclaims locality. Allori argues that this aims to satisfy an intuition we develop in three-dimensional space: that there are no non-local effects, no “spooky actions at a distance”. But wave function realism only preserves locality across configuration space, which Allori views as a pyrrhic victory.

Overall, Allori seems to view this as a conflict between two different sets of intuitions. On one side, we have views that are closer to the overall manifest image of reality, one with three dimensions, but at the cost of non-local phenomena. She doesn’t view this as ideal, but deems it preferable to the idea of a universal wave function existing in near infinite dimensions. In her view, embracing theories too far away from the manifest image puts us on the path that leads to runaway skepticism, where nothing we perceive can be trusted.

But I think looking at this in terms of intuitions is a mistake. When it comes to models of reality, our intuitions have historically never been particularly useful. Instead they’ve often led us astray, causing us to insist the earth was the center of the universe, humans were separate from nature, or that time and space were absolute, all ideas that had to be abandoned in the face of empirical realities. The reason to prefer locality isn’t merely to privilege one intuition over others, but to prefer theories that provide a structurally complete accounting.

A while back I described this as a preference for causally complete theories. But causation is a relation across time, made asymmetrical by the second law of thermodynamics, the fact that entropy always increases. The more fundamental reality is the structural relations. A theory that can account for all (or at least more) of those relations should, I think, be preferred to theories that have larger gaps in their accounting.

By that standard, I perceive wave function antirealism to have huge gaps, gaps which proponents of the idea seem comfortable with, but I suspect only because, as Allori does, they deem it a lesser evil than the alternative. Of course, objective collapse and pilot-wave theories also have gaps, but they seem smaller, albeit still weaknesses that I think should make them less viable.

Pure wave mechanics seems like the option with the fewest gaps. Many would argue that accounting for probabilities remains a crucial gap, but that seems like more of a philosophical issue than a scientific one: how best to talk about what probabilities mean. In many ways, it highlights issues that already exist in the philosophy of probability.

Overall then, my take is that the goal isn’t to preserve the manifest image of reality, but to account for it in our scientific image. Preferring theories that are closer to the manifest image just because they are closer, particularly when those theories have larger gaps than the alternatives, seems to amount to what is often called “the incredulous stare”: simply rejecting a proposition because it doesn’t comport with our preexisting biases.

But maybe I’m overlooking something? Are there reasons to prefer theories closer to the manifest image? Is there a danger in excessive skepticism as Allori worries? Or is preferring a more complete accounting itself still privileging certain intuitions over others?

25 thoughts on “Avoiding the structural gaps”

  1. Allori’s position seems to be that wavefunction realism both does and does not preserve locality, depending on how you look at it. It preserves “locality” in dimensions beyond the three (or four) which constitute locality for the manifest image. As she says, “I think that the problem with wavefunction realism is that the intuitive notion of locality is not the notion which wavefunction realism preserves.”

    When Allori says that “the primitive ontology approach and the other ‘three dimensional’ views give up on locality,” she means that they are willing to give up on locality as understood from the manifest view. But this is also exactly what wavefunction realism is willing to give up. If this makes wavefunction realism “a sceptical scenario” in which “we are deceived about the dimensions of the space we live it, we are deceived about the nature of objects, and we are deceived about the notion of interaction,” how is it substantially different from a manifest view in which we are deceived about locality?

    For Allori, it’s a matter of degree. “Why should we accept a theory according to which we are being deceived about everything when there is a view such as the primitive ontology approach according to which most of what we think, given the manifest image, is actually true? In this approach, it is really the case that there are fundamentally individual objects moving and changing in a fundamentally three-dimensional space.” I’m not convinced that the argument from degree is helpful. Both wavefunction realism and Allori’s “primitive ontology” are forced to challenge the manifest image. Interestingly enough, it’s wavefunction realism that blinks first, by trying to recover locality (albeit in a weirdly backhanded way).

    (Nothing about this speaks to the many-worlds theory, as far as I can see.)

    1. It’s interesting to note that while locality could be said to be part of the modern manifest image, it wasn’t part of the medieval one. In astrology, considered a valid endeavor until the 1600s, the heavens were thought to have non-local effects on people on Earth. And gravity under Newton was considered a non-local phenomenon.

      Quantum non-locality comes in a couple of different flavors, depending on which interpretation we’re talking about. Non-local dynamics is action at a distance. It seems hard to argue that any collapse interpretation doesn’t have some version of this, although it’s usually limited to the interactions within an entangled system.

      But non-separability is a more difficult issue. It’s basically the idea that an entangled system can’t be fully accounted for with just its parts. The correlations have to be accounted for across the entire system. This type of non-locality is generally thought to exist in all interpretations of QM.
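
      As a toy illustration (my own sketch, not something from the thread): for a two-qubit pure state, separability can be read off the 2x2 matrix of amplitudes. A product state, one fully described by its parts, has a rank-1 coefficient matrix (zero determinant), while an entangled state like the Bell state does not.

```python
# Hedged sketch: a two-qubit pure state psi = sum_ij c[i][j] |i>|j> factors
# into one-qubit states exactly when its coefficient matrix has rank 1,
# i.e. zero determinant. Entangled states have rank 2.

def is_separable(c):
    """c: 2x2 nested list of complex amplitudes; rank 1 <=> separable."""
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    return abs(det) < 1e-12

# |0> tensor (0.6|0> + 0.8i|1>) -- a product state
product = [[0.6, 0.8j], [0.0, 0.0]]
# (|00> + |11>) / sqrt(2) -- the Bell state
bell = [[2 ** -0.5, 0.0], [0.0, 2 ** -0.5]]

print(is_separable(product))  # True
print(is_separable(bell))     # False -- the correlations belong to the whole
```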

      Although there are people who argue that you can find separability within QM. David Deutsch claims to demonstrate it with Heisenberg matrices. Alyssa Ney argues we can do it within the wave function, but it requires accepting configuration space realism (at least in the non-relativistic versions), which even many Everettians are skittish about.

      Configuration space locality is definitely outside of the manifest image, but if there’s a way to preserve the manifest image with QM, I haven’t seen it, at least other than just going full and strict instrumentalist. And of course there are lots of in-between positions, with Allori’s strategy on the spectrum.

      1. By “locality” I understand a modern concept involving the speed of light. What Einstein called “spooky action at a distance” is the violation of this limit on the influence between things. The influence itself is conveyed by “fields,” which frankly, in the common conception of the world, amount to “action at a distance” between particles — just not spooky action, because it stays within the known laws of physics.

        The view that the medievals believed in non-locality strikes me as a misconception. The notion that the motions of the heavens had an influence on the earth suggests a quite intimate locality, proper to the cosmos. The planets were not “out there” in the way we think of them, but a part of God’s sheltering and unified world.

        Similarly, the Newtonian conception of gravity would not have been thought of at the time as local or non-local, because the idea of a limit on its speed of influence had not occurred to anyone. It may have been regarded as a mysterious action at a distance, in that things affected by gravity did not actually come in contact with one another, but this is a different issue. It also seems to me an issue that remains with us today. We are now comfortable with the explanation of a “field” that “warps space and time,” but when you think about it, it’s still pretty mysterious, compared to what happens when one thing just bumps into another.

        So non-locality is not just about “action at a distance,” but about action exceeding the speed of light, which in four-dimensional space-time appears to be impossible. In QM it seems to happen nevertheless. Based on what I’ve learned from this post, adding extra dimensions apparently offers a mathematical way out, by positing a sort of locality applicable only in higher dimensions.

        If there is locality of this sort, I see no problem in principle with extending it across an entire system (although of course I have no idea what the math says about it). That would, I presume, make for the single unified wave function about which we hear so much.

        Accepting a high-dimensional “configuration space” to explain the apparent non-locality of events in four-dimensional space doesn’t trouble me, if the math works. But I don’t understand why this, in particular, obliges us to choose between the unsettling options of collapse theory, or pilot-wave theory, or “pure wave mechanics” in your phrase. (The link you provide actually points to the many-worlds interpretation, which I suspect goes a little further, being a theory about pure wave mechanics.) As far as I can tell, we are confronted by this choice whether or not we invoke configuration space to explain non-locality. What drives the choice is not the problem of non-locality, but the gap between the micro-world of unresolved QM waves, and the macro-world of resolved particles.

        1. The speed of light limit put a constraint on what speed interactions (including hidden ones) could take place at.  Whether that counts as the same concern early modern philosophers had about action at a distance is, admittedly, something of a judgment call.  But Newton’s contemporaries were concerned about his gravity theory seemingly allowing action at a distance. Leibniz saw his vortex theory as a better alternative because it accounted for how those interactions took place.  Of course, we know today his theory was wrong.  Newton was right (or at least less wrong) because he explained what could be explained at that point and stopped.

          Likewise, if the medievals thought that the action at a distance, which proceeded in some hidden manner, happened through a spiritus mundi (“world spirit”), does that count as “spooky” by our standards? You could argue that their vocabulary was more limited and so they expressed ideas in a way that now seems non-scientific to us. Maybe if they saw our accounts of fundamental forces, they might take it to be what they were referring to all along. But it does seem like people arguing for the mechanical philosophy in the 1600s saw themselves as making a break with older paradigms.

          On pure wave mechanics and many-worlds, as far as I can tell, they’re one and the same.  Pure wave mechanics, if kept pure, logically seems to lead to an environment in a superposition of an ever increasing number of states, aka, many worlds.  To get rid of the worlds, we have to introduce additional assumptions.  The worlds are untestable, but pure wave mechanics itself is testable, and has been tested for decades, most intensely in recent years with quantum computing.  Of course someone could falsify it in some manner tomorrow.

          1. “Pure wave mechanics, if kept pure, logically seems to lead to an environment in a superposition of an ever increasing number of states, aka, many worlds. To get rid of the worlds, we have to introduce additional assumptions. ”

            That “a.k.a” is where the additional assumptions start. We have all these states, and we assume that every one of them is a real world — as if “superposed states” meant multiple things existing simultaneously, instead of a strange suspension of actuality. We have to posit some actuality, of course, because we’re in one. But it would be more accurate to say that “To explain any worlds, we have to introduce additional assumptions.” The assumption of the many-worlds interpretation is that, absent any good way of explaining how just one actual world emerges, we must suppose that all of the superpositions represent actual worlds. This goes beyond the pure wave mechanics, on which a positivist would remain silent — following Newton’s example of declining to explain gravity by any mechanism, and just leaving us with the working equation. Now that‘s keeping it pure.

            Although I’ve read a fair bit about Leibniz, I don’t remember seeing anything about his vortex theory of gravity. My first thought is that the popular rubber-sheet depiction of Einsteinian gravity looks a lot like a vortex! I looked into this briefly and didn’t find out much, but I did come across a paper called “Leibniz’s Monadology and its insights concerning quantum physics,” by Ludmila Ivancheva, a professor at the Bulgarian Academy of Sciences. According to this paper (available for free download from semanticscholar.org), “The famous American mathematician and philosopher Norbert Wiener has recognised Leibniz as the forerunner of field theory (Wiener 1934). Moreover, he finds it remarkable that ‘the Leibnizian principle of sufficient reason is a direct ancestor of the mathematical methods of quantum mechanics’ (Wiener 1934, 481).” Wiener is far from the only one to notice a connection between Leibniz’s monadology and QM, but most of the other names are unfamiliar. Barbour and Smolin are mentioned: “According to Cooperman, in the current search for a quantum theory of gravitation, Gottfried Wilhelm Leibniz’s philosophy, embodied in his Monadology, is revived in the works of Julian Barbour and Lee Smolin (Cooperman 2007).” Apparently, not everyone today knows that Leibniz was wrong.

            Certainly, any recourse to God as an agent of physical occurrences would be considered “spooky” by moderns, and indeed, so it was with Newtonian gravity: its mysterious “action at a distance” was apparently attributed to the workings of God by at least some theorists, but both Newton and Leibniz found that spooky. Leibniz speculated about a natural explanation; Newton preferred to remain silent.

          2. As you note, we don’t have to posit our world, because we’re in it. (Assuming we don’t go the idealist or solipsist route.) But then, there’s no distinction in the math between the different outcomes. Saying that what exists before the outcomes is a “strange suspension of actuality,” making a distinction between the pre- and post-measurement states, seems itself an assumption, a postulate. Something like it appeared to be necessary in the early decades of QM. After all, we only observed one outcome, so some kind of collapse or reduction had to be happening, or the wave had to be unreal, or so the thinking went.

            But with the development of decoherence theory, a theory that is basically just following the math as the system interacts and becomes entangled with its environment, we find that the math eventually fits our observations. I suspect if it didn’t predict anything else, everyone would have accepted pure wave mechanics decades ago.
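
            A minimal sketch of that idea (my own toy numbers, not a real physical model): each environment particle the system entangles with multiplies the qubit’s interference (off-diagonal) term by an overlap factor below 1, so coherence decays exponentially while the outcome probabilities are untouched.

```python
# Toy decoherence sketch. The coherence term rho_01 of a qubit starting in
# |+> gets multiplied by the overlap <E0|E1> of the environment states it
# entangles with; with many environment particles the overlap goes to zero
# and interference becomes unobservable -- no collapse postulate needed.

def coherence_after(n_env, overlap_per_particle=0.5):
    """|rho_01| after entangling with n_env environment particles, each
    assumed (hypothetically) to contribute the same overlap factor."""
    return 0.5 * overlap_per_particle ** n_env  # starts at 1/2 for |+>

for n in (0, 1, 10, 50):
    print(n, coherence_after(n))
# Off-diagonal terms decay exponentially; the diagonal probabilities
# (the measurement statistics) never change.
```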

            Of course it does predict more, and that’s the sticking point. But should it be? The other predictions are currently untestable. Should they influence our acceptance or rejection of the theory?

            I have to admit I don’t know much about Leibniz’s vortex theory myself. I based my point above on poking around in material like this SEP article (which I really need to read through one day): https://plato.stanford.edu/entries/newton-philosophy/

          3. That superposition is a “strange suspension of actuality” is indeed a postulate. That it might be, instead, a simultaneity of multiple actualities is an alternative postulate. Pure wave mechanics has been accepted by everyone who works with it, in the spirit of “shut up and calculate,” and yet not everyone has accepted the many-worlds interpretation — which is why it’s still called an interpretation, I guess. MW doesn’t just fall out of the equation as an obvious answer. The only thing that falls out of the equation is the result. Also, in this universe there is always only one outcome. It’s not as if, when non-locality happens, we get to observe an outcome both here and in some other universe.

            I’ll have to check out that SEP article. It’s a telling commentary on the SEP that one often ends up thinking, “I really must read through this one day.”

          4. The SEP’s academic style is rarely a captivating read. But it’s more authoritative than Wikipedia and much more complete than the IEP (Internet Encyclopedia of Philosophy). It’s a nice resource for those of us who don’t have the time or money to do a deep dive into the broader academic literature. But it does take time and effort, which is why I often say, “I really have to get around to reading that.”

  2. I vote Ney!

    I actually think wavefunction realism can salvage locality for causality. (Not separability.) This wouldn’t be the locality of the manifest image, exactly, but close enough to explain why we successfully got away with believing in locality as we understood it, without tripping over our own feet.

    In every test of Bell’s theorem, every EPR experiment, Alice and Bob compare their results after the experiment is done. They communicate over light-speed or slower channels. By that time, the decoherence which is involved in their measurements has had time to spread to each other’s part of the universe. On an Everettian view, there’s no problem placing their 4-dimensional experiences into a family of 4D worlds where causality is local.
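
    For concreteness, here’s a small sketch of what those experiments check (my own illustration, using the textbook singlet prediction E(a, b) = −cos(a − b)): the quantum correlations exceed the CHSH bound of 2 that any local hidden-variable account must respect.

```python
import math

# CHSH sketch: QM predicts correlation E(a, b) = -cos(a - b) for spin
# measurements on a singlet pair at analyzer angles a and b. Local hidden
# variables require |S| <= 2; QM reaches 2*sqrt(2) (Tsirelson's bound).

def E(a, b):
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Angle choices giving the maximal quantum violation:
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))  # 2*sqrt(2), about 2.828, beyond the local bound of 2
```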

    When you say we have to account for the manifest image using the scientific image, do you mean explain how lots of it is approximately true? I think that is necessary to avoid the kind of skepticism that Allori seems to be worried about, the kind that would undermine observations like “the sample mass is 83 grams”. But I also think it’s achievable, and I’m not terribly worried about it.

    1. I’m with you on Ney. Although she claims full wave function realism does provide separability. From her book:

      The key innovation of wave function realism is to postulate the existence of a field that is spread out not in the three-dimensional space of our ordinary experience, but instead in the space of the universal wave function. This will be a higher-dimensional space, which, for nonrelativistic quantum mechanics, has the structure of a classical configuration space. So, unlike the primitive ontology approach, which takes as fundamental both ordinary low-dimensional matter and a wave function, in this picture, fundamentally, there is only what inhabits the high-dimensional space of the wave function. The matter itself is viewed by the wave function realist as constituted out of the wave function (and perhaps a marvelous point). The resulting wave function metaphysics is capable of capturing entanglement relations while remaining completely separable. It is separable because all states of the wave function, including the entangled states we have been considering, are completely determined by localized assignments of amplitude and phase to each point in the higher-dimensional space of the wave function.

      Ney, Alyssa. The World in the Wave Function: A Metaphysics for Quantum Physics (p. 87). Oxford University Press. Kindle Edition.

      “When you say we have to account for the manifest image using the scientific image, do you mean explain how lots of it is approximately true?”

      Maybe, if it in fact remains a productive concept. Ghosts and witches, for example, once part of the manifest image, were eliminated. But if it remains useful, then I think that usefulness has to be accounted for. It’s not an option for science to simply dismiss it.

      I agree it’s achievable, for the productive concepts. But my point was that simply judging how close the scientific image is to the manifest one, weighing theories by that metric alone, is a bad strategy. The only reason to engage with it, as far as I can tell, is to make us feel better.

      1. OK, I concede Ney’s point that within the QM framework alone, you seem to get separability. But then we have to reconcile QM with (most of) general relativity, wherein there are proofs that the maximum entropy that can be packed into a given region of spacetime is proportional to the area, not the volume, of the region. This increases the plausibility of various holographic principles, as well as the ER = EPR ( https://en.wikipedia.org/wiki/ER_%3D_EPR ) hypothesis, and I suspect that separability is going to have to be abandoned. Admittedly, this is well above my pay grade.

        1. I think those proofs are related to black holes and their event horizons, which of course represent the maximum amount of entropy that can be packed into a region. But definitely above my pay grade as well. Who knows what a successful reconciliation of QM and GR might change.

  3. This is pretty over my head, but what exactly are the gaps you’re referring to? Is it roughly a “then a miracle occurs” (https://www.researchgate.net/figure/Then-a-Miracle-Occurs-Copyrighted-artwork-by-Sydney-Harris-Inc-All-materials-used-with_fig2_302632920) moment?

    I don’t think there’s anything wrong with trying to follow one’s intuitions or aesthetic preferences, or basically anything else, when it comes to theories. So much of historic scientific progress came from people pursuing ideas that they probably should have given up on, given the evidence. Even some papering over theoretical gaps seems to have been crucial.

    Re intuitions, I recently found out that the idea of absolute time and space was actually not intuitive. Newton himself critiqued the “vulgar”/common view as being that time and space were relative rather than absolute. And at least for time, Aristotle considered it as relative, calling it “the measure of change”, which is not too far from “time is what the clock reads”. And the idea that humans are separate from nature is also only intuitive to certain cultures. Many have no concept of “nature”. Our bad intuitions seem to be generally a product of a particular culture. Which I suppose reaffirms the importance of multiculturalism in science.

    1. Exactly on “then a miracle occurs”. Sometimes we’re stuck with that. It’s what Newton had to do with gravity. He had no idea what it was, only what its effects were.

      But if we do have a step by step accounting of what happens, one that disturbs our idea of how the world is put together, the temptation to dismiss it as “only” a predictive framework can be pretty strong. This gets us into the main argument for scientific realism, the no miracles argument. If an accurate theory doesn’t reflect reality, at least at some level of description, then its accuracy becomes a miracle, a proposition it seems reasonable to be suspicious of.

      Right, the manifest image isn’t a static thing. It is culture specific and changes over time. I noted to Jim that locality hasn’t always been part of that image. And really, locality is more a useful principle in science than part of folk physics, so Allori might be stretching things a bit calling it “manifest”.

      Definitely agree multiculturalism is a benefit in science, particularly on the frontier. Although people sometimes get carried away with that idea, sliding into postmodern waters.

  4. Consider applying “the wave function is real” position to any other descriptive scientific equation, say, Newton’s force of gravity equation. The parameters are real, but the masses, the little m’s, do not represent anything real. The force described is measurable but the equation simply describes the relationship between the parameters and, in itself, isn’t a reality.

    Equations are descriptions, they aren’t real things, just like “a rose by any other name would smell as sweet” is a description but not a real thing. The rose is real. The scent of the rose is real. The claim about the name applied to the rose is not a real thing.

    1. Right, when people debate whether a theoretical concept is “real”, they’re talking about whether what it describes is “out there”, at least at some level of description. And the idea isn’t that it has to describe the reality perfectly to be real, just approximate it closely enough to make more accurate predictions about it.

      And it isn’t to say the thing being described is necessarily fundamental. It might well be something emergent from some lower level reality. In fact, it seems prudent to me to always assume that’s the case. We can’t know whether we’ve hit bedrock, and that future generations won’t find ways to probe deeper.

      1. I still haven’t seen a proof that space-time is a real thing, a thing that can expand and contract. Similarly, an equation that works or doesn’t work is not collapsing. At one point in time, there isn’t enough information to be definitive and then voila, there is. The equation didn’t have anything to do with it, just as a map cannot cause a landslide.

        1. General relativity seems to get a lot of mileage assuming spacetime is real. If it isn’t, then GR’s accurate predictions start to look magical. But as noted above, spacetime being real doesn’t mean it’s fundamental. It may well be emergent.

          Definitely the collapse is not part of the QM math. Collapse is a postulate, an assumption. Really it was originally just a tag to label the gap between what happens prior to and after a measurement. But it later turned out that just continuing to follow the math gets us decoherence, and an explanation of the appearance of the collapse. I think decoherence changes the collapse from an empirically necessary postulate to a metaphysical one.

            1. Getting accurate predictions is nice but not definitive. All kinds of theories have made accurate predictions in the past, just before being proven wrong.

            Plus, saying that stars curve the space around them is just another way of saying that gravity affects light paths. I also contend that a distortion of space-time does not eliminate the need for a force of gravity. The paths objects could take are one thing, but what causes them to move in the first place? It may be sitting on a curved path, but if it is not moving . . . ?

          2. I actually think predictions are all we ever get. In my mind, the real dispute between the realist and anti-realist stances is in how broad or narrow a theory’s predictions are accurate. A realist is willing to assume they’re broad enough that we should expect various theories to reconcile. An anti-realist assumes they only apply to current observations. Of course, most people are somewhere in between, with positions that vary depending on the theory in question and their metaphysical preferences.

            If an object is sitting still but on curved spacetime, similar to a ball on a curved surface, isn’t it going to start moving due to that curve? We could say that it’s not really warping spacetime, that the warping is just a metaphor, that the effects are “as if” spacetime were being warped. But if so, the metaphor seems so accurate that it’s just easier to talk of it warping. Maybe someday we’ll discover a better metaphor, but we shouldn’t expect the current one to be invalidated for the domain it’s been useful in.

            3. Re “If an object is sitting still but on curved spacetime, similar to a ball on a curved surface, isn’t it going to start moving due to that curve?” Sure . . . if there is a force acting upon it, like gravity, maybe? Advocates of GR claim that curved space-time amounts to an argument that there is no force of gravity.

          4. I take what they’re claiming to be that the force of gravity is the warping of spacetime. It does make gravity different from the other forces, which seems to fit given how much struggle there’s been to get a full theory of quantum gravity.
