The benefits of wave function realism?

The central mystery of quantum mechanics is that quantum particles move like waves but hit and leave effects like localized particles. This is true of elementary particles, atoms, molecules, and increasingly larger objects, possibly macroscopic ones. It’s even true of collections of entangled particles, no matter how separated the particles may have become.

People have been arguing about how to interpret this for almost a century. A key question in an interpretation is, how should we regard the wave function, the tool used to model and predict the evolution of the wave? Is it modeling something real? Or is it just a convenient fiction, a mathematical contrivance that is really just modeling probabilities? Are quantum particles really ever waves?

As many of you know, on this question, I fall into the realist camp. We have the wave function because of empirical data showing interference effects. And too many quantum phenomena, like tunneling, electron orbitals, or the Aharonov-Bohm effect, just make a lot more sense in terms of waves, but are spooky when viewed with a strict particle ontology.

So I think the waves are real and we have to deal with the implications. But I’ve generally been uncommitted on the nature of that realness. It’s not clear that we can know how much of it is a substance of some kind, and how much is modeling relations between substances. The structures seem real, but what the structures are structures of seems beyond the current boundary of knowability.

And there’s the whole issue that the wave function seems to operate in 3N dimensions, where N is the number of particles being modeled. In an observable universe of 10^80 particles, that’s a lot of dimensions.

These, and other issues, make most physicists resistant to accepting full wave function realism. Even many Everettians (many-worlders), like David Wallace, are reluctant to sign on. However, there are people who support it. This week I came across an interesting article by Alyssa Ney advocating for exactly this type of realism (warning: possible paywall). In her view, the wave function is fully real, in the sense that it represents a physical wave in a field operating in a higher dimensional space.

My initial reaction to this was to wonder how much this is actually saying. A lot hinges here on what we mean by words like “physical”, “wave”, “field”, “dimension”, or “space”. Taking “physical” to mean working according to rules or laws, “wave” as a topological category of processes, “field” to mean a field of something, “dimension” as a degree of freedom, and “space” as configuration space, it’s not clear we’ve moved much.

Poking around in her book, The World in the Wave Function, by “physical”, Ney seems to mean res, or substance. But even here we still run into the fact that the substances we commonly think of, such as the wood in furniture, are ultimately a series of structures, patterns of processes. Do we hit ultimate substance with quantum fields? Or just a new boundary of structure and processes?

So the question for me is, what does buying this ontology get us? Ney has an answer: locality. Now, locality in quantum mechanics is usually broken up into two components: causal locality and separability.

Causal locality is the one most of us think of when we think about locality. It’s the principle that you can’t have action at a distance, instantaneous (or faster than light) effects from causes that are distant in time and space. The evolution of the wave function is usually considered to be causally local, at least until a physical collapse, if in fact there are collapses. So it’s not clear what benefits full wave function realism provides here.

Separability just means that, for a system distributed over multiple spacetime regions, there’s nothing in the accounting of the system as a whole that isn’t contained in the accounting of its components. Most physicists, including most Everettians, accept that this is violated by the correlations in entangled particles.
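One way to make that violation concrete (a minimal sketch, using a two-qubit Bell state as a stand-in for particles in separated regions): a joint state is separable exactly when its coefficient matrix has Schmidt rank 1, and an entangled state does not.

```python
import numpy as np

# A two-qubit Bell state (|00> + |11>) / sqrt(2), written as a 2x2
# coefficient matrix C, where the joint state is sum_ij C[i,j] |i>|j>.
bell = np.array([[1.0, 0.0],
                 [0.0, 1.0]]) / np.sqrt(2)

# A product (separable) state |+>|0> for comparison.
plus_zero = np.outer(np.array([1.0, 1.0]) / np.sqrt(2), [1.0, 0.0])

def schmidt_rank(coeffs, tol=1e-12):
    """Count nonzero singular values. Rank 1 means the joint state
    factors into independent per-particle states; rank > 1 means the
    whole contains correlations the parts alone don't account for."""
    return int(np.sum(np.linalg.svd(coeffs, compute_uv=False) > tol))

print(schmidt_rank(plus_zero))  # 1: separable
print(schmidt_rank(bell))       # 2: entangled
```

The rank test is just a compact way of asking whether the accounting of the whole reduces to the accounting of the components.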

Ney’s assertion is that accepting wave function realism provides separability. Here, I think I’ll just quote the key passage from her book.

It is separable because all states of the wave function, including the entangled states we have been considering, are completely determined by localized assignments of amplitude and phase to each point in the higher-dimensional space of the wave function.

Ney, Alyssa. The World in the Wave Function (p. 87). Oxford University Press. Kindle Edition.
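Her claim can be illustrated with a toy version (my sketch, not from the book): even a maximally entangled state is nothing over and above a table assigning one complex amplitude to each point of the joint configuration (index) space.

```python
import numpy as np

# The Bell state (|00> + |11>) / sqrt(2) as a pointwise field: each
# point of the two-particle configuration space gets one amplitude,
# a purely local assignment in that higher-dimensional space.
amplitude = {
    (0, 0): 1 / np.sqrt(2),
    (0, 1): 0.0,
    (1, 0): 0.0,
    (1, 1): 1 / np.sqrt(2),
}

# Everything about the state, including its entanglement correlations,
# is recoverable from this table alone. For example, the Born-rule
# probability that the two outcomes agree:
p_correlated = sum(abs(a) ** 2 for (i, j), a in amplitude.items() if i == j)
print(round(p_correlated, 12))  # 1.0
```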

This might be true for the individual value assignments of the wave function. But it’s not clear to me that the relations themselves can be accounted for separately, at least not in the bra-ket notation I’m familiar with. On the other hand, David Deutsch claims to demonstrate separability using Heisenberg matrices, which seems to resonate with Ney’s assertion here.

Ney admits that this only provides separability in terms of the higher dimensional reality. If we restrict our view to 3D space, then we’re still faced with non-separable phenomena, just with that non-separability rooted in a separable reality in the higher dimensions.

What does seem clear to me, which hadn’t before, is that the correlations, even if non-local, are fully accounted for and encoded in the wave function. And it seems easier to think about that accounting in some higher dimensional framework. So I don’t know if it necessarily demonstrates separability, but it does make understanding how the entangled correlations exist and persist easier than when trying to map them into 3D space, at least for me. That seems true as much for qubit circuits in quantum computers as it is for particles separated by light years.

Ney works hard to avoid making this about any specific interpretation of the measurement problem. Obviously wave function realism sits most easily with the Everett many-worlds approach, and Ney admits that’s where her sympathies lie. But she also spends a lot of time discussing the implications for Bohmian mechanics (pilot-wave theories) and objective collapse models (mostly focusing on GRW, although she does mention Penrose’s version in an endnote).

Interestingly, she notes that wave function realism doesn’t necessarily sit that well with Bohmian mechanics. The reason is that Bohmians, while regarding the wave function as real in some sense, tend to relegate it to some kind of ghostly secondary ontology, with the particle itself only having primary ontology. (Many Everettians regard the Bohmian approach as many-worlds in denial, relegating the other worlds to that secondary ontology.)

She also makes the case that the spontaneous collapses in GRW don’t, strictly speaking, violate causal locality. Since the collapse is spontaneous with no external cause, there’s no action at a distance, just a whole bunch of spontaneous actions throughout the extent of the field. Not entirely sure I buy this one, but it’s an interesting point.

Ney’s wave function realism has the wave function existing in a higher dimensional configuration space. (At least in the non-relativistic version. She does address the relativistic version, but I haven’t attempted to parse that section of the book. My QFT is too limited.) It’s worth noting that this is somewhat in contrast to Sean Carroll’s wave function realism, where he asserts that reality is a ray in Hilbert space. Ney doesn’t find this view compelling and I have to agree. Reality may end up summing up to a ray in Hilbert space, but saying it is just that ray doesn’t seem productive. (To be fair, I haven’t attempted to navigate Carroll’s paper on this.)

Ney, at the end of her book, discusses the issue of the incredulous stare, the reaction that wave function realism is simply absurd and should be rejected, without trying to find a logical reason to do so. She obviously doesn’t think this is a valid reaction. I agree with her. The wave function obviously works with all its infinite dimensions, which means that simply dismissing aspects we don’t like is really just flinching from what the data seem to be telling us, taking the blue pill rather than the red.

On the other hand, I’m not sure if Ney’s argument has moved me from my original position. I’ll have to give it more thought. I do think it provides a good way of thinking about the wave function, which, at least for me, seems like progress. But I still leave room for the underlying reality to be something very different. That said, I suspect any hope that underlying reality would be less bonkers than what we’re already looking at is likely to be forlorn.

What do you think of wave function realism? Are there aspects you think are more likely to be real than others? Or do you go for a full epistemic view? If so, where do you see the interference effects coming from?


39 thoughts on “The benefits of wave function realism?”

    1. Honestly with a lot of this stuff I’m standing on my tiptoes and occasionally going under completely. But I think your question is a very good one. The most common answer I see is that we’re dealing with probabilities, and that it’s the probabilities that are interfering with each other. Like many things in QM, I think people only get satisfied with that answer because they get used to it.


      1. To be clear, the probabilities (which always occur) and interference (which doesn’t always) are separate phenomena. Interference, if it occurs, occurs before “measurement” — the probabilities kick in as a consequence of that.

        Mathematically, interference is a consequence of the way complex number math works. Complex numbers have a phase, and that phase interferes when two complex numbers combine. The probabilities are a consequence of the (square of) the inner product between a proposed measurement eigenvector and the current position of the system’s state vector. (But the location of that state vector has been already determined by any interference that might have occurred.)
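        That two-step picture can be sketched numerically (illustrative amplitudes, not any particular experiment): the complex amplitudes for the two paths are added first, and only afterward does the Born rule turn the combined amplitude into a probability.

```python
import numpy as np

# Two paths to the same detector, each carrying a complex amplitude.
# Interference happens when the amplitudes are added; the Born rule
# (|amplitude|^2) applies afterward, to the combined amplitude.
def detection_probability(phase_difference):
    a1 = 0.5 + 0j                             # amplitude via path 1
    a2 = 0.5 * np.exp(1j * phase_difference)  # amplitude via path 2
    return abs(a1 + a2) ** 2

print(detection_probability(0.0))    # 1.0 (in phase: constructive)
print(detection_probability(np.pi))  # ~0  (out of phase: destructive)
```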

        Physically, to the point of your post, as you know, interference is a key mystery in QM. That it occurs supports some kind of physical reality, but we haven’t identified it, yet. (It doesn’t sound to me as if Ms Ney has moved the needle.) Why reality has a built-in Born Rule is also a big mystery. I do like living in a random universe, though. 🙂


        1. I like that you quoted “measurement”. It’s definitely an ambiguous term, and you know how I feel about unacknowledged ambiguities. 🙂

          The thing about randomness is, unless Hossenfelder is right about superdeterminism, there won’t ever be a way to cash out any determinism in our emergent universe. So you’ll be happy even if a deterministic interpretation turns out to be true. (Unless ultimate determinism makes you unhappy.)


          1. Heh, I quote “particle” and “measurement” a lot! Because, yeah, totally. 😀

            We might live in a universe that is deterministic but still has entanglement (and, thus, quantum non-locality). Back when Brian Greene infected me with his string theory enthusiasm, I pondered the idea that Planck-scale spinning strings might easily account for spin. Spin-0, spin-1/2, spin-1, etc come from multiples of the string vibration. The orientation of the string accounts for spin interactions. It would be deterministic as far as the string, but below our ability to ever detect. We can only measure particle properties.

            Note that spin is a “gross” property — particles only ever have two values for it. The quantum ambiguity of state is huge here (whether a photon passes through a polarizing filter or not). Consider other inherent particle properties, such as mass or charge. There is uncertainty there, too, but only on the tiny quantum scale (as with the canonical example of position and momentum). Spin accounts for magnetism, too, so it feels like a “large” property.

            Such a property could, fancifully, come from the physical orientation of a tiny spinning string. (Initially, string theory looked like it answered a lot of questions. Unfortunately, it doesn’t seem to apply to our universe, nor have we found supersymmetry, which it needs.) Oh, well, it was a nice idea, and I still find “particle” spin intriguing.


  1. Although I did a year of quantum mechanics at university a long time ago, I’m not up to speed on this now, although I can understand why there’s a question to answer about the ‘real’ nature of the wave function. In lieu therefore of an informed question, I’d be interested in your thoughts on what ‘real’ means here, as the common sense view that a rock is real because I kick it and hurt my toe, and it’s still there when I come back tomorrow, doesn’t seem enough!


    1. From what I’ve read, university quantum mechanics courses aren’t great at covering these types of issues. The “shut up and calculate” sentiment keeps them out of foundational matters. It’s worth noting that Ney is a philosopher. A lot of quantum foundations work comes from philosophers of physics, or physics minded philosophers.

      Good question on what is real. That is itself pretty philosophical ground, notably ontology. On some level, these debates risk becoming semantic, about what we’re willing to give the label “real”. But my answer is that anything that is part of the causal chain counts as real. If it causes or is caused by other things we consider real, then it itself is real. If we can choose to ignore it without consequence, then it might be imaginary. Real stuff has a necessity. We need it to account for our experiences. We can only deny it by being illogical.

      That’s my answer today. It might be different in the future. 🙂


  2. This may be above my pay grade too.

    I wonder if really we have just reached the limits of measurement. In QM, I assume we could measure something and get one value but, if it were measured a nanosecond before or after, it could have been a different value. This inability to measure finely enough underlies the concept of chaos. We can’t predict even the motions of three bodies – their paths become chaotic – over a long enough time frame in part because we cannot measure finely enough the initial conditions. Perhaps a wave better represents this probabilistic universe than alternatives.
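    The measurement-precision point is the classic signature of chaos, and can be made concrete with a toy system (the logistic map as a stand-in, not the three-body problem itself):

```python
# Sensitivity to initial conditions in the chaotic logistic map (r = 4):
# count how many steps it takes two orbits, starting one part in a
# trillion apart, to visibly diverge.
def steps_to_diverge(x0, delta, threshold=0.01, r=4.0, max_steps=200):
    x, y = x0, x0 + delta
    for n in range(1, max_steps + 1):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if abs(x - y) > threshold:
            return n
    return None  # never diverged within max_steps

# A 1e-12 measurement error swamps the prediction within a few dozen
# steps, since the gap roughly doubles each iteration.
print(steps_to_diverge(0.2, 1e-12))
```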


    1. I think anyone who feels confident about QM, even those with a thorough command of the mathematics, probably doesn’t understand the implications. As Richard Feynman said, if you think you understand quantum mechanics, you don’t understand quantum mechanics.

      I recently read something about the three body problem, where people made progress on it, but by treating it like a chaotic system and working with probabilities. The question is whether, as Sabine Hossenfelder thinks, QM indeterminacy is a matter of chaos theory dynamics, or as most people think, something fundamental, or something that is deterministic but only at an ultimate level we can’t access.


      1. Actually I was going to mention the three body problem and the use of neural networks to “predict” the future paths. It’s vaguely related to something else I’ve been thinking about recently – the number of connections between science and neural networks. There are the physics-informed neural networks, the use of artificial neural networks for original science, and not least the notion that the universe is a neural network being trained by itself. The latter has appeared recently in the form of the autodidactic universe (Lee Smolin among others).

        https://arxiv.org/abs/2104.03902

        The physical seems increasingly to take on characteristics of how we envision minds to work.


        1. It’s always seemed to me that a simple equating of the universe to one neural network has a major issue, the speed of light limit coupled with the ongoing expansion of the universe. Together, they seem to make any chance of the universe being a functional neural network pretty suspect. It seems like a universal mind wouldn’t have time for a single coherent thought before it’s rent asunder.

          Maybe it could be argued that the portions that are gravitationally bound to each other might each be their own networks. That at least provides a lot more time to work with.

          But based on the paper’s abstract, it’s not clear that’s what they’re proposing. The relation to neural networks seems more analogical. But I haven’t gotten into the paper proper. Have you read it?

          Lately I’ve been wondering if it makes sense to think of the universe as a giant quantum computer, albeit an analog one.


          1. I’m not sure there needs to be a universal mind. Compare the evolution of the human mind or minds in general. They came into existence without a master plan. There doesn’t need to be instantaneous communication across the entire universe.

            I think we may be looking at emergent properties of particles, atoms, and molecules acting as a group. On the more macro level, it is the same thing when slime molds and biofilms organize to form networks. There are communications between parts that spread outward and eventually develop into what seems to be a coordinated, smart network.

            For that matter, we hardly have any idea what sort of processes occurred in the seconds and even many years after the big bang when everything presumably was much closer together and many of the core features of the physical universe may have been set.


          2. Quantum non-local causality, if it exists, is pretty limited, in a way that wouldn’t enable non-local communication. Of course, to all your other points, we don’t know what we don’t know. But any new discoveries have to be compatible with what is currently known.


        2. One consequence of the three-body problem is that, while we have precise versions of the Schrödinger equation for the hydrogen atom, we can only approximate it for helium atoms (let alone what astronomers consider “metal” atoms). This is because the Schrödinger equation is just a quantized version of Newtonian laws of motion, and those become chaotic with more than two bodies. (Part of the quantizing quirk is that, different from classical physics, momentum and position are mutually uncertain.)

          I thought of the neural network stuff regarding three-body problems. It shows some promise. The universe obviously figures it out (and in real-time, too). I assume the NN trains to model reality and “moves” in a similar enough fashion. Interesting use of a NN.

          Put me down for skeptical to the point of disdain regarding panpsychism ideas about the universe. 😉


          1. Your skepticism is well-known.

            I would encourage a rethinking of mind and matter in favor of the idea of evolving complexity in both matter and mind. I don’t think electrons are conscious but even in the most physicalist worldview it is electrons that are powering the brain and consciousness.


          2. So? They power my flashlight and the static sparks I’ve been getting in the winter. They’re also responsible for chemistry. They’re a fundamental particle, one of the first we discovered, so of course they’re involved in, well, everything.


          3. One part of the Standard Model in our current conception of things. Wonder how those symmetries emerged? Atoms, molecules, complex molecules, life molecules, conscious molecules – at what point do interactions stop looking like something predictable by a neural network? It could simply be that our models, since they are generated from a network of real neurons, start to resemble neural networks themselves.


          4. I think we have many models nothing like neural networks, so I don’t think neural networks can only produce models that resemble neural networks. (Of course, it’s not surprising that different kinds of neural networks can produce similar results.)

            FWIW, I think the symmetries are fundamental structural axioms everything descends from. Math is because of them; they’re the Platonic realm. A sphere is obvious and discovered (rather than invented) because of the structural regularities of nature. The patterns of math, even a priori ones, reflect that regularity because our brains are structured by the same regular nature.


          5. I’m skeptical of Platonic realms. I think more likely the universe emerges bottom up from some sort of evolutionary learning principle. At every level there are imperfections. The complexity of reality comes from the imperfections: matter/antimatter imbalance, chaos and order. Is there a perfect sphere in nature?


          6. I don’t think complexity comes from imperfections, but from the combination of basic building blocks. We have seen in many areas that a collection of primitives and a basic set of rules allows complexity to grow. From 26 letters and the rules of English, every book ever written. From two quarks, one lepton, and one boson, plus the rules of the SM, every object we can touch, see, or know. Think of all the things you made from building blocks when you were a kid.

            It’s a striking thing. A fundamental rule of reality appears to be: Entropy Always Wins. And, yet, the universe allows entropy to be reversed (in spots, for a while) because of this facility for allowing complexity. It’s why we exist.

            I am likewise skeptical of Platonic realms as distinct from our own. What I was saying is that this universe’s basic physics and structure are (in effect) that Platonic realm. We can abstract the perfect sphere from all the real-world spheres because the rules of reality support the abstract notion of spheres.

            FWIW, back in 2015 I wrote a post about the inevitability of math — that basic math seemed derivable a priori based on nothing but conscious thought. Lately I’ve been pondering how true that is of geometry. Does it require experiencing reality, or could it be derived a priori? It seems less certain to me, anyway. It depends on a notion of spatial extent, which might supervene on experience.


  3. This is very interesting, Mike, and I like your analogy of standing in the deep end with the wave peaks occasionally (if not regularly) passing right over my head. It’s the use of additional dimensions that is interesting, I suppose, and to your point: what does this mean for “substance”?

    I read recently a paper (can’t remember where now) that said a form of General Relativity in five dimensions, as opposed to the usual four, could recover the 4D equations of quantum mechanics… But this is a 5D or more quantum mechanics right? And then string theory has been deploying additional dimensions to resolve all sorts of things. I’m not opposed to it just wonder how we get to a point where we can devise an experiment to prove their existence? Or have we done it already and just haven’t figured out what we’ve done yet? Certainly possible.

    I do agree it’s very hard to explain the basic interference effects as being completely unrelated to something physically happening. As to particles behaving like particles when we measure them, a physicist I was talking to once told me that’s not quite right, and no matter how fine a measurement we make, when the measurement device supports it we always find waves. Just very confined ones in the particle cases. But it’s never just a pure point… A fact I’m not sure what to do with either… Haha.

    Michael


    1. Thanks Michael.

      I haven’t heard about that 5D GR thing yet, but I definitely know about string theory and its extra dimensions. The difference though, at least as I understand it, is that string theory posits these extra dimensions as extra dimensions in actual space, but supposedly existing curled up in some microscopic manner.

      Wave function realism doesn’t seem to mandate that its (essentially infinite) dimensions be ones in our normal spatial framework, although it makes sense to think of regular 3D space as embedded in, or emergent from, the overall configuration space. (And in the Everettian approach, just one of many.) Of course, the exact nature of this embedding probably requires a theory of quantum gravity to be mapped out. But this is definitely an area where I’m underwater.

      I hadn’t heard that about measurements. I know many physicists think of the particle as where the wave has essentially “snapped to”, with all its energy (or probability, or whatever) essentially now concentrated in that spot. But a lot of people seem to think that elementary particles at least are point entities, existing in a location but with no spatial extent. I don’t know that there’s any way to empirically validate that, since no measurement is ever perfectly precise. There is always a degree of uncertainty in any measurement. I wonder if that’s what that physicist meant. If not, I’d be really interested to learn more.


      1. Hi Mike,

        There is a paper on-line at the de Broglie foundation. Search in google for “From atomism to Holism in 21st century physics” and I believe the link will come up. The author is Mendel Sachs. This article I think appeared in some sort of physics anthology (in book form). If you find the right PDF it’s 9 pages, and the discussion on page 4 contains the following:

        Consider, for example, the cathode ray tube experiment of J.J. Thomson, wherein he discovered the electron[3]. One sees a small spot on the fluorescent screen of a cathode ray tube, interpreted as discrete electrons landing there. But in actual fact this is not a singular point. The image on the screen has a finite, though small spread. It does not actually disappear altogether until the edge of the screen is reached. Indeed, if one should examine the ’electron spot’ with sufficient resolution, one would see a diffraction pattern inside of it!

        [cont’d] It is my contention that there are no experiments that reveal conclusively that an electron (or any other of the ’elementary particles’) is indeed a localized, discrete particle of matter. What the contemporary physicist does is to extrapolate theoretically from the observed facts to say that indeed there is a singular, discrete particle that is responsible for the non-singular, finite spread of illumination that is then identified with the electron. What is said is that this finite spread is due to the unavoidable interference between the singular electron and the measuring apparatus that views it, as explained with the Heisenberg uncertainty principle. But is it not also possible that there is no singular particle in the first place?

        This view may be dated or disproven by now. Or maybe the received notion of uncertainty covering this apparent spread in “point particles” is widely/generally accepted. It’s interesting though!


        1. Hi Michael,
          Thanks! Googling brought up this link: https://fondationlouisdebroglie.org/AFLB-26j/aflb26jp389.pdf
          Sachs also has a Wikipedia page: https://en.wikipedia.org/wiki/Mendel_Sachs
          I read Section 3, but might have to return and read the whole thing.

          I have to admit the point about seeing a diffraction pattern in the spot initially blew my mind. Although thinking about it, the spot isn’t a one-to-one picture of the electron, but an effect caused by the electron hitting one of the atoms in the screen. Obviously for us to see it, that had to involve a cascade of after-effects throughout the surrounding atomic structure, a wave in and of itself, which is likely where the diffraction pattern is coming from. Maybe.

          But he does make an excellent point. It’s often said we never observe the wave, only the particle. But that’s wrong and misleading. We never observe either, at least not directly. We only infer both from their macroscopic effects. We should probably stay open to any alternate explanations for those macroscopic effects. Our attachment to a particle ontology might be the legacy of ancient Greek atomism.

          Here’s something related that I’ve been pondering lately. Consider the mirrors used in quantum experiments. Often the experiment involves bouncing a photon off the mirror. (Ex: https://en.wikipedia.org/wiki/Mach%E2%80%93Zehnder_interferometer ) The photon maintains its wave nature both before and after its interaction with the mirror. But for that to happen, the photon has to be absorbed by an electron and then re-emitted. However it can’t just be one electron in one atom. It has to be happening in a large number of atoms throughout the extent of the diffuse wave hitting the mirror.

          Why does the interaction with the mirror maintain the wave status but not the interaction with the screen at the end of the experiment? I think the difference is the silver atoms in the mirror re-emit the photons with no after-effects, leaving no trace (or very little) of the photon’s passing. Also, the coherency of the wave is preserved by the mirror, but not the screen.
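          That asymmetry can be seen in an idealized model (a sketch with standard textbook matrices; real optics is messier): the mirrors act as pure phase shifts that keep the two-arm superposition coherent, so the final beamsplitter can still produce interference.

```python
import numpy as np

# Idealized Mach-Zehnder interferometer: the photon's state is a
# 2-vector of complex amplitudes over the two arms.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)   # 50/50 beamsplitter
MIRROR = np.array([[1j, 0],
                   [0, 1j]])            # ideal mirrors: a phase, no decoherence

def output_probabilities(extra_phase=0.0):
    phase = np.array([[np.exp(1j * extra_phase), 0], [0, 1]])
    state = np.array([1, 0], dtype=complex)  # photon enters one port
    state = BS @ state       # split into both arms
    state = MIRROR @ state   # mirrors preserve the superposition
    state = phase @ state    # optional path-length difference
    state = BS @ state       # recombine: interference happens here
    return np.abs(state) ** 2            # Born rule at the detectors

print(output_probabilities(0.0))    # one detector fires with certainty
print(output_probabilities(np.pi))  # a phase shift flips the outcome
```

A screen, by contrast, would amount to an irreversible absorption partway along one arm, destroying the coherence the final beamsplitter needs.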


          1. Reflection has both a classical and quantum explanation. The classical one treats light as just a wave that gets electrons in the incident surface “wiggling” — changes their energy level. These re-radiate the energy and the electron relaxes, and all those radiating electrons act like tiny emitting antennas. The combination of those signals determines which way the reflection (or refraction) goes.

            The quantum explanation involves the phases of the “particles” taking all possible paths and those paths adding up to the path taken.

            In both cases it depends on the characteristics of the atoms in the incident surface. Light always penetrates many atoms deep into a surface. If the photon’s energy is compatible with the atoms’ electron energy levels, it can be absorbed. If not, it keeps going. X-rays penetrate most surfaces because the high-energy photons have too much energy for most electron jumps. Photon penetration depends on its energy — which, of course, corresponds to its frequency.

            Screens absorb photons because they do something with the absorbed energy. In the case of photo film, for instance, it reduces a silver halide into a silver atom. That transformation uses up the energy. In opaque surfaces, it just causes heating. Hot sidewalks, etc.


    2. Yeah, it goes way back to the 1970s and early supergravity theories that attempted to bring gravitons into the particle zoo and reconcile gravity with the EM force. But it required an extra degree of freedom, hence an extra dimension. (Raising the question, well, where is it?) Such theories are supersymmetric, if not stringy. By the time we had an understanding of the weak and strong forces, it was 11 dimensions (including time).

      However, no shred of a sighting of supersymmetrical particles, and ST isn’t even about our universe yet, so these theories are just speculative ideas so far.

      As for “particles” … 😀 😀 yeah, that’s what I’ve been saying for some time now. There’s no such thing as a “particle” but some whatever kind of waving medium is going on often has very localized interactions. When we “measure” something is only one example. Every photon that lands somewhere is an example.

      BTW: Points are a huge problem even classically. They’re a kind of infinity, and nature doesn’t usually do infinity.


  4. Quantum effects occur at very small scales. The concepts of waves and particles were created at a much larger scale. So, when we find those concepts don’t work so well at quantum scales, what do we think? We think the quantum scale is weird.

    This is a little like learning the use of hammers, and rip saws, and torque wrenches at our human scale and then when finding those tools don’t work so well on wristwatches, declaring wristwatches to be weird.

    We need to do what the watchmakers did–create new tools. The concepts of waves and particles don't work so well at the quantum level, and there have been some attempts at creating new concepts, like “wavicles,” but those attempts were feeble. I hear physicists throwing up their hands and saying things like “maybe QM will never be understandable.” Of course what they are really saying is that maybe we can't take the patterns we discovered at human scales and apply them to quantum scales. Duh, d'ya think? Where are the new concepts? New ideas? The only ones I read are in self-published books, which equates to “not accepted by the powers that be.”


    1. On quantum effects only happening at very small scales, I used to think the same thing. But consider the wave of a photon being shot through the double slit experiment. The slits it passes through and the interference pattern on the second screen are spatially macroscopic. It's just that the final mark left by any individual photon hitting the back photoreceptive screen ends up being very small.

      What the data actually seems to be telling us is quantum effects require very isolated systems. That isolation is just a lot easier to pull off with one or a few particles, and increasingly harder as we move up in scale with ever more particles involved. But physicists are constantly pushing that boundary, reportedly up to macroscopic objects. https://physicsworld.com/a/quantum-entanglement-of-two-macroscopic-objects-is-the-physics-world-2021-breakthrough-of-the-year/

      If it were just switching to new tools, I think a lot of people would be fine with that. What's bewildering is that which tools you need seems to change depending on how you look at the watch. And everyone argues endlessly about why the switch happens and what it means, including whether there's only one watch or many, or whether the watch itself is just a convenient metaphor for something else unfathomably strange.


  5. I think it’s very important to distinguish between the mathematical tool we call the “wave function” (or “wavefunction” or “wave-function”) and the physical reality. I think it can be metaphorically misleading to conflate the notion of a state vector moving in Hilbert space (a wavefunction) with whatever “particles” are doing in the real world.

    Exactly as you say, Carroll’s view equating reality with a ray in Hilbert space seems too Tegmarkian. It does seem more like a model than a reality. But that’s exactly what the Schrödinger equation does — move a ray around in N-dimensional Hilbert space. Carroll constantly insists that the MWI is just taking the Schrödinger equation at face value, assuming it applies to the universe, and adding nothing new. When you do that, reality is a ray in Hilbert space. And if you insist that the Schrödinger equation is real, then so is the ray.

    Now you see why I've always said the MWI smells Tegmarkian to me (I've also said that's the one context where it makes sense). If it's not Platonic, then it has the same mystery between the math and reality. (Perhaps now you also see why I've come to look somewhat askance at Sean Carroll. He may be over-invested in his own views.)

    I’d put it this way: The math we have models our observations very well. That we don’t understand how the math applies to physical reality should be seen as a certain indicator we lack a full picture. That the math doesn’t include gravity is another strong indicator our view is incomplete. Yet in its domain, our model is one of the most accurate, to many decimal places, in human history. (The irony is that its competition is the other most accurate theory in human history.)

    Obviously interference and superposition are real phenomena. Something makes that pattern, one that looks exactly like the pattern we get with water, sound, and radio waves. Those are cases of energy waves interfering. Somehow matter waves (de Broglie waves) interfere, too.

    Mathematically, we describe matter waves with complex numbers; the ray in Hilbert space never changes its amplitude (length). That suggests why matter waves don't look wavelike. Matter waves can still interfere because complex numbers also have a phase, which can reinforce or cancel when two combine.
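    The phase point can be made concrete: two complex amplitudes of equal magnitude sum to very different probabilities depending only on their relative phase. A minimal sketch (amplitudes of 1/2 each are a toy choice):

```python
import cmath
import math

# Two paths contribute equal-magnitude complex amplitudes (1/2 each,
# a toy choice); the observed intensity is the squared magnitude of
# their sum, and depends only on the relative phase.
def intensity(phase_difference):
    a1 = 0.5
    a2 = 0.5 * cmath.exp(1j * phase_difference)
    return abs(a1 + a2) ** 2

print(intensity(0.0))       # 1.0  (in phase: constructive)
print(intensity(math.pi))   # ~0.0 (out of phase: cancellation)
```

    The magnitudes never change; only the phases decide whether the contributions reinforce or cancel, which is all interference needs.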

    I think QFT is roughly the right picture. “Particles” are wavelets in quantum fields. The very nature of wavelets allows them to be spread out or localized. As I think you know, I'm fine with the quantum non-local aspect. I accept the apparent randomness (I prefer a universe that is), but I'm open to small-scale phenomena that might explain the Born Rule.

    Speaking of quantum non-locality, firstly, I accept experiments that appear to demonstrate it. (Frankly, I think quantum non-locality and the Born Rule being fundamental make reality more interesting.) Secondly, Ney needs to provide math to be taken seriously. The notion of inseparable states — of entangled states — jumped out of the quantum math early on. It’s what led to the EPR paper. If we take the math seriously, they are fundamentally different from separable states.

    Separable states factor into a tensor product of single-particle states:

    |\Psi\rangle=|\psi\rangle_{1}\otimes|\psi\rangle_{2}\otimes|\psi\rangle_{3}\otimes\ldots

    Inseparable (entangled) states are superpositions of such products that cannot be factored, for example:

    |\Psi\rangle=\tfrac{1}{\sqrt{2}}\left(|\uparrow\rangle_{1}\otimes|\downarrow\rangle_{2}+|\downarrow\rangle_{1}\otimes|\uparrow\rangle_{2}\right)

    Those are very different. A state that factors versus a sum that can't be factored. The tensor product intertwines the particles into a single vector in the composite Hilbert space, and for an entangled state that vector can no longer be decomposed into contributions from the individual particles.
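    That inseparability can even be checked mechanically: reshape a two-particle state's coefficients into a matrix, and the state factors into a product exactly when that matrix has rank 1. A minimal NumPy sketch (the particular states are chosen for illustration):

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Separable: an explicit tensor product of two single-particle states.
separable = np.kron(up, down)

# Entangled: a superposition of products that cannot be factored.
entangled = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)

def schmidt_rank(state):
    """Rank of the 2x2 coefficient matrix: 1 means the state factors."""
    return np.linalg.matrix_rank(state.reshape(2, 2))

print(schmidt_rank(separable))  # 1: factors as |psi>_1 (x) |psi>_2
print(schmidt_rank(entangled))  # 2: no such factorization exists
```

    This rank test (the number of nonzero Schmidt coefficients) is the standard way the math distinguishes the two kinds of state.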

    So I’d really need to see the details of her hidden variable theory.

    Lastly, about Hilbert space and many dimensions. It isn't just that you need on the order of 10^80 dimensions. Each of those dimensions uses a complex number. Each dimension is, in a sense, a 2D plane. My point is that this makes it all the harder to square with physical reality. Quite the mystery.
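    For scale, the cost of taking the configuration-space wave function literally can be sketched in a few lines (the grid resolution is my own toy assumption):

```python
# Storing a wave function over 3N-dimensional configuration space,
# discretized with k grid points per axis, takes k**(3*N) complex
# numbers. The grid size k=10 is a toy choice.
def amplitudes_needed(n_particles, k=10):
    return k ** (3 * n_particles)

for n in (1, 2, 3, 10):
    print(f"{n:2d} particles: {amplitudes_needed(n):.0e} complex amplitudes")
```

    Even ten particles on a crude ten-point grid already need 10^30 complex amplitudes, which is why nobody stores wave functions this way for anything but tiny systems.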

    I do think wavelets in quantum fields account for it. I think that wave-like behavior works its way up the size scale a fair degree but eventually gets swamped out into emergent classical behavior. Single voices in a crowd of gadzillions.


    1. Still can't say I see any strong link between many-worlds and Tegmark's mathematical universe hypothesis. Certainly Everett was motivated by following the quantum formalism, and that is definitely compatible with the Tegmarkian view, but so are other interpretations. I think characterizing Carroll's view as being that the Schrodinger equation, in and of itself, is somehow the reality is strawmanning him. He's fully cognizant of Tegmark's views, and has said he's not a Tegmarkian on multiple occasions.

      Carroll’s not perfect, but just because I disagree with him about the ray doesn’t mean I’ve soured on him across the board. (Honestly, there’s no one I agree with on all things.) He remains a much more credible figure for me than he does for you.

      Ney does get into the math in her book. She doesn't reach for it as often as Wallace does, but she's quite comfortable with it. No hidden variables. Everything is in the standard formalism. (Well, aside from the Bohmian and GRW discussions.) Unfortunately most of the equations and bra-ket notation are in images and so are difficult to quote. And I'd have to quote a lot of the book to convey her detailed arguments. So you'd have to read it for the details. (As I noted in the post, her explanation seems to resonate with Deutsch's, and his is available online in all its gory details: https://royalsocietypublishing.org/doi/10.1098/rspa.2011.0420 (although I'm not aware of Deutsch necessarily tying his treatment to wave function realism).)


      1. I have to start by saying it’s hard to move forward if I have to cover ground I’ve covered before. If I may ask, when you say you don’t “see any strong link between many-worlds and Tegmark’s mathematical universe hypothesis,” do you mean you have no sense of what I mean, or do you mean “see” in the sense of understanding it and not buying it? It makes a difference because I don’t know whether I need to explain the connection, support it, or just drop it. ABAP: the connection I see involves the centrality the MWI places on the Schrödinger equation. A key argument, almost a dogma, is that everything comes from taking the Schrödinger equation seriously and not adding anything.

        I recall telling this story at least once before, so if you do remember it, apologies for the repeat, but it illustrates the notion. Back in the 1990s I knew an intelligent fellow who was all in on the MWI. I knew what it was but had never given it much thought other than in SF and comics, where I thought it was fun. He explained the apparent non-physicality of branching by pointing out that x^2-4=0 has two answers (polynomials always have as many roots as their degree). There’s no “cost” to such “branching” of reality. There’s no problem with overlapping realities; +2 and -2 coexist just fine. Also, the notion that decoherence accounts for the coincidence of matter is bonkers except mathematically, where it makes perfect sense. Ever since, I’ve realized that a Tegmarkian view is a natural fit for the MWI.

        And yet, yes, most who choose the MWI deny it being a Tegmarkian view. Which is fine, but it requires explaining the coincidence of matter and possibly the branching of energy (which would change gravity). Take the Tegmarkian view, take the Schrödinger equation ultimately seriously, and those issues vanish. Bottom line (make a note! 😁), all I’m saying, is that Tegmarkian MWI makes perfect sense, but as a physical theory, I think it has massive problems.

        Here's a key point: the MWI depends on the Schrödinger equation being meaningfully applied to the entire multiverse. Physics, in general, takes the Schrödinger equation seriously enough to believe it applies to cats and bigger. Under the MWI, exactly as Sean Carroll says, there is a Schrödinger equation, a full description, for the multiverse. Any Schrödinger equation deals with a moving vector, |Ψ⟩, the wavefunction, which lives in Hilbert space. In some formulations, mostly because of normalization, that vector is a ray (a constant-length vector).
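        That constant length isn't an extra postulate: Schrödinger evolution is unitary, so the norm of the state vector is preserved automatically. A toy sketch (random 4-dimensional Hamiltonian, all values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# A random Hermitian "Hamiltonian" on a 4-dimensional Hilbert space.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# A normalized initial state vector.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Schrodinger evolution: |psi(t)> = exp(-iHt)|psi(0)> (hbar = 1),
# built from the eigendecomposition of H. Since U is unitary, the
# vector's length stays fixed at every t.
w, V = np.linalg.eigh(H)
for t in (0.1, 1.0, 10.0):
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    print(round(float(np.linalg.norm(U @ psi)), 12))  # 1.0 each time
```

        The state vector swings around Hilbert space, but its length never changes, which is the "ray" being described.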

        So, what Carroll says is exactly correct; I’m not strawmanning anyone. The distinction is the same: how real is that mathematical object? Both Carroll and Ney are talking about the same thing: the wavefunction. If Carroll “asserts that reality is a ray in Hilbert space,” then he’s asserting that Ψ, that vector, is real. Neither of us finds that compelling.

        At the very least, we have a useful mathematical model. When we all agree reality isn’t Tegmarkian, the central question is: What in the Planck’s constant, Euler’s number, double pipe symbols, does that math model?


        1. Here’s why I don’t see the Tegmarkian comparison. Tegmark’s view is that all math is reality. Everett’s is that the universal wave function describes reality. Nothing in the Everettian view, in and of itself, commits to math other than the universal wave function. For example, most Everettians don’t think Bohmian mechanics or GRW are real somewhere in a mathematical multiverse.

          The central insight that Everett offers is that the quantum formalism minus any collapse postulate is all we need to explain our observations. I can understand why the opponents of the theory want to paint this insight as somehow an unreasonably rigid dogma, but it's hard for me to see that as anything other than rhetoric. As Sean Carroll often mentions, the Schrodinger equation only stands until it is falsified (in its non-relativistic context, or QFT in the relativistic one). It's why I think we should keep an eye on the experiments pushing the boundaries of quantum effects, or anything else that stresses the formalism.

          On the issue of matter coinciding, I don’t see it as an issue. As I understand it, the macroscopic matter situation ultimately results from the Pauli exclusion principle, but that only says no two fermions can be in the same quantum state within a quantum system. Being in different branches of the wave function is, by definition, being in different states.

          But I think the gravity issue has more bite. The Everettian approach does seem to require that spacetime be quantum in some manner, or emergent from quantum phenomena. If that ultimately turns out not to be the case, then it’s hard to see Everett remaining viable.


          1. If you can’t see it, you can’t see it. Really just two dots to connect: Tegmarkian MWI makes perfect sense to me; Sean Carroll says the wavefunction vector is reality. It caught my eye because, yeah, the MWI isn’t Tegmarkian.

            There are reasons why I think the Pauli exclusion principle is a problem, but it requires detailed explanation and quantum weeds. Since we’ve discussed it before, I feel it might waste time for both of us. Suffice to say, I have considered the quantum state, but see problems.

            FWIW (something we’ve also discussed before), what I understand to be Deutsch’s formulation of the MWI doesn’t have the energy/gravity issue. (It’s matter coincidence that’s the deal-breaker for me.)


            “Nothing in the Everettian view, in and of itself, commits to math other than the universal wave function.”

            Not directly commits, OK, but morally speaking, as it were, the Everettians are committed to any math that is essential for doing actual science. Which, in my view, is perfectly healthy.

            While I’m here, let me comment on something from your main post:

            “Reality may end up summing up to a ray in Hilbert space, but saying it is just that ray, doesn't seem productive.”

            My attitude – which I highly recommend to everyone – is to be extremely promiscuous with “is” statements. I’ll jump into bed with any that look attractive. There is no need for monogamy here. Most anything can and does have multiple accurate descriptions, ways to view, etc.


          3. Paul, it seems like you have a tendency to respond to points I make in a specific context as though they were made without that context. In this case, the context of pointing out that nothing about the Everettian view commits one to the full-on mathematical universe hypothesis.

            Promiscuity with “is” can be fine, as long as it comes with clarifications. (And to be fair, Carroll may well provide those clarifications.) I can say the universe is zero, because all the negative and positive energy appear to balance out. But if I don't include a discussion of positive and negative energy, it seems like a statement designed to provoke rather than inform.


  6. I’ve only started reading Ney’s piece, but I’m already favorable to the high-dimensional-reality view. Jenann Ismael has a great paper discussing an analogy made by David Bohm: https://www.jenanni.com/papers/WhatEntanglementMightBeTellingUs.pdf

    Bohm imagines an aquarium with two cameras, one mounted on (I’ll say) the North side and one on the East. The two images are displayed on a wall in another room. We have no access to the aquarium other than via these images. Why are the images so correlated? – we wonder. Every time we see a fish-face in one, we see a fish-flank in the other. The correlations obtain because we have multiple low-dimensional views of a higher-dimensional object.


