Is it just the math?

Scientific breakthroughs often begin with someone saying, “Don’t panic. This crazy sounding assumption is just to make the math work.”

Nicolaus Copernicus, when he developed his theory of heliocentrism (the earth orbits the sun), was operating from a scientific realist view. In other words, he thought his system reflected actual reality, or at least reflected it better than Ptolemy’s geocentric system (everything orbits the earth), which had been the accepted model of the universe since ancient times.

However, the new reality he presented was controversial, particularly in Protestant circles at the time, which led Andreas Osiander, a Lutheran theologian involved in printing his book, to add an unauthorized and unsigned preface. Osiander argued that Copernicus’ framework shouldn’t be evaluated on whether it was literally true, but on whether it was a useful mathematical device for making predictions of astronomical phenomena easier. In other words, don’t worry; it’s just convenient math.

For decades many astronomers followed Osiander’s advice, accepting just Copernicus’ mathematics. The number who actually accepted heliocentric realism was vanishingly small. One astronomer, Tycho Brahe, advocated for a compromise cosmology with most planets orbiting the sun, but the sun still orbiting the earth. Straight Copernicans like Johannes Kepler and Galileo Galilei were very rare. It wouldn’t be until the early 1600s, and Galileo’s telescopic observations, that heliocentrism started to be taken seriously (and resisted).

Moving forward to 1900, Max Planck was trying to mathematically model black body radiation. But he couldn’t make it work. In desperation, he made a change he was loath to make, one that would make his math compatible with Ludwig Boltzmann’s statistical interpretation of entropy, a view he opposed. He added discrete quantities into the equations, essentially doing the math as if there were a minimum unit of radiation. The change worked.
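
In modern notation, the move amounts to restricting the energy of each oscillator of frequency $\nu$ to whole-number multiples of a basic unit:

$$E = nh\nu, \qquad n = 0, 1, 2, \ldots$$

where $h$ is the new constant Planck had to introduce to make the fit work, now known as Planck’s constant.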

Planck was beginning the science of quantum physics, but he didn’t see it at the time. He saw the quantization as purely a pragmatic move, a mathematical contrivance, and was skeptical of any deeper philosophical implications. However, a few years later, Albert Einstein used quanta to explain the photoelectric effect, essentially reifying the quanta into what we now know as photons.

That same year, Einstein introduced his theory of special relativity. He was a realist about the theory from the beginning. However, his equations had implications for spacetime that he was initially skeptical of. We call it “Minkowski spacetime” today because his old math teacher, Hermann Minkowski, recognized the implications. Einstein eventually came around.

But after working out general relativity, Einstein was again resistant to some of the implications of his math. General relativity predicted that the universe had to be either contracting or expanding. To preserve the static universe everyone assumed at the time, he introduced a fudge factor called the cosmological constant, a move he later regretted after observations showed that the universe was indeed expanding. (Although the cosmological constant later found new life with the discovery of dark energy.)
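
In terms of the field equations, the fudge factor is the $\Lambda$ term:

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}$$

Tuned just right, it offsets gravitational attraction and permits a static universe; drop it, and the equations go back to demanding expansion or contraction.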

Einstein was also resistant to certain solutions to his equations, solutions which seemed to indicate there could be regions of spacetime which were so curved that nothing could escape. In the early 1900s, these seemed like perverse entities that couldn’t be physical. Of course, today we know black holes exist and play a pivotal role in the universe. We’re able to detect and image them.

In 1935, Einstein, together with Boris Podolsky and Nathan Rosen, published the famous “EPR paradox” paper, pointing out issues in the mathematics of quantum theory that violated locality, at least under conventional interpretations of quantum mechanics. Erwin Schrödinger followed up with additional papers naming the phenomenon “entanglement”, as well as coming up with the famous “Schrödinger’s cat” thought experiment, which questioned the implications of the mathematical framework he himself had been instrumental in developing.

The thrust of their argument at the time was that these mathematical implications couldn’t be reality. However, nearly thirty years later, John Stewart Bell came up with a way to test those implications. Alain Aspect, John F. Clauser, and Anton Zeilinger won the 2022 Nobel Prize for their experiments carrying out those tests, progressively closing the loopholes to the point where the remaining alternatives are at least as strange as the quantum predictions themselves.
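
For reference, the form of Bell’s result used in those experiments, the CHSH inequality, is remarkably compact. For two measurement settings per side, $a, a'$ and $b, b'$, any local hidden variable account of the correlations requires

$$|E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2,$$

while quantum mechanics predicts, and the experiments confirmed, values approaching $2\sqrt{2}$.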

(Einstein is picked on a lot in this post, but it’s worth noting that these are cases of him blanching at the implications of his own brilliant theories, or theories he helped develop. The fact is that many famous scientists struggled with the full implications of their discoveries.)

Of course, the mathematics aren’t always right. Newton’s laws of gravity were used to predict the existence of Neptune based on anomalies in Uranus’ orbit. However, those same laws were also used to predict the existence of the planet Vulcan, supposedly closer to the sun than Mercury. But Mercury’s orbital anomalies turned out to be stranger, heralding the limitations of Newtonian theory, limitations which would require Einstein’s general relativity to resolve.

And the Large Hadron Collider hasn’t been kind to many speculative theories and their mathematics. So just because someone can manipulate equations doesn’t mean the results reflect reality.

On the other hand, when the mathematics of a heavily tested theory, with no further assumptions, make predictions that can’t currently be tested, history seems to suggest taking them seriously. And mathematical convenience often heralds new realities. Even when the limits of a theory are reached, the new explanation typically ends up being far stranger than the initial prediction.

Granted, it’s always possible to ignore the implied ontology by going instrumentalist. I do think it’s important to be able to put on the instrumentalist hat from time to time. It helps to sidestep ontological biases. Planck did it to make his breakthrough, as did Werner Heisenberg when he was working out the initial mathematical framework for quantum mechanics. But these were theorists using instrumentalism to make progress in spite of the strangeness.

Other times making progress seems to mean finding ways to reconcile theories, to find where they converge, an inherently realist approach. Einstein reportedly worked out special relativity by reconciling classical electromagnetism with Newtonian motion, and then general relativity by reconciling special relativity with Newtonian gravity. And most of us got interested in science and philosophy to get closer to truth, not to collect prediction instruments disconnected from it.

This is why my own preferred outlook these days is structural realism, a sort of minimal realism that accepts the mathematical structures described by well tested theories as real, but remains agnostic on any underlying ontology. However, even structural realism means accepting strange implications.

Which is why many people reach for instrumentalism. Although few are able to stick with it consistently. And selectively adopting it to dismiss predictions we don’t like seems firmly in the tradition of Osiander.

Unless of course I’m missing something.

50 thoughts on “Is it just the math?”

  1. @selfawarepatterns.com #JohnDalton quantized mass ‘to make the #chemistry work’ a century before #MaxPlanck quantized action ‘to make the #physics work’. ✅

    #Science #Histodons

    1. @ChemicalEyeGuy @selfawarepatterns.com

      Interesting, and good point. I'd forgotten that the chemists got to atomism before the physicists. Although I think it was still controversial in 1900 when Planck was doing his thing, and he wasn't on board yet.

      1. @selfawarepatterns @selfawarepatterns.com Dalton’s #AtomicTheory wasn’t controversial for chemists. We were well into the architecture at the nanoscale (molecules) before physicists went the other way and explored the nucleus and its structure. 🤷🏻‍♂️

        #LudwigBoltzmann was among the first physicists to ‘see the light’.

        https://academic.oup.com/book/34859?login=false

        1. @ChemicalEyeGuy @selfawarepatterns.com

          Yeah, sometimes fields get hung up on their biases until empirical data eventually forces the issue. Physics has definitely had its share of it throughout its history.

  2. I see truth as a social construct. Instrumentalism is a fall back position when society is not yet ready to change its view of what is true. I’m inclined to think that structural realism doesn’t actually help here.

    Science is a pragmatic enterprise. Scientists want to go with what works. Societal ideas of truth lag behind.

    1. I agree about truth, in the sense that there are varying definitions and theories about it. When I use that word, it’s in a pragmatic sense of what works reliably over time. But I definitely agree that societal ideas of it lag.

      Although it’s not just society overall. Max Planck once quipped that physics progresses one funeral at a time, indicating that even among scientists, some ideas are only widely accepted by those who grow up with them.

      And the nuances of structural realism probably only help a small population, but I think it’s a population of theorists and philosophers of science, along with people like us who follow it closely, who might be open to a new idea.

  3. Mike, your questions opened Pandora’s box about our explanations of what is out there.

    It is not just math used to explain an “unexplained” phenomenon. It is a lot more.

    For example, take a field in physics, like an electromagnetic field. It is used to explain a lot of things. Every student in every school knows about it. But all explanations are about how the field reveals itself, impacts something, originates, etc.
    Yet, I have yet to see an article explaining what an electromagnetic field is.

    “It is just the math” and “it is just the field” are alike.

    1. Thanks Victor. Opening Pandora’s box is always the goal!

      I do think there’s a difference between just the math and just the field. The math, in and of itself, isn’t guaranteed to have causal effects with things in our world. There are plenty of mathematical constructs which don’t. The fields, whatever they are, do.

  4. I’ve been toying with this as part of my Sci-Fi world building. For example, maybe people in the future discover how to do something like antigravity first, but don’t understand why it works until much later. I feel like there is precedent for that in the history of science, and it conveniently spares me from having to explain all of my made up sciency stuff.

    1. That’s a common strategy for FTL. They discovered it but don’t really understand how it works, or why it can’t be used for time travel, etc. You can also have the science worked out in a previous golden age with the knowledge now lost, except for the recipes on how to build the tech. Iain Banks and Neal Asher have the AIs work it all out and just say it’s hopelessly beyond the ability of unmodified humans to grasp. (Although the explanations for why humans should matter as protagonists in these stories seem weak.)

      You can also just have none of your characters understand or be interested in the science. How many people today can really explain how a car, airplane, or smartphone works, much less nuclear power?

  5. Any chance you want to take a go at explaining the difference between instrumentalism and structural realism? I know what I think regarding how the world works, but I don’t know what that makes me. For example, I don’t think the cat is both alive and dead. Instead, we don’t know, and can’t know, until there is some causal connection between us and the cat. Likewise, just because the wave function describes many possible worlds, that doesn’t mean they all happen. Again, likewise, Bell’s inequality doesn’t rule out hidden variables (so, determinism). Instead it shows that both locality and hidden variables can’t be true at the same time. So you can have hidden variables if you give up locality, which I’m good with.

    So what am I?

    *

    1. My take on the difference is that an instrumentalist only regards observables as real, or at least it’s the only thing they’re confident is real, and so doesn’t trust the theory beyond the current observables. A structural realist, while leery of the story told with the theory, does see its well tested structures as real, and so is more willing to consider their predictions beyond the current observables, unless there is reason to think the structures reach their limit.

      Somebody can be a realist and think the cat is in one state in one world, provided they buy a physical collapse or pilot-wave. Maybe a more telling question is, what do you think is happening when one particle is sent through the double slit experiment prior to detection? If you think the observed patterns tell us nothing about the pre-detection dynamics, then you’re probably an anti-realist or instrumentalist toward the wave function.

      It’s worth noting that RQM takes an anti-real stance toward the wave function. It’s one of my issues with it. (The other is relations not existing until an interaction.)

  6. [You’re not helping answer my question: what am I? 🙂 ]

    Of the quantum theories, I’m most sympathetic to RQM, but I don’t know enough to say it’s anti-real as opposed to reality agnostic. But I agree with it (?) in saying there is no point in talking about existing without interaction. To exist is to interact, and you only exist to those things with which you are in a chain of interaction.

    For now I’m gonna be a noumenonist: the noumenon is real, but we can’t know anything about it beyond its phenomena (which are interactions).

    *

    1. [Sorry, doing my best here. As I noted, consistent instrumentalism is tough to maintain. Even people who call themselves instrumentalists are prone to making ontological assertions. Unless you’re an idealist, but based on your noumenon comment, it doesn’t sound like you are.]

      I think my biggest issue with RQM is that in it noumena don’t exist until the interaction. I agree we can’t know anything about them except what we get through interactions, but saying they don’t exist until those interactions, the sparse or “flash” ontology, raises more questions than it answers for me. And that stipulation seems crucial for keeping RQM away from some degree of wave function realism and many-worlds.

  7. Great post! It can’t be easy distilling all that history of science into digestible form.

    For me, I think it depends on the particular issue. I tend to lean instrumentalist (as you have probably guessed) especially when scientific theories seem especially cockamamie (as so many do these days), but then again, I’m neither a mathematician nor a scientist. I imagine if I were, that would make some difference in how I viewed the underlying reality of the work I was doing. I imagine many scientists—the ones actually doing science—are realists, though I’m not sure if that’s true on the more theoretical side of physics. I imagine, too, that the degree of certainty one feels toward a theory plays a role.

    1. Thanks Tina. I find this stuff interesting, and mull it over a lot, so it wasn’t that hard. And with history, there’s always more details to uncover, as some of the feedback coming in shows.

      That’s the typical way people lean instrumentalist, and hopefully it came across in the post that scientists are no different. One reason I like reviewing history is to be reminded of how many well accepted concepts today were once thought cockamamie, even by the most open minded people at the time. Often even the person who discovers a concept can’t accept its full implications. Sometimes it just takes the next generation growing up with it to fully embrace it.

  8. As an aside, Neptune was discovered by accident. The prediction of its position was based not just on Newton but also on the purely empirical Titius-Bode law of planetary orbits, which did not in fact apply. It so happens that at the time Neptune was in about the predicted place — a couple of decades earlier or later it would have been sufficiently off to remain undiscovered.

    Re taking maths seriously… What do you make of Ian Stewart’s observation that Feynman’s take on quantum electrodynamics (and the path integral formulation of QM in general) clashes head on with MW, in that it explains what we experience as a sum across all possible histories, rather than treating those histories as separate worlds, experienced separately? Both MW and path integral interpretations take maths seriously, while painting very different ontological pictures. (As ever, phenomenology under-determines ontology — the central tenet of ESR.)

    As I see it, the path integral approach has the advantage of conceptual continuity in extending the action principle from the classical to the quantum realm. How do you see it?

    And, BTW, Happy New Year! 🙂

    1. Interesting. I hadn’t heard that about Neptune’s discovery. Thanks. I still think it’s fair to say that it was the deviations in Uranus’ orbit from the predictions of Newton’s theory (with maybe some Laplace mixed in) that clued astronomers that something was going on.

      I’m not familiar with Ian Stewart’s take here, and a quick google search didn’t bring up anything that seemed relevant. Do you happen to have a link or ref?

      From everything I’ve read, similar to the distinction between matrix mechanics and wave mechanics, the path integral formulation has been mathematically proven to be equivalent to the others, with it possible to derive the different formulations from each other.

      Miguel Morales at Ars Technica puts it this way:

      Quantum mechanics is not only written in math, but there are three completely different versions of the math in widespread use: the Schrödinger wave approach, the Dirac formulation, and Feynman’s path integrals. The Schrödinger approach emphasizes the waviness of particles and uses differential equations. The Dirac formulation focuses on quantum mechanics’ sensitivity to measurement order and uses the language of linear algebra.

      Feynman’s path integrals also have a wavy point of view and can be seen as an extension of the Huygens–Fresnel principle of wave propagation. This leads to some truly terrifying path integrals, covering all possible paths and possibilities. Feynman diagrams are a shorthand for keeping track of the approximations you need to make to actually solve things. While the mental models behind the three mathematical traditions are quite distinct, they always give the same answers.

      https://arstechnica.com/science/2021/02/a-curious-observers-guide-to-quantum-mechanics-pt-6-two-quantum-spooks/
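
      For concreteness, the “truly terrifying path integrals” Morales mentions have a compact standard form: the amplitude for a particle to go from $a$ to $b$ is a sum over every possible path between them, each contributing a phase set by its action:

      $$K(b,a) = \int \mathcal{D}[x(t)]\, e^{iS[x(t)]/\hbar}$$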

      Interestingly, Feynman developed (or began to develop) path integrals in his 1942 PhD thesis. His thesis advisor was John Wheeler, who also advised Hugh Everett many years later (1957) for his thesis. It seems like if path integrals represented an obvious problem for Everett, it would have come out pretty quick. Although it’s always possible Stewart noticed a deviation between the formalisms everyone else has missed.

      Happy New Year Mike!

      1. Re Neptune… I wasn’t disagreeing — just sharing an interesting wrinkle to the story.

        Re path integral… Here’s another wrinkle I find interesting. The idea of using the Least Action Principle was originally Dirac’s, but he didn’t pursue it and it got mostly ignored, until young Feynman happened across Dirac’s paper and in an astonishing burst of creativity fleshed it out into a fully worked out apparatus.

        Ian Stewart’s point as I recall it (sorry… cannot remember the source: either one of his books I had from the local library, or a TV interview some years ago) wasn’t that the path integral approach was a problem for MW in any formal sense — as you say, it is known to be equivalent to both the Schrödinger and Heisenberg formulations. In my understanding, Stewart was pointing out an ontological clash. According to Feynman everything that can happen does indeed happen, but instead of persisting as separate realities of MW, it results in one unique world via the sum-across-histories mechanism.

        Both Feynman’s and Schrodinger’s treatments are “wavy” but if their maths is to be taken seriously, they appear to imply different ontologies. Admittedly the relationship between MW worlds and Feynman’s histories is unclear, but they cannot be the same and accepting both as separate, coexisting ontological realities seems a bit of an unnecessary stretch.

        Hence per Poincaré’s conventionalism thesis, we are free to pick the one we find more plausible, as long as it remains consistent with the available data. Stewart obviously found the sum-across-histories ontology more natural. And it seems to me that its use of Least Action, which is universal in the rest of physics, makes it very attractive indeed.

        On the other side of the scale is, of course, the measurement problem. MW does offer an answer, but says nothing about how it could be verified. Is that better than accepting that we don’t know the answer? I don’t know. So I have to rely on my prejudice, which keeps changing its mind — sod it! 🙂

        1. On the Neptune point, thanks. It was still interesting to learn that Newton wasn’t used for predicting where to look, something I’d always assumed. (And been somewhat in awe of, since the calculations seem like they would be hideous. Apparently they were too hideous to actually happen in an age of manual calculating.)

          “According to Feynman everything that can happen does indeed happen, but instead of persisting as separate realities of MW, it results in one unique world via the sum-across-histories mechanism.”

          One of the issues with these types of discussions is separating semantic distinctions from actual ontological ones. The first thing that comes to mind when I read this was: it depends on what we mean here by “world” or “histories”.

          For example, David Deutsch in his description of Everett, uses the word “universes” rather than “worlds”. To him there are multiple universes in one world. Although most many-worlds advocates flip that terminology, and say there are many worlds in one universe. While others just equate “world” and “universe” and say there are many in the multiverse (which is a different “multiverse” from the ones conceived of in cosmology).

          Interestingly, Deutsch regards all the universes as already existing. In his terminology, universes don’t split, but histories do, in the sense that large numbers of universes can share the same history, but then diverge from each other, splitting that history. I wonder if he gets that terminology from Feynman. In any case, he would agree that it’s a sum across histories in one world, but that we only have access to our little slice of one of the histories.

          On verifying many-worlds, it doesn’t seem like we’ll be able to test the other worlds anytime soon. Which, to me, makes them irrelevant for scientifically assessing Everettian QM itself. The real test is whether the raw structure of quantum theory (under whichever formalism we want to work with) remains predictive in ever larger domains. So the way to test Everett is continuing to stress that structure, to see if actual physical collapses show up, or the right kind of additional (currently hidden) variables.

          Of course, even if nothing is found, that can’t rule out that something will be found in the future. At least unless someone does figure out a way to detect interference from other decohered branches.

          And an RQM advocate would argue that all that formalism stressing also benefits their interpretation, if you buy its metaphysical assumptions.

            1. Sorry, a misunderstanding. The Neptune prediction *was* based on Newton, but it also assumed the Titius-Bode law, which was an empirical regularity not derivable from Newton. It just happens to be true for planets from Mercury to Saturn, with a gap accounted for by the asteroid belt. But as a universal regularity it is now thoroughly discredited, of course.

            Hideous calculations were involved, but nothing compared to what poor Kepler had to deal with, trying to determine planetary orbits with both the observer and the observed object following unknown closed paths. As another fascinating wrinkle, the older Ptolemaic system was not actually wrong. Its epicycles can be viewed as the second terms of a Fourier decomposition of elliptical orbits. So the system could have been pragmatically fixed by adding a third term of a second-order epicycle (and so on, as measurements improved), but Kepler’s solution, while being formally equivalent to the full decomposition, was so much simpler that there was no argument.
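
            Schematically: track the planet’s apparent position as a point in the plane, and any periodic orbit can be written as a sum of circular motions,

            $$z(t) = \sum_k c_k e^{ik\omega t},$$

            where the first term plays the role of the deferent and each further term is another epicycle. Ptolemy’s system was, in effect, a truncation of this series.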

            Re worlds and histories… I agree about confusion. However, regardless of terminology, the path integral formulation sums across somethings, while in MW, somethings split. The two sets of somethings cannot be the same, because sum-over-histories would make all the splitting somethings the same, given that Feynman’s sum produces a single, unique answer. So there are two competing ontologies — which is what Stewart’s comment was about.

            Yes, Deutsch tries to accommodate both potentially infinite structures and has been criticised for doing so. Philosophically, his view seems to flip-flop between treating and not treating the two sets of somethings as the same.

            But I am unclear how you expect MW to be experimentally stressed. It amounts to “no more” than Schrödinger minus wave collapse (the “collapse” being purely subjective). So any experimental evidence which agrees with path integral (and matrix mechanics) will be automatically in agreement with Schrödinger, and hence with MW. And vice versa.

            Or put another way… MW removes the measurement problem but makes no verifiable predictions other than those made by Schrödinger’s formulation, and thus also made by matrix mechanics and by path integral. Unless I’ve missed something fundamental along the way, of course.

          2. Thanks for the clarification on Neptune. Glad to have my admiration for the calculations restored, even if there was an element of luck involved.

            I’ve read a little about what Kepler had to contend with. It seemed like his biggest hurdle was conceptual, letting go of the ancient idea of heavenly bodies moving in perfect circles. I used to wonder why that was so hard, but now I know they used to be associated with crystalline spheres, which Tycho Brahe discovered couldn’t be there. But holding onto a circling principle, I think, preserved some kind of explanation for why planets moved as they did. Giving it up meant the explanation for those motions had to be something else, which we now know is gravity.

            It seems like if path integrals led to a different ontology, that would mean producing different predictions, and not being mathematically equivalent. As Morales indicated, they do have different mental models, but those are just crutches. As I understand it, you can find each model in every formalism, but they’re each only easy in one. So if Stewart is saying there’s a different ontology, it seems like that would be a major discovery.

            You have MW right. It is just the quantum formalism (whichever one you choose) with no collapses or hidden variables. It doesn’t postulate the other worlds, something people seem to have trouble grasping. The worlds / histories / branches / whatever just fall out of the formalisms. Postulates have to be added to get rid of them. But that means you can falsify Everett by validating any of those postulates, or finding some other way to demonstrate the limits of the formalism structures.

          3. > So if Stewart is saying there’s a different ontology, it seems like that would be a major discovery.

            As I understood him, Stewart was simply pointing out that Feynman’s path integral offers a natural explanatory physicalist ontology, incompatible with that of MW. While it leaves the measurement problem in place, unlike other versions it has the advantage of giving a plausible physicalist explanation of phenomena such as light rays following geodesics. (Plus the whole PI approach is also appealing by being based on Least Action, and this gives its natural ontology a head start in plausibility.)

            Yes, one can show the formal models to be equivalent, but that does not entail their respective ontological explanations also being equivalent in any sense.

            > The worlds / histories / branches / whatever just fall out of the formalisms. Postulates have to be added to get rid of them.

            Indeed. And MW is one such addition, albeit a negative one. All such hypothetical add-ons are really gropings towards a validatable successor theory. Experimental data alone cannot resolve the matter, because their respective formal models are equivalent.

          4. Stewart has a book, “In Pursuit of the Unknown: 17 Equations That Changed the World”. I see he has a chapter on the Schrödinger equation. Any chance that’s where he discusses it? In any case, I’m tempted to buy it for the other stuff I see in the contents, and the visual aids in the preview. It looks pretty accessible, even for a math phobe like me.

            “And MW is one of such additions, albeit a negative one.”

            Maybe if our starting point is the Copenhagen set of assumptions. But calling the removal of a postulate…a postulate, seems a bit strange. Although I have to admit I used to see it the same way. But in terms of experimental data, I keep watching the ones preserving quantum effects in ever larger systems, like this one: https://www.livescience.com/physics-mathematics/quantum-physics/worlds-heaviest-schrodingers-cat-made-in-quantum-crystal-visible-to-the-naked-eye

          5. Ok, I bought the book and he does discuss Feynman’s point in that chapter, although it’s brief and references Feynman’s book: QED.
            I’d have to dig up that book to be sure of what I’m about to say. But I think Stewart is taking Feynman’s discussion of what happens under normal circumstances and erroneously extrapolating it to what happens during a measurement.

            Right now, the laptop I’m typing this on is composed of quantum particles, which are constantly going into superposition and decohering. The reason things are deterministic (or at least mostly deterministic) at the macroscopic scale is that all of that normally cancels out. That’s my interpretation of what Feynman is talking about. So yes, in that scenario, the classical world is the sum of all the histories.

            But things are different in a measurement. Because in a measurement, whether it be a natural one or in a controlled experiment, the effects of an isolated quantum system are amplified into the environment, which prevents everything from just canceling out. It’s how quantum randomness intrudes on the classical world. But instead of the classical world now being the sum of all the histories, there are now multiple sets of those histories with different sums, leading to the classical world being split into diverging ones. (Unless of course there’s a collapse to annihilate all but one of the histories, one history is more real from the beginning, or quantum theory is wrong in some unspecified manner.)

            At least that’s my first impression. As I reread the chapter more carefully, I may come away with a different conclusion.

          6. As a (very ex-) mathematician, I have no problem with seeing subtraction as addition of a negative value. 🙂 More seriously, though, MW resolves the measurement problem by positing a vast additional ontology. One has to accept that ontological expansion in order to remove the need for any kind of wave collapse. So it seems to me reasonable to call it an addition.

            I am afraid I am unclear how demonstration of superposition in ever larger systems helps either side of the argument. You’ll have to spell it out for me.

            Yes, the “17 Equations…” title rings a bell, so it’s probably where I came across Stewart’s comments. Now, though, you have the advantage on me in that you actually know what Stewart said, whereas I have to go by a memory of something I read long ago. 🙂

            All I can say is that as I recall it he was making the ontological point I have been trying to present. While I do not think that point is as decisive as Stewart intended it to be, I believe it does have some heft.

            And I think you misconstrue what Feynman is talking about. It is not at all the same as stochastic “adequate determinism” of your laptop. That is quite clear from his QED lectures. But perhaps even clearer is the excellent exposition of Feynman’s general take on QM in Brian Cox’s and Jeff Forshaw’s slim volume “The Quantum Universe: Everything that can happen does happen”. That title definitely does *not* refer to the MW ontology, but to everything happening in this world/universe/time-line/whatever… Though aimed at non-technical readers, it avoids over-simplifying its subject matter.

          7. “I am afraid I am unclear how demonstration of superposition in ever larger systems helps either side of the argument. You’ll have to spell it out for me.”

            Remember that many-worlds is just quantum mechanics without the collapse. So detecting an actual collapse would falsify it. But if there is a collapse (ontic or epistemic), the question is, when does it happen? What are the conditions for it? Bohr speculated it would be interaction with macroscopic systems, but the crystal in the article I linked to above is a macroscopic object (tiny but visible to the naked eye), with 10^16 atoms in it, yet it was successfully held in a superposition.

            Generally, these experiments extend the scope of pure quantum theory. The more that scope is extended, the stronger non-collapse interpretations look. Of course, if they do detect any kind of collapse, those interpretations are toast.

            On remembering Stewart’s book, no worries. I suspect we’re about done. For Feynman’s discussion, based on my googling around, I think it’s fair to say Stewart’s interpretation is not universal. (For example: https://www.hedweb.com/everett/everett.htm#sum ) I’d have to buy and read Feynman’s book to say anything more authoritative. I bought Stewart’s book because it had other interesting stuff in it. At the moment, I’m not interested enough in quantum electrodynamics to pick up Feynman’s. Maybe at some point!

  9. Well no, I wouldn’t say that reality should exist merely as “math”. In a natural world reality should exist as “causality”. Apparently the Copernicus situation illustrates this. According to McFadden’s “Life is Simple”, Copernicus would not have published without the prodding and writing of his devoted student Rheticus. Here is some of the text about what happened:

    In the spring of 1542, Rheticus set off for Petreius’s presses in Nuremberg with the precious text in his bag. However, by this time Rheticus had been recruited to a new post at the University of Leipzig and had to go there to teach his classes. He entrusted supervision of the printing process to the Nuremberg-based Lutheran theologian, and mathematician, Andreas Osiander.

    It was said that Copernicus died the day a copy reached him, hastened by distress given Osiander’s unauthorized preface.

    In any case, I consider math to be a humanly invented language with no inherent tie to causality itself. Still, to me it does make sense that mathematical observations can sometimes suggest effective causality paths that may not otherwise seem sensible to us given existing prejudices. Then once we’ve digested a given math-inspired reduction that’s been empirically validated, our conception of causality tends to become improved. In retrospect we tend to realize that it couldn’t have been any other way. Unfortunately that point has not yet been reached regarding quantum mechanics. Perhaps it never will.

    1. I’ve made similar points to others about causality. Although if you start thinking about what causality is, you realize that it’s basically relations between variables in time, which can be described mathematically. And you can have other 4D patterns describing causal relations that aren’t themselves causal. Which seems to raise the requirement that it must be part of the causal relations in the world.

      I’ve never heard that Copernicus’ death was hastened by seeing Osiander’s preface. I have seen reports that he saw the book in a last lucid moment and then died happy. My suspicion is that those stories are apocryphal. But he had circulated treatments of his model around for years, and so had an idea of what his colleagues thought, and he knew it was a tough sell.

      I think math is a human invented framework to model the structure and relations of reality. But that framework can also be used to model structure and relations that don’t exist. It’s when extrapolating from the structures that do exist that we get the insights. The difficulty is in knowing how far we can push those patterns beyond current observables.

      1. Yes, to get technical I should have said “worldly causal dynamics” rather than just “causality”. Beyond impossible causal dynamics, this excludes the possibility of a god “causally” doing things here.

        As I recall, in McFadden’s book he also presented Copernicus as generally happy enough to not freak people out about us circling the sun. So I did find it a bit strange to imply in the end that he was mortally distressed by that unauthorized preface. But if it would have pleased him then he could have added such a preface himself, which he of course did not.

        Since I’ve brought up McFadden’s book on the life of William of Occam, however, there’s also his “nominalism” position to consider. This is to say that words are just tools with which to potentially describe reality, not elements of reality in themselves as such. Wikipedia says that this affront to Platonism and such goes back to Roscellinus (c. 1050 – c. 1125), though it flowered under Ockham as the most influential and thorough nominalist. While most terms in this regard leave me doubtful or scratching my head, I feel like I shouldn’t be screwing things up too much when I pair nominalism with my belief in worldly causal dynamics.

        1. Copernicus actually held administrative posts in the church and had political skills. He had to be aware of the social currents in the part of Europe where he lived, in the early days of the Reformation. And he also knew how much headwind his ideas were likely to receive. In that climate, I think he would have appreciated that Osiander was actually defending him, even if intellectually he didn’t agree with the remarks.

          Since platonic objects have no spatiotemporal extent and are causally inert, it seems like any criteria involving causality would cull them. But I’m not a platonist, and I suspect many would say they’re using the word “exist” in a different manner. As I’ve noted before, if we can regard something as not existing and not pay any conceptual or operational price, it seems like the right move.

          1. Even if you and I can look back in hindsight and say that it was helpful for platonism to fall so that science might rise, it’s obviously not dead. Not only is platonism implicit in our language, and so an ever present distortion to counter, but today there are popular people who hold it up as “true”, or at least in the form of mathematics.

            Surely I shouldn’t speak as such a snob however. I realize that lots of ideologies today have far worse general effects than modern platonism.

          2. I’m not a platonist, but I generally regard platonism as harmless, since it’s a purely metaphysical outlook. It’s a popular view among mathematicians, since they see themselves discovering mathematical entities rather than inventing them. I agree with them on the discovery part, but I think the discoveries are of possible structures of reality, with no need to bring in an additional platonic realm. But it seems to make no practical difference for their work.

            Even the mathematical universe hypothesis, mathematical platonism on steroids, is a pure metaphysical outlook (despite Tegmark’s arguments), which doesn’t change anything in science, at least not that I can see.

          1. You realize the chicken-egg thing is a metaphor. Is the math really out there in the world (prior to us) and we discover it? Or, is it a human/brain invention we use to make sense of the world? I’m saying there’s no way to know. We would see the same results either way. It might be like a chicken-egg thing where both things are kinda true.

            Regarding the chicken egg, eggs may have been around a billion years or so, but when did the first chicken egg appear?

            Did proto-chicken lay a proto-chicken egg and it developed into a chicken? Chicken first.

            Or, did proto-chicken lay a chicken egg? Egg first.

            You see it’s nonsense. Which is my point.

          2. I’m beginning to understand what you mean, James, or I might not be, but I’ll give it a go…
            I believe the egg was/is really out there, but math was invented to make sense of the world. Math, unlike the egg and chicken, is not material. The egg is not an invention. The answer to the chicken-egg question you raise would be extrapolated in the course of the evolutionary process of the egg – would it not? It’s like saying how did LUCA or the first single cell organism evolve in the first place and create more life. It did because of the conditions which enabled it to, i.e. volcanic vents 2000 metres below the surface of the Galapagos Is (or wherever).

          3. The problem with the “math out there” idea in its pure form is that the math would have no physical existence. Where in spacetime does it exist? That puts it in God/Supreme Being territory.

            If it is inextricably bound with physical existence, then is it still math or is it more than math? Or, maybe not math at all. That might be what Paul Torak is getting at in his comment.

  10. Is it *even* math? Eddy Keming Chen (“Fundamental Nomic Vagueness”, Philosophical Review 2022) proposes that

    For a case study, we turn to the Past Hypothesis, a postulate that (partially) explains the direction of time in our world. We have reasons to take it seriously as a candidate fundamental law of nature. Yet it is vague: it admits borderline (nomologically) possible worlds.

    I guess you could call the Past Hypothesis – that entropy in the distant past was “much less” than the maximum possible for the universe’s phase space – “mathematical” in some very weak sense. But it’s not mathematically precise.
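
    (To put the vagueness in symbols: the hypothesis says something like $S(t_0) \ll S_{\max}$ for the universe’s initial macrostate, and the “$\ll$” is precisely where the imprecision lives.)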

    1. Definitely a lot of scientific theories aren’t mathematical, at least not obviously so. As I understand it, Darwin didn’t really use any math for natural selection (although others have developed mathematical models related to it).

      I think the issue is that there are usually a wide variety of math models which might be relevant to a theory like that. But being humans, we don’t necessarily feel the need to formalize, although that comes with the risk that there may be hidden assumptions in the vague model.

      Another way of looking at it is that the mathematics of entropy are the mathematics of the past hypothesis, and that what we call that hypothesis is just a way of talking about one of the predictions of those equations followed to their full implications.

  11. It’s funny to say it’s “just the maths”. It’s like saying “don’t worry about anything I’m saying, they’re just words!” It’s true that it’s just words, but we’re using those words intentionally to describe/explain something. We can’t just suddenly pretend they’re meaningless. Except as you’ve explained, we can, and it’s probably a good thing that we do so that we can sneak these ideas into our own minds and the minds of others, pretending they’re just empty words/maths. We probably sneak in all sorts of beliefs we’re not aware of all the time. It allows fantastic mental flexibility.

    I’m with you on Structural Realism, although I’ve been thinking a bit about perspectivism lately and would love to see the two reconciled somehow.

    1. That’s a good analogy, “they’re just words!”. And “mental flexibility” is a good way to describe, I think, both instrumentalism and structural realism: instrumentalism because it makes no ontological commitments beyond observables, and SR because the commitments are limited and cautious. I think a structural realist would have had an easier time with the shift from Ptolemy to Copernicus than full realists.

      Perspectivism looks interesting. I haven’t dug into it before. But just based on the first few sentences of the Wikipedia article, looks like something I need to look into. Thanks!
