What would randomness in general relativity mean?

A new approach to reconciling general relativity and quantum mechanics proposes adding randomness to general relativity, making it less deterministic at small scales.

For several decades, physicists have been trying to reconcile general relativity and quantum mechanics. These theories, despite each having been empirically validated to several decimal places, contradict each other. The problem is that the contradictions seem to arise in places not conducive to empirical investigation, such as black holes or the early universe.

Most of the approaches to solving this try to quantize general relativity, in essence to find a theory of quantum gravity. This is what string theories and loop quantum gravity try to do. And there are others trying to “build up” to gravity and spacetime from quantum entanglement. But these approaches, and others, all seem to have their problems.

The new proposal is to keep spacetime smooth and continuous, but introduce some random wobbles into it. It’s an interesting idea, one which may be testable in the near future. Most physicists seem skeptical, but the ones I’ve read so far seem on board with at least testing for it.

If the idea turned out to be right, it wouldn’t be a case where general relativity, or classical physics overall, “won”, since GR would end up needing modifications. One thing I wonder is whether scientists would be tempted to regard randomness in GR as fundamental the same way many do for quantum physics. Or would it be seen as a new mystery to be solved?

I personally would see it as a mystery. But I’m saying that as someone whose intuition is that reality is ultimately deterministic. And that when we have a scientific theory with randomness in it, we need an explanation for that randomness.

But we might end up in a situation like quantum physics, where there are different “interpretations” of what’s going on. Many people like the idea of randomness being fundamental. Just as with QM, they might be more inclined to dig in and say that the wobbles are just the way things are. People like me would be left holding out for additional data, or at least new theoretical breakthroughs, to map that randomness back to something more mechanistic.

Incidentally, related to the previous post, it seems likely this would have implications for the many-worlds interpretation. That theory is usually seen as needing spacetime to be quantized. If it turned out, once and for all, that spacetime can’t be, many-worlds might be falsified. Given Carlo Rovelli’s 5000:1 bet with the proposing theoretician Jonathan Oppenheim, I suspect it would present issues for relational quantum mechanics as well. Just a reminder that many interpretations of QM aren’t just academic philosophies, but scientific theories in and of themselves.

But it’s also a reminder that no scientific theory can ever be confidently taken as the final word. New data can always dethrone a reigning theory, and there remains value in continuing to stress test the fundamental structure of even the most successful theories.

What do you think? Would you welcome some randomness in your general relativity? Or like me, would you regard it as a new problem to be solved?

38 thoughts on “What would randomness in general relativity mean?”

  1. @selfawarepatterns.com
    First of all, thanks for such a well written and informative post. Truly fascinating.

    I think randomness in GR would definitely need to be explained. I'm not sure the explanation necessarily needs to come from a mechanistic account. For me the most important thing is that we somehow get a principled account, in the sense of developing theories about principles that account for the observed randomness/phenomena.

    1. @HernanLG @selfawarepatterns.com

      Thanks!

      "Mechanistic" may have been a poor word choice. I meant something understandable, something that tells us why it's random. (Note that not everyone sees the need for that "why". It's absent from most interpretations of QM.) But "principle" works too!

  2. I’m comfortable with the idea that randomness is just there, but I can see how it might be problematic for mathematics. The division of a continuum into discrete units takes us into the world of Zeno’s paradoxes. For some reason it makes me think about division by zero. We just can’t go there.

    There is a simple algebraic proof, in an old book of paradoxes on my shelf, that 2=1. (I’ll dig it out if anyone’s interested.) If I remember, the chapter is called “Thou Shalt Not Divide by Zero” and the algebra depends on a hidden divide-by-zero operation.

    We use imaginary numbers to handle the problem of negative square roots, and perhaps we need some mathematical equivalent for division by zero. The problem is that it implies infinity, which suggests an infinity of right answers. Perhaps we need to find a way to “pick one.” Randomness seems like a candidate. It also seems like a good way forward from Schrodinger’s equation. Instead of many worlds (or rather the lack of worlds it actually implies), we could have “pick one.”

    One would hope to tie it in with probability, but I’m not a mathematician. Actually I’m just hand-waving.

    1. Over the years, I’ve been assured by multiple people much more math literate than I am that randomness isn’t a problem for the math. Although I agree with you that it seems like it would be. I can see that it’s no problem to put a variable in the equations representing random values, but explaining why it’s random is a different matter. Of course, if you’re comfortable with that randomness as a brute fact, then it’s not an issue.

      I always thought that keeping things continuous is what gets us in trouble with Zeno. Of course that’s pretty much what this new theory is proposing, but as I understand it, it’s where general relativity has always been. It seems like discrete units would get us out of those issues. (Although I might be misreading your remarks here.)

      The thing about Schrodinger’s equation is that nothing in it, in and of itself, has any “pick one” principle. The mapping between its amplitudes and probabilities is a bolted-on addition, the Born Rule some of us discussed in the other thread. Left on its own, the Schrodinger equation just evolves smoothly, linearly, and reversibly. I wonder if a general relativity equivalent would be any more elegant.

      On hand waving mathematics, I’m right there with you!

      1. > I always thought that keeping things continuous is what gets us in trouble with Zeno.

        It doesn’t. Aristotle was the first to put his finger on the flaw in Zeno’s arguments: Zeno was assuming space to be infinitely divisible, but not time. But the full solution had to await the invention of the calculus in the 17th century (and its formalisation in the 19th): an infinite sum can have a finite value. Specifically, the series Zeno deployed, 1/2 + 1/4 + 1/8…, adds up to 1.
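
        Spelled out as partial sums (standard calculus, nothing specific to this discussion), the limit is explicit:

        ```latex
        \sum_{k=1}^{n} \frac{1}{2^k} = 1 - \frac{1}{2^n} \longrightarrow 1 \quad \text{as } n \to \infty
        ```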

        See e.g. https://iep.utm.edu/zenos-paradoxes/ : “The current standard treatment, the so-called “Standard Solution,” implies Achilles’s path contains an actual infinity of parts, but Zeno was mistaken to assume this is too many parts for a runner to complete. This treatment employs the mathematical apparatus of calculus which has proved its indispensability for the development of modern science.”

        1. The IEP article about Zeno’s paradox explains that the standard solution defies certain intuitions, and that this is something we must accept. We can call it a “solution,” but it does not so much dissolve the conundrums of quantum mechanics, as leave us pondering them mutely once again.

          I’m not a mathematician, but I wonder if division by zero presents a similar quandary. My intuition is that division by zero amounts to declaring a continuum, or trying to present a continuum in terms of discrete units.

          1. The modern answer to Zeno becomes quite intuitive if you have the notion of limits drummed into you. 🙂 And that notion underlies all of the calculus and much of mathematics to boot. To take a very simple example, which is still causing trouble to some: 1.000000… = 0.999999…
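
            In limit terms, that equality is just a geometric series (again, textbook material rather than anything from this thread):

            ```latex
            0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^k} = \frac{9/10}{1 - 1/10} = 1
            ```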

            As for division by zero, it is a big subject in its own right. You are sort-of right that it relates discrete units with continuum — that’s what limits are all about.

            Trivially, division by zero is problematic simply because division is defined as the inversion of multiplication. If a * b = c, then division is the operation which takes c and b (in that order) and yields a. But if b is zero, then c is also zero regardless of a. Thus dividing zero by zero can be any number you please — it is simply undefined.

            However, that’s true only for standard arithmetic, which is just one (pragmatically the most useful one for counting objects) of many possible mathematical systems. In some other systems division by zero is allowed. E.g. the Riemann sphere adds infinity as a single value to the complex numbers. In such systems it is only division of zero by zero that is still a problem.
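
            IEEE 754 floating point behaves a bit like that, though with signed infinities rather than the Riemann sphere's single point at infinity. A minimal sketch, assuming NumPy is available (plain Python raises an exception instead):

            ```python
            import numpy as np

            # IEEE 754 assigns answers to division by zero,
            # except for 0/0, which stays undefined (NaN).
            with np.errstate(divide="ignore", invalid="ignore"):
                print(np.float64(1.0) / np.float64(0.0))   # inf
                print(np.float64(-1.0) / np.float64(0.0))  # -inf
                print(np.float64(0.0) / np.float64(0.0))   # nan: 0/0 is still a problem
            ```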

            Wikipedia has an excellent article on the subject: https://en.wikipedia.org/wiki/Division_by_zero

          2. Thanks for following up on division by zero. I have yet to review the wikipedia page, but in other contexts, when one’s intuitions finally align with deep paradoxes, one may be said to have “got religion” or “drunk the Kool-Aid” — if it feels like revelation or epiphany. On the other hand, one might just be worn down or inured to the strangeness by long acquaintance, so that it no longer feels troubling. In either case, I’m not there yet.

            As for imaginary numbers, that’s a whole other incomprehensibility. I gather they’re regarded as a convenient fiction, but some recent research has suggested they represent some sort of quantum reality. I can come back with some links if you want. I once speculated idly on this, but I sincerely doubt what I said is worth serious attention.

          3. As for imaginary numbers, I sympathise. I well remember being outraged when after years of “you cannot take a square root of a negative number”, we were suddenly told “oh, by the way, the square root of -1 is called i”. But in hindsight, complex numbers are perfectly comprehensible. They are just taught all wrong. If you must start with i, at least explain how square root of -1 fell naturally out of Renaissance mathematicians’ struggles with cubic equations.

            Ideally, though, complex numbers should be introduced simply as 2D vector arithmetic. Denote vectors rooted at the intersection of two orthogonal axes by pairs of real numbers (x, y), with the intuitively obvious rule for adding such vectors together. Subtraction is then the inversion of addition, multiplication gets its own pairwise rule, and division is the reverse of multiplication. All very straightforward.

            As it happens, in this arithmetic (0, 1) * (0, 1) = (-1, 0) — big deal! It is only when for convenience we label the y axis with multiples of some unit called i and write (x, y) as x + i * y, that our intuition perks its ears. This shift in notation is convenient, because one can apply normal arithmetic rules to it and get exactly the same results as for vector arithmetic… as long as one assumes that i * i = -1, which just means (0, 1) * (0, 1) = (-1, 0) — cue outrage! 🙂
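
            A minimal sketch of that pair arithmetic in Python (the helper names cadd and cmul are mine, purely for illustration):

            ```python
            # Complex arithmetic as plain 2D vector arithmetic: no i anywhere.
            def cadd(a, b):
                return (a[0] + b[0], a[1] + b[1])

            def cmul(a, b):
                # (x1, y1) * (x2, y2) = (x1*x2 - y1*y2, x1*y2 + y1*x2)
                return (a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0])

            print(cmul((0, 1), (0, 1)))  # (-1, 0) -- "i * i = -1" with no i in sight
            ```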

            So, are complex numbers essential for constructing mathematical models describing the world we live in? Undoubtedly so. Are imaginary numbers “real”? I don’t know. Are integers real? Show me an integer as such, not as a collection of objects! Are imaginary numbers necessary? Depends on what you mean by that. We could stick to the vector notation. But that shift in notation to x + i * y opens other mathematical vistas. Take my word for it, it results in some profound and beautiful magic, which absolutely does need a square root of -1.

  3. It isn’t at all obvious to me that if we keep digging down to smaller regions of space, shorter times and higher energies, we are bound to find a simple base layer from which everything else is made.
    Even if we do, it is of limited utility to us in making practical predictions, because our knowledge of what stuff is where and its rate of change is always partial, and we know that even very simple physical configurations can generate unpredictable futures without perfect knowledge of initial conditions, which we never have in practice.
    I feel there may be mileage in accepting this imperfect knowledge and finding better ways of representing how clusters of solutions of the equations of physics behave, in the presence of this limited observational precision and incompleteness… and that in turn this might shed light on the randomness that you refer to.

    1. Thanks. If the randomness is indeed there, I hope you’re right.

      I do think there’s value in gaining an understanding of what’s going on below the practical layers. It might indeed be that knowledge would never be of pragmatic use, but it seems hard to judge that without the knowledge itself. And so much useful knowledge begins as something that seems purely academic, and only much later turns out to be useful. I think of spectroscopy existing for several decades before astrophysicists thought to use it for understanding what stars are made of.

      On the other hand, I do agree there is value in being able to put on the instrumentalist hat from time to time and proceed without worrying too much about what the numbers mean. Both Max Planck and Werner Heisenberg made breakthroughs in that mode, progress that might not have happened from people too preoccupied with the ontology.

  4. Heh… I am still mulling my response on the previous topic, but I can’t resist a quick shot at this one — particularly since I’ve been thinking about this one over the past few days.

    My immediate thought was that Leibniz would have to say something about it. 🙂 You must be aware of his great argument with Newton, in which he advocated a relational stance over Newton’s substantival one. The issue is still unresolved and both sides have their advocates (and no, GR does not resolve it in Newton’s favour). But suppose Leibniz was right… Might not the apparent space-time wobble be simply a consequence of Heisenberg’s Uncertainty Principle? Since position and momentum cannot have absolutely precise values, would that fact not manifest itself as random fluctuations in the appearance of a space-time constituted by positional and dynamic relations at the quantum level? Just a thought…

    1. I actually hadn’t heard of that Leibniz / Newton debate. I’ve heard of some of their other debates, such as the one over priority for the calculus, but not that one in particular. I’m learning all kinds of new stuff from you!

      I couldn’t quickly find much on Leibniz’s relationalism, and nothing at all on Newton’s outlook. But relationalism sounds interesting, particularly for a structural realist like me. Although to your point, the mapping seems fraught. Their debate might have implications for whether spacetime is fundamental or emergent from something like quantum entanglement. And this new theory seems more on the fundamental side.

      I had similar thoughts about the wobble basically being in some sense the same as quantum randomness. I don’t know enough about the theory to say one way or another. Although if that were likely, it seems like the characterization of the theory would be different, maybe as another quantum one rather than the classical or semi-classical notion being discussed. But who knows?

      1. Never heard of the relationist/substantivalist controversy? “What do they teach them in those schools nowadays?” 🙂

        So you don’t know about “Newton’s bucket” either? Or for that matter (jumping ahead a bit) of Poincaré’s conventionalism? Or (jumping to the present) of the Shape Dynamics reformulation of GR?

        It is a huge subject. See Stanford: https://plato.stanford.edu/entries/spacetime-theories-classical/ and https://plato.stanford.edu/entries/spacetime-theories/

        Anyway, back to wobbly space-time. A non-relationist in-principle explanation also seems straightforward enough and again relies on Heisenberg. Virtual particles fleetingly popping out of the vacuum do have mass/energy temporarily borrowed from the vacuum (as demonstrated by the Casimir Effect). Therefore they must introduce temporary localised departures from asymptotically flat space — a.k.a. the wobbles. Just how it works mathematically, don’t ask me. 🙂 My maths was never *that* good.

        1. On what they teach in the schools, you’re talking to a guy whose degrees are in accounting and information systems, one who only took a few science electives and no philosophy ones, unfortunately. Most of what I know in these areas is self-taught, and like most autodidacts, has a lot of gaps.

          I have heard of Newton’s bucket (probably from Brian Greene or somewhere), but not the others (or like the Zeno stuff, I’ve forgotten about them).

          Those SEPs look interesting. Thanks!

          Right, so your thinking is that the wobbles might have quantum causes. That would explain the randomness. Sounds plausible to me, at least right now.

          1. No offence meant — I was parodying my own incredulity, as I hoped the quotes and the smiley would indicate. My own philosophical education consisted of Marxism-Leninism (mostly thankfully forgotten now!). As for history of science — zilch. I just had more time to read around, I guess. And some of the historical arguments are both fascinating and illuminating. Quite a few scientists seem to think that all of that can be safely ignored — a “things have moved on” kind of mindset. But I profoundly disagree.

            OTOH, you are much more informed than I am on current goings on. I guess I am just too busy reading up on the old stuff. 🙂

            I note, BTW, your apparent preference for determinism (please correct me if I am wrong). That surprises me. Not that I am committed to indeterminism, but I simply can’t see any reason to reject either option. In any case, some forms of determinism are in practice indistinguishable from indeterminism and at the same time quantum indeterminism leads to macro-scale “adequate determinism” for purely stochastic reasons. Plus, philosophically, it seems to me that neither option can be proved with any finality — one can always posit a deeper theory.

            And that in turn reminds me… Unlike you I don’t see interference phenomena as suggesting Schrodinger’s wave is real. As I see it, one can just as well say that matter behaves in a way for which waves provide the most useful model. And phenomenology always under-determines ontology. One can do Fourier transform analysis of species populations, which may be useful in some contexts; that does not mean actual waves are involved.

          2. No offense taken. Sorry if I implied it.

            I’m very interested in the history of science, but I have to admit that interest has rarely pushed me into reading primary sources. I’m usually content to read the summation of historians and science writers. It means that I overlook or forget many of the detailed debates. I agree that history should be studied, if for no other reason than to serve as a caution about what we think we know today, and how ideas that seem obvious now were once thought too outrageous to be tolerated in polite company.

            My preference for determinism is really more of an expectation. It’s just the way I think reality works, and I won’t think we have a real explanation until it’s deterministic. But as I noted to Paul, that doesn’t mean I won’t accept a predictive scientific theory with randomness in it. It just means I’ll be more skeptical than usual that it’s the final answer. I do agree with Paul that randomness seems inherently more complex than simple causal or interactive relationships, and complexity means not fundamental. But I’m also fully cognizant that the universe is under no obligation to make sense in the way a hairless primate expects it to.

            On interference, this gets down to what we consider “real”. For me, if something interacts with other things we consider real, if it’s part of the overall causal framework, and if our model of it is robustly accurate from many different perspectives and time points, then it’s real, at least at some level of description and approximation. It seems like if we’re prepared to accept tables, chairs, temperature, and baseball as real, then we’re not being consistent when we raise the bar so high that a robust predictive model isn’t counted as a description of something real. Note that this is different from saying it’s not fundamental, a stance I’m a lot more sympathetic with. For me “real” just means we should expect other “real” theories to reconcile with it, eventually.

  5. My thought on randomness as a brute fact in the world is that, if the assumption buys you better predictions, then fine. But other things being equal, it takes a little bit more complexity to describe a stochastic distribution than to describe a deterministic prediction. Prefer the simpler theory, if it can make the right predictions.

    I don’t quite get the advantage claimed for the “wobble” – does it make the revised GR compatible with the QM prediction in the Don Page and CD Geilker thought experiment?

    I predict that the left-hand drawing in the illustration of the Page-Geilker thought experiment is correct. The gravitational interaction puts the test particle in a superposition of pulled-left and pulled-right, until the whole system decoheres with the environment and/or the observing scientist. At which point, exactly one of those is observed. I also want to point out that if the large mass is a macroscopic ball of lead, it will immediately interact with photons and neutrinos and thus decohere with the environment. At least in this version, this is a thought-experiment only, not a real experiment waiting for us to invent more sensitive detectors.

    1. I would definitely accept a theory with randomness in it if it were predictive, but its provisional nature would loom much larger for me than it would for a fully deterministic theory. I’d actively expect it to be replaced someday with a more fundamental theory. Of course “some day” might be centuries from now. (Or never, if I’m just wrong about reality ultimately being deterministic.)

      I think the idea is that the wobble would frustrate the ability of the Page-Geilker experiment to tell us anything. Although I’m not sure it would work anyway.

      I’m with you that, based on Siegel’s description at least, I’d expect the left side to be what we’d see. But I think interpreting that as meaning gravity was classical would be a mistake.

      The test mass would be entangled (through gravitational interaction) with the mass particle. Even if we could avoid decohering the mass particle, the movement of the test particle would constitute a measurement, of “which way” information getting into the environment. It would tell us where the mass particle will be once it’s decohered. That would be no different than if we measured the spin of one of a set of spin-entangled particles. Even if the other particle remained coherent, we’d know what the result would be once it was decohered.

      You could keep both the test and massive particle in superposition. But now you can’t know which way the test particle is going. Until you measure it, but then you’ve released “which way” information. I don’t see a way to get the information without essentially causing a collapse / split / whatever.

      But I have to admit my grasp of their thinking is a bit murky. I may well be missing something important here.

      1. That’s a good point I hadn’t thought of – the test particle is also a potential trigger of decoherence. Especially because the scientists are intent on measuring it. Although, there is always “weak measurement”… dunno how the math on that would work here.

    1. I think the new wrinkle is random, unpredictable wobbles. Deterministic wobbles, ones that can be predicted assuming all the factors are taken into account, fit within traditional GR. That said, I haven’t attempted to parse the actual scientific paper for this, just some of the popular news articles and posts. (Ethan Siegel’s discussion lightly touches on the details.)

      1. How predictable would wobbles be near Earth with the other planets, minor planets, asteroids, moon, and sun? Sort of like the surface of a pool of water during a heavy rainfall with waves from multiple directions colliding. Might be indistinguishable from random.

        I read the explanation of the test, but it would seem to me any device sensitive enough to pick up randomness of any sort would also be sensitive to wobbles caused by gravitational waves from planetary and maybe galactic objects.

        1. Right. Isolating fundamental objective randomness from the randomness of complexity would be a major challenge. Supposedly the companion paper proposes an experiment that might be doable. I haven’t read it, and wouldn’t know enough to assess its plausibility anyway.

          I did very briefly skim the main paper. The theory seems to require objective wave function collapse (along the Penrose line of gravitationally induced collapse). Given that that is already on the ropes empirically, I can see why most physicists don’t expect this to pan out. Still, you never know until you look.

  6. I’m kind of comfortable with the universe being fundamentally random. The thing is, what happens with an individual electron (for example) may be random, but what happens with billions or trillions of electrons, taken all together, is quite predictable. So a universe that is totally random at the quantum level can still end up behaving like a deterministic universe at larger scales.
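
    A toy illustration of that averaging, in Python (just the law of large numbers at work, nothing from the proposed theory):

    ```python
    import random

    # Each individual outcome is random (+1 or -1), but the average over
    # many of them is sharply predictable.
    for n in (10, 10_000, 1_000_000):
        mean = sum(random.choice((-1, 1)) for _ in range(n)) / n
        print(f"n = {n:>9}: mean = {mean:+.4f}")  # hugs 0 as n grows
    ```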

    Of course if new evidence comes along to show quantum mechanics is deterministic after all, I’d be fine with that, too. This isn’t a hill I’m willing to die on. But if determinism is just an emergent property of a fundamentally random universe, I’m comfortable with that model.

    1. My own concern about quantum randomness isn’t determinism at the macroscopic scale. I have no issue with emergent laws. But as Paul Torek pointed out in this thread, randomness just seems more complex than a deterministically produced value. Of course, the key word there might be “seems”. I’m fully aware nature has no obligation to my biases.

      And I’m definitely comfortable with a determinism we may never be able to cash out. If QM is deterministic due to either Everettian or Bohmian theory being right, it would be a determinism we’d never be able to cash out, never able to do better than predicting probabilities for a single measurement. That’s operationally random, but with an explanation for the randomness. That said, if the only option available is randomness with no clear explanation, I’d definitely accept it, albeit provisionally.

      1. I don’t know. I guess this is my own biases speaking, but randomness seems less complex to me. I’m reminded of a SMBC cartoon where God created our universe as a homework assignment, and He threw in quantum randomness because He waited to make the universe until the night before it was due.

        So I guess my argument is not so much that randomness is simpler but rather that it seems like the lazier way to make a universe.

        1. Could be. My bias, I suspect, comes from computer technology, where we can get a random number (technically a pseudo-random number) anytime we want by calling an RND or RANDOM function. Simple to use. But underneath the covers that function uses existing mechanics (getting a seed from the system clock and other hard-to-predict info) to generate the number.
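
          For instance, in Python (the standard library's Mersenne Twister standing in for those RND functions):

          ```python
          import random
          import time

          # Seed the generator from the clock: the output looks random, but is
          # fully determined by the seed and the generator's mechanics.
          rng = random.Random(time.time_ns())
          print(rng.random())

          # Same seed, same stream: the "randomness" is entirely reproducible.
          print(random.Random(42).random() == random.Random(42).random())  # True
          ```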

          Of course, that’s a situation where the whole system is engineered to be rigidly deterministic, so generating randomness doesn’t come naturally. If nature is fundamentally random, then it won’t be a useful guide.

          In the end, all we can say for sure is that we never know whether we’ve hit true fundamentality, or whether there’s another theory waiting underneath. All scientific theories remain inescapably provisional.

          1. That kind of reminds me of hidden variable theories. As I recall, there have been a few different versions of that idea, but hidden variable theories still make testable predictions, and in experiments, those predictions never turn out to be correct. At least that has been the case so far. Quantum mechanics seems to be stubbornly random.

          2. I think most of us, when we first learn about quantum mechanics, instinctively reach for hidden variables. (It’s what most of the home grown theories that get discussed here amount to.) There just has to be something more there, we feel.

            But making hidden variables work turns out to be very hard, particularly when any attempt is made to reconcile the result with everything straight QM currently reconciles with. One reason I find the Everett approach interesting is that it doesn’t add any hidden variables, nor do away with the randomness operationally, but explains why it’s there.

            I wouldn’t rule out something similar for a random version of general relativity. (Although the currently proposed version seems dependent on an objective collapse model for QM, which appears to already be on the ropes experimentally.)

          3. Guys, if I may… You are conflating randomness with indeterminism. Randomness does not entail indeterminism. E.g. weak causal determinism (in which causal influences can simply skip forward in time) features ineliminable epistemic randomness. And a non-MW block universe is deterministic in the sense that there is just one future, yet it can allow ontic randomness (and must do so to be compatible with QM).

          4. Mike, I think our focus here was on fundamental randomness. Epistemic randomness I don’t see as an issue. For example, in natural selection, mutations are random, but that’s not fundamental, just an effective randomness for the theory. And epistemic randomness remains even for pilot-wave and many-worlds, even though they’re ultimately deterministic theories. (Superdeterminism is the only QM outlook I know of which advocates for a determinism that could ever plausibly be cashed out.)

          5. As far as I can see, in-principle-ineliminable epistemic randomness is in principle indistinguishable from the ontic kind. A weakly deterministic universe is deterministic, yet may appear to some degree random, however minutely you examine it.

            And let’s not forget that logical determinism (a.k.a. non-MW block universe) happily accommodates ontic randomness. Superdeterminism is not required.

  7. Hey Mike – as an aside, may I pick your brain? In my post/extract (Black Holes are Dangerous), am I right in saying a part of ‘special relativity’ is objects moving relative to each other? If not, what law describes that?

      1. Ahhh, I love your brain! I searched and searched but couldn’t find any neat single sentence that confirmed (or denied) it. Basic blunders like this don’t look good in a sci-fi story… even if it is a comedy.
