The idea that the universe is fully deterministic is one that many people hold on to tightly, even though science has made that view questionable since the 1920s. Things that happen with a particular quantum particle, such as an electron, can’t be predicted. We can only assign probabilities to particular outcomes. It’s only with vast populations of those particles that we begin to be able to make predictions. Determinism appears to be an emergent phenomenon.
Many strict determinists find comfort in the notion that since the uncertainties average out over large enough scales, we leave quantum uncertainty behind as we go up to the macroscopic scale. And we do, to some extent. It’s why we can use innumerable physical laws to make predictions. But quantum uncertainty does intrude in the macroscopic world. The very fact that we can do experiments that tell us about it is proof of that. The question is to what extent it bleeds into macroscopic reality in natural processes.
Even if it only does so in one in a trillion interactions (well within the uncertainty involved in any scientific measurement), chaos theory shows that, in complex dynamic systems, that one in a trillion outcome can snowball over time to make those systems unpredictable, even in principle. This means that complex dynamic systems such as the weather, economies, the human mind, and even sufficiently advanced computer systems, may have behavior that will never be predictable, at least not completely.
In my experience, those who do hold on to strict determinism either don’t understand the implications of quantum mechanics (I won’t accuse them of not understanding quantum mechanics itself, since even experts like Richard Feynman never claimed to have that understanding), choose to ignore those implications, or tightly grasp on to interpretations of quantum mechanics that supposedly preserve determinism, such as the MWI (Many Worlds Interpretation).
While I personally see the MWI as a candidate for reality, I’ve never been particularly impressed by the idea that it preserves determinism. What does it mean to say that reality is deterministic when everything possible happens, but we still can’t predict what we’ll observe, even in principle, along our subjective timeline? I’m not convinced that deserves the name “determinism.” It certainly isn’t very useful for predicting future observations.
Anyway, Siegel’s post is a reminder that we’re all human and fallible, including the geniuses who, sometimes despite themselves, have broken new ground that calls into question our most fundamental assumptions about reality. And that reality itself has no obligation to conform to our most ingrained expectations.
I’ve mentioned a few times before that I’m not a convinced determinist, at least not of the strict or hard variety. I have three broad reasons for this. The first is that I’m not sure how meaningful it is to say something is deterministic in principle if it has no hope of ever being deterministic in practice.
The second is quantum physics with its inherent uncertainties, uncertainties that we know bleed into the macroscopic world by the very fact that we know about their existence. If they didn’t bleed into the world, how would we macroscopic entities know about them? (If you hold to an interpretation of quantum mechanics that posits unobservable determinism, see my first reason.)
The third reason is chaos theory, the fact that the inherent uncertainties in any measurement we might make mean that many dynamic systems are indeterministic, even in principle. Along those lines, I discovered this documentary on chaos theory on Amazon today. If you have an Amazon account, and an hour to spare, it’s a fascinating show.
If you don’t have an Amazon account, or don’t want to spend an hour on it, this YouTube video is a pretty good shorter introduction to it.
Are you a strict determinist? If so, what do you think about chaos theory? Or quantum uncertainty for that matter?
The other day, I mentioned that I had some sympathy for the de Broglie-Bohm interpretation of quantum mechanics, namely an interpretation holding that there isn’t a wave-function collapse as envisioned by the standard Copenhagen interpretation, but a particle that always exists and is guided by a pilot wave.
The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.
Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features.
The article notes that most particle physicists aren’t impressed. Despite my sympathy for the pilot-wave interpretation, I can definitely understand why. These are experiments with fluid dynamics, not with actual quantum systems. There doesn’t seem to be any good reason to suppose that these fluid dynamics match the dynamics of actual quantum systems, other than the coincidence that their dynamics resemble one possible interpretation of those systems.
I recently finished reading Max Tegmark’s latest book, ‘Our Mathematical Universe’, about his views on multiverses and the ultimate nature of reality. This is the third in a series of posts on the concepts and views he covers in the book.
In the early twentieth century, one of the mysteries of science was the constant speed of light. The speed of light was constant no matter how it was measured. This was in contrast to the speed of sound, or the speed of just about anything else, which varied depending on the speed of the observer.
Albert Einstein accepted the experimental evidence of the constancy of the speed of light, and explored its implications. If the speed of light was always constant, then something else had to give. Something that factored into that speed had to vary, something like mass, length, and time. Exploring those implications led to the special theory of relativity.
For several decades now, one of the mysteries of science has been wave / particle duality. We have strong evidence that light behaves like a wave, and strong evidence that it behaves like a particle. We have similar evidence for electrons and just about any other subatomic particle, as well as atoms themselves and even large molecules under isolated conditions.
The shape of a wave is modeled in a mathematical concept called the wave function. The particle will appear somewhere in that wave. There is no known way to predict where in the wave any individual particle will be found. All that can be known are probabilities of it appearing at various locations within the wave. Bizarrely, once the position of the particle is observed, once it is measured, all trace of the overall wave instantly disappears, with only the particle remaining.
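This relationship between the wave function and measurement outcomes (the Born rule: the probability of finding the particle at a location is the squared magnitude of the wave function there) can be sketched in a toy simulation. The amplitude values below are illustrative, not drawn from any real physical system:

```python
import random

# A toy discretized "wave function": amplitudes at five positions.
# (These values are illustrative only, not from a real system.)
amplitudes = [0.1, 0.3, 0.8, 0.3, 0.1]

# Born rule: the probability of finding the particle at each position
# is the squared magnitude of the amplitude, normalized to sum to 1.
norm = sum(a * a for a in amplitudes)
probabilities = [a * a / norm for a in amplitudes]

def measure():
    """Simulate one measurement: the particle appears at exactly one
    position, chosen at random with the Born-rule probabilities."""
    return random.choices(range(len(amplitudes)), weights=probabilities)[0]

# A single measurement is unpredictable...
print("one measurement lands at position:", measure())

# ...but a large ensemble of measurements reproduces the distribution.
counts = [0] * len(amplitudes)
for _ in range(100_000):
    counts[measure()] += 1
print("observed frequencies:", [round(c / 100_000, 2) for c in counts])
```

No individual run can be predicted, but the frequencies over many runs converge on the Born-rule probabilities, which is exactly the sense in which only probabilities can be known.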
Just to be clear, this is freaky strange, and no one is certain why it is so. Reality at the quantum level appears to be wavelike, to the degree that the wave can physically interfere with itself when split, but suddenly, instantly, becomes particle like when we look at it. As strange as it is, this has been confirmed for decades by extensive experimental data. It is reality.
There are a number of interpretations of what is happening. The oldest, and for a long time the most popular, is called the Copenhagen Interpretation. It is basically a minimalist interpretation that says that this is simply reality, and that when a particle’s position is measured, the wave function “collapses”. Prior to the collapse, the particle exists in what’s called a superposition. It exists in multiple locations at the same time, but once the position is known, the existence of the particle in all but one of those locations disappears.
There are several other interpretations. All of them must throw one or more aspects of common sense reality under the bus in order to make sense of the data.
In the 1950s, Hugh Everett came up with a new interpretation. Everett accepted the mathematics of the wave function, but was troubled by the lack of anything in those mathematics that predicted a wave function collapse. The only reason that the wave function collapse is thought to exist is the fact that we only observe the particle in one location once it is measured.
Everett asked, what happens if the wave function, in fact, never collapses? If the wave function predicts two locations for the particle, then the mathematics say the particle is in both locations. Of course, we don’t observe it to be in both locations. So then, what’s going on? Similar to when Einstein was contemplating the constant speed of light, something else has to give.
According to the mathematics and our sensory data, we should see the particle in only one of the locations and we should see it only in the other one. No, the second “only” in the previous sentence is not a typo. We appear to have two realities in which we observe the particle. Prior to the measure, there was only one reality. After the measure, there are two.
In multiverse parlance, the many worlds interpretation asserts that our universe is cloned every time what appears to be a wave function collapse happens. Given that this happens an uncountable number of times per second throughout the universe, and given the large range of possibilities for each particle’s position, the number of universes being created every second is sublime.
The randomness of the particle’s location is then an illusion, created by the fact that we only observe the location particular to our universe. But the wave function unfolds unabated, with the particle existing in each location in a different universe.
This means that there are uncountable numbers of you in these alternate universes, where each quantum result is manifested. In other words, every random event that could happen, happens in some universe, and there are uncountable versions of you living every conceivable version of your life.
In Tegmark’s framework, this is the Level III Multiverse. It is a superset of the Level I and II multiverses, although as formulated, there’s no particular reason that its existence, or non-existence, is dependent on the other ones. If all three levels exist, then Level III includes all the multiverses in the lower levels and reality continues to expand at an astounding rate.
Tegmark does note some similarities between the Level I and Level III multiverse. In both, there are an infinite number of you living every possible variation of your life. The result of every quantum possibility should be manifest in one of the Level I universes. Of course, if they were one and the same, it would mean that remote regions in the Level I multiverse were in some way quantum entangled with each other.
Tegmark also speculates about reconciling the Level II and III multiverse, but doesn’t currently see a way to do it.
Over time, support for the many worlds interpretation has grown in the particle physics community, although Copenhagen continues to hold a plurality in most polls. The question is, is there any way to test this idea? Brian Greene in ‘The Hidden Reality’ identified the possibility of the uncollapsed wave interfering with itself across universes, although he notes that observing this would be extremely difficult.
Tegmark proposes another one, although it’s not one that anyone is liable to volunteer for. The quantum suicide or subjective immortality thought experiment involves setting up a gun with a trigger set to fire if a random quantum event takes place, with a 50% chance of taking place in the first second. The experimenter then puts their head in front of the gun.
In 50% of the universes, the experimenter dies within the first second, but in the other 50%, they live. For each second, the probability of the experimenter being alive goes down. After a couple of minutes, the probability of the experimenter still being alive is infinitesimal. However, in at least some portion of the alternate universes, the experimenter lives on.
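The arithmetic here is straightforward: with a 50% chance of the gun firing each second, the probability of still being alive after t seconds is 0.5 to the power t. A few lines make the "infinitesimal after a couple of minutes" claim concrete:

```python
# Probability the experimenter is still alive after a given number of
# seconds, with a 50% chance of the gun firing in each second.
def survival_probability(seconds):
    return 0.5 ** seconds

print(survival_probability(1))    # 0.5 after one second
print(survival_probability(10))   # about 1 in 1,000 after ten seconds
print(survival_probability(120))  # roughly 7.5e-37 after two minutes
```

After two minutes the survival probability is on the order of 10^-37, yet in the many worlds picture some branch always realizes that outcome.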
From the subjective point of view of the experimenter, the longer they live, the higher the probability of the many worlds interpretation being true. After a few hours, increasingly unlikely events (misfire, power outage, meteor strike, etc) begin to happen to prevent their death. If an experimenter subjectively survived this ordeal for several hours, they could have a high degree of confidence in the many worlds interpretation. (Of course, in virtually all universes, they would leave behind grieving friends and family who would be less convinced.)
Tegmark then points out that, if either the many worlds interpretation or the infinite space scenario is true, then a version of each of us will, despite its improbability, live long enough to outlast all of humanity. In other words, if it is true, then subjectively you will live long enough to know it is true, at least assuming you recall reading this. Each of us may live to be the last human in our own improbable universe, knowing the truth of the multiverse.
In the next post, we will get into the main idea of Tegmark’s book, the mathematical universe hypothesis.
Conversation on yesterday’s post on free will has me thinking about determinism.
First, what is determinism? According to Merriam-Webster, my favorite dictionary because they seem to be extremely good at cutting to the chase, determinism is defined as:
a theory or doctrine that acts of the will, occurrences in nature, or social or psychological phenomena are causally determined by preceding events or natural laws
So, the basic idea of determinism is that everything has a cause, and that any appearance of uncaused actions, such as mental ones, is an illusion. As I argued in my free will post, I don’t think that libertarian free will is necessarily tied to determinism, but it’s pretty evident that a lot of people lean heavily on determinism as a reason for rejecting it.
Is that reliance justified? Can we say with certainty (to the extent we can say anything with certainty) that everything has a cause?
The standard answer to that question is, no, we can’t. Quantum events simply don’t behave according to our classic notions of how the world is supposed to work. Many quantum events appear to be random and uncaused. The foundations of determinism rest on indeterministic ground.
Why then are the laws of physics, above the quantum level, generally deterministic? As it turns out, single quantum events can’t be predicted, but large numbers of them follow statistical patterns. From these statistical patterns, deterministic regularities, natural laws, emerge.
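This emergence of regularity from randomness is just the law of large numbers at work, and it can be sketched with a toy stand-in for quantum events (a fair random bit here, purely illustrative):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def quantum_event():
    """A single 'quantum event': individually unpredictable.
    Modeled here as a fair random bit, a toy stand-in only."""
    return random.random() < 0.5

# Any one event can't be predicted, but the average over a large
# population converges to a stable, law-like value (here 0.5).
for n in (10, 1_000, 1_000_000):
    fraction = sum(quantum_event() for _ in range(n)) / n
    print(f"{n:>9} events: fraction = {fraction:.4f}")
```

With ten events the fraction wanders noticeably; with a million it sits very close to 0.5. The "deterministic" regularity lives in the statistics, not in any individual event.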
Determinism is emergent. The layers below it are not deterministic. It is built on top of those indeterministic layers. Another way of looking at it is that determinism is a model of complex indeterminate events. A hard determinist should ponder that for a minute.
But, the fact remains that we have centuries of science which show that, above a certain layer, things are deterministic. And once things are deterministic, that should mean that all of the higher layers of abstraction (chemistry, biology, neuroscience, weather systems, etc) should themselves be deterministic, right?
Let’s back up and ask again what we mean by deterministic. What do we mean when we say something can be determined? Determined by whom? If no one could conceivably determine what a complex dynamic system will do, if it is only deterministic in principle, is it really still deterministic? What does it even mean to say that something is deterministic in principle?
Chaos theory is a field of study on the inherent limitations of making determinations on complex dynamic systems. The core idea of chaos theory is that no measurement is infinitely accurate. Anyone who has ever taken a high school or college science lab knows that every measurement comes with a margin of error.
Increasingly precise equipment reduces that error, but can never eliminate it entirely. As different measurements are taken into account, the errors multiply and snowball. Add lots of complexity and constant change, and some systems become inherently unpredictable, inherently indeterminate.
Most people have heard of the butterfly effect, the idea that a butterfly flapping its wings can eventually cause a hurricane somewhere else in the world. Small effects snowball into larger effects. And the small effect of errors in measurement snowball into unpredictability.
Of course, a determinist could insist that a perfectly omniscient being, such as Laplace’s demon, could still predict the effects of such a system. Quantum effects might be random, but that randomness doesn’t “bleed” out past the deterministic layer.
Except that it does.
Of course, the very fact that we are aware of quantum events shows that they do in fact bleed out into the macroscopic world, otherwise how would we be aware of them? But a clearer example is a famous thought experiment, Schrodinger’s cat.
Schrodinger imagined a cat in a box, with a device designed to release a deadly poison to the cat if a quantum event takes place. If we put the cat and the device in the box and close it, then wait until the quantum event might have taken place, the cat could be either alive or dead.
I’m going to skip any Copenhagen interpretation discussion here, but the important thing to realize is that the cat’s death or survival is not a deterministic event. The cat’s fate is not completely causally determined by events before it was placed in the box. Quantum events have affected a macroscopic event that should have been completely deterministic.
Although Schrodinger’s cat is a thought experiment, there have been real experiments (not endangering cats) to test the principle. It is a reality. Quantum events can bleed into the macroscopic world.
Of course, given the degree that we’re able to determine physical laws, such bleeding in nature must be incredibly rare and nuanced. But we can’t rule out that it happens within the margins of error in our measurements. And as mentioned above, the effects don’t have to be pervasive to eventually snowball in complex systems.
Determinism emerges at a certain layer. There is no guarantee that it persists above that layer. The effects of quantum uncertainty may be small at the deterministic layer, but it may be enough to cause indeterminate events to emerge at higher layers.
Determinism isn’t the slam dunk many assume it to be.