The main difference between a quantum computer and a classical one is the qubit. Qubits are like classical bits, in that they hold binary values of either 1 or 0, on or off, true or false, etc. However, qubits, being quantum objects, can be in a superposition of both states at once. The physical manifestation is often something like a particle in either a spin up or spin down state.
(This is true for digital quantum computing, where a discrete state is necessary. There is also analog quantum computing, which presumably works with other properties that are more continuous.)
We might write the superposition of a qubit as:

α|0〉 + β|1〉
meaning it can be in a superposition of both 1 and 0 at the same time. So far so boring. But if we add a second qubit and have the two interact, we now have two entangled quantum objects which, together, can be in a superposition of four different states, which we might write as:

α|00〉 + β|01〉 + γ|10〉 + δ|11〉
In other words, adding a second qubit doubled the number of parallel states they can collectively be in. If we add a third qubit into the mix, which also, through interaction, joins the entanglement, we get this list of states in the superposition:

|000〉, |001〉, |010〉, |011〉, |100〉, |101〉, |110〉, |111〉
It’s important to understand that these are superpositions, not alternatives. The three qubits, until a measurement is done, can be in all these states at the same time. If we increase the number to ten qubits, then the overall system can be in 2¹⁰, or 1024, states at the same time. (Which I won’t attempt to lay out.)
The Google quantum computer that demonstrated quantum supremacy (over classical computers) was reported to have 53 qubits, which in principle meant it should have been capable of being in 2⁵³, or about 9 × 10¹⁵, states concurrently. This is the power of quantum computing. It allows a level of parallel processing not possible with classical systems.
A 300 qubit system would be able to be in a superposition of more states than there are particles in the observable universe. Consider this. Where are all those states? According to quantum mechanics, they’re all right there, in those 300 particles.
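Purely as arithmetic, these state counts are easy to check. A quick Python sketch (the 10^80 particle count is the usual rough estimate for the observable universe):

```python
# State counts for n entangled qubits: 2**n basis states in the superposition.
def n_states(qubits: int) -> int:
    return 2 ** qubits

print(n_states(10))             # 1024
print(n_states(53))             # 9007199254740992, roughly 9 x 10^15
print(n_states(300) > 10**80)   # True: more states than particles in the observable universe
```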
Well, at least under interpretations that consider the wave function to model something real. The question is, under the interpretations that don’t, how do they account for these kinds of systems? One thing I’ve read indicates that maybe the systems aren’t really running in parallel. Maybe they’re just executing a far more clever algorithm, and the wave function mathematics are just a convenient mechanism to keep track of it. This move seems, to me, increasingly dubious as the number of qubits increases.
The interesting question is, what happens when the overall system is measured? In all interpretations, that act only provides access to one of the states, with no control over which one. A successful quantum circuit has to promote the desired answer so that it dominates the states at the end of the run.
But it’s interesting to think about what happens under each interpretation. Before doing so, it’s worth noting the raw physics of the situation. When a measurement begins, the quantum particles / waves / objects in the measuring device interact with the quantum objects, the qubits, in the quantum circuit. There’s no real distinction between the atoms in the quantum circuitry and the ones in the measuring system. In most interpretations, what changes is the sheer number of interactions involved.
Under the Copenhagen interpretation, the involvement of macroscopic classical mechanisms causes the massive superposition of states to collapse to one classical state, although Copenhagen seems agnostic on the exact mechanism. Various physical collapse interpretations see the wave physically reducing to a single state. Under the pilot-wave interpretation, there were always waves and particles, with the waves guiding the particles, and interaction with the environment causes the wave to lose coherence so that the actual particle states are now accessible. (At least I think that’s the way it would work under pilot-wave.)
The sequence under relational quantum mechanics (RQM) seems particularly interesting. If I’m understanding it correctly, each interaction results in a collapse, but only relative to a particular system. So from the second qubit’s perspective, its interaction with the first qubit causes it to collapse. But from the third qubit’s perspective, the first two qubits are in superposition until the interactions reach it. This sequence of disagreements continues all the way through the chain of interactions. Of course, from the measuring device’s perspective, nothing has collapsed until it interacts with the system.
This seems similar to the sequence under the relative state formulation, also known as the many-worlds interpretation (MWI). The difference is under this interpretation, the disagreements are resolved into an objective reality. Of course, the only way to resolve them is to have a copy of qubit 2 seeing qubit 1 in its 0 state, and another copy seeing it in its 1 state. All of these copies exist in their own branch of the superposition.
Under both RQM and MWI, nothing fundamental changes on the event we label as “measurement.” The physical processes just cascade into a larger environment. Under RQM, this is handled by the stipulation that all states are only meaningful relative to a particular system, and that no universal description is possible.
MWI instead simply sees the superpositions continue to cascade out in an unending process. As the number of quantum objects involved skyrockets, the phase relation between the branches of the superposition, which allowed for interference between them, begins to alter. As the number of constituents increases, each branch’s phase becomes increasingly distinct, isolated from the others, until they no longer interfere with each other. Each becomes causally isolated, its own separate world.
Some quantum computational theorists see the success of quantum computing as evidence for the MWI. Others point out that each of the other interpretations can provide an accounting. What that success does seem to do is put pressure on the interpretations that have an anti-real stance toward the wave function. As noted above, the idea that those computations aren’t physically happening in parallel somewhere seems dubious.
Unless of course, in my admittedly very amateurish musings here, I’ve missed something. In particular, is there a stronger anti-real account that I’m overlooking? Are there problems with the other interpretations that do take a realist stance?
23 thoughts on “Thoughts about quantum computing and the wave function”
If reading Scott Aaronson’s blog has taught me anything it’s that I’m way out of my depth when it comes to Quantum Computing.
I am increasingly convinced understanding the physicality of superposition is a major key to understanding quantum mechanics. What does it really mean for a qubit to be in superposition? It’s vexing because every time we look at it, it isn’t.
(One thing about the MWI connection. If we build a quantum computer that gives us results we see as correct, what results are the other worlds getting? AIUI, generating a result requires multiple runs, which get differing results, and selecting the most common result as correct. Does the math work out so the overwhelming number of worlds get valid averaged results?)
Have to admit I find much of what Aaronson writes on QC impenetrable, although I found his remarks in a podcast interview by Sean Carroll to be reasonably clear. My understanding comes from the introductory chapters of some QC books, as well as the occasional portion of an Aaronson post I manage to follow.
Superposition seems easy to understand in a scenario like a double slit experiment, but when it involves spin, and then with the sky high permutations possible in QC, I lose any ability to visualize it. Niels Bohr said that the quantum realm is inaccessible. Even when we talk about “looking”, it’s a measurement followed by inferences about what’s there. There are times when I can understand the “shut up and calculate” stance.
I hadn’t heard about the multiple run thing. Aaronson notes that you obviously can’t just let all the parallel computations run to completion, since you have no control over which one comes up in a measurement. The circuitry has to promote the correct answer. (This last point almost always gets omitted in news article descriptions, which left me confused for a long time about how this was supposed to work.)
Under the MWI, if you have to do multiple runs, then I think it would work out that in the vast majority of worlds, they would get the most common results as the most common, but each world would get them in a different order and in different individual runs. A small number of worlds would be unlucky and not get the right answer as the most common. A tiny fraction would never get it. In our subjective timeline, I think we’d see these sequences as very low probability events that, with enough runs, would come up, but very infrequently. Of course, some infinitesimal portion of worlds would be extraordinarily unlucky, never get correct results, and end up concluding quantum computing just doesn’t work.
I downloaded Aaronson’s Quantum Computing Lecture Notes, although I haven’t had time to do more than glance at them. They do seem to contain enough basics to make for interesting reading.
Superposition is one of those things that, the more one looks into it, the weirder it seems. What’s happening in the two-slit experiment or with electron spins or any other superposition is deeply mysterious. (It occurred to me just yesterday that “shut up and calculate” is a humble point of view that admits how clueless we are.)
QC algorithms are twisted. Not like classical computing at all. The algorithm and circuitry conspire to create a high weighting (probability) of the desired answer. Shor’s algorithm, for instance, arranges that the prime factors are the most likely numbers to be measured. (It’s actually way more twisted, but that’s essentially what happens.) The multiple run thing comes from there being some probability of getting an incorrect answer.
As you say, under MWI some branch can’t get QC to work right at all.
One thing that’s interesting in adding bits is that the Gaussian gets narrower and narrower. In MWI, it means more and more branches cluster around the average and get expected results, but there are always a handful at either extreme (all 1s or all 0s) as well as weird ones getting strange patterns — even “messages.” Imagine creating a quantum random generator that spits out bits saying, “Hello, World!” in ASCII! 😀
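That narrowing is just binomial statistics. A rough simulation (standard-library Python; the run count and seed are arbitrary choices) shows the spread of the fraction of 1s shrinking roughly as 1/sqrt(n) as bits are added:

```python
import random
import statistics

def spread_of_ones(n_bits: int, runs: int = 2000, seed: int = 42) -> float:
    """Standard deviation of the fraction of 1s over many runs of n_bits fair flips."""
    rng = random.Random(seed)
    fractions = [sum(rng.getrandbits(1) for _ in range(n_bits)) / n_bits
                 for _ in range(runs)]
    return statistics.stdev(fractions)

# The spread falls as bits are added, so ever more runs cluster near the average:
for n in (10, 100, 1000):
    print(n, round(spread_of_ones(n), 3))
```

The all-1s and all-0s extremes never vanish; they just get exponentially rarer, which matches the "handful at either extreme" picture.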
This short video gives a sense of what’s involved in using QC to factor numbers using Shor’s algorithm:
This video quickly surpasses my ability to keep up, but the sense I get is that the multiple runs are to narrow the answer. (The sense I get might be hopelessly wrong.)
Just glanced at Aaronson’s notes. I thought I had looked at them before and found them impenetrable, but, at least the early sections seem pretty approachable. Thanks!
Superpositions are definitely weird. At one level, it’s just wave mechanics. But of course, QM waves are far weirder. But no interpretation that I know of banishes them. It’s just a matter of when or if they collapse.
One of my QC books has Shor’s algorithm in it. I was expecting something like pseudocode. Instead I got a sea of diagrams and bra-ket linear algebra. Apparently, to channel Boromir, one does not simply program a quantum computer.
“Imagine creating a quantum random generator that spits out bits saying, “Hello, World!” in ASCII!”
Pretty sure someone would conclude there had been a malfunction. But in some very improbable worlds, it would persist with additional messages. Then people might think God or aliens were talking to them.
Are you familiar with the experiment where you take two polarizing filters oriented at 90° to each other, which blocks 100% of the light, but inserting a third filter between them, oriented at 45°, suddenly allows (a surprising amount of) light transmission? That’s another superposition weirdness and a nice visual demonstration of quantum mechanics in action.
AIUI, what makes QC algorithms so tricky is exactly what you saw, the need to create a quantum state of the right kind. All that math is involved in that (as you saw, there’s a bunch in the video). I get the impression that, if quantum mechanics seems extremely mathematical (and it surely is), then quantum computing is like distilled pure quantum math. One has to master quantum mechanics and computer science and information theory.
MWI does seem to insist all possible branches must always occur, but that seems to require a true continuum. If reality isn’t quite so continuous, perhaps the the really extreme branches get rounded off.
I have heard about that filter experiment. As I understand it, the dynamics are similar to the ones you described for spin measurements. The 45 degree filter causes the other orientation measurement to go into superposition, leading to a portion of the photons now able to pass the last filter.
Interesting point on the MWI and continuous. Carroll in his book admits there’s nothing to indicate the granularity of the splitting for (seemingly) continuous variables. He admits it might be infinite, an answer I think would be problematic. If there is some minimal granularity, then it might cut off the most extreme scenarios (like a universe with no apparent entropy). A lot would depend on how big the unit of wave fragment (or whatever) would be.
There’s an aspect to the polarizing experiment compared to the electron spin experiment that confused me, so maybe it’s worth some details.
Electron spin on orthogonal axes is a conjugate measurement. Making one invalidates the other. So measuring an electron on the Y-axis and getting, say, UP invalidates any information about the X-axis, and the electron can be seen as being in a superposition of LEFT-RIGHT. Measuring that same electron on the X-axis and getting, say, LEFT invalidates the UP information. Now the electron is a superposition of UP-DOWN, so a further measurement on the Y-axis returns a “random” result.
But with polarizing filters, the vertical and horizontal axes are associated. A photon that passes through, say, a vertical filter now has vertical polarization. A further vertical filter will pass it with 100% probability. Likewise, a horizontal filter will block it with 100% probability.
The wave-function here is such that the probability of a photon in a known polarization state passing through a filter is the square of the cosine of the angle between the photon’s polarization and the filter. That’s why the same filter passes 100% and an orthogonal one passes 0% — cos(0°)^2=1.0 and cos(90°)^2=0.0.
For a filter at 45° it’s cos(45°)^2=0.50, so there’s a 50% chance the photons pass through that middle filter.
And now they’re polarized at a 45° angle, and the third filter is 45° with respect to the photon polarization now, so again 50% pass through. That middle filter changes the state of the wave-function. Because of the last filter, the photons that do pass through are known to be horizontally polarized. (This was from memory, but I believe it’s right.)
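The whole three-filter effect falls out of repeatedly applying that cos² rule. A minimal sketch (Python; ideal lossless filters assumed, and probabilities are relative to photons that cleared the first filter):

```python
import math

def pass_probability(filter_angles_deg):
    """Probability a photon passes a sequence of ideal polarizers,
    given it exits the first one polarized along its axis."""
    p = 1.0
    for a, b in zip(filter_angles_deg, filter_angles_deg[1:]):
        p *= math.cos(math.radians(b - a)) ** 2  # cos^2 of each relative angle
    return p

print(pass_probability([0, 90]))      # crossed filters: essentially 0
print(pass_probability([0, 45, 90]))  # inserting the 45-degree filter: 0.25 gets through
```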
The continuum raises all kinds of weird problems in physics, but our technology doesn’t get us into the zone where it really kicks in. Even Newtonian physics goes nuts at the limits. (For instance: masses are represented by points with no dimension, so on some level nothing can ever actually collide.)
Regarding “wave fragment” — the fundamental premise of MWI involves the reality of the universal wave-function, which is a superposition of all possible realities. So what we’d mean by fragment is a specific solution to the wave-function representing that reality. That, along with all other possible solutions, would comprise the universal w-f.
One thing I’m not clear on is whether the solution can be local or must involve the entire universe. Consider Alex doing quantum experiments that definitely result in branches. Is baby Alex a superposition of all the possible experiments in the future? Does it start that day? If it’s a universal description, then it almost seems like it has to begin with the universe (which as I mentioned elsewhere might be where Deutsch got the idea that realities already exist).
Double-checked and the above is correct. I wanted to be sure I hadn’t gotten it mixed up with other angles, which is where things get really weird.
A second filter at 22.5° has just over an 85% chance of passing a photon. If the third filter is 45° from the first (making it another 22.5° from the second), there is again an 85% chance of passing a photon.
So with just the two filters 45° apart, 50% of the photons that clear the first also pass the third, but insert between them a filter at 22.5° and the chance of a photon clearing both later filters is 0.85×0.85, which is just under 73%. So, again, that middle filter allows more photons to pass because filtering changes the wave-function.
The weird part is the math. If 90° allows 0%, and 45° allows 50%, and 0° allows 100%, it seems reasonable to think 22.5° would be 75% but quantum math predicts, and experiments show, it’s 85%.
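That 85% figure is quick to verify against the naive linear guess (Python):

```python
import math

def cos2(angle_deg: float) -> float:
    """Squared cosine of the angle: the quantum pass probability."""
    return math.cos(math.radians(angle_deg)) ** 2

print(round(cos2(22.5), 4))               # ~0.8536, not the linear guess of 0.75
print(round(cos2(22.5) * cos2(22.5), 4))  # ~0.7286 through both 22.5-degree steps
```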
BTW, I’d mentioned an aspect of the energy problem with MWI, and the video I just watched to double-check my understanding of the above put it a mathematical way.
The polarization of a single photon, let’s say polarized at 45°, is also a superposition of vertical and horizontal polarization. It’s usually notated something like α|↔〉 + β|↕〉 where α and β would both be 1/sqrt(2) — whose norm squared gives 0.50 probability.
But if the photon is one EMF quantum — the minimum possible energy — then the superposed photons each have an energy of 1/sqrt(2), which isn’t allowed. Seeing them as photons in actual worlds seems to violate the fundamental premise of quantum mechanics.
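The amplitude arithmetic itself is uncontroversial and easy to sanity-check (Python):

```python
import math

alpha = beta = 1 / math.sqrt(2)  # equal amplitudes for the two polarizations
prob_h = abs(alpha) ** 2         # Born rule: probability is the norm squared
prob_v = abs(beta) ** 2

print(round(prob_h, 10), round(prob_v, 10))  # 0.5 each
print(round(prob_h + prob_v, 10))            # 1.0, as normalization requires
```

The disputed question is only what the 1/sqrt(2) amplitude physically means, not what it computes.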
It’s always something. That second superposition term should be: β|↕〉
I think I fixed it, but let me know if I did it wrong.
Interesting stuff on the polarizing measurements.
On baby Alex and superpositions, I’m not sure if I’m understanding what you’re asking here. But as I understand it, under MWI, the universal wave function is rigidly deterministic. We could view it as a gigantic quantum clockwork universe. So everything would be predetermined at the beginning of the universe. It’s worth noting that in our emergent classical world, we’d still be faced with randomness, since in our subjective timeline, there’s no way to know which branches we’d end up on.
On the energy thing, I suspect the MWI response is it would be okay because, relative to everything else in that branch, it would still be at the minimum. That said, I’m not confident in that response. The energy issue still concerns me, although I’m increasingly confident it’s a matter of education on my part, not a real issue. I’m pretty sure if there was any chance something like this was an issue, MWI would have had to deal with it decades ago. It feels like something fundamental that trained physicists are quickly able to resolve. From what I can see, energy questions always come from people like us, not physicists. That said, I’d still like to understand why it’s not an issue.
Yep, that’s the fix. Weird how the vertical double-headed arrow is a fat icon while the horizontal double-headed arrow is normal. Bit of Unicode oddness there.
All wave-functions are fully deterministic. What I was getting at is that Everett’s formulation seems clear the branch occurs when the quantum system and the measuring device interact. His math certainly suggests it. But you do almost have to go back, even to the beginning of the universe, to make it work.
Consider the beam-splitter we discussed. On Everett, the branch seems to occur when the photon can either reflect off or transmit through the mirror. That’s the interaction. But I think the wave-function describing this involves a superposition of a photon going from the laser, off the mirror, and to a detector plus a photon going from the laser, through the mirror, and to the other detector.
My question was whether we need to go even further back and deal with Alex’s wave-function from birth. Or the universe’s wave-function from birth. (Assuming it even makes sense to talk about the wave-functions of macro objects. I’m not convinced it does.)
“It feels like something fundamental that trained physicists are quickly able to resolve.”
One would sure think so, but why isn’t the topic ever addressed? Is it possible it’s just being ignored and waved away?
How is a single quantum of energy split between supposedly real, physical worlds into two different photons? (FWIW, until I get a decent answer to this, I’m going to consider MWI falsified.)
On the arrows, what’s weird is that they’re consistent in the notification email I got on your comment, but not when I look at it on the website. Since they’re both in Chrome, I have to conclude it has something to do with something in WordPress.
On when the split occurs in MWI, the thing to remember is that we’re just talking about the wave function with no collapse. So the superposition of the measured system spreads into the environment. When the split happens is relative to an observer: it happens when the quantum objects of which the observer is composed go into superposition. Or you could focus on when interference between the branches of the superposition is no longer detectable (decoherence), but even that isn’t a cut and dried process and depends on the equipment.
On energy, the conservation of energy question comes up a lot, so I often see it addressed, similar to this:
This doesn’t address the quanta issue. I’ve been scouting around on this question. I’ve come across a few people answering questions noting that energy is not always quantized. One person said it depended on whether it was “bound”. Frustratingly, these weren’t authoritative sources. And it’s not clear how it translates into the energy in branches of the wavefunction.
Oh, I have no illusions. I know when you talk about the MWI, you’re talking about a theory you’re convinced is wrong. 🙂
As I’ve said before, I’m not convinced it’s right, but I am convinced it’s plausible.
It’s definitely something WP is doing — they’ve substituted a PNG image for the HTML character, which is supposed to be the Unicode character \u2195. Why, WP, why? (And why not both, then? The horizontal double-arrow is the adjacent character \u2194.) [sigh]
“On when the split occurs in MWI, the thing to remember is that we’re just talking about the wave function with no collapse.”
I’m aware. 🙂 To me it raises yet another question: everything we experience is the result of what the CI would see as a measurement. But since none happen in the MWI, how do we experience anything? It seems the wave-function has to include modifying our brains as if we had made observations.
“So the superposition of the measured system spreads into the environment.”
Giving a local account, but my understanding (such as it is) is that the wave-function describes the full evolution of the system, which would seem to require including its history.
So we’d have Ψ1 which describes a universe with an Alex that makes measurements. There is also a Ψ2 which describes a universe with an identical Alex up to the point of a specific experiment on a specific day. I believe MWI would say Ψ1 and Ψ2 are coherent up to the point of that specific experiment. But they are slightly different wave-functions such that, at that particular time, they diverge and decohere. From then on they differ.
That make sense? Each Ψ involved in the universal combined Ψ describes an entire universe.
“This doesn’t address the quanta issue.”
Yeah, and the concept of “thin” seems like a huge hand-wave. The whole argument seems precarious to me.
“Bound” probably refers to bound states which are quantized due to the standing waves inherent in their confinement. It’s also true that energy levels are on a continuum in the sense that a given set of quantized states might have any ground value (all excited states will be some fixed multiple of that).
But the quantization of energy (Planck’s quanta) is what started QM to begin with.
The thing about a photon is that its energy is fixed by its wavelength (E = hc/λ), which means its energy is its “color.” So if superposed photons comprise the photon we’re dealing with, and if those superposed photons are in real worlds with real physics, and seem identical (haven’t changed color), then there seems a very serious contradiction to physics here.
On observation and measurement, I’m not following your questions. I don’t know of any reason to conclude that observation and measurement don’t happen under the MWI. (I suspect I’m misunderstanding what you’re saying here.) They just don’t lead to a wave function collapse. They also, in and of themselves, don’t lead to spreading superpositions, since those happen anyway. A measurement, by magnifying the effects of a particular quantum event, does magnify the differences that will exist between the branches of the superposition.
On the wave function and universes, again I may not be following what you’re saying. Under MWI, it seems like talking about wave functions within universes is to talk about specific branches of the overall universal wave function (or the multiversal wave function if we’re going to use “universe” to refer to one of the branches). Of course, there is Deutsch’s account, which considers all the universes to already exist, but I’m not sure exactly how to reconcile his version with what I understand about the wave mechanics.
The hand wave critique strikes me as an assertion that important details are being glossed over. What details do you see missing here?
On the photon and the relations of its energy level to its effects, the argument as I understand it is that in each branch, those relations are preserved. The overall energy of each branch is proportional to the amplitude squared of that branch. So the photon means the same thing to our retinal light cones because the energy level of those cones is scaled in the same manner as the photon for that branch.
“On observation and measurement, I’m not following your questions.”
I’m not entirely sure I follow it either; it’s something I’m chewing on. There’s something about the nature of observation without collapse that seems somehow incoherent to me. It kinda keys off something I recall Everett saying in his paper about memories.
“Under MWI, it seems like talking about wave functions within universes is to talk about specific branches of the overall universal wave function (or the multiversal wave function if we’re going to use “universe” to refer to one of the branches).”
Yeah, there’s potentially confusing terminology here. I tend to lean towards “universe” as including everything — all worlds. I use “world” to mean a specific solution to the universal wave-function. It would look something like:
Ψ_universe = c1|Ψ1〉 + c2|Ψ2〉 + … + cn|Ψn〉
What I’m not sure about is whether a wave-function goes along and then branches, or whether it is necessarily, all along, a superposition of multiple world lines that at some time, t, diverge. I’m getting a sense it’s the latter. If so, each term in the superposition of Ψ_universe is an entire world.
This is probably where Deutsch is coming from. Reality is the wave-function, and it has always contained a superposition of all possible worlds. Some diverge soon, others later.
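As a toy rendering of that sum (Python; the branch amplitudes here are invented purely for illustration), the Born weights |c_i|² of the branches have to total 1:

```python
import math

# Hypothetical amplitudes c_i for a toy three-branch "universal" state.
amplitudes = [complex(0.6, 0.0), complex(0.0, 0.6), math.sqrt(0.28)]

# Each |c_i|^2 is that world/branch's weight in the overall superposition.
weights = [abs(c) ** 2 for c in amplitudes]

print([round(w, 3) for w in weights])  # [0.36, 0.36, 0.28]
print(round(sum(weights), 10))         # 1.0: normalization across all branches
```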
“The hand wave critique strikes me as an assertion that important details are being glossed over.”
😀 Indeed it is.
Firstly, MWI is, at root, rigorously mathematical to the point of reifying the Schrödinger equation. The decoherence work by DeWitt and others is also rigorous. There is no rigor in the quote. In particular I want math for the last sentence in the second paragraph, because it sounds self-contradictory to me.
Secondly, yes, energy is obviously conserved within a world. If it wasn’t, that would be huge news. The question involves the physicality of energy being split into multiple worlds — in some cases an infinite number of worlds (or nearly so).
QM is indeed formulated in terms of expectation values, which are associated with measurements. More to the point, again we’re talking about what happens in a single world.
And obviously “thin” is not a physics term.
“The overall energy of each branch is proportional to the amplitude squared of that branch.”
Keep in mind that E=mc², so this also asserts that mass gets “thinned” out.
If observation without collapse is seeming incoherent, I might check to make sure I wasn’t looking at it through the assumptions of the collapse postulate.
On when the waves branch, I guess I wonder what happens when a new particle is created and then starts branching out. I can see the argument that its structure was predetermined by the structure of all the causal interactions that went into its creation. Or when two particles interact and become entangled, with new branches that exist in their combined wave function. It seems strange to say those branches always existed.
On rigor and mathematics, I suspect Michael Clive Price (the author of that FAQ) would direct us to the Schrödinger equation and other mathematical tools of quantum mechanics. Is there something in the E or H variables/operators that mandates granularity? Or are they smooth variables? (I’m asking. I really have no idea.)
Mass does get thinned out, but as I understand it, the mass of everything in that particular branch of the universe has been thinned consistently, so all the relative relationships are maintained. Unless there is something in the laws of physics beyond those relations that mandates absolute values. But we can only ever measure values relative to others, so would we even know that?
“It seems strange to say those branches always existed.”
I know, that’s why I want to understand exactly how to apply the Schrödinger equation to both the two-slit experiment and the beam-splitter experiment. The latter seems to show a clear case of branching, but the former seems a different scenario — I’m not clear on what branching, if any, occurs there.
(As a reference point, the whole semester of this MIT QM class I’m watching deals with rather artificial single particle systems in one dimension, so [A] there’s a lot of ground to cover to get to the real experiments, and [B] more realistic scenarios are a lot more complicated.)
I’m pretty confident the two-slit experiment is described by a superposition of a particle going through each slit — that is, the entire flight of the photon is so described. But is that true in the beam-splitter? If so, then part of that description involves the shared path from the laser to the mirror. But one of them describes an interaction with the mirror whereas the other describes transmission through it.
Given that MWI says it’s all one big wave-function, then it seems that beam-splitter logic might apply in general. Two worlds are described, in full, and they coincide up to some branching point (mirror interaction or transmission) and then diverge.
(If quantum descriptions only apply to quantum systems, then this is all moot.)
“Is there something in the E or H variables/operators that mandate granularity?”
AIUI, yes. For one thing, note how Planck’s constant h (or more typically h-bar) appears in nearly every formula — that enforces the granularity of matter and energy.
There are also situations where confinement or boundaries require standing waves, which creates another source of quantization. (We see something similar in musical instruments. Strings with their ends bound, or tubes of a given length, have a standing wave and harmonics that give the note its pitch and timbre. All those harmonics are quantized — they’re multiples of the main frequency.)
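The musical analogy in numbers (Python; the 440 Hz fundamental is just a convenient choice):

```python
fundamental_hz = 440.0  # a string bound at both ends, tuned to A4 (arbitrary choice)

# Standing waves only fit at integer multiples of the fundamental,
# so the harmonic series is quantized:
harmonics = [n * fundamental_hz for n in range(1, 5)]
print(harmonics)  # [440.0, 880.0, 1320.0, 1760.0]
```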
“Mass does get thinned out, but as I understand it, the mass of everything in that particular branch of the universe has been thinned consistently,”
Does that require non-locality? It seems like we should detect waves of thinning matter passing. It almost seems, even with non-locality, we’d somehow notice some sort of transition.
“Unless there is something in the law of physics beyond those relations that mandates absolute values.”
You know I think MWI implies new physics in having to explain how real physical realities coincide, and there also seems to be new physics necessary for this energy business. (“Thinning” matter? Coincident matter? Don’t those ideas seem at least a little hard to swallow?)
But what does it mean to say an electron has a mass of 0.511 MeV if that’s somehow relative? Given how often reality must have branched in nearly 14 billion years, how “thin” is matter and energy now? What about the next 50 billion years?
Spending a whole semester on single particle dynamics is why I’m reluctant to get into the mathematics. The parts I’d find more interesting are what happens with multiple particles, and then whole populations of particles, which I suspect would be well beyond my limited math skills.
On the double slit experiment, my sense is that the typical description, where it goes through either one slit or the other, is vastly oversimplified. It goes not just through both slits, but along the length of both slits, and follows an interference pattern before reaching the back screen. Under MWI, I don’t think we’re talking about just two worlds, but innumerable ones, one for every place it might have hit the screen. It’s probably more accurate to talk in terms of proportions of worlds, so depending on the specifics of the wave function, maybe in 50% it went through the top slit, and in 50% the bottom.
On the variables in the Schrödinger equation, ok, I guess we’ll have to see. I remain skeptical of the idea that John Wheeler would have let Everett’s paper through with that basic a mistake, or that the legions of physicists who see this interpretation as obscene wouldn’t have seized on it if it was an actual vulnerability.
Does thinning require non-locality? Not that I can see. Any comparison we could make would involve local dynamics, where we’d either already be in the spreading superposition, or not yet in it. Even with a comparison made during the midst of a split, the dynamics of the comparison itself would be split, making the change undetectable.
Similarly, I don’t think we’d detect the thinning because everything we’d be able to compare it to would be equivalently thinned. Remember, every measurement anyone ever makes is a comparison of the thing being measured against some standard thing. If they’ve both been reduced proportionally, I can’t see any reason we’d be able to notice.
I know you think it would require new physics. If I thought you were right, I’d see that as a possibly fatal flaw in the MWI. But as I’ve learned more about it, every time I work through the scenarios, it seems to work out without having to introduce anything new. Of course, I’m an amateur, so that means nothing, except to me. But the fact that it seems to accord with what physicists say makes me think I’ve got a decent handle on it. (I do want to read more about decoherence though.)
“Under MWI, I don’t think we’re talking about just two worlds, but innumerable ones, one for every place it might have hit the screen.”
Yes. Every available electron in the screen, in the two-slit mask, and in the sides and bottoms, is an electron that might have absorbed the photon, and that does seem like it could be a branch. The same is true when one shines a flashlight at a wall. A vast number of branches for each photon and possible electron it might interact with.
Unless there’s something like entropy going on involving macro-states. If an interaction doesn’t change anything significant, a branch might not occur. (Carroll has said things that suggest branches are restricted to things like beam-splitter or spin measurements (or cats in boxes) with definite outcomes that can potentially result in distinct realities. In lectures, he says he’ll jump left or right depending on his beam-splitter outcome, and that macro difference does cause a branch.)
(This is why I keep asking if walking down the street wearing polarizing sunglasses creates zillions upon zillions of branches. 🙂 In one view, yes, but in others possibly not.)
FWIW, Feynman’s sum over paths is where the multiple paths through each slit idea comes from. The sum of all possible paths (including weird twisted ones) results in a shortest possible path that represents the straight path the photon would have taken if it actually was a point-like object.
AIUI, in the common diagram of “waves” fanning out from the laser, hitting the two slits, fanning out from each of them, and interfering on the screen, what that really plots is the phase of the momentum description. The momentum (energy) itself is constant, but its phase rotates, and it’s the phases that interfere.
With real waves, the cancellation is the result of opposing displacements summing to zero; the water is still present and still has wave energy. But the interference of the momentum phases causes probability interference. The photons “avoid” those areas — any photon that gets through the slits lands somewhere, just hardly ever there.
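For what it’s worth, that phase interference can be sketched numerically. This toy calculation (my own made-up geometry, not from any textbook) gives each slit a complex amplitude exp(i·k·r) for its path length r to a screen point, adds the two, and squares the magnitude:

```python
import cmath
import math

# Toy two-slit calculation. Each slit contributes an amplitude exp(i*k*r),
# where r is the path length from that slit to a screen position x. The
# relative intensity (detection probability, up to normalization) is the
# squared magnitude of the summed amplitudes -- the "probability
# interference" discussed above. All dimensions are illustrative.

wavelength = 500e-9           # 500 nm light
k = 2 * math.pi / wavelength  # wavenumber: how fast the phase rotates
slit_sep = 50e-6              # distance between the two slits
screen_dist = 1.0             # slits-to-screen distance

def intensity(x):
    """Relative intensity at screen position x (meters from center)."""
    total = 0j
    for slit_y in (+slit_sep / 2, -slit_sep / 2):
        r = math.hypot(screen_dist, x - slit_y)  # path length from this slit
        total += cmath.exp(1j * k * r)           # phase rotated along the path
    return abs(total) ** 2

# Central maximum: equal paths, amplitudes add in phase (intensity ~4).
# First dark fringe: paths differ by half a wavelength, amplitudes cancel.
first_minimum = wavelength * screen_dist / (2 * slit_sep)
print(intensity(0.0), intensity(first_minimum))
```

Nothing physical sums to zero at the dark fringe; only the probability of landing there does.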
As far as the energy issues and “thinning” (and coincident physical realities),… I dunno. I want to see the physics that makes sense of any of those.
When it comes to appeals to authority I’ll point to SUSY as another idea a lot of brilliant people can’t seem to accept is almost certainly wrong. The thing about brilliance is that it can be brilliantly wrong. Between biases and vested interests, I tend not to take anyone’s word for it. 😉
There are MWIers who seem to imply that splitting only happens when a quantum event gets magnified. If so, then the amount of branching is far less than my understanding. But I think about this quote from Bryce DeWitt.
What I take from this is that there are large numbers of worlds that are macroscopically identical to each other, where the quantum interaction basically makes no difference. Far less prevalent are branches where the interaction was magnified and made a difference. So, if there are bazillions of worlds that are macroscopically identical, do they count as separate worlds?
“But the interference of the momentum phases causes probability interference. The photons “avoid” those areas”
This is the kind of language from the antireal side that I find baffling. I don’t know what “probability interference” means. Or what it means for photons to “avoid” particular areas. I accept Copenhagen as an instrumentalist theory and useful placeholder, c. 1927. It’s the idea that it, or similar instrumental placeholders, are meaningful answers that I find objectionable. At best they’re incomplete; at worst, they seem like paradox factories.
“So, if there are bazillions of worlds that are macroscopically identical, do they count as separate worlds?”
Exactly the question. I think it was Carroll who said something about MWI being a class of theories. We’ve touched on three here: low-branching views, high-branching views, and Deutsch’s no-branching view.
Somewhere I detailed a version of the two-slit experiment where the screen is a matrix of pixel-sized individual detectors wired to separate circuits controlling a giant billboard with a crowd of people watching. That seems like a version of the beam-splitter that definitely branches, especially if the experiment is stopped after only a few photons so the pattern is very distinct in each world.
After one photon, each world would be unique — one lit pixel per world. After two, there might be worlds with the same pattern, just built in reverse order from each other. With each additional photon there is a chance of patterns “merging” — of different worlds ending up with the same pattern.
When the detectors are electrons in atoms, plus enough photons to create the interference pattern, it’s possible the experiment ultimately has only one outcome: an interference pattern. (Except for some improbable weird worlds where the photons all land on one side or something.)
So, yeah, does that mean: [A] no branching; [B] branches that merge to a single world; or [C] lots of worlds with patterns identical at the macro level, but not at the micro level. (If, for instance, each photon was time-coded, the patterns would vary on how they were built.)
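Out of curiosity, I tried sketching the bookkeeping for option [C] (my own toy counting, with a made-up build_orders helper): treat a final pattern as a multiset of hit pixels, and the number of distinct micro-histories that build it is a multinomial coefficient:

```python
from collections import Counter
from math import factorial

# Toy counting for "same macro pattern, different build order": a final
# pattern is a multiset of pixel hits, and the number of distinct
# time-orderings (micro-histories) that produce it is the multinomial
# coefficient k! / (c1! * c2! * ...), where the c's count repeat hits.

def build_orders(hits):
    """Number of distinct orderings of a list of pixel hits (a multiset)."""
    orders = factorial(len(hits))
    for count in Counter(hits).values():
        orders //= factorial(count)  # repeated hits are interchangeable
    return orders

print(build_orders(["p1", "p2"]))        # 2 micro-histories, one macro pattern
print(build_orders(["p1", "p1", "p2"]))  # 3 micro-histories if a pixel is hit twice
```

So even a two-photon pattern already has multiple micro-histories behind it, which is the [C] scenario in miniature.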
Given the weirdness of quantum, a further question is whether it makes a difference keeping track of how the pattern is built or not. If I don’t open the cat box until the pattern is built, does that change the underlying dynamics?
“This is the kind of language from the antireal side that I find baffling. I don’t know what ‘probability interference’ means.”
Mathematically it works the same way any wave cancellation does, and the Schrödinger equation is a wave equation, so it falls out of the math naturally.
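A quick numerical sketch of that parallel (arbitrary phase, my own toy comparison): a classical wave cancels in displacement, while a quantum amplitude cancels in probability:

```python
import cmath
import math

# Minimal comparison of the two cancellations. A classical wave cancels
# because opposite displacements sum to zero; quantum amplitudes cancel
# the same way mathematically, but what vanishes is the *probability*
# |a1 + a2|^2, not any physical displacement. The phase is arbitrary.

theta = 0.7  # arbitrary common phase

# Classical: two equal waves half a cycle apart cancel in displacement.
displacement = math.sin(theta) + math.sin(theta + math.pi)

# Quantum: two equal amplitudes half a cycle out of phase cancel in probability.
amplitude = cmath.exp(1j * theta) + cmath.exp(1j * (theta + math.pi))
probability = abs(amplitude) ** 2

print(displacement, probability)  # both effectively zero
```

Same cancellation math, different thing being cancelled — which is exactly where the interpretive argument starts.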
And I think it’s a huge question, realist or non-realist. The MWI needs to account for the actual physical ontology of the wave-function (a complex-valued function in abstract Hilbert space), and the CI needs to account for what the wave-function actually physically represents. Same problem seen from different perspectives.
“Or for photons to ‘avoid’ particular areas.”
Weird, isn’t it! How does MWI explain it? What is the math behind what happens when the pattern emerges?
“The parts I’d find more interesting is what happens with multiple particles, and then whole populations of particles, which I suspect would be well beyond my limited math skills.”
I think it’s not that the math there is so difficult — in concept it’s the same math as for single-particle systems. It’s that things interact, so the calculations required expand exponentially. It may also be increasingly difficult to define operators, as those have to account for all the particles in the system.
As a reference point, besides breaking RSA encryption, a big deal about quantum computers is their ability to model quantum systems. In this case, quark interactions in nucleons, or molecular interactions — small-scale stuff. Conventional computers aren’t up to doing the job in any reasonable time.
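To put numbers on that exponential expansion (my own back-of-envelope sketch, assuming 16 bytes per complex amplitude): a system of n two-level particles needs 2**n amplitudes, so the classical memory cost doubles with every particle added.

```python
# Rough illustration of why multi-particle calculations blow up (and why
# quantum computers are attractive for simulating them): describing n
# two-level particles classically takes 2**n complex amplitudes, so the
# memory doubles with each particle. 16 bytes = one double-precision
# complex number.

def state_vector_bytes(n_particles, bytes_per_amplitude=16):
    """Classical memory needed for the full state vector."""
    return (2 ** n_particles) * bytes_per_amplitude

for n in (1, 10, 30, 53):
    print(n, state_vector_bytes(n))  # 53 qubits already needs ~144 petabytes
```

Which is roughly why even a 53-qubit system is past what conventional machines can brute-force in any reasonable way.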