The main difference between a quantum computer and a classical one is the qubit. Qubits are like classical bits in that they hold binary values: 1 or 0, on or off, true or false, etc. However, qubits, being quantum objects, can be in a superposition of both states at once. The physical manifestation is often something like a particle in either a spin-up or spin-down state.
(This is true for digital quantum computing, where a discrete state is necessary. There is also analog quantum computing, which presumably works with other properties that are more continuous.)
We might write the superposition of a qubit as:

0
1
meaning it can be in a superposition of both 1 and 0 at the same time. So far, so boring. But if we add a second qubit and have the two interact, we now have two entangled quantum objects which, together, can be in a superposition of four different states, which we might write as:

00
01
10
11
In other words, adding a second qubit doubled the number of parallel states they can collectively be in. If we add a third qubit into the mix, which also, through interaction, joins the entanglement, we get this list of states in the superposition:

000
001
010
011
100
101
110
111
It’s important to understand that these are superpositions, not alternatives. The three qubits, until a measurement is done, can be in all these states at the same time. If we increase the number to ten qubits, then the overall system can be in 2^10, or 1024, states at the same time. (Which I won’t attempt to lay out.)
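To make that growth concrete, here’s a minimal Python sketch (just numpy on a classical machine, simulating the state vector directly; the function name is my own invention): each qubit starts in its 0 state, a Hadamard gate puts it into an equal superposition, and tensoring registers together doubles the number of amplitudes with every added qubit.

```python
import numpy as np

ZERO = np.array([1.0, 0.0])                    # a qubit in its 0 state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: 0 -> equal mix of 0 and 1

def superposed_register(n_qubits):
    """State vector for n qubits, each in an equal superposition.

    The vector holds one complex amplitude per basis state, so its
    length doubles with every added qubit: 2, 4, 8, ..., 2**n.
    """
    state = H @ ZERO
    for _ in range(n_qubits - 1):
        state = np.kron(state, H @ ZERO)       # tensor in another qubit
    return state

for n in (1, 2, 3, 10):
    print(f"{n} qubit(s): {len(superposed_register(n))} states in superposition")
# 1 qubit(s): 2 ... 3 qubit(s): 8 ... 10 qubit(s): 1024
```

The flip side is visible in the sketch itself: the memory a classical machine needs for the state vector also doubles per qubit, which is why this kind of direct simulation gives out somewhere in the dozens of qubits.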
The Google quantum computer that demonstrated quantum supremacy (over classical computers) was reported to have 53 qubits, which in principle meant it should have been capable of being in 2^53, or about 9 × 10^15, states concurrently. This is the power of quantum computing. It allows a level of parallel processing not possible with classical systems.
A 300-qubit system would be able to be in a superposition of more states than there are particles in the observable universe: 2^300 is roughly 2 × 10^90, versus the commonly cited ~10^80 particles. Consider this. Where are all those states? According to quantum mechanics, they’re all right there, in those 300 particles.
Well, at least under interpretations that consider the wave function to model something real. The question is, under the interpretations that don’t, how do they account for these kinds of systems? One thing I’ve read indicates that maybe the systems aren’t really running in parallel. Maybe they’re just executing a far more clever algorithm, and the wave function mathematics is just a convenient mechanism for keeping track of it. This move seems, to me, increasingly dubious as the number of qubits increases.
The interesting question is, what happens when the overall system is measured? In all interpretations, that act only provides access to one of the states, with no control over which one. A successful quantum algorithm has to use interference to amplify the desired answer, so that by the time of measurement it’s overwhelmingly likely to be the state that turns up.
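As a rough sketch of what that looks like in simulation terms (the Born rule applied to the same kind of state vector as above; the amplified answer ‘101’ is just an invented example):

```python
import numpy as np

def measure(state, rng=np.random.default_rng()):
    """Simulate measuring a qubit register: exactly one basis state
    comes back, drawn with probability equal to the squared magnitude
    of its amplitude. Every other branch becomes inaccessible."""
    probabilities = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probabilities)
    width = int(np.log2(len(state)))
    return format(outcome, f"0{width}b")   # e.g. '101'

# A 3-qubit state where an algorithm has amplified the answer '101':
state = np.full(8, 0.1 + 0j)               # small leftover amplitudes
state[0b101] = np.sqrt(1 - 7 * 0.01)       # ~93% of the probability mass
print(measure(state))                      # almost always prints '101'
```

The point the sketch makes: the 2^n amplitudes did the work, but a measurement hands back only a single n-bit string.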
But it’s interesting to think about what happens under each interpretation. Before doing so, it’s worth noting the raw physics of the situation. When a measurement begins, the quantum particles / waves / objects in the measuring device interact with the quantum objects, the qubits, in the quantum circuit. There’s no real distinction between the atoms in the quantum circuitry and the ones in the measuring system. In most interpretations, what changes is the sheer number of interactions involved.
Under the Copenhagen interpretation, the involvement of macroscopic classical mechanisms causes the massive superposition of states to collapse to one classical state, although Copenhagen seems agnostic on the exact mechanisms. Various physical-collapse interpretations see the wave physically reducing to a single state. Under the pilot-wave interpretation, there were always both waves and particles, with the waves guiding the particles, and interaction with the environment causes the wave to lose coherence so that the actual particle states are now accessible. (At least I think that’s the way it would work under pilot-wave.)
The sequence under relational quantum mechanics (RQM) seems particularly interesting. If I’m understanding it correctly, each interaction results in a collapse, but only relative to a particular system. So from the second qubit’s perspective, its interaction with the first qubit causes the first qubit to collapse. But from the third qubit’s perspective, the first two qubits are in superposition until the interactions reach it. These disagreements continue all the way down the chain. Of course, from the measuring device’s perspective, nothing has collapsed until it interacts with the system.
This seems similar to the sequence under the relative state formulation, also known as the many-worlds interpretation (MWI). The difference is that under this interpretation, the disagreements are resolved into an objective reality. Of course, the only way to resolve them is to have one copy of qubit 2 seeing qubit 1 in its 0 state, and another copy seeing it in its 1 state. All of these copies exist in their own branch of the superposition.
Under both RQM and MWI, nothing fundamental changes at the event we label as “measurement.” The physical processes just cascade into a larger environment. Under RQM, this is handled by the stipulation that all states are only meaningful relative to a particular system, and that no universal description is possible.
MWI instead simply sees the superpositions continue to cascade outward in an unending process. As the number of quantum objects involved skyrockets, the phase relations between the branches of the superposition, which allowed them to interfere, begin to alter. As the number of constituents increases, each branch’s phase becomes effectively unique, isolated from the others, until the branches no longer interfere with each other. Each becomes causally isolated, its own separate world.
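Here’s a toy numerical illustration of that isolation (a cartoon, not a real decoherence calculation): treat the interference between two branches as proportional to the overlap of the environment states they’ve become correlated with, model each environmental particle as contributing a random relative phase, and watch the overlap crash toward zero as particles are added.

```python
import numpy as np

rng = np.random.default_rng(42)

def branch_interference(n_env_particles):
    """Toy decoherence model: each environment particle contributes a
    factor |(1 + e^{i*phase}) / 2| <= 1 to the overlap between two
    branches. The product of many such factors, with random phases,
    shrinks exponentially, so the branches effectively stop interfering."""
    phases = rng.uniform(0.0, 2.0 * np.pi, n_env_particles)
    per_particle = np.abs((1.0 + np.exp(1j * phases)) / 2.0)
    return per_particle.prod()

for n in (1, 10, 100):
    print(f"{n:>3} environment particles: interference factor ~ {branch_interference(n):.1e}")
```

In this cartoon, “separate worlds” just means the interference factor has become so small that no feasible experiment could ever exhibit interference between the branches again.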
Some quantum computational theorists see the success of quantum computing as evidence for the MWI. Others point out that each of the other interpretations can provide an accounting. What that success does seem to do is put pressure on the interpretations that have an anti-real stance toward the wave function. As noted above, the idea that those computations aren’t physically happening in parallel somewhere seems dubious.
Unless, of course, in my admittedly very amateurish musings here, I’ve missed something. In particular, is there a stronger anti-real account that I’m overlooking? Are there problems with the other interpretations that do take a realist stance?