The other day I shared a video on quantum computing, which I thought was informative, but the feedback I received was that it wasn’t accessible to anyone not already versed in the subject. Since I once struggled to understand this subject myself, I tried to think of a way of describing it that would actually help. This post is my shot at it.
Truth in advertising for anyone reading this: I’m not a physicist or expert in quantum computing, just an interested layperson. And of course, any description of quantum physics other than the math is going to be controversial. So read with these points in mind.
Quantum computing can be thought of as computation happening in a sort of massive parallel computing cluster. But unlike classic clusters, which might have tens, hundreds, or maybe even thousands of nodes, a quantum computer can have astronomically more nodes than any cluster ever built or that will ever be built. It accomplishes this with quantum superposition, interference, and entanglement, terms which will be explained below.
One way to think of quantum superposition is a particle constantly splitting into different versions of itself, zillions of different versions. The versions of the particle form waves and jostle each other (interfere), which leads to the interference patterns in the double slit experiment. The waves of the different versions spread out until information about the location of the particle gets out into the environment, typically from a measurement. Then all but one of the versions of the particle disappear.
When this happens, there’s no way to know ahead of time which version of the particle will remain. The best quantum theory can do is provide probabilities for each version. This is known as the wave function collapse and is the central mystery of quantum mechanics. Don’t worry if this or any of the rest seems bizarre. It does for everyone. For quantum computing, we just need to accept that it happens.
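The collapse can be sketched as weighted random sampling. In this toy Python snippet, the amplitudes are made-up numbers (not from any real system); the probability of each outcome is the squared magnitude of its amplitude, and a "measurement" picks exactly one version:

```python
import random

# Hypothetical amplitudes for three "versions" of a particle's position.
# These numbers are invented for illustration; their squares sum to 1.
amplitudes = {"left": 0.8, "middle": 0.36, "right": 0.48}
probs = {k: a * a for k, a in amplitudes.items()}
# probabilities: 0.64 + 0.1296 + 0.2304 = 1.0

# A "measurement" leaves exactly one version, weighted by those probabilities.
outcome = random.choices(list(probs), weights=list(probs.values()))[0]
print(outcome)  # one of "left", "middle", "right" -- unpredictable in advance
```

As in the real thing, the best you can do before the measurement is state the probabilities; which version survives a particular run is anyone's guess.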
For entanglement, let’s consider a classical macroscopic example first. Imagine two asteroids traveling through space: asteroid-a and asteroid-b. They pass near enough to each other to gravitationally alter each other’s course. Once that interaction has taken place, asteroid-b’s fate in the universe has been altered by its encounter with asteroid-a, and vice-versa. So there now exist some correlations between them.
Now imagine that later asteroid-b passes by asteroid-c and alters its course. So now, asteroid-a has affected asteroid-b’s fate, but has also indirectly had an effect on asteroid-c’s fate. So the correlations are transitive. They spread. We could say that all three asteroids are “entangled” with each other, although at a classical level this is only of limited interest.
Now, let’s imagine two quantum particles traveling through space. Both have been doing the quantum thing, splitting off into zillions of different versions. When they interact, we can model the interaction as two waves interacting. Or we can view it as zillions of interactions happening between different versions of each particle. But just like the asteroid examples above, the particles now have correlations between them.
What makes this interesting at a quantum level is that now, instead of having independent versions of particle-a and particle-b, we now have versions of each combination of those particles, that is, we have versions of the set of both particles. If we take a measurement of particle-a it will collapse into one definite version. But when we do that, we also know that a measurement of particle-b is guaranteed to collapse to the corresponding version of the set. This is quantum entanglement.
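Here is a toy sketch of that guarantee, with invented labels: the joint state has only two versions of the *set* ("00" and "11", each with probability one half, as in a so-called Bell state), and one measurement settles both particles at once:

```python
import random

# Toy model of an entangled pair: the joint state has two versions,
# each a version of the set of BOTH particles, not of either one alone.
# The 50/50 split mirrors the standard Bell-state example.
joint_versions = ["00", "11"]

def measure_pair():
    # One measurement collapses the whole set to a single joint version.
    outcome = random.choice(joint_versions)
    return outcome[0], outcome[1]

a, b = measure_pair()
print(a, b)  # each individually looks random, but they always match
```

The point of the sketch: there is no separate list of versions for particle-a and particle-b to sample from independently; sampling happens once, over versions of the pair.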
And like the classical version, quantum entanglement is transitive. So if particle-b later interacts with particle-c, particle-a and particle-c will now be entangled. If this continues for large numbers of particles, all of those particles will now be entangled with each other, meaning that they will be in a joint superposition, in which many versions of the entire set exist rather than separate independent versions of each particle.
One thing to keep in mind: as new particles are added to the entangled set, the versions of the overall set get multiplied by the number of versions of the new particle included in the interaction. In other words, sets with more particles have exponentially more versions of the overall set.
A note about quantum spin. The main thing to understand about spin is that, for any particular measurement, it will take only one of two values. That makes spin a useful property for physically implementing a computational bit. Of course, being quantum, particles can have different versions with one spin and other versions with the opposite spin. These are qubits (quantum bits).
Hopefully all of that was clear, because it serves as the raw material of quantum computing.
A quantum computer uses qubits. A qubit will have a version with the value of 1 and another version with the value of 0. When that qubit interacts with another qubit, they become entangled, so there are now multiple versions of the set of qubit-a and qubit-b. Add a third, and the entanglement includes three qubits.
Eventually all the qubits in a quantum computer are entangled, which means there are many versions of the entire set. So this could be thought of as the quantum processor constantly splitting into different versions of itself, each able to perform a version of the computation it is currently running. We now have our parallel computing cluster.
That said, there are differences between a quantum computer and a classical cluster. A classic cluster has all its nodes right from the beginning. It also usually has one designated controller node, which is typically the one that will provide the final output of any calculation. And the nodes in a classic cluster often communicate with each other over some type of network.
In the quantum version, we can think of it as starting off with one computer that then begins splitting into different versions. The number of possible versions is a function of how many qubits it has. Since each qubit can have two versions, the number of possible versions for the overall computer processor is two to the power of the number of qubits. So a ten qubit computer can have 2^10 or 1,024 versions, a fifty qubit computer 2^50 or over a quadrillion versions. A 300 qubit computer can have 2^300 or around 10^90 versions, which is more versions than there are particles in the observable universe.
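These counts are just powers of two, and easy to check directly:

```python
# The number of joint "versions" grows as 2**n for n qubits --
# the figures quoted above:
for n in (10, 50, 300):
    print(n, 2 ** n)

# 2**10 is 1,024; 2**50 is a bit over a quadrillion (10**15);
# 2**300 has 91 digits, i.e. roughly 10**90.
print(len(str(2 ** 300)))
```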
But there’s a catch. As soon as there is any output from the system, that counts as a measurement, and it will collapse to just one version of the processor. And just like with the lone particle, there’s no way to know ahead of time which version will be there. There’s no way to know which node in our vast cluster will be left standing to provide the output. We can have the cluster try vast numbers of possible solutions, but when one node finds the right one, we can’t just assume it will be the one left standing after the collapse.
So the system has to “promote” the right answer. One way to think of this is the system needing to get the right answer on as many nodes as possible, so when the collapse happens it will be in the output. This happens by the nodes “communicating” the answer to each other, not through a network, but through quantum interference, that is, by controlling the jostling of the different versions of the computer.
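The best-known concrete version of this "promotion" is Grover's algorithm, which the text above doesn't name but which works exactly this way: each round, an "oracle" flags the right answer by flipping the sign of its amplitude, and an interference step (inversion about the mean) piles amplitude onto it. This is a classical simulation of the state vector; the 8-qubit size and the marked index are arbitrary choices of mine:

```python
import math

n = 8                       # 8 qubits -> 256 versions of the processor
N = 2 ** n
marked = 137                # index of the "right answer" (arbitrary)

# Start in an equal superposition: every version equally weighted.
state = [1 / math.sqrt(N)] * N

# Each round: the oracle flips the sign of the right answer's amplitude,
# then "inversion about the mean" lets the versions interfere so that
# amplitude piles up on the marked one.
rounds = round(math.pi / 4 * math.sqrt(N))
for _ in range(rounds):
    state[marked] = -state[marked]              # oracle
    mean = sum(state) / N
    state = [2 * mean - a for a in state]       # diffusion / interference

prob_right = state[marked] ** 2
print(f"P(right answer) after {rounds} rounds: {prob_right:.4f}")
```

After about (π/4)·√N rounds the marked version holds nearly all the probability, so the collapse at readout is very likely to land on it.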
So hopefully when the output happens, the right answer is provided. Due to quantum uncertainty, not every node can get the right answer, so there’s always a small chance of the wrong answer coming out. Often this possibility is compensated for by multiple runs of the same algorithm and taking the most frequent answer as the right one.
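That repeat-and-take-the-majority strategy can be sketched like this; the 90% per-run success rate and the answer 42 are invented stand-ins, not numbers from any real machine:

```python
import random
from collections import Counter

def run_quantum_algorithm():
    # Stand-in for one run of a quantum algorithm: it returns the right
    # answer 90% of the time and a random wrong one otherwise.
    if random.random() < 0.9:
        return 42
    return random.randrange(100)

# Repeat the run and take the most frequent answer.
counts = Counter(run_quantum_algorithm() for _ in range(51))
answer, _ = counts.most_common(1)[0]
print(answer)
```

With enough repetitions, the chance that a wrong answer wins the vote becomes vanishingly small, even though any single run can miss.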
So, that’s my simplified version of quantum computing shorn of a lot of complications like error correction and other issues (many of which I don’t understand myself). Hope it helps. And that the simplifications didn’t cross over into being misleading anywhere.