Anyone who follows the computing industry knows that Moore’s Law, the observation that computing power doubles every couple of years, has been sputtering in recent years. This isn’t unexpected. Gordon Moore himself predicted that eventually the laws of physics would become a constraint.
One of the technological hopes for a revival is quantum computing. Quantum computing depends on qubits, quantum bits, which like classical bits can be 1 or 0, but can also be in a quantum superposition of both 1 and 0 simultaneously. When qubits are entangled with one another, the number of states the system can represent grows exponentially: n qubits require 2^n amplitudes to describe, far in excess of what a classical system can hold. In principle, a 300-qubit processor can represent more states than there are elementary particles in the observable universe.
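A quick back-of-the-envelope check of that 300-qubit claim, in Python. The particle count used here is only the commonly cited rough estimate of about 10^80, not a precise figure:

```python
# The state of n entangled qubits is described by 2**n complex amplitudes.
n_qubits = 300
state_count = 2 ** n_qubits

# Commonly cited rough estimate of the number of elementary particles
# in the observable universe (an assumption for illustration).
particles_estimate = 10 ** 80

# 2**300 is about 2.04e90, a 91-digit number.
print(len(str(state_count)))                # 91
print(state_count > particles_estimate)     # True
```

Of course, holding that many amplitudes and usefully extracting an answer from them are very different things, which is where the rest of the story comes in.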
When you read these types of projections, it’s easy to see why many people think quantum computing will have near-infinite performance and capacity. An array of qubits allows an enormous range of possible answers to be computed in parallel.
That’s the common description you see in the popular press. It used to leave me pretty confused, since it was clear that something was missing. Eventually, for a computation to be of any use, you have to access the result, that is, perform a measurement on the qubits, and when you do, the superposition randomly collapses into one classical state. And under the rules of quantum physics, there’s no guarantee that the right answer will “win”.
As theoretical computer scientist Scott Aaronson makes clear in this podcast interview by Sean Carroll, you can’t just run a gigantic range of possible answers in parallel. Your quantum circuitry has to find a way to promote the right answer, that is, make it the most probable outcome of the measurement, and it has to do this without knowing in advance what that right answer will be. That’s a heavy burden, and it deflates things a bit.
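To get a feel for what “promoting the right answer” means, here is a toy classical simulation of Grover-style amplitude amplification, the best-known example of the idea. The function name, problem size, and marked index are illustrative choices of mine, not anything from the interview: the circuit starts in a uniform superposition, repeatedly flips the sign of the marked item’s amplitude (the “oracle”) and reflects every amplitude about the mean (the “diffusion” step), and the marked item’s measurement probability climbs toward 1.

```python
import math

def grover_search(n_items, marked, iterations):
    """Classically simulate Grover-style amplitude amplification."""
    # Start in a uniform superposition: equal amplitude on every index.
    amp = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        # Oracle: flip the sign of the marked item's amplitude.
        # Note the oracle only recognizes the answer; it never "tells" us.
        amp[marked] = -amp[marked]
        # Diffusion: reflect every amplitude about the mean, which
        # boosts the one amplitude that stands out from the rest.
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]
    # Measurement probabilities are the squared amplitudes.
    return [a * a for a in amp]

# With 16 items, about (pi/4) * sqrt(16) = 3 iterations are optimal.
probs = grover_search(n_items=16, marked=5, iterations=3)
print(round(probs[5], 3))  # 0.961
```

Two things are worth noticing: the speedup is only quadratic (about sqrt(N) iterations rather than N classical guesses), and the answer is still probabilistic, so you measure and check, rerunning if you got unlucky. That is a long way from “try everything at once for free”.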
He also makes it clear that while quantum computers are expected to efficiently solve some problems that classical systems can’t (such as factoring large numbers with Shor’s algorithm, which would break RSA encryption), it doesn’t mean they will be able to solve every problem that a classical system can’t. There remains a class of problems that are expected to stay intractable, solvable only through brute-force methods that, in some cases, may take longer than the heat death of the universe.
There are also a host of implementation issues that complicate everything. Quantum circuits must be kept isolated from the environment to keep them from decohering, which has turned out to be a major challenge. And these systems are unavoidably more stochastic, and so have higher error rates, requiring far more error-correction circuitry than classical systems.
Still, quantum computing does seem to hold a lot of promise. For a long time, I was leery of those promises, worried the field might get stuck in the same development hell that fusion technology seems to be in. But with Google’s recent success, there’s reason for cautious optimism that the universe may actually allow us to do this. It may open up new possibilities, ones that may be hard for us to imagine right now.
So, lots of potential, just nothing magical.
Unless of course I’m missing something.