I found this video on quantum computing educational. It confirmed some things that I’ve been pondering about quantum computing for a while, notably its limitations, which are discussed starting around the five-minute mark.
The strength of quantum computing is that it makes use of superpositions, the fact that quantum particles can be in multiple states at the same time. But it’s always bothered me that superpositions disappear as soon as we try to determine what they contain (or, if you’re an adherent of the many-worlds interpretation of quantum mechanics, they spread to us in such a way that “we” only have access to one of the superposition branches).
It was fellow blogger Disagreeable Me who explained to me, and this video confirmed, that the way to think of quantum computing is as a kind of double-slit experiment, but in the shape of a logic circuit. Quantum computing allows for much more complex logic circuits than classical computing. But as soon as that circuit outputs its results, decoherence, the wave function collapse, the disappearance or spread of the superposition, or whatever we call it, happens, and all the data aside from that in the measured state disappears.
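The circuit-and-collapse picture above can be sketched with a toy one-qubit simulation. This is a hypothetical illustration on a classical machine, not a real quantum device: a Hadamard gate puts the qubit into an equal superposition, and the simulated measurement keeps only one branch, discarding everything else.

```python
import numpy as np

# Hadamard gate: sends |0> to the superposition (|0> + |1>)/sqrt(2)
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = np.array([1.0, 0.0])    # start in |0>
state = H @ state               # now in superposition of |0> and |1>

# Before measurement, both amplitudes coexist; the "computation"
# happens on the whole superposition at once.
probs = np.abs(state) ** 2      # each outcome has probability 0.5

# Measurement collapses the state: one outcome survives, and the
# information in the other branch is gone.
rng = np.random.default_rng()
outcome = rng.choice([0, 1], p=probs)
collapsed = np.zeros(2)
collapsed[outcome] = 1.0        # post-measurement state
```

The point of the sketch is the last few lines: however rich the superposition was before readout, the output we can actually extract is a single classical result.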
This means that quantum computing is good for certain types of CPU-bound processes, such as calculations, but not for I/O-bound processes, which is most of computing. It means that those who believe that Moore’s Law is some cosmic law of physics are going to be disappointed when classical computing eventually hits fundamental physical limits. Science fiction authors and singularity enthusiasts shouldn’t expect quantum computing to ride in and provide infinite computing power.
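To make the "good for calculations, not a free lunch" point concrete: quantum speedups are typically polynomial or, at best, exponential for specific problems. Grover's search algorithm, for instance, needs on the order of √N queries where a classical search needs on the order of N. That is a real advantage for a compute-heavy search, but it does nothing for the time spent reading the data in or writing results out. A back-of-the-envelope comparison:

```python
import math

# Searching an unstructured collection of N items:
N = 10**12

classical_queries = N                # brute force: check every item
quantum_queries = math.isqrt(N)      # Grover: roughly sqrt(N) queries

# The quantum advantage here is a factor of about a million --
# impressive, but it only applies to the query step, not to
# loading N items into the machine in the first place.
speedup = classical_queries // quantum_queries
```

Loading those 10^12 items remains an I/O-bound job that the quantum circuit cannot accelerate, which is the asymmetry the paragraph above is getting at.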
Of course, no one knows when Moore’s Law is going to end. Experts seem to place the end somewhere between 5 and 30 years away. I suspect we’ll only recognize it in retrospect, years after we’ve hit it. It won’t mean the end of progress in computing power, but it will mean that gains past that point will be much harder won, requiring alternate architectures.