> The most profound issue, however, concerns the meaning of quantum supremacy. After all, it doesn’t take qubits to solve important quantitative problems faster than any classical computer. Any carbon atom can “calculate” the solution of a very important practical problem—how does carbon behave?—simply by doing its thing.
The author [1] trivializes the notion of a computational machine. Calculating with a classical/quantum computer and doing the experiment differ in at least two important respects.
(1) Computers are fully programmable, while experimental setups are usually specific to calculating a chosen few parameters. The universality of various computational models implies exactly this: a well-designed physical machine can simulate any other physical system, assuming you have the correct model of that system. A carbon-spectrum-calculating experimental setup does not have this universal programmability.
(2) A computer often yields the correct answer with far fewer resources, again assuming your model of the simulated system is correct. This is because a well-designed computation can ignore unneeded parts of the model, which the experimental system must include. In fact, computers are precisely useful where we can get such an advantage, and useless where we can't. As the author points out, this is not possible for some carbon atoms, and it also is not possible for medical drugs. But it is possible for a wide variety of other systems, say rockets to the moon.
[1] The author is a Nobel prize winner, so take my words with a grain of salt.
I think he is well aware of this. The current quantum device Google built is, however, not much different from any other sufficiently complex physics experiment, and so far they have not been able to demonstrate that it is capable of fault-tolerant, error-corrected computation (which they basically won't be able to do in the foreseeable future).
Everyone agrees that Sycamore isn't fault tolerant, but I don't think many people agree that it is "not much different than a sufficiently complex other physics experiment". Here's a test: do you think there is any device that could achieve quantum supremacy without being fault tolerant? If not, then your complaint isn't specific to Sycamore at all.
There are a lot of interesting applications that don't require a "computational machine". While the carbon atom example seems excessively specific, there are other setups, like D-Wave's quantum computer, that work in a sort of in-between way, with programmable links between "atoms" so you can set up a custom Hamiltonian that the machine should minimize.
That way you get sort of the best of both worlds: (limited) programmability, and easier observation than you would get by, for example, designing a molecule that encapsulates the problem you want to solve, since you would have to observe it "doing its stuff" somehow.
Just because it can't crack crypto doesn't mean a large-scale version of a setup like this can't be valuable. The field of quantum chemistry needs to use huge supercomputer clusters to simulate stuff like this.
The first aspect that you mention is actually not a disagreement but the whole point of his message - the recently popularized demonstrations of "quantum supremacy" did not allow for universal programmability, but instead were more comparable to an experimental setup for a particular problem of how do atoms behave in that setup.
Having a proper quantum computer with enough fully programmable bits to demonstrate quantum supremacy would be valuable, but (as the author states) the meaning of quantum supremacy is tricky, and we currently have achieved "quantum supremacy" only if we use a very limited, "cheating" definition of quantum supremacy.
> The first aspect that you mention is actually not a disagreement but the whole point of his message - the recently popularized demonstrations of "quantum supremacy" did not allow for universal programmability, but instead were more comparable to an experimental setup for a particular problem of how do atoms behave in that setup.
Where does Wilczek claim that? The only sentence I can see where he arguably comments on the programmability of Google's device is "the computation that Sycamore performed is very specialized". But this is before he brings up the carbon example, and he never says the programmability of Sycamore is comparable to the programmability of carbon.
Indeed, Sycamore is programmable. It's obviously not capable of arbitrary-length computations, because the noise level is too high to be fault tolerant, but it has a certain accessible state space in which it can be arbitrarily configured. Here's Scott Aaronson if you don't believe me (emphasis mine):
> [Question:] Even so, there are countless examples of materials and chemical reactions that are hard to classically simulate, as well as special-purpose quantum simulators (like those of Lukin’s group at Harvard). Why don’t these already count as quantum computational supremacy?
> [Answer:] Under some people’s definitions of “quantum computational supremacy,” they do! The key difference with Google’s effort is that they have a fully programmable device—one that you can program with an arbitrary sequence of nearest-neighbor 2-qubit gates, just by sending the appropriate signals from your classical computer.
> In other words, it’s no longer open to the QC skeptics to sneer that, sure, there are quantum systems that are hard to simulate classically, but that’s just because nature is hard to simulate, and you don’t get to arbitrarily redefine whatever random chemical you find in the wild to be a “computer for simulating itself.” Under any sane definition, the superconducting devices that Google, IBM, and others are now building are indeed “computers.”
Devil's advocate: if you make a computerized system for generating a lot of similar atomic arrangements with random variations, then can't you call the process of placing the atoms "programming"?
The more I think about the distinction people are drawing, the fuzzier it gets in my mind.
Yes, as long as you can achieve a complete gate set (over some N-qubit logical subspace) by arranging the atoms, and as long as the computation and read-out is reliable enough to get results, I think that's literally what it means to build a computer. (To formalize this, more needs to be said in terms of what the gate set is and how reliable it needs to be, but I think that's the basic idea.)
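To make the "complete gate set" idea concrete, here's a minimal state-vector sketch of what "programming" such a device means: feeding it an arbitrary sequence of gates from a universal set (I'm using {H, T, CNOT} on two qubits purely for illustration; this is my own toy example, not Sycamore's actual gate set).

```python
import numpy as np

# Universal gate set for illustration: {H, T, CNOT}.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])                     # T gate
CNOT = np.array([[1, 0, 0, 0],    # control = qubit 0, target = qubit 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def apply_1q(state, gate, qubit, n):
    """Apply a single-qubit gate to `qubit` of an n-qubit state vector."""
    ops = [np.eye(2, dtype=complex)] * n
    ops[qubit] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full @ state

# "Program" = a gate sequence sent to the device. Here: prepare a Bell state.
state = np.zeros(4, dtype=complex)
state[0] = 1.0                      # start in |00>
state = apply_1q(state, H, 0, 2)    # superpose qubit 0
state = CNOT @ state                # entangle -> (|00> + |11>) / sqrt(2)
print(np.abs(state) ** 2)           # probabilities: 0.5 for |00>, 0.5 for |11>
```

The point of universality is that any unitary on the logical subspace can be approximated by long enough sequences of these gates, which is exactly the kind of programmability a carbon atom doing "its thing" lacks.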
Frank Anthony Wilczek (/ˈwɪltʃɛk/; born May 15, 1951) is an American theoretical physicist, mathematician and a Nobel laureate. He is currently the Herman Feshbach Professor of Physics at the Massachusetts Institute of Technology (MIT), Founding Director of the T. D. Lee Institute and Chief Scientist of the Wilczek Quantum Center, Shanghai Jiao Tong University (SJTU), Distinguished Origins Professor at Arizona State University (ASU) and full Professor at Stockholm University.
People riding the bandwagon and generally talking about it like they're in on it, when they don't know what a qubit or a Bell state is (or how to read bra-ket notation).
It's a similar story with AI: if people talk about it like it has the potential to (and eventually will) destroy the world, they probably don't know much about it (in that case they're conflating sentience with intelligence, and ignoring the word "artificial").
The people who are working on it right now will - like with AI - tell you that it's in its infancy, that its current applications are more limited than they are made to appear in pop-culture, and that (generally) things are harder and take longer than people pretend.
Which is exactly what the author of the WSJ article is getting at (because he's not a hawker - he knows):
> "There’s little doubt that, in the long run, computers that exploit quantum features of matter will dramatically enhance our ability to address useful problems. But we’re not there yet, nor is success guaranteed. For the foreseeable future we will have, at best, a “quantum advantage” in well-chosen applications, not “quantum supremacy” along a broad front."
I'm disappointed that this article doesn't answer my only real question about quantum computers: What does a reasonable expectation look like? What is the range of potential performance gains?
I understand that this is the frontier and no one has any definite answers, but I hear everything from "trivial" improvements to "insta-crack AES-256" (thousands of trillion trillion trillion trillions times faster).
Does anyone have a well informed upper/lower bound estimate?
Quantum computers won't insta-crack AES. With Grover's algorithm they only give a square-root speedup for symmetric crypto (so you can double the key length for equivalent security). It's public-key algorithms based on prime factorization or the discrete log problem that will be broken, by Shor's algorithm. However, running Shor's algorithm on production key sizes requires a huge quantum computer (with millions of qubits), and Scott Aaronson says he would be "astounded" if this were accomplished within the next decade.
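To put the square-root speedup in numbers (a back-of-the-envelope sketch of my own, counting only oracle queries and ignoring all constants and error correction):

```python
import math

def grover_queries(key_bits):
    """Grover search over 2^key_bits keys needs on the order of
    sqrt(2^key_bits) quantum queries, vs ~2^key_bits classical guesses."""
    return math.isqrt(2 ** key_bits)

# AES-128 drops from ~2^128 classical guesses to ~2^64 quantum queries:
print(grover_queries(128) == 2 ** 64)
# Doubling the key length restores the original security level:
# AES-256 under Grover still needs ~2^128 queries, which is infeasible.
print(grover_queries(256) == 2 ** 128)
```

So Grover moves the goalposts by a factor of two in key length, nothing like the "insta-crack" picture.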
Improvements are believed to be exponential when simulating physical quantum systems.
Quantum computing was designed to solve a single problem: simulating quantum systems on a classical computer is very slow, so let's build a quantum computer on which it will be fast.
It might seem like a fringe use case, but it matters (a lot) to industry and researchers across a wide variety of topics.
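The "very slow" part is easy to see: a full classical state-vector simulation of n qubits has to store 2^n complex amplitudes. A quick sketch of the memory scaling (assuming 16 bytes per complex128 amplitude):

```python
def statevector_bytes(n_qubits, bytes_per_amp=16):
    """Memory for a full state vector: 2^n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amp

for n in (30, 40, 53):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits fits in a workstation's RAM; 53 (Sycamore's count) would
# need ~128 PiB just to hold the state, before doing any gate operations.
```

This exponential blow-up is exactly why simulating quantum systems is the natural first application for quantum hardware.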
There's a lot of research at the "P = BQP?", "NP ⊆ BQP?" kind of level, which is more like comparing big-O runtimes.
I can tell you that Shor's algorithm for factoring takes O((log n)^2 (log log n)(log log log n)) steps (we can round that up to O((log n)^4) if you want), versus the best known classical algorithm at O(exp(1.9 * (log n)^(1/3) * (log log n)^(2/3))).
But unless I tell you how fast each gate is, that tells you nothing about the constant time factors.
Also, all these quantum algorithms are constructed out of logical gates, but each logical gate might be built out of 10 physical gates, using some error correcting code.
Since you really need to know the physical characteristics of the particular architecture to know how much error correction you need, it's hard to even say how many gates something will use and how fast they are until we pick a particular architecture.
All we can really say is that given a big enough number, a quantum computer can factor it faster than a classic computer.
Getting an exponential speedup on simulating quantum-mechanical systems (protein folding/binding, material design) is the killer app of quantum computing. Nobody is funding quantum computers because they want to break public-key crypto; that's just a somewhat unfortunate (though interesting) side effect, one that brings a bunch of costs: cryptographic R&D plus transitioning systems to post-quantum crypto.
Quantum computing is not expected to have a large impact on machine learning at this time, after Ewin Tang's paper (which gave a classical algorithm matching the proposed quantum speedup for recommendation systems). There's an especially large amount of fluff in this area, though.
For most problems, no performance gains whatsoever. There's no expectation that quantum computers will replace or improve general computing.
You can get immense speedups for a very particular set of problems, turning them from "can't possibly complete ever" to solvable; so it would be like an assistive device for these niche tasks, but it would be very important for them because otherwise they're not really practical to solve.
There are multiple bounds. The mounting evidence for BPP = BQP unfortunately falls, like P = BPP, under "evidence without proof". The weak interpretation is that BQP is vulnerable, and the strong interpretation is that BQP is broken. It's up in the air, but for certain applications people can't afford to guess wrong, so it's reasonable to remove it as a dependency in those fields.
Or, and this is not exactly unheard of, they aim at finding a buyer for themselves well before it becomes clear whether the product is really viable or not. In those cases the product is the company itself, and they don't need a finished, working, useful real product. This works especially well in "hype" topics, where hope can more easily win out over the buyer's common sense.
If they really are making that bet (and not, you know, finding a sucker to buy a classical computer or a useless company), they are certainly not betting on selling the first thing that runs in their lab.
I think the intent is to reality-check some of the marketing coming out of Google/IBM/Intel/Rigetti, which seems to be trying to imply that quantum computers actually exist today (as opposed to the laboratory science experiments that actually exist now) and are just a few years away from commercialization, after which some form of Moore's Law will take hold. I've seen a number of pop-sci articles that say things like "today's quantum computers fill an entire room like 1960s mainframes" or "today's leading-edge quantum computers contain 53 qubits; experts say we'll need n qubits to solve problem x, which is intractable on a classical computer". All of this is intended to imply that we just need to figure out how to shrink these existing quantum computers down while expanding the number of qubits, and we're about to have a quantum repeat of the PC Revolution. Nothing could be further from the truth.
The pop-sci headlines hinting that quantum supremacy shenanigans mean we are close to some new paradigm of general-purpose quantum computers, I suppose.