Following the live talk on September 9, Scott Aaronson was kind enough to answer questions we were not able to get to during the Q&A session. The questions and answers are presented below:
Question 1: Is classical simulation of tensor networks an important part of checking quantum supremacy? Have computer scientists given them enough attention?
Answer 1: Yes, tensor network methods are one of the main tools (though not the only one) now being used both to verify these experiments and to try to spoof them.
Question 2: There’s a flurry of startups based on quantum computing claims – is it too early for such commercial offerings?
Answer 2: I’m not sure. I look at the current QC startups and I can plainly see that many (though not all) of them are completely bogus, or in some cases, have a bogus PR side that somehow coexists with real science. On the other hand, if I were an investor, even if I only thought 1 in 50 QC startups would make it big, maybe it would be rational to invest in the whole lot of them! You’d have to ask an investor, rather than a theoretical computer scientist like me.
Question 3: In 2005 you published 10 semi-grand challenges for quantum computing theory. What progress (if any) has been made in addressing these challenges?
Answer 3: An oracle separation between BQP and PH was finally achieved a few years ago, by Raz and Tal. There’s now a plausible argument, by Bouland-Fefferman-Vazirani, that some aspects of AdS/CFT go beyond BQP and violate the Extended Church-Turing Thesis. We now have, under crypto assumptions, an impossibility proof for quantum obfuscation and copy-protection of arbitrary functions. We also now have an oracle separation between BQP and BPP^BQNC, and we have evidence that AC0 is not quantumly PAC-learnable (assuming that lattice crypto is secure against QCs). I believe the rest is still open, although of course there’s been all sorts of progress on related things!
Question 4: What do you think of the era after Moore’s law? Can quantum computing be a solution?
Answer 4: Quantum computers are not a general-purpose replacement for classical computers, and will not just continue Moore’s Law where it left off after classical components reach a limit. They do, however, promise huge speedups for certain SPECIFIC tasks (those for which we can exploit interference).
Question 5: Qubits have component coefficients which are modelled as real numbers. In mathematics these presumably have infinite precision. Why aren’t qubit coefficients limited in precision by reality? (I don’t know if the Planck constant applies.)
Answer 5: This is a lot like asking why probabilities aren’t limited to finite precision in reality. Amplitudes have in common with probabilities that, while they’re continuous parameters, they evolve only linearly with time and are not directly observable. These properties mean that a small error in estimating a probability or an amplitude won’t blow up and cause a huge error later.
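To make the linearity point concrete, here is a toy numerical sketch (purely illustrative, not from the talk): because quantum evolution is linear and norm-preserving, a small error in an amplitude vector stays the same size after evolving, rather than blowing up.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 8-dimensional unitary (linear, norm-preserving evolution),
# built by QR-decomposing a random complex matrix.
m = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
u, _ = np.linalg.qr(m)

# A normalized amplitude vector and a slightly perturbed copy of it.
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
eps = 1e-6 * (rng.normal(size=8) + 1j * rng.normal(size=8))

# Linearity + unitarity: ||U(psi + eps) - U psi|| = ||U eps|| = ||eps||,
# so the error after evolution is exactly as small as before.
err_before = np.linalg.norm(eps)
err_after = np.linalg.norm(u @ (psi + eps) - u @ psi)
```

Contrast this with a generic nonlinear map, where a tiny initial error could grow exponentially with time.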
Question 6: Is quantum supremacy equivalent to BPP being properly contained in BQP?
Answer 6: Excellent question! They’re closely related but not the same. For quantum supremacy experiments, it’s enough to have, e.g., a sampling task (like BosonSampling) that’s solvable in quantum polynomial time but requires exponential time classically. BPP and BQP, by contrast, are defined as classes of decision problems only.
Question 7: What is meant by sampling a distribution? What exactly is the task?
Answer 7: Given a description of a probability distribution D over n-bit strings, the task is to output a new string x that’s distributed according to D. E.g., if D assigns probability p to some string y, then x should equal y with probability p also. (Where here the probability is over runs of the quantum computer.)
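As a toy illustration of the task (the helper function and distribution below are made up for this sketch; a real supremacy experiment targets a D that is exponentially hard to even write down classically):

```python
import numpy as np

def sample_from(D, num_samples, seed=0):
    """Output strings distributed according to an explicitly given
    distribution D over bit strings. (Hypothetical helper; a quantum
    computer does this implicitly by running a circuit and measuring.)"""
    rng = np.random.default_rng(seed)
    strings = list(D)
    probs = [D[s] for s in strings]
    return rng.choice(strings, size=num_samples, p=probs)

# A toy distribution over 2-bit strings.
D = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
samples = sample_from(D, 10_000)

# If D assigns probability p to a string y, the sampler outputs y with
# probability p -- so empirical frequencies approach D's values.
freq_00 = np.mean(samples == "00")
```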
Question 8: Why did you write all of these by hand?
Answer 8: Over the course of the pandemic, I found that writing out my talks on a virtual whiteboard is not only a lot faster for me than making PowerPoint slides, it also forces me to pare things down in a way that’s good for the audience!
Question 9: Asserting that quantum supremacy depends entirely upon setting up constructive interference sounds like setting up an exceptionally complicated lens. Since every photon in light remains fully quantum as it passes through a lens at room temperature, what if anything makes, e.g., superconducting quantum computers more powerful than passing a high-energy laser through a room-temperature optical lens whose topology describes, for example, a factoring problem?
Answer 9: If your goal is an exponential speedup over a classical computer, then you don’t only need interference, you need interference AMONG EXPONENTIALLY MANY AMPLITUDES (i.e., in an exponentially large Hilbert space). Classical light can’t give you that, since the Hilbert space dimension grows only linearly with the number of optical modes. A large number of ENTANGLED particles (whether photons, superconducting qubits, or whatever else) is going to be necessary.
Question 10: Does the boson sampling approach require extremely low temperatures like the superconducting qubit approach?
Answer 10: No, the beamsplitters can be at room temperature (although I believe the photodetectors are typically just a few Kelvin).
Question 11: What do you think of applications of Boson Sampling, such as Georgios Nikolopoulos’ quantum one-way function, rather than just for a feat of supremacy? Are there any particular applications which seem promising to you?
Answer 11: I think the “quantum one-way function” based on BosonSampling doesn’t work, because as far as we know, you’d need exponentially many samples to get reliable point estimates that you couldn’t have gotten with a classical computer. Even beyond that, though, why not just use a CLASSICAL one-way function that’s believed to be quantumly hard to invert?
Question 12: I remember seeing a paper that claims that they were able to simulate Google’s Sycamore circuit classically with tensor network. Can you comment on that paper?
Answer 12: There was more than one such paper. If, however, you’re referring to Pan and Zhang, then as I briefly mentioned in the talk, once you know that the classical spoofer is generating highly CORRELATED bitstrings, you could evade the spoof by simply adding a check to LXEB to make sure that the sampled strings are sufficiently different from one another.
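A minimal sketch of such a distinctness check (the function, threshold, and "spoofer" below are hypothetical, just to illustrate the idea):

```python
import numpy as np

def distinctness_check(samples, min_distinct_fraction=0.9):
    """Reject a batch of sampled bitstrings if they repeat far more often
    than honest samples plausibly would. For a random circuit on many
    qubits, collisions among a modest number of honest samples are rare.
    (Hypothetical check; the threshold is made up for illustration.)"""
    return len(set(samples)) / len(samples) >= min_distinct_fraction

rng = np.random.default_rng(0)

# 1000 honest-looking samples: random 20-bit strings, collisions rare.
honest = [format(int(i), "020b") for i in rng.integers(0, 2**20, size=1000)]

# A crude spoofer that pads a few strings with many copies of themselves,
# producing highly correlated output.
spoofed = honest[:100] * 10  # each string repeated 10 times
```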
Question 13: Where/how did a particular distribution D get placed in the system? (In the description of the USTC/Google experiments.)
Answer 13: The distribution D is a function of the circuit C (in some talks I call it D_C). It’s simply whatever distribution you get by applying C to the all-0 initial state, then measuring all n qubits in the {|0>,|1>} basis.
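A minimal numerical sketch of D_C (purely illustrative: here C is just one layer of Hadamard gates, whereas the Google/USTC experiments use deep random circuits):

```python
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # the all-0 initial state |000>

# Toy "circuit" C: one Hadamard per qubit, assembled via Kronecker products.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
C = H
for _ in range(n - 1):
    C = np.kron(C, H)

state = C @ state

# Measuring all n qubits in the {|0>,|1>} basis yields each n-bit string x
# with probability |<x| C |00...0>|^2 -- that distribution is D_C.
D_C = np.abs(state) ** 2
```

For this toy circuit, D_C is simply uniform over all 2^n strings; a deep random circuit instead produces a "speckle pattern" of unequal probabilities.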
Question 14: Can you reshare the arXiv reference for those of us too slow to take a screenshot? Thanks!
Answer 14: https://arxiv.org/abs/2109.03494
Question 15: Are the “spoofing” algorithms useful in a problem domain other than “spoofing”?
Answer 15: Many of them are indeed useful for the more general problem of classically simulating a quantum circuit.
Question 16: Why can’t we use Simon’s problem to argue quantum supremacy – is it just because a sufficiently large computer hasn’t been built yet? But unless we can build QCs at that scale, quantum computing won’t be useful anyway. Thanks! Great talk
Answer 16: Firstly, yes, as with Shor’s algorithm you’d need a fault-tolerant QC with thousands of error-corrected qubits (and hence, possibly millions or hundreds of millions of physical qubits). But secondly, even if you had that, to this day no one has discovered an “application” for Simon’s algorithm, in the sense of a class of functions that satisfy the Simon promise but for which we couldn’t easily learn the secret with a classical computer if given the code to compute the function!
Question 17: What use is there for the output samples? Is there a specific area where it can be used?
Answer 17: As I said in the talk, they might be useful as a source of cryptographically certified random bits. As for other applications, I’m not sure yet!
Question 18: The Chinese claim to have a system with quantum encryption for un-hackable satellite communication.
Answer 18: See my answer to #40.
Question 19: Should we expect to find any classical NP problems (e.g., SVP) which are soluble in P with access to QCs? Is there reason to believe factoring can be done in polynomial time with classical computers?
Answer 19: No one knows whether factoring is in P; there’s not even a clear conjecture. People have actively searched for a fast quantum algorithm for approximate-SVP for 25 years now, but there’s been only limited progress.
Question 20: What is your opinion on the role of quantum machine learning going forwards?
Answer 20: It will clearly continue to be a central part of research in quantum algorithms, but that’s mostly because, one way or the other, we need to know the truth about such a central application area! If there’s a “killer app” for QML analogous to Shor’s algorithm, I don’t think it’s been discovered yet. Indeed, some of the most impressive results about QML to date have been negative, ruling out potential speedups by “dequantizing” formerly quantum algorithms.
Question 21: Is Google’s Quantum A/I Labs able to perform any useful tasks? Same question for IBM System Q?
Answer 21: I don’t know of anything “useful” that either can yet do, that we couldn’t have done just as easily with a classical computer (unless you count experiments to inform the design of future quantum computers as “useful”).
Question 22: Thanks for the great talk. Is the LXEB test experimentally efficient? Meaning that the number of samples to evaluate it is not exponential in the number of qubits. The same question about the tests used for Boson sampling.
Answer 22: Yes, the number of samples required depends ONLY on your desired statistical confidence in the result, and not at all on the number of qubits.
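The reason is that an LXEB-style statistic is a sample mean, so its standard error shrinks like 1/sqrt(k) in the number of samples k, independent of the number of qubits. A toy numeric check (the "scores" below are synthetic exponential draws, chosen because for ideal random circuits the rescaled output probabilities approximately follow an exponential, i.e. Porter-Thomas, distribution):

```python
import numpy as np

rng = np.random.default_rng(1)

def std_error_of_mean(k, trials=2000):
    """Empirical standard error of the mean of k synthetic scores,
    estimated over many independent trials. (Illustrative stand-in for
    averaging N * p_ideal(x) over k sampled strings x.)"""
    means = [rng.exponential(scale=1.0, size=k).mean() for _ in range(trials)]
    return np.std(means)

se_100 = std_error_of_mean(100)
se_10000 = std_error_of_mean(10_000)

# 100x more samples -> standard error roughly 10x smaller (1/sqrt(k)),
# with no dependence on any "number of qubits" anywhere.
ratio = se_100 / se_10000
```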
Question 23: Where does the MIP* = RE result fit into the pursuit of quantum supremacy?
Answer 23: MIP*=RE is a wonderful breakthrough, but completely irrelevant to quantum supremacy experiments, as it assumes all-powerful provers with far more entangled qubits than would fit in the known universe.
Question 24: Has quantum error correction been demonstrated in experiments?
Answer 24: See my answer to #48.
Question 25: How mature is the field of the theory of quantum error correction?
Answer 25: See my answer to #48.
Question 26: What is your favorite quantum complexity-theoretic result?
Answer 26: It’s too hard to pick just one, but the “BBBV Theorem” (even quantum computers need at least sqrt(N) time to search an unordered list of size N), and the recent oracle separation between BQP and the polynomial hierarchy by Raz and Tal, would certainly be up there.
Question 27: What are your thoughts on Silicon Photonic Quantum Computing?
Answer 27: It’s clearly one of the leading approaches and I hope it succeeds! I’m lucky not to have to “pick favorites” among it, trapped ions, superconducting qubits, etc. – as a complexity theorist, I can just talk to everyone!
Question 28: What can we do to improve the prospect of creating more quantum algorithms – or is there a prospect of there being a way to define and limit the space of quantum algorithms?
Answer 28: I doubt we can “fully characterize” the space of quantum algorithms, any more than we can fully characterize the space of CLASSICAL algorithms! But yes, finding new quantum algorithms has been a major goal for almost 30 years. One big key seems to be broadening the space of problems one is willing to consider, rather than banging one’s head over and over against the same few problems (e.g., graph isomorphism, NP-hard optimization problems).
Question 29: If LXEB is exponentially hard to compute, how would it be usable for cryptographically certified random bits?
Answer 29: The idea would be that you could do the verification occasionally and at your leisure, using days or weeks on a supercomputer, even while you demanded responses back from the QC in a matter of seconds. But yes, the exponential cost of verification is a big drawback here – which is one reason why I put so much stress on the open problem of fast verification of sampling-based quantum supremacy experiments.
Question 30: Which archives are you referencing?
Answer 30: arXiv.org
Question 31: Can quantum computing be applied to difficult topological problems?
Answer 31: There’s a quantum algorithm for “topological data analysis” (estimating Betti numbers) by Lloyd, Garnerone, and Zanardi, but we don’t yet know whether it achieves a speedup over classical in any situation that could matter in practice. There’s also a quantum algorithm to estimate the Jones polynomial of a knot at a root of unity (Aharonov, Jones, Landau, building on earlier work).
Question 32: Really good sharing session! In your opinion, how should undergraduate students learn quantum computing?
Answer 32: You could try my undergrad lecture notes! Or lecture notes of Umesh Vazirani, John Preskill, or others. Or a course, including several MOOCs that are now available!
Question 33: Current state of molecular modeling (e.g. Anton-2) ignores quantum effects. Can quantum computing simulation improve this?
Answer 33: Any situation where current molecular modeling ignores quantum effects, is a potential target for quantum simulations to do better. But I’m not a chemist and can’t comment on specific platforms like Anton-2.
Question 34: While practical demonstration of quantum supremacy has not been achieved as you said in today’s talk, what can you say about the current status of theoretical demonstration of quantum supremacy over classical computing (especially with respect to MIP* result)?
Answer 34: Practical demonstration of quantum supremacy MAY WELL HAVE BEEN ACHIEVED! As I explained in the talk, right now it’s a complicated, hard-to-interpret battle between the experiments and the classical spoofers, but I expect the experiments to pull ahead decisively before long. As for MIP*, see my answer to #23.
Question 35: Got your perspective on quantum supremacy but what are your thoughts on near-term feasibility of distributed quantum computing or the “quantum Internet”?
Answer 35: The “quantum Internet” has always been sort of a solution in search of a problem. The one clear application we know is Quantum Key Distribution, but that has to compete against classical encryption, which is much easier to use! Beyond that, a quantum Internet seems like it would mainly be useful in a future filled with quantum computers that needed to exchange quantum data.
Question 36: Are there a class of problems for which quantum supremacy may be achievable via quantum annealing based approaches?
Answer 36: Very recent work by Matt Hastings (The Power of Adiabatic Quantum Computation with No Sign Problem; see also (Sub)Exponential Advantage of Adiabatic Quantum Computation with No Sign Problem by Gilyen and Vazirani) has constructed artificial examples of solution landscapes where the adiabatic algorithm finds the global minimum in polynomial time, yet any classical black-box algorithm provably needs exponential time. Alas, these examples are exceedingly artificial, and 20 years after the adiabatic algorithm’s invention, I’d say it remains unclear whether it will give important speedups over the best classical algorithms for non-black-box problems, let alone non-black-box problems with real-world significance. In other words, it remains a great research topic!
Question 37: How many years do you think it will take for quantum computers to reach the maturity needed to become really useful?
Answer 37: I have no idea about that question – understanding the principles of QC provides almost no help in answering it. If I did know how to forecast such things, I wouldn’t be a theoretical computer scientist but an investor, and I’d be rich.
Question 38: Is there any other way to check the results other than LXEB?
Answer 38: Yes, there are many other related tests one could apply, like the “HOG” (Heavy Output Generation) that Lijie Chen and I proposed in 2017. But all the known tests have a similar flavor, and they all require exponential time for a classical computer.
Question 39: In the near term, what should enterprises or businesses do to get ready for quantum exploration?
Answer 39: An interested company could start by hiring someone who’s been trained in QC, understands the subject, and (especially) is able to distinguish real advances from marketing hype, always carefully comparing a proposed quantum solution against the best we know how to do classically.
Question 40: Is quantum cryptography also on the horizon?
Answer 40: Quantum cryptography is already commercially available! Alas, there’s been only an EXTREMELY tiny market for it, firstly because of technological limitations (with fiber, you’re limited to ~10 miles, while satellite QKD so far has poor bit rate and only works at night if the weather is good)…but more importantly, because it’s an exotic solution to a problem that’s already extremely well solved by conventional crypto. Conceivably quantum computers, by breaking much of our conventional crypto, will create more of a market for QKD! Even then, though, there’s plenty of conventional crypto that’s believed to be secure even against QCs, so an easier response might just be to migrate to that.
Question 41: Is there a way to relate/explain quantum entanglement (two states separated by a long/infinite distance able to reflect each other’s state or state-change instantaneously) using ideas in today’s talk? I realize this may not be possible, just asking.
Answer 41: Yes, as I said in the talk, entanglement is just the quantum version of correlation. It occurs when, e.g., two qubits have a 1/sqrt(2) amplitude to both be 0 and a 1/sqrt(2) amplitude to both be 1, so that by learning the state of one qubit, you’d immediately learn the state of the other as well. Entanglement does not allow faster-than-light communication, basically because you don’t get to choose the outcome of measuring a qubit. It does, however, allow types of correlation that are provably impossible classically (that’s the famous Bell’s Theorem).
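A toy simulation of exactly that entangled state (illustrative only): the two measured bits always agree, yet the simulation never lets anyone choose which outcome occurs.

```python
import numpy as np

rng = np.random.default_rng(0)

# The Bell state (|00> + |11>)/sqrt(2): amplitude 1/sqrt(2) for both
# qubits to read 0, and amplitude 1/sqrt(2) for both to read 1.
amplitudes = {"00": 1 / np.sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / np.sqrt(2)}
probs = {s: abs(a) ** 2 for s, a in amplitudes.items()}

# Simulate many joint measurements of both qubits in the {|0>,|1>} basis.
outcomes = rng.choice(list(probs), size=10_000, p=list(probs.values()))

# Perfect correlation: learning one qubit's value reveals the other's.
# But each run's outcome is random (roughly half "00", half "11"),
# which is why no signal can be sent this way.
all_correlated = all(s[0] == s[1] for s in outcomes)
```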
Question 42: Can you comment on the approach of quantum annealing of D-Wave in relation to quantum advantage?
Answer 42: After a first decade filled with hype and exaggerations, which did not inspire confidence in the research community, D-Wave now has some simulation results for ground states of condensed matter systems that look interesting – conceivably even like quantum supremacy. There’s an urgent need for someone from outside D-Wave to look more carefully at these results, and especially at how well they can be replicated on a classical computer using techniques like Quantum Monte Carlo.
Question 43: What is your opinion on using Quantum Volume as the metric to measure quantum computing power, as an alternative to mere qubits, since they also incorporate redundancy and error correction?
Answer 43: See my blog post “Turn Down the Quantum Volume”. I like how quantum volume captures more than just the number of qubits, and dislike how it became just another metric for companies to game for the purpose of press releases. I prefer to look at the whole reality of an experimental effort, including all the tradeoffs they’ve made, rather than boiling everything down to a single invented number.
Question 44: Can quantum computation provide a “true” one-way function?
Answer 44: We already have excellent candidate one-way functions classically. We don’t need QC for that.
Question 45: Factoring is hard to compute but easy to check, which is good for proving quantum supremacy. What makes Shor’s algorithm out of reach to run on current QCs?
Answer 45: As I said in the talk, the fact that you need FAULT-TOLERANT qubits (thousands of them, which probably means millions or hundreds of millions of physical qubits).
Question 46: How many qubits will it take to simulate chemistry?
Answer 46: Of course it depends on which molecule and what you want to know about it, but a 2016 paper from Microsoft, Elucidating Reaction Mechanisms on Quantum Computers, argued that ~100 qubits might already be enough to start learning important new things about chemistry, like the mechanism of the Haber process. The problem is that they meant ~100 PERFECT (or nearly perfect) qubits. How to learn new things about chemistry using the noisy qubits of the near future – things that we couldn’t have learned using a classical computer – remains an open problem.
Question 47: Interference-canceling seems to be a key idea. How is it accomplished in practice? If that’s very hard to do, can one just run a “quantum computation” many many times, because if at all it succeeds, success is very easy to verify?
Answer 47: Yes, you can always run a quantum computation many times until success, but if you don’t start out with a good probability of a right answer, you won’t end up with a speedup. If you’re asking me to explain how interference is actually orchestrated in quantum algorithms, see e.g. my undergraduate lecture notes.
Question 48: What’s the current state of error correction? How would that work for a quantum computer?
Answer 48: This is an enormous question. Briefly, no one has yet experimentally demonstrated quantum error-correction that gives a net gain in qubit lifetime in the presence of general qubit errors. But, kind of like with controlled fusion, they’ve been closing in on the “break-even point,” and the major players are now in a race to be the first to demonstrate it. From there, of course, we’ll still have a ways to go to get full fault-tolerant quantum computation.
Question 49: Are room-temperature scalable quantum computers feasible?
Answer 49: Maybe someday! Right now we’d gladly accept any scalable quantum computer at all.
Question 50: Quantum supremacy is concerned with the absolute statement without considering the demonstrated computational advantages – should there be a different property that measures computation over the resources used for computation? Maybe in terms of energy…?
Answer 50: No, quantum supremacy is not “absolute” – it’s all about performing the same task with dramatically fewer resources (including time, processors, and energy) than are needed classically.
Question 51: Is the distance between LXEB=1.0001 in experiment and the theoretical perfect value of 2 all due to decoherence?
Answer 51: No, I believe it’s also due to calibration and many other things besides decoherence.
Question 52: Would quantum computing have benefited most from having been developed by physicists, mathematicians, or computer scientists?
Answer 52: Historically, it needed contributions from all of them, but especially a rich interplay between physics and CS. Quantum mechanics was in place by 1926 and Turing machines by 1936, but the centrality of polynomial vs. exponential time wasn’t firmly understood until the 60s and 70s – which is probably a major reason why QC didn’t start getting explored until the 80s!