As the search continues for useful applications of noisy intermediate-scale quantum devices, variational simulations of fermionic systems remain one of the most promising directions. Here, we demonstrate an experimental quantum simulation of chemistry with more than ten times as many entangling gates as the largest prior implementation. We variationally model the isomerization of diazene and correctly distinguish between competing reaction mechanisms to within the model chemistry. The parameterized circuits explored in our work realize the Givens rotation approach to free fermion evolution. This ubiquitous algorithmic primitive performs an arbitrary change of orbital basis and is required by many proposals for correlated simulations of molecules and Hubbard models. Because free fermion evolutions are classically tractable to simulate yet still generate highly entangled states over the computational basis, we are able to use these experiments to benchmark the performance of our quantum processor while establishing a foundation for scaling up intermediate-scale correlated electron simulations.
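To make the algorithmic primitive concrete, the following numpy sketch (illustrative only, not the code used in the experiment) decomposes an arbitrary real single-particle basis rotation into Givens rotations acting on adjacent orbitals; in circuits of this kind, each such rotation maps onto one parameterized two-qubit gate, and an n-orbital rotation needs at most n(n-1)/2 of them.

```python
# Minimal sketch: decompose an orthogonal single-particle rotation Q into
# Givens rotations on adjacent rows (orbitals). Not the experiment's code.
import numpy as np

def givens_decompose(Q):
    """Return (i, j, theta) rotations whose product reduces Q to a diagonal of +/-1."""
    Q = Q.copy()
    n = Q.shape[0]
    rotations = []
    for col in range(n - 1):
        for row in range(n - 1, col, -1):
            a, b = Q[row - 1, col], Q[row, col]
            if abs(b) < 1e-12:
                continue
            theta = np.arctan2(-b, a)          # chosen to zero out Q[row, col]
            c, s = np.cos(theta), np.sin(theta)
            G = np.eye(n)
            G[row - 1, row - 1] = G[row, row] = c
            G[row - 1, row], G[row, row - 1] = -s, s
            Q = G @ Q
            rotations.append((row - 1, row, theta))
    return rotations

# Verify on a random 4-orbital basis rotation.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
rots = givens_decompose(Q)
D = Q.copy()
for i, j, theta in rots:
    G = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    D = G @ D
print(np.allclose(D, np.diag(np.sign(np.diag(D)))))  # True: reduced to diag(+/-1)
print(len(rots), "Givens rotations on adjacent orbitals")
```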
"st-connectivity" is the problem of deciding whether two points in a graph are connected or not (i.e. whether there is a path between them). I will show that a range of common problems can be reduced to st-connectivity, and furthermore, this reduction leads to an optimal quantum algorithm in many cases of interest. The analysis of the quantum algorithm is fairly simple, and uses standard graph theoretic quantities. These properties suggest that st-connectivity would make a good building block, or primitive, for designing and analyzing quantum algorithms.
The subject of this talk will be quantum distributed computing, i.e., distributed computing when the processors of the network can exchange quantum information. After describing the basics of distributed computing, I will explain a result obtained with Frédéric Magniez (arXiv:1804.02917) on quantum algorithms computing the diameter of the network. I will then present other results (arXiv:1810.10838 and arXiv:1908.11488) that show separations between the computational powers of quantum and classical distributed algorithms in several fundamental models of distributed computing. I will conclude my talk by mentioning interesting and important open questions in quantum distributed computing.
Noisy quantum computers can outperform classical computers on certain sampling tasks, but it's unclear whether NISQ machines can outperform classical computers on a commercially relevant task. If NISQ machines can't do this, then it will be necessary to build fault-tolerant quantum computers before a "quantum information age" can truly begin. The goal of this talk is to contextualize the expected overhead of fault-tolerant quantum computation using superconducting qubits and the surface code. We will cover the cost of performing the simplest possible classically intractable task (random circuit sampling) in a fault-tolerant fashion.
This talk aims to provide an overview of some of the major ideas and techniques developed for Hamiltonian simulation, including Trotterization, linear combination of unitaries, oblivious amplitude amplification, and quantum signal processing. The adaptation and clever combination of these techniques led to a flurry of new results in different variants of Hamiltonian simulation problems, as well as in other important quantum algorithmic tasks.
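As a minimal illustration of the first of these techniques, the numpy sketch below (toy matrices, not from the talk) compares exact evolution under a sum of two non-commuting Hermitian terms with the first-order Trotter product formula, whose error shrinks as the number of steps grows.

```python
# First-order Trotterization: (e^{-iAt/r} e^{-iBt/r})^r approaches e^{-i(A+B)t}.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

A, B, t = random_hermitian(4), random_hermitian(4), 1.0
exact = expm(-1j * (A + B) * t)
for r in (1, 4, 16, 64):
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    trotter = np.linalg.matrix_power(step, r)
    print(r, np.linalg.norm(trotter - exact, 2))  # spectral-norm error shrinks roughly as 1/r
```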
In the past few years there have been a number of major advances in quantum simulation. I will be reviewing several of them in this talk, including the interaction-picture simulation method, the QDRIFT algorithm, well-conditioned multi-product formulas, and new commutator bounds for high-order Trotter-Suzuki methods which show that those algorithms are much more efficient than previously thought. Finally, I will discuss applications of these algorithms to problems in chemistry and to simulation of the Schwinger model in 1+1D.
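For concreteness, here is a rough sketch of the QDRIFT idea on a toy two-term Hamiltonian (the operators and parameters below are illustrative, not from the talk): terms are sampled in proportion to their coefficients, and each sampled term is evolved for a fixed small time.

```python
# QDRIFT sketch: to simulate H = sum_j h_j H_j for time t, draw N terms j
# with probability h_j / lambda (lambda = sum_j h_j) and apply
# exp(-i * lambda * t * H_j / N) for each draw.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

terms = [X, Z]                  # normalized Hamiltonian terms H_j
coeffs = np.array([0.9, 0.4])   # positive weights h_j
lam, t, N = coeffs.sum(), 1.0, 200

H = sum(h * P for h, P in zip(coeffs, terms))
exact = expm(-1j * H * t)

U = np.eye(2, dtype=complex)
for j in rng.choice(len(terms), size=N, p=coeffs / lam):
    U = expm(-1j * lam * t * terms[j] / N) @ U

# QDRIFT's rigorous guarantee is for the channel averaged over the random
# draws; a single realization like this one fluctuates at O(lambda*t/sqrt(N)).
print(np.linalg.norm(U - exact, 2))
```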
Recently, protocols based on statistical correlations of randomized measurements were developed for probing and verifying engineered quantum many-body systems. After a general introduction in the context of Rényi entropies, I focus in this talk on the cross-platform verification of quantum computers and simulators by means of fidelity measurements. I show how to measure the overlap between (reduced) density matrices, and thus a (mixed-state) fidelity of two quantum states prepared on separate experimental platforms. The protocol requires only local measurements in randomized product bases and classical communication between the devices. As a proof of principle, I present the measurement of experiment-theory fidelities for entangled 10-qubit quantum states in a trapped ion quantum simulator. To conclude, I will present further applications of randomized measurements for probing quantities beyond standard observables, such as out-of-time-ordered correlation functions.
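As a toy numerical check of the overlap estimator behind such fidelity measurements (assuming the standard randomized-measurement formula with local Haar-random unitaries; this is not the experimental analysis code), the sketch below estimates Tr(rho1 rho2) for two small random states from simulated randomized measurements and compares it with the exact value.

```python
# Randomized-measurement overlap estimator:
# Tr(rho1 rho2) = 2^N * sum_{s,s'} (-2)^(-D(s,s')) * E_U[ P1_U(s) P2_U(s') ],
# with U = u_1 x ... x u_N local random unitaries, P_U(s) = <s|U rho U^dag|s>,
# and D the Hamming distance between bitstrings s and s'.
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
N = 2  # qubits

def haar_1q():
    M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, R = np.linalg.qr(M)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # Haar-random single-qubit unitary

def random_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rho1, rho2 = random_state(2**N), random_state(2**N)

bits = list(product([0, 1], repeat=N))
hamming = np.array([[sum(a != b for a, b in zip(s, sp)) for sp in bits] for s in bits])

est, n_unitaries = 0.0, 500
for _ in range(n_unitaries):
    U = np.kron(haar_1q(), haar_1q())                 # same local random basis on both "devices"
    P1 = np.real(np.diag(U @ rho1 @ U.conj().T))      # measurement probabilities, state 1
    P2 = np.real(np.diag(U @ rho2 @ U.conj().T))      # measurement probabilities, state 2
    est += 2**N * np.sum((-2.0) ** (-hamming) * np.outer(P1, P2))
est /= n_unitaries
print(est, np.real(np.trace(rho1 @ rho2)))            # the two values roughly agree; more unitaries -> closer
```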
Cycle benchmarking is a new approach for scalable, complete and efficient error diagnostics that will be essential to understanding and improving quantum computing performance from the NISQ era to fault-tolerance. Cycle benchmarking is born from ideas of randomized benchmarking and exposes tomographic methods as impractical and obsolete. When combined with randomized compiling, cycle benchmarking can identify the full impact of errors and error correlations for any (parallel) gate-combination of interest. I will show cycle benchmarking data from experimental implementations on multi-qubit superconducting and ion-trap quantum computers revealing that: (1) in leading platforms, cross-talk and other error correlations can be much more severe than expected, even many orders of magnitude larger than expected based on independent error models; (2) these cross-talk errors induce errors on other qubits (e.g., idling qubits) that are an order of magnitude larger than the errors on the qubits in the domain of the gate operation; and thus (3) the notion of "elementary gate error rates" is not adequate for assessing quantum computing operations, and cycle benchmarking provides the tools for an accurate assessment. I will then discuss how the aggregate error rates measured under cycle benchmarking can be applied to provide a practical bound on the accuracy of applications in what I call the "quantum discovery regime", where quantum solutions can no longer be checked via HPCs.
Testbed-class quantum computers -- fully programmable 5-50 qubit systems -- have burst onto the scene in the past few years. The associated surge in funding, hype, and commercial activity has spurred interest in "benchmarks" for assessing their performance. Unsurprisingly, this has generated both a number of scientifically interesting ideas *and* a lot of confusion and kerfuffle. I will try to explain the state of play in this field -- known historically as "quantum characterization, verification, and validation (QCVV)" and more recently and generally as "quantum performance assessment" -- by briefly reviewing its history, explaining the different categories of benchmarks and characterization protocols, and identifying what they're good for. The overarching message of my talk will be that these are distinct tools in a diverse toolbox -- almost every known protocol and benchmark really measures a distinct and particular thing, and we probably need *more* of them, not fewer.
Pauli channels are ubiquitous in quantum information, both as a dominant noise source in many computing architectures and as a practical model for analyzing error correction and fault tolerance. Here we prove several results on efficiently learning Pauli channels, and more generally the Pauli projection of a quantum channel. We first derive a procedure for learning a Pauli channel on n qubits to a fixed relative precision with O(n 2^n) measurements. For a Pauli channel with only s nonzero error rates, we show how to learn it with only O(s n) measurements. Finally, we show that when the Pauli channel is given by a Markov field with at most k-local correlations, we can learn an entire n-qubit Pauli channel with only O(n^2 log n) measurements, which is efficient in the number of qubits. These results enable a host of applications beyond just characterizing noise in a large-scale quantum system: they pave the way to tailoring quantum codes, optimizing decoders, and customizing fault tolerance procedures to suit a particular device. Joint work with Robin Harper, Joel Wallman, and Wenjun Yu, arXiv:1907.13022, arXiv:1907.12976.
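A single-qubit toy example (illustrative only, not the protocol from the papers) of the linear-algebraic relation such learning procedures exploit: a Pauli channel's eigenvalues are a Walsh-Hadamard-type transform of its error rates, so estimating the eigenvalues from measurement data and inverting the transform recovers the rates.

```python
# For a Pauli channel E(rho) = sum_a p_a P_a rho P_a, the eigenvalues are
# lambda_b = sum_a p_a (-1)^{<a,b>}, where <a,b> = 1 iff P_a and P_b
# anticommute. Inverting this sign transform recovers the error rates p_a.
import numpy as np

paulis = ["I", "X", "Y", "Z"]

def anticommute(a, b):
    return 0 if a == "I" or b == "I" or a == b else 1

# Sign matrix M[b, a] = (-1)^{<a,b>}; M is symmetric and M @ M = 4*I.
M = np.array([[(-1) ** anticommute(a, b) for a in paulis] for b in paulis], dtype=float)

p_true = np.array([0.9, 0.05, 0.03, 0.02])   # Pauli error rates (sum to 1)
lam = M @ p_true                              # eigenvalues a learning protocol would estimate
p_recovered = np.linalg.solve(M, lam)         # equivalently (1/4) * M @ lam
print(np.allclose(p_recovered, p_true))       # True
```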
Several lattice-based cryptography primitives and protocols are now practical and even available in commercial products, for example, public-key cryptography, homomorphic encryption, proxy re-encryption (PRE), and digital signatures. Many of these primitives based on the Learning With Errors problem are implemented in the PALISADE lattice cryptography library. This talk presents a survey of state-of-the-art lattice algorithms implemented in PALISADE (both for already practical primitives and research prototypes of more advanced capabilities) and discusses some of their applications. The first part of the talk focuses on FHE schemes and their extensions, such as PRE and threshold FHE. The remaining part is centered around lattice trapdoors and the protocols where lattice trapdoors are used as a primitive, including identity-based encryption, key-policy attribute-based encryption, and program obfuscation.
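To give a flavor of the underlying hardness assumption, the following toy Python sketch implements Regev-style bit encryption from Learning With Errors with deliberately tiny, insecure parameters; nothing here reflects PALISADE's actual API or parameter choices.

```python
# Toy LWE (Regev-style) bit encryption. Parameters are far too small to be
# secure; with this noise level decryption succeeds with high probability.
import numpy as np

rng = np.random.default_rng(4)
q, n, m = 257, 8, 64                          # toy modulus and dimensions

# Key generation: secret s; public matrix A and noisy products b = A s + e.
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-1, 2, size=m)               # small error terms in {-1, 0, 1}
b = (A @ s + e) % q

def encrypt(bit):
    r = rng.integers(0, 2, size=m)             # random subset of the public samples
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q
    return u, v

def decrypt(ct):
    u, v = ct
    d = (v - u @ s) % q                        # = r.e + bit*(q//2) mod q, with r.e small
    return int(q // 4 < d < 3 * q // 4)        # closer to q/2 -> 1, closer to 0 -> 0

print([decrypt(encrypt(bit)) for bit in (0, 1, 1, 0)])  # expected: [0, 1, 1, 0]
```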