Qu&Co comments on this publication:

Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm that has been heavily investigated due to its potential during the NISQ era. It is designed to find approximate solutions to combinatorial search problems and was first applied to the Max-Cut problem for d-regular graphs. The system is initially prepared in a product state, after which p layers of the unitaries U(C,γ) and U(B,β) are applied in alternation; this can be seen as a Trotterized version of (non-adiabatic) quantum annealing with a parametrized annealing pathway.
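
To make the structure of the ansatz concrete, the following plain-numpy sketch (our own toy example with an assumed four-vertex Max-Cut instance and arbitrary angles, not code from any of the referenced papers) builds the p-layer QAOA state by alternately applying U(C,γ) and U(B,β) to the uniform-superposition product state:

import numpy as np
from functools import reduce
from scipy.linalg import expm

# Toy Max-Cut instance and arbitrary angles, purely for illustration
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
op = lambda P, site: reduce(np.kron, [P if q == site else I2 for q in range(n)])

C = sum(0.5 * (np.eye(2**n) - op(Z, i) @ op(Z, j)) for i, j in edges)  # counts cut edges
B = sum(op(X, q) for q in range(n))                                    # transverse-field mixer

def qaoa_state(gammas, betas):
    state = np.full(2**n, 2**(-n / 2), dtype=complex)  # |+>^n product state
    for g, b in zip(gammas, betas):                    # p alternating layers
        state = expm(-1j * g * C) @ state              # U(C, gamma)
        state = expm(-1j * b * B) @ state              # U(B, beta)
    return state

psi = qaoa_state([0.4, 0.7], [0.6, 0.3])               # p = 2, angles chosen arbitrarily
print("expected cut value:", (psi.conj() @ C @ psi).real)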

In these two papers by Farhi, Gamarnik & Gutmann, posted on April 20 and May 18, the focus is the problem of Maximum Independent Set (MIS) on random graphs: finding a large independent set in a random graph of fixed average degree d. Generally, the performance of the QAOA can only improve with depth p, but the authors show that for MIS the algorithm will fail to pass a certain performance barrier if 2p is less than w·log(n)/log(d/ln 2) for any fixed w < 1, provided d is big enough. The quantum algorithm consists of p layers of unitaries that each respect the locality of the underlying graph. With a fixed average degree d this means that each qubit typically has an influence sphere of roughly d^p other qubits. For qubits further than 2p apart on the graph these influence spheres do not intersect, and measurements of these qubits can be shown to be uncorrelated. However, if p is large enough that d^p exceeds n, the arguments no longer apply and there is no indication that the QAOA will fail.
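
To get a feel for this growth, the short script below (our own illustration with assumed parameters, using an Erdős–Rényi graph as a stand-in for the random graphs studied in the papers) counts how many vertices lie within distance p of a given vertex and compares that number with d^p and n:

import networkx as nx

# Assumed parameters; Erdos-Renyi graph as a stand-in for the random graphs in the papers
n, d = 100_000, 4
G = nx.fast_gnp_random_graph(n, d / n, seed=1)   # random graph with average degree ~d
v = max(G.degree, key=lambda kv: kv[1])[0]       # pick a vertex inside the giant component

for p in range(1, 7):
    ball = nx.ego_graph(G, v, radius=p)          # everything within distance p of v
    print(f"p={p}:  |influence sphere| = {ball.number_of_nodes():>6}   d^p = {d**p:>6}   n = {n}")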

The Overlap Gap Property (OGP) states that for a given random graph the intersection of any two large independent sets is either big or small; there is no middle ground. Using the OGP and the locality of the QAOA, it is shown that if p is less than a d-dependent constant times log n, the QAOA cannot do better than finding an independent set of size 0.854 times the optimal, for large d. Because the logarithm grows slowly, even at one million qubits the authors can only show that the algorithm is blocked if p is in the single digits. At higher p the algorithm “sees” the whole graph and there is no indication that performance is limited.
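
As a back-of-the-envelope check of that statement (our own arithmetic, simply plugging n = 10^6 and a few illustrative values of d into the bound quoted above):

from math import log

# The obstruction quoted above applies while 2p < w * log(n) / log(d / ln 2);
# n = 10^6 and the d values are illustrative choices, w is taken close to 1.
n, w = 10**6, 0.99
for d in (5, 20, 100):
    bound = w * log(n) / log(d / log(2))         # ratio of logs is base-independent
    print(f"d = {d:>3}:  blocked while 2p < {bound:.1f}")

The bound on 2p stays below seven even for modest d, i.e. p in the single digits, consistent with the statement above.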

Worst-case instances that exploit the QAOA’s weaknesses can also be constructed explicitly. By construction of the operators C and B, the QAOA is an inherently local quantum algorithm: conjugating a single-qubit operator by one layer of the circuit produces an operator involving only that qubit and those connected to it on the graph, so a shallow circuit only spreads information over a limited neighbourhood. This can be exploited to construct examples where the QAOA’s performance is provably below optimal. For example, for Max-Cut the authors show that when p is a small enough constant times log(n) the approximation ratio is no better than ½ for large d, and for MIS the approximation ratio goes to 0 at large d.

This is an important result for MIS on random graphs which, although not directly generalizable to other problems, is still valuable for establishing bounds on problems to which the QAOA is applied. Knowing that, in the average case for MIS, the QAOA needs to see the whole graph and therefore requires a large p will change how the QAOA and its potential are viewed for certain problems, and these papers illustrate methods to quantify that.

Qu&Co comments on this publication:

At Qu&Co we have always restrained ourselves from reacting to exaggerated claims about the short-term potential of quantum-computing. Rather, we have focused on our scientifically rigorous work to advance the field of quantum, as we strongly believe in its long-term potential. However, we draw the line at quantum being pushed as a short-term solution for researchers working on COVID, like this WSJ article in which a quantum hardware manufacturer offers free hardware access to researchers studying COVID, stating ‘we have a fairly unique system that could add value’. Although this offer could be a misplaced April-fools joke, we want to stress that, while quantum has strong long-term potential, there is zero chance it will provide any short-term value for COVID research. Therefore, no serious researcher working on the current pandemic should be distracted by this offer. If you are determined to use novel methods to solve today’s combinatorial optimisation problems, perhaps try simulated annealing on a purpose-built classical processor. And of course, if your time horizon is >2 years and you want to work on collaborative quantum-algorithm R&D, without distracting scarce COVID R&D staff, we are here to help. Stay safe and focused!

Qu&Co comments on this publication:

Quantum computers are envisioned to have significantly higher computational capabilities than their classical counterparts, especially for optimization and machine learning problems that involve a large classical data set. However, existing quantum algorithms use the trivial approach of turning large classical datasets into either quantum oracles or quantum states, which is so expensive that it negates any possible quantum advantage. Such quantum algorithms target problems in which the classical runtime scales rapidly with the input size, perhaps exponentially. To be able to achieve a quantum speedup with algorithms like Grover search, a “quantum RAM” has been proposed: a large classical memory that can be queried in superposition. Although quantum RAMs do not yet exist, and building one might run into the same challenges that quantum computing hardware faces, such a device has the potential to provide significant speedups to applications like k-means clustering, logistic regression, zero-sum games and boosting.

This paper introduces hybrid classical-quantum algorithms for problems involving a large classical data set X and a space of models Y, such that a quantum computer has superposition access to Y but not to X. A data reduction technique is then used to construct a weighted subset of X, called a coreset, that yields approximately the same loss as the full data set for each model. The coreset can be constructed either by a classical computer alone or by a classical computer working together with a quantum computer that supplies quantum measurements.
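
As a purely classical illustration of the coreset idea (our own toy sketch with uniform sampling, synthetic data and an arbitrary model, not the construction used in the paper), a small weighted subsample can already track the k-means loss of the full data set for a given set of centres:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 2))                  # stand-in for a large classical data set
m = 500                                            # coreset size (arbitrary choice)
idx = rng.choice(len(X), size=m, replace=False)    # uniform subsample for simplicity
coreset, weights = X[idx], np.full(m, len(X) / m)  # reweight so totals match the full set

def kmeans_loss(points, w, centres):
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1).min(1)
    return float((w * d2).sum())

centres = rng.normal(size=(3, 2))                  # an arbitrary model from the model space Y
print("full-data loss :", kmeans_loss(X, np.ones(len(X)), centres))
print("coreset loss   :", kmeans_loss(coreset, weights, centres))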

The main message of this work is that, in order to avoid losing the quantum speedup for ‘big-data’ applications, techniques such as data reduction are required so that the time spent loading and storing the data set remains limited. Also, non-fault-tolerant quantum algorithms should be designed so that they do not require an excessive number of gates, allowing the algorithm to run before the qubits lose their coherence and invalidate the result. The paper also aims to draw attention to the questions that arise from this approach, such as how to test for quantum advantage when data reduction is used, to explore general data reduction techniques, and to investigate new hybrid classical-quantum algorithms.

Qu&Co comments on this publication:

Quantum computers were initially proposed to efficiently simulate quantum mechanical systems with an exponential speedup compared to classical computers. We are currently in the noisy intermediate-scale quantum (NISQ) era, which means quantum chips still have a small number of qubits. This prohibits straightforward quantum simulations of realistic molecules and materials, whose description requires hundreds of atoms and thousands to millions of degrees of freedom to represent the electronic wavefunctions. One research direction which attempts to bypass this restriction is the development of hybrid quantum-classical methods where the quantum computation is restricted to a small portion of the system.

In this paper, a quantum embedding theory is proposed for the calculation of strongly-correlated electronic states of active regions, while the rest of the system is described with density functional theory (DFT). DFT (and its various approximations) has been extremely successful in predicting numerous properties of solids, liquids and molecules, and in providing key interpretations to a variety of experimental results, but it is often inadequate for describing strongly-correlated electronic states. The novel theory proposed in this paper is built on DFT and does not require the explicit evaluation of virtual electronic states, thus making the method scalable to materials with thousands of electrons. It also includes the effect of exchange-correlation interactions of the environment on the active regions, thus going beyond commonly adopted approximations in conventional DFT.

The proposed quantum embedding theory combines a classical and a quantum algorithm to solve the Hamiltonian that describes the problem, and yields results in good agreement with existing experimental measurements and with still-tractable computations on classical computing architectures. The theory is tested on systems relevant to solid-state quantum information technologies, which exhibit strongly-correlated electronic states. In this way, the authors show how a hybrid quantum-classical approach incorporating DFT enables the study of large-scale material systems while adding the analysis of strongly-correlated states that the quantum simulation algorithm can provide.

Qu&Co comments on this publication:

In this arXiv submission by Qu & Co and Covestro, a well-known approximation from classical computational methods for quantum chemistry is applied to a quantum computing scheme for simulating molecular chemistry efficiently on near-term quantum devices. The restricted mapping allows for a polynomial reduction in both the quantum circuit depth and the total number of measurements required, as compared to conventional variational approaches to near-term quantum simulation of molecular chemistry, such as UCCSD. This enables faster runtime convergence of the variational algorithm to a potentially higher accuracy, by using the larger basis set that the restricted mapping allows. The latter is shown via an example simulation of the dissociation curve of lithium hydride. These results open up a new direction for efficient near-term quantum chemistry simulation, as well as decreasing the effective quantum resource requirements for future fault-tolerant quantum computing schemes.

Qu&Co comments on this publication:

Recently some contributors to a paper describing a quantum-supremacy experiment inadvertently posted an older version of this paper online, which was quickly picked up by the popular press, resulting in a flurry of (in many cases) unfounded claims about the progress of quantum-computing. We believe that it is important for people interested in this topic to inform themselves through a balanced opinion from an expert in this field. We therefore kindly refer to Scott Aaronson's excellent blogpost on this matter.

Qu&Co comments on this publication:

In this article by McKinsey & Co, a strategy consulting firm, Florian Budde and Daniel Volz state that chemical companies must act now to capture the benefits of quantum computing. Of course we at Qu & Co are a bit biased on this topic, but we do agree with the authors that the chemical sector is likely to be an early beneficiary of the vastly expanded modeling and computational capabilities that quantum computing promises to unlock.

Qu&Co comments on this publication:

In this market outlook, The Boston Consulting Group assesses how and where quantum computing will create business value, the likely progression, and what steps executives should take now to put their firms in the best position to capture that value. The report is based on interviews and workshops involving more than 100 experts, a review of some 150 peer-reviewed publications, and analysis of more than 35 potential use cases.

Qu&Co comments on this publication:

Currently, the latest state-of-the-art quantum computers are so-called NISQ (noisy intermediate-scale quantum) devices, meaning they have a qubit count that is approaching the limit of what classical simulation of such systems can match, yet the devices are noisy and fault-tolerance cannot be achieved yet. The question is: are there methods which can sufficiently compensate for their noisy nature, enabling the emergence of quantum advantage on these devices? In recent years, many error correction and mitigation schemes have been developed: from Richardson extrapolation techniques that extend results down to ‘zero noise’, to parity check measurements and more. But typically, those techniques require additional complicated circuitry, ancillary qubits, pulse modifications, or calibration/tuning steps. In this paper, an alternative strategy is proposed, based on the general principle of a class of methods called Quantum Subspace Expansion (QSE). In this strategy, one performs clever classical post-processing of measurement data, with or without additional measurements, requiring (at most) simple additional operations in the circuit and no ancillary qubits that scale with system size. The paper generalizes the application of QSE error mitigation to any quantum computation, without restricting itself to problem-specific settings such as chemistry. Another interesting idea presented here is to use NISQ devices to experimentally study small quantum codes for later use in larger-scale quantum computers implementing error-correcting codes, such as future FTQC (fault-tolerant quantum computing) machines.
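
To illustrate the general QSE principle on a toy problem (our own two-qubit example with arbitrary Hamiltonian coefficients, simulated with dense matrices rather than hardware measurements): one builds the subspace spanned by a few low-weight Pauli operators acting on the noisy state, estimates the matrices H_ij = <psi|P_i H P_j|psi> and S_ij = <psi|P_i P_j|psi>, and solves the resulting generalised eigenvalue problem classically:

import numpy as np
from functools import reduce
from scipy.linalg import eigh

# Everything below is simulated with dense matrices; on hardware the matrix
# elements H_ij and S_ij would come from (additional) measurements.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
kron = lambda *ops: reduce(np.kron, ops)

H = -kron(Z, Z) - 0.5 * (kron(X, I2) + kron(I2, X))   # arbitrary 2-qubit test Hamiltonian
evals, evecs = np.linalg.eigh(H)
exact = evals[0]

psi = evecs[:, 0] + 0.2 * evecs[:, 1]                 # "noisy" state: ground state + error
psi /= np.linalg.norm(psi)
raw = psi @ H @ psi                                   # unmitigated energy estimate

expansion = [kron(I2, I2), kron(Z, I2), kron(I2, Z), kron(X, X)]  # low-weight Pauli expansion
vecs = [P @ psi for P in expansion]
Hm = np.array([[a @ H @ b for b in vecs] for a in vecs])          # subspace Hamiltonian
Sm = np.array([[a @ b for b in vecs] for a in vecs])              # overlap (Gram) matrix
mitigated = eigh(Hm, Sm)[0][0]                        # lowest generalised eigenvalue

print(f"exact {exact:.4f}   raw {raw:.4f}   QSE-mitigated {mitigated:.4f}")

In this tiny example the expanded subspace happens to span the full Hilbert space, so the mitigated energy is exact; on realistic problems the correction is only partial.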

Qu&Co comments on this publication:

Thus far, quantum algorithms for quantum chemistry have been experimentally demonstrated only on gate-based quantum computers. Efforts have been made to also map the fermionic Hamiltonian of the chemistry problem to an Ising Hamiltonian in order to solve it on a quantum annealer. However, the number of qubits required still scales exponentially with the problem size (the number of orbitals considered in the electronic structure problem). As an alternative, this paper presents a different approach that exploits the efficiency with which quantum annealers solve discrete optimization problems, by mapping a qubit coupled cluster method to this form. The authors simulate their method on an ideal Ising machine and on a D-Wave 2000Q system, and find promising success rates for smaller molecules. However, further work would be necessary to assess the usability for larger or more complex systems, as the scaling of their folding technique with the number of local minima is unknown. In addition, it is unclear from the experimental data whether the limitations of the D-Wave system as compared to a perfect Ising machine could hinder the expected performance gains for more complex systems.
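
For reference, the generic form of the discrete optimization problem that an Ising machine or annealer minimises is E(s) = sum_ij J_ij s_i s_j + sum_i h_i s_i over spins s_i = ±1; the toy script below (our own sketch with random placeholder couplings solved by exhaustive search, not the actual chemistry mapping from the paper) shows this objective explicitly:

import itertools
import numpy as np

# Generic Ising objective E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i, s_i = +/-1.
# Couplings are random placeholders; the minimum is found by exhaustive search.
rng = np.random.default_rng(0)
n = 8
J = np.triu(rng.normal(size=(n, n)), k=1)   # pairwise couplings (upper triangle)
h = rng.normal(size=n)                      # local fields

def energy(spins):
    s = np.asarray(spins)
    return float(s @ J @ s + h @ s)

best = min(itertools.product((-1, 1), repeat=n), key=energy)
print("lowest-energy spin configuration:", best)
print("energy:", round(energy(best), 3))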

Qu&Co comments on this publication:

This report by The Boston Consulting Group, a strategy consulting firm, targets business executives and other people looking for a broader market overview of quantum computing. The authors (Philipp Gerbert et al.) provide some insight into where the technology currently stands, who is who in the emerging ecosystem, and the potentially interesting applications. The report also analyzes leading indicators such as investments, patents, and publications, which countries and entities are most active, and the status and prospects of the main categories of quantum hardware technologies. Additionally, the report aims to provide a simple framework for understanding quantum algorithms and assessing their applicability and potential. Finally, the authors give their view of what can be expected in the next five to ten years, and what corporates should be doing, or getting ready for, in response.

Qu&Co comments on this publication:

Ever since the publication of Shor’s algorithm in 1994, efficient integer factorization has been a key application area envisioned for quantum-computers, with important implications for the security of some of the most widely used cryptosystems. Because Shor’s algorithm requires a large-scale fault-tolerant quantum-processor, RSA-3072 encryption was so far believed to remain safe until 2030. However, in recent years hybrid (classical-quantum) alternatives have been developed for many important quantum-algorithms. Such hybrid algorithms can be run on current-day noisy and small-scale quantum-processors. In this paper Eric Anschuetz et al. describe such a hybrid alternative to Shor’s algorithm, which they call variational quantum factoring (VQF). If some classical pre-processing is applied, VQF scales as O(n), n being the number of bits of the integer being factored. If VQF can be optimized to scale well up to 3000+ qubits, which is very challenging but not completely unthinkable, and if we assume the number of physical qubits in quantum-processors doubles every year, quantum-processors could have a sufficiently high qubit count to break RSA-3072 as early as 2025. However, as VQF relies on a quantum-optimization algorithm (QAOA), it seems unlikely that its speed-up could be more than quadratic, which means that the runtime for breaking RSA-3072 could very well be prohibitively long and that doubling the key length to RSA-6144 would again be just as safe as RSA-3072 is currently.
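
To make the optimisation framing concrete, here is a toy classical illustration (our own sketch, not VQF's actual clause construction or its QAOA-based optimisation) of factoring recast as minimising a cost over the bits of the candidate factors:

import itertools

# Small semiprime and factor register sizes chosen for illustration only
N = 77
nbits = 4                                   # bits per factor above the fixed least-significant 1

def value(bits):
    return 1 + sum(b << (i + 1) for i, b in enumerate(bits))   # odd candidate factor

best = min(
    itertools.product((0, 1), repeat=2 * nbits),
    key=lambda b: (N - value(b[:nbits]) * value(b[nbits:])) ** 2,
)
p, q = value(best[:nbits]), value(best[nbits:])
print(p, "*", q, "=", p * q)                # recovers the factorisation of 77 (7 x 11)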
