Error mitigation

Variational Quantum Algorithms (VQAs) are among the most prominent methods of the Noisy Intermediate-Scale Quantum (NISQ) era, as they adapt to the constraints of NISQ devices. VQAs are used in a wide range of applications, from dynamical quantum simulation to machine learning. NISQ devices do not have the resources to exploit the benefits of quantum error correction (QEC), so such algorithms suffer from the effects of noise, which degrades their performance. As a substitute for full QEC, many algorithms employ error mitigation techniques to counter the effects of noise. An example of a VQA is the Variational Quantum Eigensolver (VQE), a setting that offers some built-in robustness to noise; however, additional strategies are required to complement it.

It has been proven that noise can significantly decrease the trainability of VQAs with linear (or super-linear) circuit depth. One effect of noise is the flattening of the cost landscape as a function of the variational parameters, so that resolving its features requires a precision that scales exponentially with system size. This phenomenon is known as a Noise-Induced Barren Plateau (NIBP), because the algorithm can no longer resolve a finite cost gradient (a barren plateau). It can therefore be advantageous to pursue algorithms with sublinear circuit depth and to employ strategies that limit hardware noise.
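Schematically, the NIBP result bounds the partial derivatives of the noisy cost function by a factor that decays exponentially with circuit depth. In simplified form (the prefactor F(n) and the noise-dependent constant q vary with the noise model, so this is a sketch of the scaling rather than the paper's precise bound):

\[
\left| \frac{\partial \tilde{C}(\boldsymbol{\theta})}{\partial \theta_\mu} \right| \;\le\; F(n)\, q^{L}, \qquad 0 < q < 1,
\]

so for depth L growing linearly (or faster) in the number of qubits, gradients vanish exponentially and exponentially many measurement shots are needed to resolve them.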

In this work, the authors investigate the effects of error mitigation on the trainability of noisy cost landscapes in two regimes: i) an asymptotic regime (in terms of scaling with system size) and ii) a non-asymptotic regime, for various error mitigation strategies including Zero Noise Extrapolation (ZNE), Virtual Distillation (VD), Probabilistic Error Cancellation (PEC), and Clifford Data Regression (CDR). The error mitigation protocols are analyzed under a class of local depolarizing noise known to cause NIBPs. In the asymptotic regime, the theoretical results show that if a VQA suffers from exponential cost concentration, error mitigation cannot remove this exponential scaling, implying that at least an exponential number of resources is required to extract accurate information from the cost landscape and find a cost-minimizing optimization direction.

In the non-asymptotic regime, it is shown that VD decreases the resolvability of the noisy cost landscape and impedes trainability. ZNE shows similar results under more restrictive assumptions on the cost landscape. It is also observed that any improvement in resolvability after applying PEC under local depolarizing noise degrades exponentially with the number of qubits. Finally, for CDR, there is no change to the resolvability of any pair of cost values if the same ansatz is used.
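To make the VD mechanism concrete, here is a minimal numpy sketch of the VD estimator Tr(ρ^m O)/Tr(ρ^m) on a toy single-qubit state (illustrative values only; on hardware the m copies are prepared, entangled, and measured, rather than the density matrix being available directly):

```python
import numpy as np

def vd_expectation(rho, O, m=2):
    # Virtual distillation estimate: Tr(rho^m O) / Tr(rho^m).
    rho_m = np.linalg.matrix_power(rho, m)
    return np.trace(rho_m @ O).real / np.trace(rho_m).real

# Toy example: ideal |0><0| mixed with single-qubit depolarizing noise.
p = 0.2
psi = np.array([[1, 0], [0, 0]], dtype=complex)   # ideal |0><0|
rho = (1 - p) * psi + p * np.eye(2) / 2           # noisy state
Z = np.diag([1.0, -1.0]).astype(complex)          # observable

print(vd_expectation(rho, Z, m=1))  # noisy value: 1 - p = 0.8
print(vd_expectation(rho, Z, m=2))  # distilled: ~0.976, closer to the ideal 1
```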

The work also numerically investigates the effects of error mitigation on VQA trainability in a setting where the effects of cost concentration are minimal. This is done by simulating the Quantum Approximate Optimization Algorithm (QAOA) for 5-qubit MaxCut problems under a realistic noise model of an IBM quantum computer, obtained by gate set tomography of IBM's Ourense device. The results compare the quality of the solutions from noisy (unmitigated) and CDR-mitigated optimization and demonstrate that CDR-mitigated optimization outperforms noisy optimization for all considered implementations.
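For reference, the classical objective behind such MaxCut instances is simple to state: count the edges whose endpoints fall on opposite sides of a bitstring partition. A minimal sketch with an illustrative 5-node ring graph (not the paper's instances):

```python
# Toy MaxCut cost used as a QAOA objective. Graph and bitstring
# below are illustrative only.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # 5-node ring graph

def maxcut_cost(bits, edges):
    # An edge (i, j) is "cut" when its endpoints get different bits.
    return sum(bits[i] != bits[j] for i, j in edges)

print(maxcut_cost([0, 1, 0, 1, 0], edges))  # 4 of 5 edges cut
```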

Unlike the other error mitigation strategies considered, CDR is shown to reverse the concentration of cost values more than it increases statistical uncertainty, giving it a neutral impact on resolvability under a global depolarizing noise model. It can also remedy the effects of more complex noise models. This indicates that CDR could resolve trainability issues arising from corruptions of the cost function other than cost concentration, while having a neutral effect on cost concentration itself, thus improving overall trainability. A potential direction for future work is to investigate other mechanisms that could allow mitigation strategies to improve the trainability of noisy cost landscapes. As is known from the theory of error correction, once sufficient resources are available, NIBPs can indeed be avoided.

The capabilities of present-day NISQ devices are constrained by high noise levels and limited qubit resources. Hence, optimizing NISQ algorithms to use a limited amount of resources is essential to reduce the effect of errors. One strategy is to use a hybrid classical-quantum approach that employs shallow quantum circuits to solve the “hard” part of the problem. These shallow circuits are more resilient to noise by construction and require limited resources. Variational Quantum Algorithms (VQAs) such as the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA) have shown great promise in exploiting this hybrid approach, where a cost function is evaluated on the quantum circuit and its parameters are optimized by a classical non-linear optimizer. The application areas cover an extensive range, including chemistry, machine learning, circuit compilation, and classical optimization, among others.
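As a schematic of that hybrid loop, the following runnable Python sketch uses a classical stand-in for the circuit-estimated cost (the cost function, shot-noise model, and optimizer choice are illustrative assumptions, not any particular paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

def noisy_cost(theta, shots=1000):
    # Stand-in for an expectation value estimated from circuit
    # measurements, with simple shot noise added on top.
    exact = np.cos(theta[0]) * np.sin(theta[1])
    return exact + np.random.normal(0, 1 / np.sqrt(shots))

# Classical non-linear (gradient-free) optimizer driving the loop.
result = minimize(noisy_cost, x0=np.array([0.1, 0.1]), method="COBYLA")
print(result.x, result.fun)
```

Gradient-free optimizers such as COBYLA are a common choice here, since finite-shot noise makes gradient estimates unreliable.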

Although VQAs are shown to be resilient against coherent errors, the qubits in the quantum circuit are still affected by decoherence. Moreover, due to their high qubit requirements, quantum error correction methods cannot be combined with VQAs to overcome the effects of decoherence. Another significant source of error that limits current quantum devices is readout error, caused by imperfect measurement devices. To combat the noise, one can restrict the circuit evolution to a subspace of the full Hilbert space, so that measurement outcomes split into valid and invalid ones; the invalid outcomes are attributed to noise and discarded. Another helpful approach is encoding the data in a format that flags errors in a straightforward way. One popular approach is one-hot encoding, which results in one-hot quantum states. Such binary encodings are used to obtain qubit-saving formulations of the Travelling Salesman Problem, the graph coloring problem, the quadratic Knapsack problem, and the Max k-Cut problem.
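A hedged, classical-side sketch of this kind of post-selection: any measured bitstring whose Hamming weight differs from 1 cannot be a valid one-hot state, so it is attributed to noise and discarded (the shot counts below are made up for illustration):

```python
# Toy measurement histogram: bitstring -> shot count.
counts = {"0010": 480, "0100": 310, "0000": 120, "0110": 90}

def is_one_hot(bitstring):
    # A valid one-hot outcome has exactly one '1'.
    return bitstring.count("1") == 1

valid = {b: c for b, c in counts.items() if is_one_hot(b)}
print(valid)  # {'0010': 480, '0100': 310}; invalid outcomes dropped
```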

In this paper, the authors propose schemes for error mitigation in variational quantum circuits through mid-circuit post-selection, performed by injecting error mitigation sub-circuits, consisting of gates and measurements, that detect errors so the erroneous data can be discarded. The work presents post-selection schemes for various encodings, each defining a different valid subspace of quantum states, to be used with VQAs for particular combinatorial optimization problems and problems from quantum chemistry. The encoding methods are i) k-hot, ii) one-hot, iii) domain-wall, iv) binary and Gray, and v) mixed one-hot and binary encodings. Post-selection is performed via two approaches: post-selection through filtering and post-selection through compression. The advantage of the second approach is that it does not require ancilla qubits.

The work implements one-hot-to-binary post-selection through compression to solve the Travelling Salesman Problem (TSP) using the Quantum Alternating Operator Ansatz (QAOA). In the one-hot case, the scheme works by compressing the valid subspace into a smaller subspace of quantum states, which differentiates it from known methods. The experimental results show that for amplitude damping, depolarizing, and bit-flip noise, mid-circuit post-selection has a positive impact on the outcome compared to using only the final post-selection as a criterion. The proposed error mitigation schemes are qubit-efficient, i.e., they require only mid-circuit measurement and reset rather than classically controlled “if” operations. The presented methods are applicable to existing NISQ algorithms, including outside the scope of VQAs and with different objective Hamiltonians. Furthermore, mid-circuit measurements have been increasingly made available by providers of quantum hardware such as IBM and Honeywell.
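The classical analogue of the compression step can be sketched in a few lines: a valid one-hot string over n qubits is relabeled by the binary index of its single '1', which needs only ceil(log2(n)) (qu)bits. In the actual scheme this relabeling is realized with gates and mid-circuit measurements; the code below is only an illustrative sketch, not the paper's circuit:

```python
import math

def compress_one_hot(bitstring):
    # Invalid (non-one-hot) outcomes are post-selected away.
    if bitstring.count("1") != 1:
        return None
    # Relabel by the binary index of the single '1'.
    n_bits = max(1, math.ceil(math.log2(len(bitstring))))
    return format(bitstring.index("1"), f"0{n_bits}b")

print(compress_one_hot("0010"))  # '10' (index 2 of a 4-qubit register)
print(compress_one_hot("0110"))  # None -> discarded
```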

The ancilla-free post-selection-through-compression scheme can be applied to any problem whose feasible states are one-hot, including problems defined over permutations such as the Vehicle Routing Problem, variations of the TSP, the Railway Dispatching Problem, the Graph Isomorphism Problem, and the Flight Gate Assignment Problem. Multiple factors should be considered when designing such schemes, including the complexity of the post-selection, the form of the feasible subspace S, and the strength and form of the noise affecting the computation. It would be advantageous to design methods that automatically choose the optimal number (and perhaps the positions) of mid-circuit post-selections. Utilizing such optimized error mitigation schemes can lead to a high level of noise cancellation on NISQ devices.

The main objective of error mitigation techniques in quantum computing is to make quantum computation more reliable by reducing the effect of noisy quantum hardware to a low level. The main advantage of error mitigation over error correction lies in its lower resource requirements.

A variety of error mitigation techniques already exist. A popular example is Zero Noise Extrapolation (ZNE), which relies on collecting data at different noise levels and extrapolating to the noiseless case. However, ZNE is limited to small problem sizes and is ineffective in high-noise regimes. Alternatively, classically simulable circuits underpin the Clifford Data Regression (CDR) method, which can be used to learn about the noise in a quantum device. CDR constructs a training set from classically simulable near-Clifford circuits and performs a regression fit of an ansatz mapping noisy to exact expectation values; the fitted ansatz is then used to estimate the noiseless result. The combination of both methods has been shown to outperform either method individually. Another recent method is Virtual Distillation (VD), which mitigates noise by simultaneously preparing multiple copies of a noisy state, entangling them, and making measurements to virtually prepare a state that is purer than the original noisy states, achieving exponential error suppression.
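Minimal numpy sketches of the two ingredients, with made-up numbers purely for illustration: ZNE fits measurements taken at amplified noise levels and evaluates the fit at zero noise, while CDR fits a linear map from noisy to exact values on near-Clifford training circuits and applies it to new noisy measurements:

```python
import numpy as np

# ZNE: measure at amplified noise levels c >= 1, fit, extrapolate to c = 0.
noise_levels = np.array([1.0, 2.0, 3.0])
noisy_vals = np.array([0.81, 0.66, 0.54])            # toy measurements
zne_estimate = np.polyval(np.polyfit(noise_levels, noisy_vals, deg=2), 0.0)

# CDR: fit a linear ansatz f(x) = a*x + b mapping noisy to exact values
# on classically simulable near-Clifford training circuits.
noisy_train = np.array([0.40, 0.55, 0.70, 0.85])     # noisy device values
exact_train = np.array([0.50, 0.68, 0.86, 1.04])     # classical simulation
a, b = np.polyfit(noisy_train, exact_train, deg=1)
cdr_estimate = a * 0.81 + b                          # correct a new noisy value

print(zne_estimate, cdr_estimate)
```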

In this work, the authors propose to unify Zero Noise Extrapolation (ZNE), variable-noise Clifford Data Regression (vnCDR), and Virtual Distillation (VD) into one mitigation framework called the UNIfied Technique for Error mitigation with Data (UNITED). The objective is to combine the advantages of each method in order to improve the mitigation of noise. The methods are applied to mitigating errors in local observables measured on states prepared with random circuits. Each of these methods has different strengths and resource requirements; their performance is compared for total shot budgets of 10^5 - 10^10 in mitigating the expectation value of an observable of interest.

The work combines near-Clifford circuit data with the multi-copy data from VD to formulate Clifford-guided VD (CGVD), which uses Clifford data to guide a Richardson-like extrapolation to the many-copy limit. Furthermore, the work generalizes this method to perform a Clifford-guided double Richardson-like extrapolation to the zero-noise limit. It is also shown that performance improves if the CDR training data are supplemented with data from training circuits run at increased noise levels. Similarly, supplementing the vnCDR training set with results obtained by VD improves its performance.
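For context, the vnCDR ansatz that these extensions build on can be written, in one common form (a hedged schematic, not necessarily the paper's exact notation), as a learned linear combination of noisy expectation values measured at amplified noise levels c_1, ..., c_m:

\[
f\left(x_{c_1},\dots,x_{c_m}\right) \;=\; \sum_{j=1}^{m} a_j\, x_{c_j},
\]

where the coefficients a_j are fitted on near-Clifford training circuits whose exact expectation values are classically computable. CGVD and UNITED enlarge this feature set with multi-copy (VD-style) data.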

A realistic simulation of a noisy trapped-ion device, featuring all-to-all connectivity, was used to benchmark UNITED against the other methods for various numbers of qubits, circuit depths, and total numbers of shots. The results show that different techniques are optimal for different shot budgets. For instance, ZNE is comparatively more efficient for small shot budgets (10^5), while Clifford-based approaches perform better for larger shot budgets (10^6 - 10^8). UNITED outperforms all other methods for shot budgets on the scale of 10^10.

These results offer hope for future work on further unification with other error mitigation techniques, such as noise-resilient quantum compiling, verified phase estimation, and approaches leveraging symmetries and/or post-selection. Another open question is the trade-off between measuring new expectation values (i.e., more near-Clifford circuits, more noise levels, more copies of the state, etc.) versus expending more shots per expectation value, in order to obtain the best mitigation possible with fixed resources. Finally, investigating the effect of a noisy controlled derangement on the performance of VD and UNITED is crucial for determining the maximum potential of these methods.