Mirror Benchmarking¶
Mirror Benchmarking (MB) is a noise benchmarking method that works as follows: prepare a state, apply a circuit followed by its inverse, then measure the result.
Ideally, the system returns to its initial state. However, due to gate errors, this is not guaranteed. The result of the test is the success probability: the probability of measuring the initial state.
Folding the state preparation into the circuit \(G\), the success probability is the probability of measuring the all-zeros state after the hardware applies its (noisy) implementations of \(G\) and \(G^{\dagger}\):

\[ P_{\text{success}} = \left| \langle 0 |^{\otimes n} \, \widetilde{G^{\dagger}} \, \widetilde{G} \, | 0 \rangle^{\otimes n} \right|^{2}, \]

where the tildes denote the noisy hardware implementations. With ideal gates, \(P_{\text{success}} = 1\).
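As a sanity check on this idea, the round trip can be simulated directly with matrices. The following NumPy sketch (not part of the Classiq API) applies a random unitary \(G\) and its exact inverse to the all-zeros state, then repeats the experiment with a small unitary perturbation standing in for gate error:

```python
import numpy as np

rng = np.random.default_rng(seed=7)


def random_unitary(dim: int) -> np.ndarray:
    """Draw a random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # fix column phases so the distribution is uniform


dim = 2**2  # a two-qubit toy example
G = random_unitary(dim)
zero = np.zeros(dim, dtype=complex)
zero[0] = 1.0  # |00>

# Ideal mirror circuit: G followed by its exact inverse returns |00>.
ideal = G.conj().T @ (G @ zero)
p_ideal = abs(ideal[0]) ** 2  # equals 1 up to floating-point error

# Model gate error as a small unitary kick exp(-i*eps*H) between the two halves.
eps = 0.05
H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (H + H.conj().T) / 2  # Hermitian generator
w, v = np.linalg.eigh(H)
noise = v @ np.diag(np.exp(-1j * eps * w)) @ v.conj().T  # unitary error
noisy = G.conj().T @ (noise @ (G @ zero))
p_noisy = abs(noisy[0]) ** 2  # strictly below 1: errors lower the success probability

print(p_ideal, p_noisy)
```

The gap between `p_ideal` and `p_noisy` is exactly what an MB test estimates from measurement counts on real hardware.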
The technique is inspired by similar approaches with random circuits, referred to as Mirror Benchmarking or Mirror Circuit Benchmarking [1-2]. To leverage the full capabilities of the Classiq synthesis engine, MB operates on a functional-level model rather than a low-level circuit, enabling application-oriented benchmarking. This allows hardware-aware synthesis, ensuring the program is optimized for each hardware target and that the test indeed measures how well the hardware suits a specific program.
Usage¶
To use the MB package, import the MirrorBenchmarking
class and construct it from a model.
The object has a synthesize
method, and the resulting circuit is executed like any other.
from classiq.applications.benchmarking import MirrorBenchmarking
circuit = MirrorBenchmarking(model).synthesize()
Example¶
This example performs an MB test on the IBM Quantum "Quito" device,
with hardware settings defined automatically for the device.
The measured success probability is 11.65%.
Note that to run the example, you should assign a valid value to
access_token.
from classiq.applications.benchmarking import MirrorBenchmarking
from classiq.model import Preferences
from classiq.builtin_functions import QFT
from classiq import Model, synthesize, execute, set_execution_preferences
from classiq.execution import (
    ExecutionPreferences,
    IBMBackendPreferences,
)

# Build a QFT model targeting the Quito device.
model = Model(
    preferences=Preferences(
        backend_service_provider="IBM Quantum", backend_name="quito"
    )
)
num_qubits = 5
qft_params = QFT(num_qubits=num_qubits)
model.QFT(qft_params)

# Wrap the model with its mirror (the circuit followed by its inverse).
model = MirrorBenchmarking(model).mirror_benchmarking_model()
model.sample()

execution_preferences = ExecutionPreferences(
    num_shots=10000,
    backend_preferences=IBMBackendPreferences(
        backend_name="ibmq_quito",
        access_token=ACCESS_TOKEN,  # use your own access token
    ),
)
serialized_model = set_execution_preferences(model.get_model(), execution_preferences)
qprog = synthesize(serialized_model)
res = execute(qprog)

# The success probability is the fraction of shots returning the all-zeros state.
counts = res[0].value.counts
success_probability = counts.get("0" * num_qubits, 0) / sum(counts.values())
print(success_probability)
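The last step, turning a counts dictionary into a success probability, can be wrapped in a small helper. This sketch (the function name `success_probability` is our own, not part of the Classiq API) uses `dict.get` with a default of 0 so that a run in which the all-zeros bitstring never appears yields 0.0 instead of raising a `KeyError`:

```python
def success_probability(counts: dict[str, int], num_qubits: int) -> float:
    """Fraction of shots that returned the all-zeros bitstring."""
    total = sum(counts.values())
    if total == 0:
        raise ValueError("counts is empty")
    return counts.get("0" * num_qubits, 0) / total


# Hypothetical counts consistent with the 11.65% result quoted above:
counts = {"00000": 1165, "00001": 900, "11111": 7935}
print(success_probability(counts, 5))  # -> 0.1165
```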
References¶
[1] T. Proctor, K. Rudinger, K. Young, E. Nielsen, R. Blume-Kohout, "Measuring the capabilities of quantum computers". https://www.nature.com/articles/s41567-021-01409-7
[2] K. Mayer, A. Hall, T. Gatterman, S. K. Halit, K. Lee, J. Bohnet, D. Gresh, A. Hankin, K. Gilmore, J. Gerber, J. Gaebler, "Theory of mirror benchmarking and demonstration on a quantum computer". https://arxiv.org/pdf/2108.10431.pdf