Technical Benchmarking¶
Technical Benchmarking refers to tests that measure fidelities, success probabilities, or other noise metrics for specific sets of gates and qubits. The Classiq package supports, for example, randomized benchmarking, which measures the average error per Clifford gate on a specific, usually narrow, set of qubits.
Randomized Benchmarking¶
Randomized Benchmarking (RB) is a test that measures the average Clifford error or fidelity on a specific set of qubits. The test is performed by applying a series of random Clifford gates followed by their inverse, which is itself a Clifford gate precomputed in advance. It is similar to the Mirror Benchmarking test, the difference being the inverting part, which can be thought of as highly optimized. Several analytical results allow the fidelity to be extracted by fitting the RB experiment results; mathematically inclined readers can find a summary in the next subsection, followed by a usage example of RB on the Classiq platform.
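As a rough illustration of this sequence structure, the following sketch builds a single RB sequence using Qiskit's quantum_info module (an assumption for illustration only; it is not part of the Classiq API, and the helper name rb_sequence is hypothetical):

from qiskit import QuantumCircuit
from qiskit.quantum_info import Clifford, random_clifford

def rb_sequence(num_qubits: int, num_cliffords: int) -> QuantumCircuit:
    """Build one RB sequence: random Cliffords followed by their combined inverse."""
    circuit = QuantumCircuit(num_qubits)
    accumulated = Clifford(QuantumCircuit(num_qubits))  # identity Clifford
    for _ in range(num_cliffords):
        clifford = random_clifford(num_qubits)
        circuit.append(clifford.to_instruction(), list(range(num_qubits)))
        accumulated = accumulated.compose(clifford)
    # The inverse of a product of Cliffords is itself a single Clifford, so in the
    # noiseless case the whole sequence composes to the identity
    circuit.append(accumulated.adjoint().to_instruction(), list(range(num_qubits)))
    return circuit

Measuring all qubits after such a sequence, and repeating for growing numbers of Cliffords, yields the survival probabilities analyzed in the next subsection.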
Theory of Randomized Benchmarking¶
The Clifford group forms a two-design; namely, a set on which all degree-two polynomial integrals may be evaluated by a discrete sum. Random samples from the Clifford group approximately share this property [1]. A classic paper by Nielsen [2] shows that when averaging over all unitary gates according to the Haar measure, the average noise channel is a depolarizing channel. This process is commonly referred to as "twirling". Direct calculation of the survival probability (the chance not to be "depolarized") yields an exponentially decreasing success probability, with the rate given by the average Clifford fidelity \(f\):
\[ P(m) = A f^{m} + B \]
Here, \(m\) is the number of Clifford gates, and \(A\) and \(B\) are constants that depend on state preparation and measurement (SPAM) errors. This is the basic RB scheme, which may be extended; see, for example, [3].
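As a minimal sketch of how the decay parameters might be extracted from measured survival probabilities, one can fit the model above with scipy.optimize.curve_fit; the numbers below are made up for illustration, and the fit is independent of the Classiq API:

import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, a, f, b):
    # Survival probability model: A * f**m + B
    return a * f**m + b

# Made-up sequence lengths and survival probabilities, for illustration only
m_values = np.array([5, 10, 15, 20, 25])
survival = np.array([0.95, 0.91, 0.87, 0.84, 0.81])

(a_fit, f_fit, b_fit), _ = curve_fit(rb_decay, m_values, survival, p0=[0.5, 0.99, 0.5])

# Under the common convention in which f is the depolarizing parameter, the
# average Clifford fidelity for n qubits is 1 - (1 - f) * (2**n - 1) / 2**n
num_qubits = 2
avg_fidelity = 1 - (1 - f_fit) * (2**num_qubits - 1) / 2**num_qubits
print(f"decay rate f = {f_fit:.4f}, average Clifford fidelity = {avg_fidelity:.4f}")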
Usage¶
This code is an example of a randomized benchmarking test of two-qubit Clifford fidelity, comparing IBM's "Nairobi" device with the Aer simulator (which is noiseless, hence the trivial result). The code produces a figure with the obtained results. The example takes advantage of the concurrent programming mechanism to submit the different jobs to the provider asynchronously. See Concurrent Programming.
Note that to run the example, you must assign a valid value to ACCESS_TOKEN.
from typing import Dict
import asyncio
from itertools import product

from classiq import (
    Model,
    Preferences,
    synthesize_async,
    execute_async,
    set_quantum_program_execution_preferences,
    GeneratedCircuit,
)
from classiq.builtin_functions import RandomizedBenchmarking
from classiq.execution import (
    ExecutionDetails,
    ExecutionPreferences,
    IBMBackendPreferences,
)
from classiq.analyzer.rb import RBAnalysis, order_executor_data_by_hardware

# Assign a valid IBM Quantum access token before running the example
ACCESS_TOKEN = "<your_access_token>"
async def main():
    num_of_qubits = 2
    numbers_of_cliffords = [5, 10, 15, 20, 25]
    params_list = [
        RandomizedBenchmarking(
            num_of_qubits=num_of_qubits,
            num_of_cliffords=num_of_cliffords,
        )
        for num_of_cliffords in numbers_of_cliffords
    ]

    preferences = Preferences(
        backend_service_provider="IBM Quantum",
        backend_name="nairobi",
        transpilation_option="decompose",
    )
    models = [Model(preferences=preferences) for _ in numbers_of_cliffords]
    for model, params in zip(models, params_list):
        model.RandomizedBenchmarking(params)
        model.sample()

    # Synthesize all quantum programs asynchronously
    quantum_programs = await asyncio.gather(
        *[synthesize_async(model.get_model()) for model in models]
    )
    # Attach each backend's preferences to each program
    backend_names = ("ibm_nairobi", "ibmq_qasm_simulator")
    backend_prefs = IBMBackendPreferences.batch_preferences(
        backend_names=backend_names,
        access_token=ACCESS_TOKEN,
    )
    qprogs_with_preferences = list()
    for qprog, backend_pref in product(quantum_programs, backend_prefs):
        preferences = ExecutionPreferences(backend_preferences=backend_pref)
        qprogs_with_preferences.append(
            set_quantum_program_execution_preferences(qprog, preferences)
        )
    # Execute all quantum programs asynchronously
    results = await asyncio.gather(
        *[execute_async(qprog) for qprog in qprogs_with_preferences]
    )
    samples_results = [res[0].value for res in results]

    parsed_programs = [
        GeneratedCircuit.from_qprog(qprog).to_program() for qprog in quantum_programs
    ]
    clifford_number_mapping: Dict[str, int] = {
        program.code: num_clifford
        for program, num_clifford in zip(parsed_programs, numbers_of_cliffords)
    }
    # Pair each result with its program and backend preferences, following the
    # (program, backend) ordering produced by `product` above
    mixed_data = tuple(
        zip(
            backend_prefs * len(parsed_programs),
            [program for program in parsed_programs for _ in backend_names],
            samples_results,
        )
    )
    rb_analysis_params = order_executor_data_by_hardware(
        mixed_data=mixed_data, clifford_numbers_per_program=clifford_number_mapping
    )
    multiple_hardware_data = RBAnalysis(experiments_data=rb_analysis_params)
    total_data = await multiple_hardware_data.show_multiple_hardware_data_async()
    fig = multiple_hardware_data.plot_multiple_hardware_results()
    fig.show()


asyncio.run(main())
References¶
[1] C. Dankert, R. Cleve, J. Emerson, and E. Livine, “Exact and Approximate Unitary 2-Designs and their Application to Fidelity Estimation”. https://arxiv.org/pdf/quant-ph/0606161.pdf.
[2] M. A. Nielsen, "A simple formula for the average fidelity of a quantum dynamical operation". https://arxiv.org/pdf/quant-ph/0205035.pdf.
[3] J. Helsen, X. Xue, L. M. K. Vandersypen, and S. Wehner, "A new class of efficient randomized benchmarking protocols". https://www.nature.com/articles/s41534-019-0182-7.