Randomized Benchmarking
In this notebook we run a full, end-to-end randomized benchmarking (RB) experiment using the Classiq platform. The notebook is divided into several parts describing the steps of the workflow with the Classiq program: Model Definition, Synthesis, Execution, and Analysis.
1) Model Definition
We start by defining the model: the high-level description of the function we want to run and the constraints to which it is subjected.
a) Define the parameters of the problem. In this specific case we have 5 FunctionParams objects, which will correspond to 5 different models. This part is hardware unaware.
b) Define the hardware settings for the problem to run on. Here, these are the basis gates necessary for execution on IBM Quantum machines, which we will do later.
c) Create models from the results of the previous steps. In this step we may add constraints to the models (width, depth, etc.). Specifically for RB, the width is set by num_of_qubits and the depth by num_of_cliffords, so the synthesis engine won't make use of these constraints and they are omitted.
from classiq import Model
from classiq.builtin_functions import RandomizedBenchmarking
from classiq.model import CustomHardwareSettings, Preferences

# a) Params definition
num_of_qubits = 1
numbers_of_cliffords = [5, 10, 15, 20, 25]
params_list = [
    RandomizedBenchmarking(
        num_of_qubits=num_of_qubits, num_of_cliffords=num_of_cliffords
    )
    for num_of_cliffords in numbers_of_cliffords
]

# b) Hardware definition
ibmq_basis_gates = ["id", "rz", "sx", "x", "cx"]
hw_settings = CustomHardwareSettings(basis_gates=ibmq_basis_gates)

# c) Model creation
preferences = Preferences(custom_hardware_settings=hw_settings)
models = [Model(preferences=preferences) for _ in numbers_of_cliffords]
for model, params in zip(models, params_list):
    model.RandomizedBenchmarking(params)
    model.sample()
2) Synthesis
We continue by synthesizing the constructed models using the synthesize_async command. This creates a circuit in the Classiq engine's GeneratedCircuit format, which you can access in different low-level formats. This example uses the transpiled_qasm format, which takes into account the basis gates defined in the model. Finally, we prepare QuantumProgram objects ready to run on actual hardware.
import asyncio

from classiq import GeneratedCircuit, synthesize_async
from classiq.execution import QuantumInstructionSet, QuantumProgram

async def synthesize_all_models(models):
    return await asyncio.gather(
        *[synthesize_async(model.get_model()) for model in models]
    )

quantum_programs = asyncio.run(synthesize_all_models(models))
circuits = [GeneratedCircuit.from_qprog(qprog) for qprog in quantum_programs]

programs = [
    QuantumProgram(code=circ.transpiled_circuit.qasm, syntax=QuantumInstructionSet.QASM)
    for circ in circuits
]
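To see the effect of the hardware-aware transpilation, one can sanity-check that a transpiled QASM program uses only the declared basis gates. The snippet below is a standalone sketch, not part of the Classiq API: the sample_qasm string and the uses_only_basis_gates helper are hypothetical, but the same check could be applied to each circ.transpiled_circuit.qasm string from above.

```python
# Hypothetical sanity check: does a QASM 2.0 program use only the basis gates?
sample_qasm = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[1];
creg c[1];
sx q[0];
rz(1.5707963) q[0];
x q[0];
measure q[0] -> c[0];
"""

ibmq_basis_gates = {"id", "rz", "sx", "x", "cx"}
# QASM statements that are not gate applications.
NON_GATE_KEYWORDS = {"OPENQASM", "include", "qreg", "creg", "measure", "barrier"}

def uses_only_basis_gates(qasm: str, basis: set) -> bool:
    for line in qasm.splitlines():
        line = line.strip()
        if not line or line.startswith("//"):
            continue
        # The statement's leading token, with any parameter list or ';' removed.
        token = line.split("(")[0].split()[0].rstrip(";")
        if token in NON_GATE_KEYWORDS:
            continue
        if token not in basis:
            return False
    return True

print(uses_only_basis_gates(sample_qasm, ibmq_basis_gates))  # True
```

A program containing, say, an `h` gate would fail this check, since the transpiler is expected to decompose it into the declared basis.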
3) Execution
Once we have the programs we are ready to run. We allow running multiple programs on multiple backends in a single command. The backends to run on are specified by the user; see the executor user guide for more details. Here we run on IBM Quantum simulators. These may be replaced by any other backend, given the proper access credentials. For access to IBM Quantum hardware, for example, simply replace ibmq_access_t with an API token from the IBMQ website and specify the hardware name in the backend_name field of the desired BackendPreferences objects.
# Execution
from itertools import product

from classiq import execute_async, set_quantum_program_execution_preferences
from classiq.execution import (
    ClassiqBackendPreferences,
    ExecutionDetails,
    ExecutionPreferences,
)

ibmq_access_t = None

backend_names = ("aer_simulator_statevector", "aer_simulator")
backend_prefs = ClassiqBackendPreferences.batch_preferences(
    backend_names=backend_names,
)

qprogs_with_preferences = list()
for qprog, backend_pref in product(quantum_programs, backend_prefs):
    preferences = ExecutionPreferences(backend_preferences=backend_pref)
    qprogs_with_preferences.append(
        set_quantum_program_execution_preferences(qprog, preferences)
    )

async def execute_program(qprog):
    job = await execute_async(qprog)
    return await job.result_async()

async def execute_all_programs(qprogs):
    return await asyncio.gather(*[execute_program(qprog) for qprog in qprogs])

results = asyncio.run(execute_all_programs(qprogs_with_preferences))
samples_results = [res[0].value for res in results]
4) Analysis
The final part is the analysis of the RB data. While the previous two steps were independent of the problem at hand, this part is unique to RB. We start by reordering the data, which is returned as a 'batch'. For RB analysis we need to match each program to the number of Cliffords it represents, hence the clifford_number_mapping variable. Then we reorder the data by hardware and call the RBAnalysis class to present the hardware comparison histograms.
Note: if the backends are not replaced with real hardware, expect the trivial result of 100% fidelity for both backends, since the simulators are noiseless.
from typing import Dict

from classiq.analyzer.rb import RBAnalysis, order_executor_data_by_hardware

clifford_number_mapping: Dict[str, int] = {
    prog.code: num_clifford
    for prog, num_clifford in zip(programs, numbers_of_cliffords)
}

# Pair each result with the (backend, program) combination it came from,
# matching the product(quantum_programs, backend_prefs) order used for execution.
mixed_data = tuple(
    (backend_pref, prog, res)
    for (prog, backend_pref), res in zip(
        product(programs, backend_prefs), samples_results
    )
)
rb_analysis_params = order_executor_data_by_hardware(
    mixed_data=mixed_data, clifford_numbers_per_program=clifford_number_mapping
)

multiple_hardware_data = RBAnalysis(experiments_data=rb_analysis_params)
total_data = asyncio.run(multiple_hardware_data.show_multiple_hardware_data_async())
fig = multiple_hardware_data.plot_multiple_hardware_results()
fig.show()
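On real hardware, the survival probability decays with the number of Cliffords as F(m) = A·p^m + B, and the average error per Clifford is r = (1 − p)(d − 1)/d with d = 2^num_of_qubits. RBAnalysis performs this fit for you; the standalone sketch below only illustrates the idea on synthetic, noiseless numbers (all values are hypothetical), extracting p from ratios of successive differences instead of a full curve fit.

```python
import numpy as np

# Synthetic RB decay F(m) = A * p**m + B for a single qubit,
# evaluated at the Clifford counts used in this notebook.
num_of_cliffords = np.array([5, 10, 15, 20, 25])
a_true, p_true, b_true = 0.5, 0.98, 0.5
survival = a_true * p_true**num_of_cliffords + b_true

# For noiseless, evenly spaced data, F(m + s) - F(m) = A * p**m * (p**s - 1),
# so ratios of successive differences all equal p**s.
spacing = 5
diffs = np.diff(survival)
ratios = diffs[1:] / diffs[:-1]
p_fit = np.mean(ratios) ** (1.0 / spacing)

# Average error per Clifford for Hilbert-space dimension d = 2 (one qubit).
error_per_clifford = (1 - p_fit) * (2 - 1) / 2
print(round(p_fit, 4))               # 0.98
print(round(error_per_clifford, 4))  # 0.01
```

For noisy hardware data one would instead fit the full model with a least-squares routine such as scipy.optimize.curve_fit; the difference-ratio trick only works for clean, evenly spaced data.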