Randomized Benchmarking
This notebook explains how to perform a full, end-to-end randomized benchmarking (RB) experiment using the Classiq platform. The notebook is divided into several parts describing the steps of the workflow: model definition, synthesis, execution, and analysis.
1) Model Definition
Start by defining the model, i.e., the high-level function together with its parameters and preferences:
a) Define the parameters of the problem. Here, the five values in numbers_of_cliffords correspond to five different models. This part is hardware-unaware.
b) Define the hardware settings for the problem. Here, these are the basis gates required for execution on IBM Quantum machines, which happens later.
c) Create models from the results of the previous steps. Width and depth constraints could be added at this stage; for RB, num_of_qubits determines the width and the number of Cliffords determines the depth, so no additional constraints are needed and none are passed to the synthesis engine.
from classiq import *
# a) Parameter definitions
num_of_qubits = 1
numbers_of_cliffords = [5, 10, 15, 20, 25]
# b) Hardware definitions
ibmq_basis_gates = ["id", "rz", "sx", "x", "cx"]
hw_settings = CustomHardwareSettings(basis_gates=ibmq_basis_gates)
preferences = Preferences(custom_hardware_settings=hw_settings)
# c) Model creation
def get_model(num_cliffords):
    @qfunc
    def main(target: Output[QArray[QBit]]):
        allocate(num_of_qubits, target)
        randomized_benchmarking(num_cliffords, target)

    return create_model(main, preferences=preferences)
qmods = [get_model(num_cliffords) for num_cliffords in numbers_of_cliffords]
2) Synthesis
Synthesize the constructed models using the synthesize_async
command. This creates a circuit in the Classiq engine's GeneratedCircuit
format, which you can access in different low-level formats, such as the transpiled_qasm
format, which takes into account the basis gates defined in the model.
import asyncio
async def synthesize_all_models(models):
    # Synthesize all models concurrently.
    return await asyncio.gather(*[synthesize_async(qmod) for qmod in models])
quantum_programs = asyncio.run(synthesize_all_models(qmods))
3) Execution
With the programs in hand, you are ready to run. Classiq allows running multiple programs on multiple backends in a single command. You specify the hardware (see details in the executor user guide). This example runs on Classiq simulators, but they may be replaced by any backend with the proper access credentials. For IBM Quantum hardware access, for example, replace ibmq_access_t
with an API token from the IBM Quantum website and specify the hardware name in the backend_name
field of the BackendPreferences
objects.
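The execution code below pairs every program with every backend via itertools.product. As a standalone sketch of that pairing logic, using placeholder strings rather than real Classiq program or backend objects:

```python
from itertools import product

# Placeholder stand-ins for synthesized programs and backend preferences
# (illustrative names only, not Classiq objects).
programs = ["prog_5", "prog_10", "prog_15"]
backends = ["simulator_statevector", "simulator"]

# product iterates its first argument in the outer loop, so each program
# is paired with every backend before moving on to the next program.
pairs = list(product(programs, backends))

print(len(pairs))  # 3 programs x 2 backends = 6 jobs
print(pairs[0])    # ('prog_5', 'simulator_statevector')
print(pairs[1])    # ('prog_5', 'simulator')
```

This ordering matters later: the analysis step must repeat the backend list once per program, and each Clifford number once per backend, to line results back up with their parameters.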
# Execution
from itertools import product
from classiq.execution import (
    ClassiqBackendPreferences,
    ClassiqSimulatorBackendNames,
    ExecutionPreferences,
)
ibmq_access_t = None
backend_names = (
    ClassiqSimulatorBackendNames.SIMULATOR_STATEVECTOR,
    ClassiqSimulatorBackendNames.SIMULATOR,
)
backend_prefs = ClassiqBackendPreferences.batch_preferences(
    backend_names=backend_names,
)
qprogs_with_preferences = list()
for qprog, backend_pref in product(quantum_programs, backend_prefs):
    preferences = ExecutionPreferences(backend_preferences=backend_pref)
    qprogs_with_preferences.append(
        set_quantum_program_execution_preferences(qprog, preferences)
    )
async def execute_program(qprog):
    job = await execute_async(qprog)
    return await job.result_async()

async def execute_all_programs(qprogs):
    return await asyncio.gather(*[execute_program(qprog) for qprog in qprogs])
results = asyncio.run(execute_all_programs(qprogs_with_preferences))
samples_results = [res[0].value for res in results]
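Each entry of samples_results holds the measurement counts for one (program, backend) pair. For RB, the quantity the analysis ultimately cares about is the survival probability: the fraction of shots in which the qubit returned to the all-zeros state after the Clifford sequence. A minimal sketch, using a made-up counts dictionary rather than real execution output:

```python
# Hypothetical counts for one single-qubit RB circuit (illustrative data,
# not real output from the runs above).
counts = {"0": 980, "1": 44}

# Survival probability: the fraction of shots that returned the all-zeros
# string, i.e. the state survived the Clifford sequence.
total_shots = sum(counts.values())
survival = counts.get("0", 0) / total_shots
print(round(survival, 3))  # 980 / 1024 ≈ 0.957
```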
4) Analysis
The final step is to analyze the RB data. While the previous two steps were independent of the problem at hand, this part is unique to RB. Start by reordering the data, which arrives as a 'batch': each result is matched with the backend it ran on and the number of Cliffords its program represents, forming the mixed_data
tuples. Then order the data by hardware with order_executor_data_by_hardware
and use the RBAnalysis
class to present the hardware comparison histograms.
Note: If the backends are not replaced with real hardware, expect the trivial result of 100% fidelity for both backends.
from classiq.analyzer.rb import RBAnalysis, order_executor_data_by_hardware
mixed_data = tuple(
    zip(
        backend_prefs * len(quantum_programs),
        # Repeat each Clifford number once per backend, matching the
        # (program, backend) order produced by product(...) above.
        [num for num in numbers_of_cliffords for _ in backend_names],
        samples_results,
    )
)
rb_analysis_params = order_executor_data_by_hardware(mixed_data=mixed_data)
multiple_hardware_data = RBAnalysis(experiments_data=rb_analysis_params)
total_data = asyncio.run(multiple_hardware_data.show_multiple_hardware_data_async())
fig = multiple_hardware_data.plot_multiple_hardware_results()
fig.show()