Quantum Machine Learning with Classiq
Welcome to the "Quantum Machine Learning with Classiq" tutorial. This guide is designed for users already familiar with the fundamentals of the Classiq platform and Quantum Machine Learning (QML) concepts. The aim is to showcase how to implement QML using Classiq. It covers three main methods to implement QML with Classiq:
- Using the VQE Primitive
- Using the PyTorch Integration
- Using the QSVM Built-in App
Each section briefly explains the method, followed by an illustrative example that demonstrates the integration. These examples are intended to be straightforward to help you get started quickly.
Using the VQE Primitive
The Variational Quantum Eigensolver (VQE) is an algorithm for finding the ground state energy of a Hamiltonian operator, often described by Pauli operators or in the equivalent matrix form. The VQE was proposed in 2014 [1].
The algorithm follows these steps (a minimal classical sketch of the full loop appears after the list):

1. Create a Parameterized Quantum Model: Design a quantum model, also known as an ansatz, that captures the problem.
2. Synthesize, Execute, and Estimate Expectation Values: Synthesize the quantum model into a quantum program, run it, then measure and calculate the expected value of the Hamiltonian from the results.
3. Optimize Parameters: Use a classical optimizer to adjust the quantum program's parameters for better results.
4. Repeat: Continue this process until the algorithm converges to a solution or reaches a specified number of iterations.
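To ground these steps, here is a minimal, purely classical sketch of the VQE loop using NumPy and SciPy rather than Classiq; the Hamiltonian and the single-parameter ansatz are illustrative assumptions, not the tutorial's actual flow:

```python
# A classical sketch of the VQE loop: ansatz -> expectation value -> optimizer.
import numpy as np
from scipy.optimize import minimize

# Illustrative 2x2 Hamiltonian in matrix form (an assumption for this sketch)
H = np.array([[1.0, -1.0], [-1.0, 0.0]])


def expectation(params: np.ndarray) -> float:
    # Step 1: a one-parameter ansatz |psi(theta)> = RY(theta)|0>
    psi = np.array([np.cos(params[0] / 2), np.sin(params[0] / 2)])
    # Step 2: estimate the expectation value <psi|H|psi>
    return float(psi @ H @ psi)


# Steps 3-4: a classical optimizer adjusts theta until convergence
result = minimize(expectation, x0=[0.1], method="COBYLA", tol=1e-3)
print(result.x, result.fun)  # optimal angle and the estimated minimal eigenvalue
```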
For more details, refer to this review article [2] and the corresponding preprint [3].
Example Using Classiq
Start with this example, creating a VQE algorithm that estimates the minimal eigenvalue of the following 2x2 Hamiltonian:
\(\begin{equation}H = \frac{1}{2}I + \frac{1}{2}Z - X = \begin{bmatrix} 1 & -1 \\ -1 & 0 \end{bmatrix}\end{equation}\)
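As a quick sanity check (plain NumPy, outside the Classiq flow), you can build \(H\) from its Pauli decomposition and confirm both the matrix form and its minimal eigenvalue:

```python
import numpy as np

# Pauli matrices
I = np.eye(2)
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

H = 0.5 * I + 0.5 * Z - X
print(H)  # [[ 1. -1.], [-1.  0.]]
print(np.linalg.eigvalsh(H).min())  # (1 - sqrt(5)) / 2 ≈ -0.618
```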
Define the Hamiltonian using a `PauliTerm` list:

```python
from typing import List

from classiq import *

HAMILTONIAN = QConstant(
    "HAMILTONIAN",
    List[PauliTerm],
    [PauliTerm([Pauli.I], 0.5), PauliTerm([Pauli.Z], 0.5), PauliTerm([Pauli.X], -1)],
)
```
For a single-qubit problem, use the U-gate (also known as the U3-gate) to capture any rotation on the Bloch sphere; this includes the state with minimal energy with respect to the Hamiltonian.
NOTE on the U-gate

This single-qubit gate applies a phase and a rotation parameterized by three Euler angles.
Matrix representation:
\(\begin{equation}U(\gamma,\phi,\theta,\lambda) = e^{i\gamma}\begin{pmatrix} \cos(\frac{\theta}{2}) & -e^{i\lambda}\sin(\frac{\theta}{2}) \\ e^{i\phi}\sin(\frac{\theta}{2}) & e^{i(\phi+\lambda)}\cos(\frac{\theta}{2}) \\ \end{pmatrix}\end{equation}\)
Parameters:

- `theta`: `CReal`
- `phi`: `CReal`
- `lam`: `CReal`
- `gam`: `CReal`
- `target`: `QBit`
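To make the parameter convention concrete, here is a small NumPy sketch (not Classiq code) that builds the matrix above and checks that it is unitary:

```python
import numpy as np


def u_matrix(theta: float, phi: float, lam: float, gam: float) -> np.ndarray:
    # The U-gate matrix from the note above, including the global phase e^{i*gam}
    return np.exp(1j * gam) * np.array(
        [
            [np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
            [
                np.exp(1j * phi) * np.sin(theta / 2),
                np.exp(1j * (phi + lam)) * np.cos(theta / 2),
            ],
        ]
    )


u = u_matrix(0.3, 0.4, 0.5, 0.6)
print(np.allclose(u @ u.conj().T, np.eye(2)))  # True: U is unitary
```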
```python
@qfunc
def main(q: Output[QBit], angles: CArray[CReal, 3]) -> None:
    allocate(1, q)
    U(angles[0], angles[1], angles[2], 0, q)
```
To seamlessly harness the power of VQE, use a classical execution function called `cmain`, specifying that the VQE primitive is used and that all its parameters are initialized:
```python
@cfunc
def cmain() -> None:
    res = vqe(
        hamiltonian=HAMILTONIAN,
        maximize=False,
        initial_point=[],  # Must be initialized for some optimizers
        optimizer=Optimizer.COBYLA,  # Constrained Optimization BY Linear Approximation
        max_iteration=1000,
        tolerance=0.001,
        step_size=0,  # Must be initialized as a non-zero value for some optimizers
        skip_compute_variance=False,
        alpha_cvar=1.0,
    )
    save({"result": res})
```
For the `cmain` function, use the `@cfunc` decorator rather than the `@qfunc` decorator that is common elsewhere in the SDK.
Description of VQE Parameters
Configure the `vqe` function in the `cmain` execution function with these parameters:
- `hamiltonian`: The Hamiltonian of the system to be minimized. In this case, it is specified as `HAMILTONIAN`.
- `maximize`: A Boolean indicating whether to maximize the Hamiltonian's expected value. It is set to `False`, meaning the goal is to minimize the Hamiltonian.
- `initial_point`: The starting point for the optimizer. It is set to an empty list, which means the default initial point is used. This must be initialized for some optimizers.
- `optimizer`: The classical optimization algorithm used to adjust the parameters of the quantum circuit.
- `max_iteration`: The maximum number of iterations for the optimizer. It is set to `1000`, so the optimizer performs up to 1000 iterations.
- `tolerance`: The convergence tolerance for the optimizer. It is set to `0.001`, meaning the optimization stops once the change in the expected value of the Hamiltonian falls below this value.
- `step_size`: The step size for the optimizer. It is set to `0` here; some optimizers require a non-zero value.
- `skip_compute_variance`: A Boolean indicating whether to skip the computation of the variance. It is set to `False`, meaning the variance is computed.
- `alpha_cvar`: The confidence level for the Conditional Value at Risk (CVaR) optimization. It is set to `1.0`, which indicates full confidence in the expected value without considering risk aversion. For details, refer to the article "Improving Variational Quantum Optimization using CVaR" [4] and the sketch after this list.
The `cmain` function then saves the result of the VQE optimization using the `save` function.
Supported Optimizers
- `ADAM`: Adam and AMSGRAD optimizers.
- `COBYLA`: Constrained Optimization BY Linear Approximation optimizer.
- `L_BFGS_B`: Limited-memory BFGS Bound optimizer.
- `NELDER_MEAD`: Nelder-Mead optimizer.
- `SPSA`: Simultaneous Perturbation Stochastic Approximation (SPSA) optimizer.
Now create the model, specifically using `classical_execution_function=cmain`:

```python
qmod_1 = create_model(
    main, classical_execution_function=cmain, out_file="vqe_primitive"
)
qprog_1 = synthesize(qmod_1)
```
Executing from the Classiq platform:

```python
show(qprog_1)
```

```
Opening: https://platform.classiq.io/circuit/2twUZZfHBVxLoNxBTFnDPCnaTdp?version=0.70.0
```
Then run the program from the IDE.
Or directly from the SDK:

```python
job = execute(qprog_1)
# job.open_in_ide()
vqe_result = job.result_value()
```
Printing the final results:

```python
print(f"Optimal energy: {vqe_result.energy}")
print(f"Optimal parameters: {vqe_result.optimal_parameters}")
print(f"Eigenstate: {vqe_result.eigenstate}")
```

```
Optimal energy: -0.61376953125
Optimal parameters: {'angles_param_0': -2.1642654384946343, 'angles_param_1': 3.114217843670511, 'angles_param_2': -1.226756089452782}
Eigenstate: {'0': (0.4576818286211503+0j), '1': (0.8891160462785497+0j)}
```
The VQE algorithm outputs these key results:
- Optimal energy: The lowest energy found for the Hamiltonian, representing the ground state energy (minimal eigenvalue).
- Optimal parameters: The parameters of the quantum program that achieve the optimal energy, corresponding to rotation angles in the U-gate.
- Eigenstate: The quantum state associated with the optimal energy, given as probability amplitudes for the basis states.
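As a cross-check (plain NumPy, outside the Classiq flow), you can plug the reported amplitudes back into \(H\) and compare the energy with the exact minimal eigenvalue; note that measured amplitudes carry no phase information, so the match is only approximate:

```python
import numpy as np

H = np.array([[1.0, -1.0], [-1.0, 0.0]])
# Amplitude magnitudes reported in the eigenstate above
psi = np.array([0.4576818286211503, 0.8891160462785497])

print(psi @ H @ psi)  # ≈ -0.604, close to the reported optimal energy of -0.614
print(np.linalg.eigvalsh(H).min())  # exact minimum: (1 - sqrt(5)) / 2 ≈ -0.618
```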
More information is collected in the `vqe_result` variable for you to explore further. For example, plotting the `convergence_graph`:

```python
vqe_result.convergence_graph
```
Summary and Exercise
You designed a parameterized quantum circuit capable of capturing a simple Hamiltonian. You defined `cmain` as the classical execution function, including all necessary parameters for VQE execution, and plotted the results.
Exercise - Two-Qubit VQE
Now practice implementing a case similar to the previous example, this time for two qubits, given the following Hamiltonian:
Use the last example to implement and execute VQE for this Hamiltonian.
Code skeleton:

```python
HAMILTONIAN = QConstant("HAMILTONIAN", List[PauliTerm], [...])  # TODO: Complete the Hamiltonian


@qfunc
def main(...) -> None:
    ...  # TODO: Complete the function according to the instructions, choosing a simple ansatz


@cfunc
def cmain() -> None:
    res = vqe(
        HAMILTONIAN,
        False,
        [],
        optimizer=Optimizer.COBYLA,
        max_iteration=1000,
        tolerance=0.001,
        step_size=0,
        skip_compute_variance=False,
        alpha_cvar=1.0,
    )
    save({"result": res})


qmod = create_model(main, classical_execution_function=cmain)
qprog = synthesize(qmod)
show(qprog)
```
Hint: use `QArray`.
Read More
Algorithms and application tutorials using the VQE primitive:
Further reading from the reference manual:
Using the PyTorch Integration
Classiq integrates with PyTorch, enabling the seamless development of quantum machine learning and hybrid classical quantum machine learning models. This integration leverages PyTorch's powerful machine learning capabilities alongside quantum computing.
Note on PyTorch Installation:
To properly install and run PyTorch locally, check this page.
Workflow
1. Defining the Model
    - 1.1: Define the quantum model and synthesize it into a quantum program.
    - 1.2: Define the execute and post-process callables.
    - 1.3: Create a `torch.nn.Module` network.
2. Choosing the Dataset, Loss Function, and Optimizer
3. Training the Model
4. Testing the Model
If you are not familiar with PyTorch, read the following documentation:
PyTorch Documentation
Example - Demonstrate PyTorch Integration with Classiq
This example demonstrates PyTorch integration using a simple parameterized quantum model.
It takes one input from the user and one trainable weight, and uses a single qubit in the model. The goal of the learning process is to determine the correct angle for an RX gate so that it performs a "NOT" operation. (Spoiler alert: the correct answer is \(\pi\).)
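Why \(\pi\)? A short NumPy check (not Classiq code) shows that \(RX(\pi)\) equals the Pauli-X ("NOT") gate up to a global phase:

```python
import numpy as np


def rx(theta: float) -> np.ndarray:
    # Standard RX rotation matrix
    return np.array(
        [
            [np.cos(theta / 2), -1j * np.sin(theta / 2)],
            [-1j * np.sin(theta / 2), np.cos(theta / 2)],
        ]
    )


print(np.round(rx(np.pi), 10))  # [[0, -1j], [-1j, 0]]: the X gate times -1j
```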
The dataset `DATALOADER_NOT` is used, as defined here. `DatasetXor` is also available from the link for further practice.
```python
from classiq import *
from classiq.applications.qnn.datasets import DATALOADER_NOT

for data, label in DATALOADER_NOT:
    print(f"--> Data for training:\n{data}")
    print(f"--> Corresponding labels:\n{label}")
```
```
--> Data for training:
tensor([[0.0000],
        [3.1416]])
--> Corresponding labels:
tensor([0., 1.])
```
This dataset contains two items. The first item indicates no rotation (`0.0000`) and is labeled 0, indicating the state \(|0\rangle\). The second item indicates a rotation of `3.1416` and is labeled 1, indicating the state \(|1\rangle\).
Read an explanation on creating PyTorch datasets here.
Step 1.1 - Define the Quantum Model and Synthesize It into a Quantum Program
The first part of the parameterized quantum model has an encoding section, which loads input data (\(|0\rangle\) or \(|1\rangle\)) into the parameterized quantum model:
```python
@qfunc
def encoding(theta: CReal, q: QArray[QBit]) -> None:
    RX(theta=theta, target=q[0])
```
The second part is the `mixing` function, which includes an adjustable parameter for training the RX gate to act later as a NOT gate:

```python
@qfunc
def mixing(theta: CReal, q: QArray[QBit]) -> None:
    RX(theta=theta, target=q[0])
```
Combining the two functions into the `main` function:

```python
@qfunc
def main(input_0: CReal, weight_0: CReal, res: Output[QArray[QBit]]) -> None:
    allocate(1, res)
    encoding(theta=input_0, q=res)  # Loading input
    mixing(theta=weight_0, q=res)  # Adjustable parameter
```
Finally, create a model, synthesize it, and display it in the IDE:
```python
qmod_2 = create_model(main, out_file="qnn_with_pytorch")
qprog_2 = synthesize(qmod_2)
show(qprog_2)
```

```
Opening: https://platform.classiq.io/circuit/2twUarxEpqk3Sjo48cHRqVxVkgm?version=0.70.0
```
Step 1.2 - Define the Execute and Postprocess Callables
Before using the quantum layer (`QLayer`), define the `execute` and `post_process` functions. These functions are essential for integrating the quantum layer into a PyTorch neural network, as classical layers require classical data as input. This means that only after executing the QLayer (the ansatz) and post-processing the results can the data be used in other layers of the neural network or as output.
The `execute` function is straightforward. It takes the quantum program (here, the QLayer) and its parameters, and executes it:

```python
from classiq.applications.qnn.types import (
    MultipleArguments,
    ResultsCollection,
    SavedResult,
)
from classiq.execution import execute_qnn
from classiq.synthesis import SerializedQuantumProgram


def execute(
    quantum_program: SerializedQuantumProgram, arguments: MultipleArguments
) -> ResultsCollection:
    return execute_qnn(quantum_program, arguments)
```
In general, the `post_process` function prepares the execution results for output or for loss calculation during the training phase. In this specific example, it returns the probability of measuring \(|0\rangle\). The function assumes that only the distinction between the single state \(|0\rangle\) and all other states is relevant. If a different distinction is needed, modify this function accordingly.
```python
import torch


def post_process(result: SavedResult) -> torch.Tensor:
    """
    Take in a `SavedResult` with an `ExecutionDetails` value type, and return the
    probability of measuring |0>, which equals the number of `|0>` measurements
    divided by the total number of measurements.
    """
    counts: dict = result.value.counts
    # The probability of measuring |0>
    p_zero: float = counts.get("0", 0.0) / sum(counts.values())
    return torch.tensor(p_zero)
```
Using these functions allows QLayers and PyTorch layers to be properly integrated into the same neural network.
Step 1.3 - Create a torch.nn.Module Network
Define the `torch.nn.Module` class with a single `QLayer` as follows:
```python
from classiq.applications.qnn import QLayer


class Net(torch.nn.Module):
    def __init__(self, *args, **kwargs) -> None:
        super().__init__()
        self.qlayer = QLayer(
            qprog_2,  # the quantum program, the result of `synthesize()`
            execute,  # a callable that takes
            # - a quantum program
            # - parameters to that program (a tuple of dictionaries)
            # and returns a `ResultsCollection`
            post_process,  # a callable that takes
            # - a single `SavedResult`
            # and returns a `torch.Tensor`
            *args,
            **kwargs
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.qlayer(x)
        return x


model = Net()
```
In `self.qlayer = QLayer(...)`, define the only layer in the neural network as a single QLayer. Specify the previously defined quantum program (`qprog_2`), `execute`, and `post_process` as arguments for the layer. Finally, create the neural network and assign it to the variable `model`.
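As a quick, optional smoke test (an assumption of this tutorial, not a required step; running it executes the quantum program just like training does), a single forward pass returns the post-processed probability of measuring \(|0\rangle\):

```python
# Hypothetical smoke test: a batch with one sample and one feature (the input angle)
single_input = torch.tensor([[0.0]])
probability_of_zero = model(single_input)
print(probability_of_zero)  # P(|0>) for the current, untrained weight
```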
Step 2 - Choose a Dataset, Loss Function, and Optimizer
For the loss function and optimizer, use L1Loss and SGD, respectively.
```python
import torch.nn as nn
import torch.optim as optim

_LEARNING_RATE = 1

# choosing the data
data_loader = DATALOADER_NOT

# choosing the loss function
loss_func = nn.L1Loss()  # Mean Absolute Error (MAE)

# choosing the optimizer
optimizer = optim.SGD(model.parameters(), lr=_LEARNING_RATE)
```
Available Optimization Algorithms and Loss Functions
For details of the optimization algorithms and a comprehensive list of loss functions in PyTorch, refer to the official documentation.
Step 3 - Train and Evaluate
Import `DataLoader`:

```python
from torch.utils.data import DataLoader
```

A `DataLoader` in PyTorch efficiently iterates over datasets, handling batching, shuffling, and parallel data loading. It streamlines the process of training and evaluating models by managing data efficiently.
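For intuition, a loader equivalent to `DATALOADER_NOT` could plausibly be assembled from standard PyTorch utilities; this is a hypothetical sketch, while the tutorial's actual loader comes from `classiq.applications.qnn.datasets`:

```python
import math

import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical reconstruction of the NOT dataset: input angle -> expected label
not_data = torch.tensor([[0.0], [math.pi]])  # no rotation vs. a full flip
not_labels = torch.tensor([0.0, 1.0])  # |0> maps to 0, |1> maps to 1

custom_loader = DataLoader(TensorDataset(not_data, not_labels), batch_size=2)
```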
Now you are ready to define the training function. This simple example follows a loop similar to the one recommended by PyTorch here.
```python
def train(
    model: nn.Module,
    data_loader: DataLoader,
    loss_func: nn.modules.loss._Loss,
    optimizer: optim.Optimizer,
    epoch: int = 1,  # About 40 epochs needed for full training
) -> None:
    for index in range(epoch):
        print(index, model.qlayer.weight)
        for data, label in data_loader:
            optimizer.zero_grad()
            output = model(data)
            loss = loss_func(output, label)
            loss.backward()
            optimizer.step()
```
Here, trained parameters are loaded for demonstration, and only one epoch is performed. To train from scratch, comment out the following cell and increase the number of epochs above; expect about 40 epochs for full training from untrained parameters.
```python
trained_weights = torch.nn.Parameter(
    torch.Tensor([3.1169])
)  # The value from the last step of the training

model.qlayer.weight = trained_weights
train(model, data_loader, loss_func, optimizer)
```

```
0 Parameter containing:
tensor([3.1169], requires_grad=True)
```
Great! Observe that the parameter is approximately equal to \(\pi\). Now, test the network accuracy using the method suggested here.
```python
def check_accuracy(model: nn.Module, data_loader: DataLoader, atol=1e-4) -> float:
    num_correct = 0
    total = 0
    model.eval()
    with torch.no_grad():  # Temporarily disable gradient calculation
        for data, labels in data_loader:
            # Let the model predict
            predictions = model(data)
            # Get a tensor of Booleans indicating whether each prediction is close to the real label
            is_prediction_correct = predictions.isclose(labels, atol=atol)
            # Count the number of `True` predictions
            num_correct += is_prediction_correct.sum().item()
            # Count the total evaluations
            # (the first dimension of `labels` is `batch_size`)
            total += labels.size(0)

    accuracy = float(num_correct) / float(total)
    print(f"Test Accuracy of the model: {accuracy*100:.2f}%")
    return accuracy
```
```python
check_accuracy(model, data_loader)
```

```
Test Accuracy of the model: 100.00%

1.0
```
The results show an accuracy of 1, indicating a 100% success rate in performing the required transformation (i.e., the network learned to perform an X-gate). You can further validate this by printing the value of `model.qlayer.weight`; after training, this value should be close to \(\pi\).
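A minimal check:

```python
print(model.qlayer.weight)
# Expected after training: a parameter value close to pi ≈ 3.1416
```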
Summary and Exercise
In this tutorial, you integrated a quantum layer in a PyTorch neural network, defined the necessary execution and post-processing functions, and trained the model using a simple dataset. You tested the network's accuracy using a recommended method. To explore further, try experimenting with different quantum circuits, datasets, and optimizers. Integrating more classical or more complex layers should now be straightforward for those with PyTorch experience.
Exercise - Training U Gate
Now, for practice, implement a case similar to the last example, but this time train the U gate, instead of the RX gate, to act as a NOT gate.
How many parameters must you train?
What must you change to accomplish this?
Hint: You only have to adapt `mixing` and `model`.
Read More
Algorithms and application tutorials using the PyTorch integration:
Further reading from the reference manual:
Using the QSVM Built-in App
Classiq also enables executing classification tasks using the Quantum Support Vector Machine (QSVM) module, which applies quantum computing principles to the traditional support vector machine algorithm. The module integrates seamlessly with the Classiq platform, allowing you to implement quantum-enhanced classification models. By utilizing quantum kernels, QSVM can capture intricate patterns that may be challenging for classical SVMs, which can improve classification on suitable datasets.
To understand how to use it and explore it further, examine this example: QSVM with Classiq.
References
[1]: Peruzzo, A., McClean, J., Shadbolt, P., et al. (2014). A variational eigenvalue solver on a photonic quantum processor, Nature Communications.
[2]: Cerezo, M., Arrasmith, A., Babbush, R., et al. (2021). Variational quantum algorithms, Nature Reviews Physics, 3, 625–644.
[3]: Corresponding preprint arXiv:2104.02281.
[4]: Barkoutsos, Panagiotis Kl., et al. (2020). Improving variational quantum optimization using CVaR, Quantum 4, 256.