Variational Quantum Linear Solver (VQLS) with Linear Combination of Unitaries (LCU) Block Encoding
The Variational Quantum Linear Solver (VQLS) is a variational quantum algorithm that applies the hybrid optimization approach of the Variational Quantum Eigensolver (VQE) to solving systems of linear equations:
- Input: a matrix \(\textbf{A}\) and a known vector \(|\textbf{b}\rangle\).
- Output: an approximation of the normalized solution \(|x\rangle\), proportional to \(|\textbf{x}\rangle\) and satisfying the equation \(\textbf{A} |\textbf{x}\rangle = |\textbf{b}\rangle\).
While the output of VQLS mirrors that of the HHL Quantum Linear-Solving Algorithm, VQLS distinguishes itself by its ability to operate on Noisy Intermediate-Scale Quantum (NISQ) computers. In contrast, HHL requires more robust quantum hardware and a larger qubit count, although it offers a provable asymptotic speedup.
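To make the problem statement above concrete, here is a small classical NumPy illustration (a toy 2x2 system, unrelated to the 3-qubit example treated later) of the normalized solution state that VQLS approximates:
import numpy as np

# Toy linear system: the quantity VQLS targets is the normalized solution state x / ||x||.
A_toy = np.array([[1.0, 0.2], [0.2, 1.0]])
b_toy = np.array([1.0, 0.0])  # already normalized, so it is a valid quantum state |b>
x_toy = np.linalg.solve(A_toy, b_toy)
x_state_toy = x_toy / np.linalg.norm(x_toy)  # amplitudes of the ideal solution state |x>
print(x_state_toy)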
In this tutorial we cover an implementation example of the Variational Quantum Linear Solver [1] using block encoding; in particular, we use a linear combination of unitaries (LCU) for the block encoding.
As with all variational algorithms, VQLS is a hybrid algorithm in which we apply classical optimization to the results of a parametrized (ansatz) quantum program.
How to Build the Algorithm with Classiq
Quantum Part: Variational Circuit
Given a block encoding of the matrix \(A\), i.e., a unitary \(U\) that contains \(A\) (up to a normalization factor) as its top-left block, we can prepare the normalized state
\(|\Psi\rangle = \frac{A\, |x\rangle}{\| A\, |x\rangle \|}.\)
We can approximate the solution \(|x\rangle\) with a variational quantum circuit, i.e., a unitary circuit \(V\) depending on a finite number of classical real parameters \(w = (w_0, w_1, \dots)\), such that \(|x\rangle = V(w)\, |0\rangle\).
Our objective is to address the task of preparing a quantum state \(|x\rangle\) such that \(A |x\rangle\) is proportional to \(|b\rangle\), or equivalently, ensuring that
\(\frac{A |x\rangle}{\| A |x\rangle \|} = |b\rangle.\)
The state \(|b\rangle\) arises from a unitary operation \(U_b\) applied to the ground state of \(n\) qubits, i.e., \(|b\rangle = U_b\, |0\rangle\).
The parameters should be optimized in order to maximize the overlap between the quantum states \(|\Psi\rangle\) and \(|b\rangle\). We define the following cost function, which we will later minimize:
\(C = 1 - |\langle b | \Psi \rangle|^2.\)
At a high level, the above can be implemented as follows:
We construct a quantum model as depicted in the figure below. When measuring this circuit in the computational basis, the probability of finding the system qubits in the ground state (given that the ancillary qubits are measured in their ground state) is \(|\langle 0 | U_b^\dagger |\Psi \rangle|^2 = |\langle b | \Psi \rangle|^2\).
To implement the block-encoded VQLS described above, we can define a high-level block_encoding_vqls function as follows:
from typing import List

import numpy as np

from classiq import *


@qfunc
def block_encoding_vqls(
    ansatz: QCallable,
    block_encoding: QCallable,
    prepare_b_state: QCallable,
) -> None:
    ansatz()  # prepare the variational state V(w)|0>
    block_encoding()  # apply the block encoding of A
    invert(lambda: prepare_b_state())  # apply U_b^dagger
From here, all we need to do is define our ansatz, block_encoding, and prepare_b_state functions to fit the specific example above, and we are ready to build our model, synthesize it, execute it, and analyze the results.
Classical Part: Finding Optimal Parameters
To estimate the overlap of the ground state with the post-selected state, one could directly make use of the measurement samples. However, since we want to optimize the cost function, it is useful to express it in terms of conditional probabilities through Bayes' theorem:
\(|\langle b | \Psi \rangle|^2 = P(\text{system} = 0 \mid \text{aux} = 0) = \frac{P(\text{system} = 0,\ \text{aux} = 0)}{P(\text{aux} = 0)}.\)
To evaluate the conditional probability in the above equation, we construct a utility function that operates on the measurement results.
In order to variationally solve our linear problem, we define the cost function \(C = 1 - |\langle b | \Psi \rangle|^2\) that we are going to minimize; as explained above, we express it in terms of conditional probabilities through Bayes' theorem.
We define a classical optimizer class that receives the quantum program, minimizes the cost function using the COBYLA optimizer, and returns the optimal parameters.
import random

import matplotlib.pyplot as plt
from scipy.optimize import minimize

from classiq.execution import ExecutionSession


class VqlsOptimizer:
    def __init__(self, qprog, ansatz_param_count, ansatz_var_name, aux_var_name):
        self.qprog = qprog
        self.ansatz_param_count = ansatz_param_count
        self.ansatz_var_name = ansatz_var_name
        self.aux_var_name = aux_var_name
        self.es = ExecutionSession(qprog)
        self.intermediate = {}

    def get_cond_prob(self, res):
        # P(system = 0 | aux = 0) = P(system = 0, aux = 0) / P(aux = 0)
        aux_prob_0 = 0
        all_prob_0 = 0
        for s in res:
            if s.state[self.aux_var_name] == 0:
                aux_prob_0 += s.shots
                if s.state[self.ansatz_var_name] == 0:
                    all_prob_0 += s.shots
        return all_prob_0 / aux_prob_0

    def my_cost(self, params):
        # Cost C = 1 - |<b|Psi>|^2, estimated from the sampled counts
        results = self.es.sample(params)
        return 1 - self.get_cond_prob(
            results.parsed_counts_of_outputs([self.ansatz_var_name, self.aux_var_name])
        )

    def f(self, x):
        cost = self.my_cost(
            {"params_" + str(k): x[k] for k in range(self.ansatz_param_count)}
        )
        self.intermediate[tuple(x)] = cost
        return cost

    def optimize(self):
        random.seed(1000)
        out = minimize(
            self.f,
            x0=[
                float(random.randint(0, 3000)) / 1000
                for i in range(0, self.ansatz_param_count)
            ],
            method="COBYLA",
            options={"maxiter": 2000},
        )
        print(out)
        out_f = [out["x"][0 : self.ansatz_param_count]]
        print(out_f)
        plt.plot(
            [l for l in range(len(self.intermediate))], list(self.intermediate.values())
        )
        return {
            "params_" + str(k): list(self.intermediate.keys())[-1][k]
            for k in range(self.ansatz_param_count)
        }
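As a quick offline illustration of the conditional-probability estimate used in get_cond_prob, here is a hypothetical, hand-written list of counts that mimics the .state / .shots interface of the parsed results used above (it is not an actual execution result):
from collections import namedtuple

# Mock parsed-counts entries with the same interface as the objects used in get_cond_prob.
Sample = namedtuple("Sample", ["state", "shots"])
mock_counts = [
    Sample(state={"system_qubits": 0, "ancillary_qubits": 0}, shots=80),
    Sample(state={"system_qubits": 5, "ancillary_qubits": 0}, shots=20),
    Sample(state={"system_qubits": 0, "ancillary_qubits": 3}, shots=900),
]

aux_0 = sum(s.shots for s in mock_counts if s.state["ancillary_qubits"] == 0)
both_0 = sum(
    s.shots
    for s in mock_counts
    if s.state["ancillary_qubits"] == 0 and s.state["system_qubits"] == 0
)
print(both_0 / aux_0)  # 0.8, i.e. an estimated cost of 1 - 0.8 = 0.2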
Once the optimal variational weights w are found, we can generate the quantum state \(|x\rangle\). By measuring \(|x\rangle\) in the computational basis we can estimate the probability of each basis state.
Example using LCU Block Encoding
We treat a specific example based on a system of 3 qubits:
\(A = 0.55\, I + 0.225\, Z_1 + 0.225\, Z_2, \qquad |b\rangle = H_0 H_1 H_2\, |0\rangle,\)
where \(Z_j, X_j, H_j\) represent the Pauli \(Z\), Pauli \(X\), and Hadamard gates applied to the qubit with index \(j\).
Block encoding the matrix A with LCU
Our first goal is to encode the matrix \(A\) in a quantum circuit. Using a linear combination of unitaries (LCU), we can encode the operator \(A\) as a block of a larger unitary \(U\). This block is defined by the subspace of all states in which the auxiliary qubits are in the state \(|0\rangle\).
Given a \(2^n \times 2^n\) matrix \(A\), representable as a linear combination of \(L\) unitary matrices \(A_0, A_1, \dots, A_{L-1}\),
\(A = \sum_{l=0}^{L-1} c_l A_l,\)
we can implement a quantum circuit that applies the associated operator. Here \(c_l\) are arbitrary complex numbers. Crucially, we assume that each unitary \(A_l\) can be efficiently implemented by a quantum circuit acting on \(n\) qubits.
The concept of applying a linear combination of unitary operations has been explored in Ref[2] and we adopt a similar approach here.
We can assume, without loss of generality, that the coefficients \(c = (c_0, c_1, \dots, c_{L-1})\) in the definition of \(A\) form a positive and normalized probability distribution:
\(c_l \ge 0 \quad \forall l, \qquad \sum_{l=0}^{L-1} c_l = 1.\)
The complex phase of each coefficient \(c_l\) can be absorbed into its associated unitary \(A_l\), resulting in a vector comprised of positive values. Additionally, as the linear problem remains unchanged under constant scaling, we can normalize the coefficients to form a probability distribution.
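For instance, writing a coefficient as \(c_l = |c_l|\, e^{i\varphi_l}\), we can redefine \(\tilde{A}_l = e^{i\varphi_l} A_l\) (which is still unitary) so that the coefficient of \(\tilde{A}_l\) becomes the non-negative number \(|c_l|\).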
To simplify matters, we assume that \(L\) is a power of 2, specifically \(L=2^m\) for a positive integer \(m\) (we can always pad \(c\) with additional zeros if needed).
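As a small NumPy sketch of this classical preprocessing step (using the coefficients of the example below; the variable names here are illustrative and are not reused later):
import numpy as np

# Pad the coefficient vector with zeros up to the next power of two and normalize it
# into a probability distribution (coefficients are assumed non-negative here).
c_raw = np.array([0.55, 0.225, 0.225])
L_padded = 1 << int(np.ceil(np.log2(len(c_raw))))  # next power of two (here 4)
c_padded = np.zeros(L_padded)
c_padded[: len(c_raw)] = c_raw
c_padded = c_padded / c_padded.sum()
print(c_padded)  # -> [0.55, 0.225, 0.225, 0.0]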
Let us consider a unitary circuit \(U_c\), embedding the square root of \(c\) into the quantum state \(|\sqrt{c}\rangle\) of \(m\) ancillary qubits:
\(U_c |0\rangle = |\sqrt{c}\rangle = \sum_{l=0}^{L-1} \sqrt{c_l}\, |l\rangle,\)
where \(\{ |l\rangle \}\) is the computational basis of the ancillary system.
Now, for each component \(A_l\) of the problem matrix \(A\), we can define an associated controlled unitary operation \(CA_l\), acting on the system and on the ancillary basis states as follows:
\(CA_l\, |l'\rangle \otimes |\psi\rangle = |l'\rangle \otimes A_l^{\delta_{l, l'}} |\psi\rangle,\)
i.e., the unitary \(A_l\) is applied only when the ancillary register is in the corresponding basis state \(|l\rangle\).
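Before turning to the Classiq implementation, here is a plain-NumPy sanity check of the LCU identity (it is independent of the quantum model and uses the coefficients and Pauli terms of the example below; the kron ordering of the Z factors is a convention and does not affect the check):
import numpy as np

# LCU check: U = (U_c^dagger ⊗ I) · SEL · (U_c ⊗ I), with SEL = sum_l |l><l| ⊗ A_l,
# has sum_l c_l A_l as its top-left block (ancilla projected onto |0>).
c_check = np.array([0.55, 0.225, 0.225, 0.0])
I2, Zg = np.eye(2), np.diag([1.0, -1.0])
A_terms = [
    np.eye(8),                     # identity term
    np.kron(np.kron(I2, Zg), I2),  # a single-qubit Z term
    np.kron(np.kron(Zg, I2), I2),  # the other single-qubit Z term
    np.eye(8),                     # zero-padded term (coefficient 0)
]

# Any unitary whose first column equals sqrt(c) can play the role of U_c; complete one via QR.
M = np.eye(4)
M[:, 0] = np.sqrt(c_check)
U_c_mat, _ = np.linalg.qr(M)
if U_c_mat[0, 0] < 0:
    U_c_mat[:, 0] *= -1  # fix the sign so the first column is exactly sqrt(c)

SEL = sum(np.kron(np.outer(np.eye(4)[l], np.eye(4)[l]), A_terms[l]) for l in range(4))
U_full = np.kron(U_c_mat.T, np.eye(8)) @ SEL @ np.kron(U_c_mat, np.eye(8))

A_target = sum(cl * Al for cl, Al in zip(c_check, A_terms))
print(np.allclose(U_full[:8, :8], A_target))  # True
Since the coefficients of this example already sum to one, the top-left block equals \(A\) itself; in general, with unnormalized positive coefficients, it equals \(A / \sum_l c_l\).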
So our LCU quantum circuit will look as follows:
Let's apply the theory above to our simple example based on a system of 3 qubits.
First, we need to define the unitary operations associated with the simple example presented above.
The coefficients of the linear combination are three positive numbers \((0.55, 0.225, 0.225)\). So we can embed them in the state of \(m=2\) ancillary qubits by adding a final zero element and normalizing their sum to \(1\):
paulis = [("III", 0.55), ("IZI", 0.225), ("IIZ", 0.225), ("III", 0)]  # zero-padded to L = 4
num_system_qubits = 3
num_ancilla_qubits = 2
ansatz_param_count = 9
error_bound = 0.0
error_metric = "LOSS_OF_FIDELITY"
To block encode \(A\) we need only three things:
1. A representation of \(A\) as a linear combination of \(L\) unitary matrices (in our case, Pauli strings).
2. A unitary circuit \(U_c\) that embeds the square root of \(c\) (the linear-combination coefficients) into a quantum state.
3. A circuit that implements the controlled unitaries \(CA_l\).
Preparing \(U_c\)
To construct \(\vec{c}\) for the above example, we want to use the Pauli-string decomposition. For this we need a few helper functions:
from typing import cast

from classiq import Pauli, PauliTerm

# Map single-character Pauli labels to the corresponding Classiq Pauli enum values.
my_list = {"I": Pauli.I, "X": Pauli.X, "Y": Pauli.Y, "Z": Pauli.Z}


def pauli_str_to_enums(pauli):
    return [my_list[s] for s in pauli]


def pauli_operator_to_hamiltonian(pauli_list):
    return [
        PauliTerm(
            pauli=pauli_str_to_enums(pauli), coefficient=cast(complex, coeff).real
        )
        for pauli, coeff in pauli_list
    ]
Now we can construct \(\vec{c}\):
pauli_terms_structs = pauli_operator_to_hamiltonian(paulis)

c = []
for pauli_struct in pauli_terms_structs:
    c.append(pauli_struct.coefficient)
c = c / np.sum(c)  # normalize the coefficients into a probability distribution
To prepare \(U_c\) we need to embed the square root of the probability distribution c into the amplitudes of the ancillary state:
@qfunc
def prepare_c(ancillary_qubits: QArray[QBit]):
    inplace_prepare_state(list(c), error_bound, ancillary_qubits)
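Note that, as described above, inplace_prepare_state receives the probability vector c (together with an error bound), so the resulting ancillary state has amplitudes \(\sqrt{c_l}\); this is exactly the state \(|\sqrt{c}\rangle\) defined earlier.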
LCU sequence of all controlled-unitaries \(CA_l\)
Next, we are left to define the LCU sequence of all controlled-unitaries \(CA_l\), acting as \(A_l\) on the system whenever the ancillary state is \(|l\rangle\).
The prepare_ca function iterates over the list of Pauli terms \(A_l\) and applies each \(CA_l\) to the 3-qubit system register, controlled on the ancillary register being in the state \(|l\rangle\):
@qfunc
def apply_pauli(pauli_value: CInt, qubit: QBit):
    switch(
        pauli_value,
        [lambda: IDENTITY(qubit), lambda: X(qubit), lambda: Y(qubit), lambda: Z(qubit)],
    )


@qfunc
def apply_pauli_term(pauli_term: PauliTerm, x: QArray[QBit]):
    repeat(
        count=x.len,
        iteration=lambda index: apply_pauli(pauli_term.pauli[index], x[index]),
    )
@qfunc
def prepare_ca(
    pauli_terms_list: CArray[PauliTerm],
    system_qubits: QArray[QBit],
    ancillary_qubits: QNum,
):
    repeat(
        count=pauli_terms_list.len,
        iteration=lambda i: control(
            ancillary_qubits == i,
            lambda: apply_pauli_term(pauli_terms_list[i], system_qubits),
        ),
    )
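Note the design choice here: prepare_ca enumerates the \(L\) Pauli terms with repeat and wraps each one in a control condition on the ancillary register, so the synthesized circuit contains one controlled block per term and its size grows linearly with the number of terms \(L\).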
Fixed Hardware Ansatz
Let's consider our ansatz \(V(w)\), such that \(V(w)\, |0\rangle = |x(w)\rangle\).
This allows us to "search" the state space by varying some set of parameters, \(w\).
The ansatz that we use for this 3-qubit system takes in 9 parameters, as defined in the apply_fixed_3_qubit_system_ansatz function:
@qfunc
def apply_ry_on_all(params: CArray[CReal], io: QArray[QBit]):
    repeat(count=io.len, iteration=lambda index: RY(params[index], io[index]))


@qfunc
def apply_fixed_3_qubit_system_ansatz(
    angles: CArray[CReal], system_qubits: QArray[QBit]
):
    # First rotation layer
    apply_ry_on_all([angles[0], angles[1], angles[2]], system_qubits)
    # First entangling layer: CZ between all pairs of qubits
    repeat(
        count=(system_qubits.len - 1),
        iteration=lambda index: CZ(system_qubits[0], system_qubits[index + 1]),
    )
    CZ(system_qubits[1], system_qubits[2])
    # Second rotation layer
    apply_ry_on_all([angles[3], angles[4], angles[5]], system_qubits)
    # Second entangling layer
    repeat(
        count=(system_qubits.len - 1),
        iteration=lambda index: CZ(
            system_qubits[system_qubits.len - 1], system_qubits[index]
        ),
    )
    CZ(system_qubits[1], system_qubits[0])
    # Third rotation layer
    apply_ry_on_all([angles[6], angles[7], angles[8]], system_qubits)
To inspect our ansatz implementation, we create a model and view the synthesis result:
@qfunc
def main(
    params: CArray[CReal, ansatz_param_count], system_qubits: Output[QArray[QBit]]
):
    allocate(3, system_qubits)
    apply_fixed_3_qubit_system_ansatz(params, system_qubits)


model = create_model(main)
qprog = synthesize(model)
show(qprog)
This is called a fixed hardware ansatz: the configuration of quantum gates remains the same for each run of the circuit, all that changes are the parameters. Unlike the QAOA ansatz, it is not composed solely of Trotterized Hamiltonians. The applications of \(Ry\) gates allow us to search the state space, while the \(CZ\) gates create "interference" between the different qubit states.
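Written out as an operator (with gates composed right to left in application order, and \(E = CZ_{01}\, CZ_{02}\, CZ_{12}\) denoting the all-to-all CZ entangling layer that both entangling blocks above reduce to, since \(CZ\) is symmetric), the ansatz reads
\(V(w) = \Big[\bigotimes_{j=0}^{2} R_y(w_{6+j})\Big]\, E\, \Big[\bigotimes_{j=0}^{2} R_y(w_{3+j})\Big]\, E\, \Big[\bigotimes_{j=0}^{2} R_y(w_{j})\Big].\)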
Running the VQLS
Now, we can define the main function: we call block_encoding_vqls with the arguments of our specific example.
@qfunc
def main(
    params: CArray[CReal, ansatz_param_count],
    ancillary_qubits: Output[QNum],
    system_qubits: Output[QNum],
):
    allocate(num_ancilla_qubits, ancillary_qubits)
    allocate(num_system_qubits, system_qubits)
    block_encoding_vqls(
        ansatz=lambda: apply_fixed_3_qubit_system_ansatz(params, system_qubits),
        block_encoding=lambda: within_apply(
            within=lambda: prepare_c(ancillary_qubits),
            apply=lambda: prepare_ca(
                pauli_terms_structs, system_qubits, ancillary_qubits
            ),
        ),
        prepare_b_state=lambda: apply_to_all(H, system_qubits),
    )
Constructing the model, synthesizing and executing on Classiq's simulator:
from classiq.execution import ClassiqBackendPreferences, ExecutionPreferences
from classiq.synthesis import set_execution_preferences

backend_preferences = ClassiqBackendPreferences(backend_name="simulator_statevector")

model = create_model(main)
model = set_execution_preferences(
    model,
    execution_preferences=ExecutionPreferences(
        num_shots=204800, backend_preferences=backend_preferences
    ),
)

qprog = synthesize(model)
show(qprog)

from classiq import write_qmod

write_qmod(model, name="vqls_with_lcu", decimal_precision=15)
We run the classical optimizer to get the optimal parameters:
optimizer = VqlsOptimizer(
    qprog, ansatz_param_count, "system_qubits", "ancillary_qubits"
)
optimal_params = optimizer.optimize()
message: Optimization terminated successfully.
success: True
status: 1
fun: 0.10350122850122845
x: [ 2.450e+00 3.067e+00 6.646e-01 2.540e+00 3.124e+00
1.037e+00 1.242e+00 2.095e+00 5.886e-01]
nfev: 157
maxcv: 0.0
[array([2.45041025, 3.06660619, 0.66458375, 2.53966883, 3.12356603,
1.03745639, 1.24164918, 2.09544279, 0.58863841])]
Measuring the Quantum Solution
Finally, we can apply the optimal parameters and measure the resulting quantum state \(|x\rangle\):
@qfunc
def main(io: Output[QArray[QBit]]):
    allocate(num_system_qubits, io)
    apply_fixed_3_qubit_system_ansatz(list(optimal_params.values()), io)
from classiq.execution import ClassiqBackendPreferences, ExecutionPreferences
from classiq.synthesis import set_execution_preferences

backend_preferences = ClassiqBackendPreferences(backend_name="simulator_statevector")

qmod = create_model(main)
qmod = set_execution_preferences(
    qmod,
    execution_preferences=ExecutionPreferences(
        num_shots=20480, backend_preferences=backend_preferences
    ),
)

qprog = synthesize(qmod)
job = execute(qprog)
results = job.result()[0]
state_vector = results.value.state_vector

probabilities = []
for key, val in state_vector.items():
    probabilities.append(abs(complex(val)) ** 2)
probabilities
[0.023828155423254158,
0.021170251577971314,
0.08175832492395262,
0.09260055118752117,
0.0893584973258954,
0.09592556792888468,
0.2672253961045733,
0.3281332555279478]
amplitudes = []
for key, val in state_vector.items():
    amplitudes.append(abs(complex(val)))
amplitudes
[0.15436371148444883,
0.14550000542258174,
0.28593412689630565,
0.3043033867500018,
0.2989289168446161,
0.3097185301671256,
0.5169384838687997,
0.5728291678397215]
Comparison to the classical solution
Since the specific problem considered in this tutorial is small, we can also solve it classically and then compare the results with our quantum solution.
To solve the problem classically, we use the explicit matrix representation in terms of NumPy arrays.
Classical calculation:
Id = np.identity(2)
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
A_0 = np.identity(8)
A_1 = np.kron(np.kron(Id, Z), Id)
A_2 = np.kron(np.kron(Z, Id), Id)
A_num = c[0] * A_0 + c[1] * A_1 + c[2] * A_2
b = np.ones(8) / np.sqrt(8)
Calculating the classical \(\vec{x}\) that solves the equation:
A_inv = np.linalg.inv(A_num)
x = np.dot(A_inv, b)
classical_probs = (x / np.linalg.norm(x)) ** 2
classical_probs
array([0.00464634, 0.00464634, 0.0153598 , 0.0153598 , 0.0153598 ,
0.0153598 , 0.46463405, 0.46463405])
To compare the classical and quantum results we perform some post-processing: we apply \(A\) to our optimized state, normalize it, and then calculate the squared inner product of this vector with the right-hand-side vector \(|b\rangle\). We can put this all into code as:
print(
    "overlap =",
    (b.dot(A_num.dot(amplitudes) / (np.linalg.norm(A_num.dot(amplitudes))))) ** 2,
)
overlap = 0.8929204638007067
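In addition to the overlap with \(|b\rangle\) computed above, one could also compare the measured amplitudes directly to the normalized classical solution (both vectors are real and non-negative in this example, so no phase handling is needed):
# Fidelity between the normalized classical solution state and the measured amplitudes.
x_state = x / np.linalg.norm(x)
fidelity = np.abs(np.dot(x_state, amplitudes)) ** 2
print("fidelity =", fidelity)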
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(7, 4))
ax1.bar(np.arange(0, 2**num_system_qubits), classical_probs, color="blue")
ax1.set_xlim(-0.5, 2**num_system_qubits - 0.5)
ax1.set_xlabel("Vector space basis")
ax1.set_title("Classical probabilities")
ax2.bar(np.arange(0, 2**num_system_qubits), probabilities, color="gold")
ax2.set_xlim(-0.5, 2**num_system_qubits - 0.5)
ax2.set_xlabel("Hilbert space basis")
ax2.set_title("Quantum probabilities")
plt.show()
The classical probability distribution largely agrees with the quantum result.
References
[1]: Bravo-Prieto et al., "Variational Quantum Linear Solver" (2020).
[2]: Robin Kothari, "Efficient algorithms in quantum query complexity," PhD thesis, University of Waterloo, 2014.
assert optimizer.my_cost(optimal_params) < 0.5