Solve Differential Equations of the Lanchester Model with HHL
Welcome to the Jungle of HHL - Rabbits vs. Foxes
The Lanchester model is widely used to describe the dynamics of combat between two opposing forces. Originally formulated for military applications, this model can also be adapted to other contexts, such as ecological competition between two species or market competition between two businesses. The model uses linear differential equations to capture the interactions between the populations, making it suitable for various scenarios where interaction terms are purely linear. In this notebook, we will explore solving this modified Lanchester model using the Harrow-Hassidim-Lloyd (HHL) algorithm, a quantum algorithm designed for efficiently solving linear systems of equations.
Introduction to the Lanchester model
The Lanchester model is described in this link. The behavior of the two forces \(x\) and \(y\) is governed by the following differential equations; the \(b\) and \(f\) terms couple both populations (their exact form is assumed here to be the nonlinear \(x y\) coupling of the classical Lanchester model) and must be zero for the system to remain linear:
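\[
\frac{dx}{dt} = a\,x + b\,x\,y + c\,y + d, \qquad
\frac{dy}{dt} = e\,y + f\,x\,y + g\,x + h.
\]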
The coefficients, described in the next sections, capture the sensitivity of each side to the other side and to itself.
Describing the classical model using the finite difference method
So, assuming \(b = 0\) and \(f = 0\):
The first sample is simply the initial condition:
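\[
x_0 = x(t=0), \qquad y_0 = y(t=0).
\]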
Then, each subsequent point is computed numerically from the previous one (forward Euler):
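\[
x_{n+1} = x_n + \Delta t\,\bigl(a\,x_n + c\,y_n + d\bigr), \qquad
y_{n+1} = y_n + \Delta t\,\bigl(e\,y_n + g\,x_n + h\bigr),
\]
with \(\Delta t\) the sampling step (dt in the code below).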
So the system of linear equations describing the model looks like:
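\[
A\,\vec{z} = \vec{b}, \qquad
\vec{z} = \bigl(x_0, \dots, x_{N-1},\, y_0, \dots, y_{N-1}\bigr)^{T}, \qquad
\vec{b} = \bigl(x(0),\, \Delta t\,d, \dots, \Delta t\,d,\; y(0),\, \Delta t\,h, \dots, \Delta t\,h\bigr)^{T},
\]
where \(A\) has ones on its diagonal and, in each row with \(n \ge 1\), the entries \(-(1 + \Delta t\,a)\) (respectively \(-(1 + \Delta t\,e)\)) multiplying the previous point of the same population and \(-\Delta t\,c\) (respectively \(-\Delta t\,g\)) multiplying the previous point of the other population, exactly as constructed by the code below.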
Building the matrix with numpy
import numpy as np


def diff_eq_model(a, b, c0, d, e, f, g, h, dt, N, x0, y0):
    # Only the linear model (b == f == 0) is currently supported
    assert b == 0, "model is currently unsupported"
    assert f == 0, "model is currently unsupported"

    A = np.identity(2 * N)
    for r in range(2 * N):
        if 1 <= r <= N - 1:
            # Rows 1..N-1: x_r - (1 + dt*a) * x_{r-1} - dt*c0 * y_{r-1} = dt*d
            c = r - 1
            A[r][c] = -(dt * a + 1)
            c = N + r - 1
            A[r][c] = -dt * c0
        elif N + 1 <= r:
            # Rows N+1..2N-1: y_r - (1 + dt*e) * y_{r-1} - dt*g * x_{r-1} = dt*h
            c = r - 1
            A[r][c] = -(dt * e + 1)
            c = r - 1 - N
            A[r][c] = -dt * g

    # Right-hand side: initial conditions in rows 0 and N, constant terms elsewhere
    b1 = np.ones((N, 1)) * dt * d
    b1[0] = x0
    b2 = np.ones((N, 1)) * dt * h
    b2[0] = y0
    b = np.concatenate([b1, b2])
    return A, b
Let's choose some specific values for the matrix, representing a predator-prey system. Here, \(x\) represents the prey population (e.g., rabbits), and \(y\) represents the predator population (e.g., foxes). The coefficients will have the following meanings and specific values:
- \(a\) and \(e\) define the rate of natural losses or births (due to death, disease, etc.). For example, \(a = -0.01\) (1% natural death rate for rabbits) and \(e = -0.02\) (2% natural death rate for foxes).
- \(b\) and \(f\) define the rate of losses due to environmental factors (affecting both species). For example, \(b = 0\) and \(f = 0\) (assuming no such environmental exposure, for simplicity).
- \(c\) and \(g\) are losses or gains due to interactions between the species (prey being hunted by predators and vice versa). For example, \(c = -0.1\) (10% loss of rabbits due to predation) and \(g = 0.02\) (2% increase in the predator population due to hunting prey).
- \(d\) and \(h\) are gains due to migration. For example, \(d = 0.4\) (0.4 KRabbits/year migration rate for rabbits) and \(h = 0.01\) (0.01 KFoxes/year migration rate for foxes).
N = 8  # N time steps
dt = 6  # sample every dt years

A, b = diff_eq_model(
    a=-0.01, b=0, c0=-0.1, d=0.4, e=-0.02, f=0, g=0.02, h=0.01, dt=dt, N=N, x0=30, y0=1
)
import matplotlib.pyplot as plt
plt.matshow(A)
plt.title("Matrix A")
plt.show()
plt.matshow(b.transpose())
plt.title("Vector b")
plt.show()
The classical solution
x = np.matmul(np.linalg.inv(A), b)
plt.matshow(x.transpose())
plt.title("Solution x")
plt.show()
import matplotlib.pyplot as plt
t = dt * np.array(range(N))
x_sol = x[0:N]
y_sol = x[N : 2 * N]
plt.plot(t, x_sol, label="Rabbits")
plt.plot(t, y_sol, label="Foxes")
plt.xlabel("t [years]")
plt.ylabel("Population [Thousands of individuals]")
plt.legend()
plt.show()
Notice that at some critical point the rabbit population becomes so low that the fox population also starts to decrease. This is typical predator-prey behavior.
Re-defining the matrix
The matrix of HHL is a canonical one, assuming the following properties:
1) The RHS vector \(\vec{b}\) is normalized.
2) The matrix \(A\) is Hermitian.
3) The matrix \(A\) is of size \(2^n \times 2^n\).
4) The eigenvalues of \(A\) are in the range \((0,1)\).
However, any general problem that does not follow these conditions can be resolved as follows:
1) Normalized b
As a preprocessing step, normalize \(\vec{b}\); the normalization factor is then restored in post-processing.
norm_factor = np.linalg.norm(b)
b_normalized = b / norm_factor
2) Hermitian Matrix
Symmetrize the problem as follows:
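\[
\begin{pmatrix} 0 & A^{\dagger} \\ A & 0 \end{pmatrix}
\begin{pmatrix} \vec{x} \\ 0 \end{pmatrix}
=
\begin{pmatrix} 0 \\ \vec{b} \end{pmatrix},
\]
which is the block structure built by the to_hermitian function and the zero-padded \(\vec{b}\) below.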
This increases the number of qubits by 1.
def to_hermitian(A):
    N = A.shape[0]
    # Embed A in the Hermitian block matrix [[0, A^dagger], [A, 0]]
    A_hermitian = np.concatenate(
        [
            np.concatenate([np.zeros((N, N)), A.transpose().conj()], axis=1),
            np.concatenate([A, np.zeros((N, N))], axis=1),
        ]
    )
    return A_hermitian
b_new = np.concatenate([np.zeros((2 * N, 1)), b_normalized])
plt.matshow(b_new.transpose())
plt.title("Normalized and Padded Vector b")
plt.show()
A_hermitian = to_hermitian(A)
plt.matshow(A_hermitian)
plt.title("Hermitian Matrix A")
plt.show()
3) Make the matrix \(A\) of size \(2^n \times 2^n\)
Pad the matrix to the closest \(2^n\) dimension with an identity block; the vector \(\vec{b}\) is padded with zeros (a sketch is given below). However, our matrix is already of the right size.
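For completeness, a minimal sketch of such padding, using a hypothetical helper name (it is not needed and not used here, since our Hermitian matrix is already \(32 \times 32\)):
# Hypothetical helper (not used in this notebook): pad A to the next power of 2
# with an identity block, and pad b with zeros.
def pad_to_power_of_two(A, b):
    n = A.shape[0]
    n_padded = 2 ** int(np.ceil(np.log2(n)))
    A_padded = np.identity(n_padded, dtype=A.dtype)
    A_padded[:n, :n] = A
    b_padded = np.zeros((n_padded, 1), dtype=b.dtype)
    b_padded[:n] = b
    return A_padded, b_padded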
4) Rescaled Matrix
If the eigenvalues of \(A\) are in the range \([w_{\min}, w_{\max}]\), you can apply a shift-and-rescale transformation before exponentiating the matrix, and then undo it when extracting the results:
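\[
\tilde{A} = \frac{1 - 2^{-m}}{w_{\max} - w_{\min}}\,\bigl(A - w_{\min} I\bigr),
\]
where \(m\) is the QPE resolution (the number of phase qubits); this matches the shift and rescaling factors computed in the code below.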
The eigenvalues of \(\tilde{A}\) lie in the interval \([0, 1)\) and are related to the eigenvalues \(\lambda\) of the original matrix via
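\[
\lambda = \frac{w_{\max} - w_{\min}}{1 - 2^{-m}}\,\tilde{\lambda} + w_{\min},
\]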
with \(\tilde{\lambda}\) being an eigenvalue of \(\tilde{A}\) resulting from the QPE algorithm. This relation between the eigenvalues is then used in the expression inserted into the eigenvalue inversion, via the AmplitudeLoading function (implemented below with the in-place amplitude assignment indicator *= ...).
def condition_number(A):
    w, _ = np.linalg.eig(A)
    return max(np.abs(w)) / min(np.abs(w))


QPE_RESOLUTION_SIZE = 6

assert QPE_RESOLUTION_SIZE > np.log2(
    condition_number(A)
), "condition number is too big, and QPE resolution can not hold all eigenvalues"
w, v = np.linalg.eig(A_hermitian)
w_max = np.max(w)
w_min = np.min(w)

mat_shift = -w_min
mat_rescaling = (1 - 1 / 2**QPE_RESOLUTION_SIZE) / (
    w_max - w_min
)  # assures eigenvalues in [0, 1 - 1/2^QPE_RESOLUTION_SIZE]
min_possible_w = (
    w_max - w_min
) / 2**QPE_RESOLUTION_SIZE  # the minimal eigenvalue that can be resolved by the QPE

A_rescaled = (
    A_hermitian + mat_shift * np.identity(A_hermitian.shape[0])
) * mat_rescaling
# verifying that the matrix is symmetric and has eigenvalues in [0, 1)
if not np.allclose(A_rescaled, A_rescaled.T, rtol=1e-6, atol=1e-6):
    raise Exception("The matrix is not symmetric")
w_rescaled, _ = np.linalg.eig(A_rescaled)
for lam in w_rescaled:
    if lam < -1e-6 or lam >= 1:
        raise Exception("Eigenvalues are not in [0, 1)")
plt.matshow(A_rescaled)
plt.title("Rescaled Matrix A")
plt.show()
Defining the HHL Algorithm for the Quantum Solution
Based on the Classiq HHL implementation in the User Guide, here and here.
Notice the rescaling inside simple_eig_inv, which undoes the matrix rescaling before the eigenvalue inversion.
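Concretely, with the parameters passed from main (\(\gamma = \) mat_rescaling\(^{-1}\), \(\delta = -\)mat_shift \(= w_{\min}\), \(C = \) min_possible_w), the amplitude loaded onto the indicator qubit is

\[
\frac{C}{\gamma\,\tilde{\lambda} + \delta} = \frac{(w_{\max} - w_{\min})/2^{m}}{\lambda},
\]

so the inversion acts on the original eigenvalue \(\lambda\) rather than on the rescaled \(\tilde{\lambda}\); \(C\) plays the role of the constant in the standard HHL eigenvalue inversion and is taken here as the smallest eigenvalue spacing the QPE can resolve.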
from classiq import *


@qfunc
def simple_eig_inv(
    gamma: CReal,
    delta: CReal,
    c_param: CReal,
    phase: QNum,
    indicator: Output[QBit],
):
    allocate(1, indicator)
    # Load amplitude C / lambda, where lambda = gamma * phase + delta is the
    # original (unrescaled) eigenvalue
    indicator *= c_param / ((gamma * phase) + delta)
# Flatten b into a list of amplitudes for the state preparation
b_new = b_new.flatten().tolist()

# Classical reference solution of the Hermitian system (with normalized b)
sol_classical_hermitian = np.linalg.solve(A_hermitian, b_new).flatten()
compared_sol = sol_classical_hermitian * min_possible_w
amp_compared = (compared_sol / np.linalg.norm(compared_sol)).tolist()
Besides HHL, we also perform a swap test, as in the user guide, comparing the HHL solution with a state preparation of the known classical solution.
import scipy
exponentiation_A_rescaled = scipy.linalg.expm(1j * 2 * np.pi * A_rescaled).tolist()
@qfunc
def main(indicator: Output[QBit], test: Output[QBit]) -> None:
    state = QArray("state")
    compared_state = QArray("compared_state")
    rescaled_eig = QNum("rescaled_eig")
    allocate_num(QPE_RESOLUTION_SIZE, False, QPE_RESOLUTION_SIZE, rescaled_eig)

    # Load |b> and apply HHL: QPE -> eigenvalue inversion -> inverse QPE
    prepare_amplitudes(b_new, 0, state)
    within_apply(
        lambda: qpe(
            unitary=lambda: unitary(exponentiation_A_rescaled, state),
            phase=rescaled_eig,
        ),
        lambda: simple_eig_inv(
            gamma=mat_rescaling ** (-1),
            delta=-mat_shift,
            c_param=min_possible_w,
            phase=rescaled_eig,
            indicator=indicator,
        ),
    )

    # Compare the HHL state with a state preparation of the classical solution
    prepare_amplitudes(amp_compared, 0.0, compared_state)
    swap_test(state, compared_state, test)
We create the circuit with the width constrained to the maximal width supported by the simulator (25 qubits). We also set the preferences for synthesis and execution.
from classiq.execution import (
    ClassiqBackendPreferences,
    ClassiqSimulatorBackendNames,
    ExecutionPreferences,
)

qmod_hhl_swap_test = create_model(main)

constraints = Constraints(
    max_width=25,
    # optimization_parameter=OptimizationParameter.DEPTH,
)
qmod_hhl_swap_test = set_constraints(qmod_hhl_swap_test, constraints)

preferences = Preferences(optimization_timeout_seconds=90, transpilation_option="none")
qmod_hhl_swap_test = set_preferences(qmod_hhl_swap_test, preferences)

NUM_SHOTS = 2048
backend_preferences = ClassiqBackendPreferences(
    backend_name=ClassiqSimulatorBackendNames.SIMULATOR
)
qmod_hhl_swap_test = set_execution_preferences(
    qmod_hhl_swap_test,
    ExecutionPreferences(num_shots=NUM_SHOTS, backend_preferences=backend_preferences),
)
Synthesize and show the quantum program:
qprog_hhl_swap = synthesize(qmod_hhl_swap_test)
show(qprog_hhl_swap)
Execution and Results analysis
As explained in the swap test user guide, we compare the measured overlap with the exact overlap. The measured overlap is extracted from the probability of measuring the test qubit in the state \(|0\rangle\), which for the swap test is:
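\[
P\left(q_{\text{test}} = |0\rangle\right) = \frac{1}{2} + \frac{1}{2}\left|\langle \psi_1 | \psi_2 \rangle\right|^{2},
\]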
from which we extract the overlap \(|\langle \psi_1 |\psi_2 \rangle | = \sqrt{2 P\left(q_{\text{test}}=|0\rangle\right)-1}\). The exact overlap is computed with the dot product of the two state vectors. Note that the precision of the probability estimate, and hence of the extracted overlap, is limited by the number of shots (NUM_SHOTS = 2048, set above).
Since we are in the HHL context, we keep only the indicator==1 results.
execution_job_id = execute(qprog_hhl_swap)
results = execution_job_id.result()
histogram = results[0].value
fidelity_basic = np.sqrt(
    histogram.counts_of_multiple_outputs(["indicator", "test"])[("1", "0")]
    * 2
    / (histogram.counts_of_output("indicator")["1"])
    - 1
)
print("Fidelity between basic HHL and classical solutions:", fidelity_basic)
Fidelity between basic HHL and classical solutions: 0.9666572451020045
We can also check in the IDE that, among the indicator=1 results, the bar of test=0 is much higher than that of test=1.
execution_job_id.open_in_ide()
State vector simulation
We can also run a statevector simulation of the HHL result and examine it directly, without a swap test. Extracting the full solution from the statevector is exponentially hard and is done here for educational purposes only. An extension of this work could extract partial information from the statevector, such as the last element, which is the most important one; one could also pad the system with copies of the last solution point, so that the last point is measured with high probability.
class TimeIndexAndGroup(QStruct):
    rabbits: QBit
    time_index: QNum
@qfunc
def main(
    indicator: Output[QBit],
    time_index_and_group: Output[TimeIndexAndGroup],
    rescaled_eig: Output[QNum],
) -> None:
    allocate_num(QPE_RESOLUTION_SIZE, False, QPE_RESOLUTION_SIZE, rescaled_eig)
    prepare_amplitudes(b_new, 0, time_index_and_group)
    within_apply(
        lambda: qpe(
            unitary=lambda: unitary(exponentiation_A_rescaled, time_index_and_group),
            phase=rescaled_eig,
        ),
        lambda: simple_eig_inv(
            gamma=mat_rescaling ** (-1),
            delta=-mat_shift,
            c_param=min_possible_w,
            phase=rescaled_eig,
            indicator=indicator,
        ),
    )
qmod_hhl_basic = create_model(main)

constraints = Constraints(
    max_width=18,
)
qmod_hhl_basic = set_constraints(qmod_hhl_basic, constraints)

preferences = Preferences(optimization_timeout_seconds=90, transpilation_option="none")
qmod_hhl_basic = set_preferences(qmod_hhl_basic, preferences)

backend_preferences = ClassiqBackendPreferences(
    backend_name=ClassiqSimulatorBackendNames.SIMULATOR_STATEVECTOR
)
qmod_hhl_basic = set_execution_preferences(
    qmod_hhl_basic,
    ExecutionPreferences(num_shots=1, backend_preferences=backend_preferences),
)
qprog_hhl_basic = synthesize(qmod_hhl_basic)
show(qprog_hhl_basic)
execution_job_id = execute(qprog_hhl_basic)
results = execution_job_id.result()
statevector = results[0].value.state_vector
execution_job_id.open_in_ide()
We now filter the results, extract the amplitudes of the statevector, and compare them with the classical solution. We keep only the indicator=1 and rescaled_eig=0 results.
filtered_hhl_statevector = dict()
for sample in results[0].value.parsed_state_vector:
    if sample.state["indicator"] == 1 and sample.state["rescaled_eig"] == 0:
        filtered_hhl_statevector[sample.state["time_index_and_group"]] = (
            sample.amplitude
        )

states = list(filtered_hhl_statevector.keys())
states.sort()
raw_qsol = np.array([filtered_hhl_statevector[s] for s in states])
Let's compare the raw quantum solution of the Hermitian system with the classical one. Up to QPE errors, the post-selected HHL amplitudes equal \(C\,\vec{x}\) with \(C =\) min_possible_w (and \(\vec{x}\) the solution for the normalized \(\vec{b}\)), so we first divide this factor out:
qsol_hermitian = raw_qsol / (min_possible_w)
plt.plot(states, qsol_hermitian)
plt.plot(states, sol_classical_hermitian)
plt.xlabel("states")
plt.ylabel("amplitude")
plt.legend(["hhl result", "original result"])
plt.show()
Now we can compare the two solutions in the time domain, after putting back the normalization factor:
plt.plot(t, norm_factor * qsol_hermitian[0:N], label="Rabbits HHL")
plt.plot(t, norm_factor * sol_classical_hermitian[0:N], label="Rabbits Classical")
plt.plot(t, norm_factor * qsol_hermitian[N : 2 * N], label="Foxes HHL")
plt.plot(t, norm_factor * sol_classical_hermitian[N : 2 * N], label="Foxes Classical")
plt.xlabel("time [years]")
plt.ylabel("Population [Thousands of individuals]")
plt.legend()
plt.show()
The fidelity of the statevector solution can be calculated as the overlap of the two (normalized) solutions:
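\[
F = \left|\frac{\vec{x}_{\text{classical}} \cdot \vec{x}_{\text{HHL}}}{\lVert \vec{x}_{\text{classical}} \rVert \, \lVert \vec{x}_{\text{HHL}} \rVert}\right|^{2},
\]
which is exactly what the code below computes.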
fidelity = (
    np.abs(
        np.dot(
            sol_classical_hermitian / np.linalg.norm(sol_classical_hermitian),
            qsol_hermitian / np.linalg.norm(qsol_hermitian),
        )
    )
    ** 2
)
print("Statevector Solution Fidelity:", fidelity)
Statevector Solution Fidelity: 0.999646266787624