Time-Marching based Quantum Solvers for Time-dependent Linear Differential Equations
Introduction
In this demonstration we follow the paper [1]. The notebook was written in collaboration with Prof. Di Fang, the first author of the paper.
Time marching is a method for solving differential equations in time: the solution vector is integrated through time in small discrete steps, where each timestep depends on one or more previous timesteps. In this paper, an evolution matrix is applied to the state sequentially, evolving it through time, as in time-dependent Hamiltonian simulation.
Let's first define the problem:
- Input: a system of homogeneous linear differential equations (ODEs):
$$\frac{d}{dt} |\psi(t)\rangle = A(t) |\psi(t)\rangle, \quad |\psi(0)\rangle = |\psi_0\rangle$$
Notice that \(A\) can vary in time. The matrix \(A\) is assumed to have bounded variation. The input model for \(A(t)\) is a series of time-dependent block encodings, described next.
- Output: a state proportional to the solution at time \(T\), \(|\psi(T)\rangle\)
Algorithm Description
The algorithm divides the timeline into long timesteps and short timesteps. In each long timestep, the evolution over the short timesteps is approximated, e.g., by a truncated Dyson series [2] or a Magnus series [3]. These approximations are applied as block encodings on the state, where in each long timestep \([t_j, t_{j+1}]\) the block-encoded matrix is the time-ordered evolution operator:
$$\mathcal{T} \exp\left(\int_{t_j}^{t_{j+1}} A(s)\, ds\right)$$
The problem is that when this block encoding carries a prefactor \(s\) (for example, because an LCU is used to block-encode the integration), the prefactor of the entire simulation is amplified by \(s\) at each iteration, so the probability of sampling the wanted block decreases exponentially with the number of long timesteps.
This is the main pain point that the algorithm in the paper resolves. In the case of Hamiltonian simulation, it is possible to wrap each timestep with oblivious amplitude amplification [4] (see oblivious amplitude amplification) and get rid of the prefactor; however, this is only possible for a unitary block encoding. The authors address the issue by instead using uniform singular value amplification [5], which is implemented within the QSVT framework.
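To get a feel for the numbers, here is a minimal classical sketch (an illustration we add here, not from the paper): with a block-encoding prefactor \(s\) per long timestep, the in-block amplitude shrinks like \(s^{-k}\) after \(k\) steps, so the post-selection probability decays like \(s^{-2k}\).
# Illustration (assumed prefactor s = 2, unit-norm solution): the probability of
# sampling the wanted block after k long timesteps decays like (1/s)^(2k)
s = 2
for k in range(1, 5):
    print(f"k={k}: success probability ~ {(1 / s) ** (2 * k):.6f}")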
Algorithm Implementation using Classiq
We choose a simple artificial example to demonstrate the algorithm. For simplicity, we choose an \(A\) that is easy to block-encode. The following matrix can be block-encoded using linear Pauli rotations:
$$A(t) = \sum_{i=0}^{N-1} \cos(i + t)\, |i\rangle\langle i|$$
The matrix is Hermitian and diagonal, which helps us in two aspects: 1. The 1st-order Magnus expansion is exact, since \(A(t)\) commutes with itself at different times. 2. The QSVT and the QET (quantum eigenvalue transformation) coincide, and we use them to exponentiate the block encoding.
We will simulate a 4x4 matrix using 4 timesteps, from \(t=0\) to \(t=2\):
NUM_LONG_SLICES = 4
START_TIME = 0
END_TIME = 2
DIM_SIZE = 2
Classical Simulation
This is how the evolution looks classically:
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp
# Parameters
N = 2**DIM_SIZE # Matrix size
t_start = 0
t_end = END_TIME
# Define the time-dependent diagonal matrix A(t)
def A(t):
    diagonal_elements = np.cos(np.arange(N) + t)  # cos(i + t) for i = 0, ..., N-1
return np.diag(diagonal_elements)
# Define the ODE dx/dt = A(t)x
def ode_system(t, x):
return A(t) @ x # Matrix-vector multiplication
# Initial condition
x0 = np.ones(N) # Example: start with all ones
# Solve the ODE
solution = solve_ivp(
ode_system, [t_start, t_end], x0, t_eval=np.linspace(t_start, t_end, 100)
)
# Extract solution
t_vals = solution.t
x_vals = solution.y
# Plot the solution
plt.figure(figsize=(8, 4))
for i in range(N):
plt.plot(t_vals, x_vals[i], label=f"x{i}(t)")
classical_final = x_vals[:, -1]
plt.xlabel("Time t")
plt.ylabel("x(t)")
plt.title(r"Solution of $\dot{x} = A(t)x$")
plt.legend()
plt.grid(True)
plt.show()
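Since \(A(t)\) is diagonal, \([A(t), A(s)] = 0\), so the first-order Magnus expansion \(e^{\int_0^T A(s)\,ds}\) is exact here. A quick classical sanity check of this claim (our addition, not part of the algorithm):
from scipy.linalg import expm
# For A(t) = diag(cos(i + t)), the integral is diag(sin(i + T) - sin(i + 0)),
# and its matrix exponential reproduces the exact evolution since A(t) commutes
# with itself at different times
integral_A = np.diag(np.sin(np.arange(N) + t_end) - np.sin(np.arange(N) + t_start))
print("1st-order Magnus:", expm(integral_A) @ x0)
print("solve_ivp:       ", classical_final)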
Time-dependent block encoding
The time-dependent block encoding of \(A(t)\) uses a time register alongside the system register, encoding the diagonal entries in the \(|0\rangle\) block:
$$\left(\langle 0|_{\text{block}} \otimes I\right) U_A \left(|0\rangle_{\text{block}} \otimes I\right) |t\rangle|i\rangle = \cos(i + t)\, |t\rangle|i\rangle$$
So we get, for a given timeslice \([a, b]\) discretized as \(t_k = a + (b-a)\frac{k}{2^{n_t}}\):
$$U_A\, |k\rangle|i\rangle|0\rangle = |k\rangle|i\rangle \left(\cos(i + t_k)\,|0\rangle + \sin(i + t_k)\,|1\rangle\right)$$
This can easily be accomplished by a sequence of two Pauli rotations.
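As a quick numpy check of this composition (an illustration, not the Classiq code): consecutive \(RY\) rotations add their angles, and the top-left entry of \(RY(2\theta)\) is \(\cos(\theta)\), so \(RY(2t)\,RY(2i)\) block-encodes \(\cos(i + t)\).
import numpy as np
def ry(theta):
    # standard RY(theta); its (0, 0) entry is cos(theta / 2)
    return np.array(
        [
            [np.cos(theta / 2), -np.sin(theta / 2)],
            [np.sin(theta / 2), np.cos(theta / 2)],
        ]
    )
i, t = 1.0, 0.5
print(np.isclose((ry(2 * t) @ ry(2 * i))[0, 0], np.cos(i + t)))  # True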
from classiq import *
SHORT_INTERVALS_TIME_SIZE = 2
class TimeDependentBE(QStruct):
index: QNum[DIM_SIZE]
time: QNum[SHORT_INTERVALS_TIME_SIZE]
block: QBit
@qfunc
def block_encode_time_dependent_A(a: CReal, b: CReal, qbe: TimeDependentBE):
    # a factor of 2 is applied to the slopes and offsets, since RY rotates by half the angle
linear_pauli_rotations(
[Pauli.Y], [(b - a) * 2 / (2**qbe.time.size)], [2 * a], qbe.time, qbe.block
)
linear_pauli_rotations([Pauli.Y], [2], [0], qbe.index, qbe.block)
Short-time Evolution
We will use a 1st order Magnus expansion, which is exact in this case.
It will be built in 2 steps:
1. Riemannian Summation of short timesteps
Wrapping the time variable with the Hadamard transform, we get exactly a block encoding of the Riemann sum of the input block encoding.
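A small numeric illustration of this averaging (our addition, with an assumed interval \([0, 0.5]\)): the uniform average of \(\cos(i + t_k)\) over the \(2^{n_t}\) grid points approximates \(\frac{1}{b-a}\int_a^b \cos(i+s)\,ds\).
# Riemann-sum illustration: average over the time grid vs. the normalized integral
a_t, b_t = 0.0, 0.5
n_pts = 2**SHORT_INTERVALS_TIME_SIZE
t_grid = a_t + (b_t - a_t) * np.arange(n_pts) / n_pts
i = 1
riemann = np.mean(np.cos(i + t_grid))
integral = (np.sin(i + b_t) - np.sin(i + a_t)) / (b_t - a_t)
print(riemann, integral)  # close, up to O((b - a) / 2^n_t) discretization error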
from classiq.qmod.symbolic import logical_and
@qfunc
def short_time_summation(a: CReal, b: CReal, qbe: TimeDependentBE):
"""
Riemann summation
"""
within_apply(
lambda: hadamard_transform(qbe.time),
lambda: block_encode_time_dependent_A(a, b, qbe),
)
# We also define predicates for the usage later on in qsvt
def time_dependent_predicate(qbe: TimeDependentBE):
return logical_and(qbe.block == 0, qbe.time == 0)
@qfunc
def time_dependent_projector(qbe: TimeDependentBE, is_in_block: QBit):
is_in_block ^= time_dependent_predicate(qbe)
2. Block encoding of the summation exponential
Find polynomials for \(\cosh(ax)\) and \(\sinh(ax)\), in order to combine them into \(e^{ax}\).
For pedagogical reasons, we take the naive approach and create a polynomial approximation for each of the even and odd parts: \(P_{\cosh} \approx \frac{\cosh(ax)}{e^a}\) and \(P_{\sinh} \approx \frac{\sinh(ax)}{e^a}\).
Combining them with an LCU then results in:
$$P(x) \approx \frac{e^{ax}}{2e^a},$$
a polynomial that is bounded by \(\frac{1}{2}\).
We could instead choose \(P_{\cosh} \approx \frac{\cosh(ax)}{\cosh(a)}\) and \(P_{\sinh} \approx \frac{\sinh(ax)}{\sinh(a)}\); an LCU with coefficients \([\frac{\cosh(a)}{\cosh(a)+\sinh(a)}, \frac{\sinh(a)}{\cosh(a)+\sinh(a)}]\) would then give:
$$P(x) \approx \frac{e^{ax}}{\cosh(a) + \sinh(a)} = \frac{e^{ax}}{e^a},$$
which is the best normalization achievable and requires no amplification. We will nevertheless go with the first approach, in order to demonstrate singular value amplification: getting rid of the redundant factor of 2 saves a multiplicative factor of \(O(2^T)\) in the success probability.
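A one-line numeric check of the identity behind this LCU (our addition, with an assumed \(a = 2\)): \(\frac{1}{2}\left(\frac{\cosh(ax)}{e^a} + \frac{\sinh(ax)}{e^a}\right) = \frac{e^{ax}}{2e^a}\).
import numpy as np
# numeric check of the LCU identity at an arbitrary point x in [-1, 1]
a_coef, x_val = 2.0, 0.37
lhs = 0.5 * (np.cosh(a_coef * x_val) + np.sinh(a_coef * x_val)) / np.exp(a_coef)
rhs = np.exp(a_coef * x_val) / (2 * np.exp(a_coef))
print(np.isclose(lhs, rhs))  # True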
import matplotlib.pyplot as plt
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev
from numpy.polynomial.polynomial import Polynomial
def get_cheb_approx(func, poly_degree):
cheb = Chebyshev.interpolate(func, poly_degree, domain=[-1, 1])
return cheb.convert(kind=Polynomial)
def get_sinh_poly(a: float, poly_degree: int):
    # normalize by max |e^(a x)| on [-1, 1] (attained at x = 1) so the polynomial is bounded
    normalization = np.abs(np.exp(a * 1))
    return get_cheb_approx(lambda x: np.sinh(a * x) / normalization, poly_degree)
def get_cosh_poly(a: float, poly_degree: int):
    normalization = np.abs(np.exp(a * 1))
    return get_cheb_approx(lambda x: np.cosh(a * x) / normalization, poly_degree)
A = 2
DEGREE_EXP = 7
poly_sinh = get_sinh_poly(A, DEGREE_EXP)
poly_cosh = get_cosh_poly(A, DEGREE_EXP - 1)
x = np.linspace(-1, 1, 1000)
plt.plot(
x,
(0.5 * (poly_sinh + poly_cosh)(x)),
label=r"$\approx \frac{\sinh(ax)+\cosh(ax)}{2e^a}$",
)
plt.plot(x, np.exp(A * x) / (2 * np.exp(A)), label=r"$\frac {e^{ax}}{2e^a}$")
plt.legend()
Then we transform the polynomials into QSVT phases using the pyqsp package:
import pyqsp
from pyqsp.angle_sequence import QuantumSignalProcessingPhases
def get_qsvt_phases(poly, plot=False):
    np.random.seed(1)  # fix the numpy seed, as pyqsp does not expose one and does not always converge
ang_seq = QuantumSignalProcessingPhases(
poly,
signal_operator="Wx",
method="laurent",
measurement="x",
tolerance=0.000001, # relaxing the tolerance to get convergence
)
if plot:
pyqsp.response.PlotQSPResponse(
ang_seq, target=poly, signal_operator="Wx", measurement="x"
)
# change W(x) to R(x), as the phases are in the W(x) conventions
phases = np.array(ang_seq)
phases[1:-1] = phases[1:-1] - np.pi / 2
phases[0] = phases[0] - np.pi / 4
phases[-1] = phases[-1] + (2 * (len(phases) - 1) - 1) * np.pi / 4
# verify conventions. minus is due to exp(-i*phi*z) in qsvt in comparison to qsp
phases = -2 * phases
return phases.tolist()
phases_sinh = get_qsvt_phases(poly_sinh, plot=True)
phases_cosh = get_qsvt_phases(poly_cosh, plot=True)
Lastly, we use the phases within the qsvt_lcu function, which is an optimized function for implementing a linear combination of two QSVT sequences:
class MagnusBE(QStruct):
time_dependent: TimeDependentBE
qsvt_exp_aux: QBit
qsvt_exp_lcu: QBit
@qfunc
def short_time_magnus(a: CReal, b: CReal, qbe_st: MagnusBE):
    # compute the coefficient of the exponent
timeslice_duration = (END_TIME - START_TIME) / NUM_LONG_SLICES
    sinh_polynomial = get_sinh_poly(timeslice_duration, DEGREE_EXP)
    cosh_polynomial = get_cosh_poly(timeslice_duration, DEGREE_EXP - 1)
# apply LCU to combine sinh((b-a)x)/(2exp(b-a)) and cosh((b-a)x)/(2exp(b-a)) to exp((b-a)x)/(2exp(b-a))
within_apply(
lambda: H(qbe_st.qsvt_exp_lcu),
lambda: qsvt_lcu(
            get_qsvt_phases(cosh_polynomial),
            get_qsvt_phases(sinh_polynomial),
time_dependent_projector,
time_dependent_projector,
lambda x: short_time_summation(a, b, x),
qbe_st.time_dependent,
qbe_st.qsvt_exp_aux,
qbe_st.qsvt_exp_lcu,
),
)
def magnus_predicate(qbe: MagnusBE):
return logical_and(
time_dependent_predicate(qbe.time_dependent),
logical_and(qbe.qsvt_exp_aux == 0, qbe.qsvt_exp_lcu == 0),
)
@qfunc
def magnus_projector(qbe: MagnusBE, is_in_block: QBit):
is_in_block ^= magnus_predicate(qbe)
Amplification of a single long timestep
As the climax of the algorithm, we wrap the Magnus evolution with an amplification step. The prefactor of the exponential block encoding is 2, so we want to approximate the function \(f(x)=2x\) on the interval \([0, \frac{1}{2}]\).
Singular value amplification (\(\gamma x\))
We follow the paper's approach and perform a minmax optimization, completing \(f\) to an odd function. To approximate an odd target function using an odd polynomial of degree \(d\), the target function can be expressed as:
$$F(x) = \sum_{k=0}^{(d-1)/2} c_k\, T_{2k+1}(x),$$
where \(T_{2k+1}(x)\) are Chebyshev polynomials of odd degree and \(c_k\) are unknown coefficients.
To formulate this as a discrete optimization problem, we discretize \([-1, 1]\) using \(M\) grid points (Chebyshev nodes):
$$x_j = \cos\left(\frac{\pi j}{M-1}\right), \quad j = 0, \dots, M-1.$$
The coefficient matrix is defined as:
$$T_{jk} = T_{2k+1}(x_j) = \cos\big((2k+1)\arccos(x_j)\big).$$
The coefficients can then be found by solving the following convex optimization problem:
$$\min_{c}\; \max_{|x_j| \le 1/\gamma} \left| F(x_j) - (1-\epsilon)\,\gamma x_j \right|,$$
subject to:
$$|F(x_j)| \le c_{\max} \quad \text{for all } x_j \in [-1, 1].$$
Here, \(c_{\max}\) is a relaxation parameter, chosen as:
$$c_{\max} = \max(0.9999,\; 1 - 0.1\,\epsilon).$$
import cvxpy as cp
import matplotlib.pyplot as plt
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev
from numpy.polynomial.polynomial import Polynomial
def optimize_amplification_polynomial(gamma, degree, epsilon=0.0001, M=1000):
# Discretize [-1, 1] using M grid points (interpolants)
xj_full = np.cos(np.pi * np.arange(M) / (M - 1)) # Chebyshev nodes on [-1, 1]
# Select grid points for the objective in [0, 1/gamma]
xj_obj = xj_full[np.abs(xj_full) <= 1 / gamma]
# Define the Chebyshev polynomials of odd degrees
k_max = (degree - 1) // 2
T_matrix_full = np.array(
[
[np.cos((2 * k + 1) * np.arccos(x)) for k in range(k_max + 1)]
for x in xj_full
]
)
T_matrix_obj = np.array(
[[np.cos((2 * k + 1) * np.arccos(x)) for k in range(k_max + 1)] for x in xj_obj]
)
# Define optimization variables
c = cp.Variable(k_max + 1) # Coefficients for Chebyshev polynomials
F_values_full = T_matrix_full @ c # Values for constraints
F_values_obj = T_matrix_obj @ c # Values for the objective function
# Relaxed constraint
c_max = max(0.9999, 1 - 0.1 * epsilon)
# Define the optimization problem
objective = cp.Minimize(
cp.max(cp.abs(F_values_obj - (1 - epsilon) * gamma * xj_obj))
)
constraints = [cp.abs(F_values_full) <= c_max]
prob = cp.Problem(objective, constraints)
# Solve the optimization problem
prob.solve()
# Return coefficients, optimal value, and grid points
return c.value, prob.value
# Convert Chebyshev coefficients to monomial basis
def chebyshev_to_monomial(odd_coefficients):
# Create a full Chebyshev coefficient array with zeros for even terms
full_coefficients = np.zeros(
2 * len(odd_coefficients)
) # Double the size for even terms
full_coefficients[1::2] = (
odd_coefficients # Fill only odd indices with given coefficients
)
# Create the Chebyshev polynomial
cheb_poly = Chebyshev(full_coefficients, domain=[-1, 1])
# Convert to monomial basis
monomial_poly = cheb_poly.convert(kind=Polynomial)
return monomial_poly
def plot_amplification_polynomial(gamma, epsilon, poly):
# Generate data for plotting
x_vals = np.linspace(-1, 1, 500)
xj_obj = x_vals[np.abs(x_vals) <= 1 / gamma]
target_vals = (1 - epsilon) * gamma * xj_obj
approx_vals = poly(x_vals)
# Plot the results
plt.figure(figsize=(8, 6))
plt.plot(xj_obj, target_vals, label="Target Function", color="blue", linewidth=2)
plt.plot(
x_vals,
approx_vals,
label="Approximated Polynomial",
color="red",
linestyle="--",
)
plt.xlabel("x")
plt.ylabel("F(x)")
plt.title("Target Function vs. Approximated Polynomial")
plt.legend()
plt.grid(True)
plt.show()
def get_amplification_polynomial(gamma, degree, epsilon=0.0001, plot=False):
# Perform approximation
coefficients, optimal_value = optimize_amplification_polynomial(
gamma, degree, epsilon=epsilon
)
# Convert the approximated polynomial to monomial basis
poly_amp = chebyshev_to_monomial(coefficients)
if plot:
print("Polynomial in monomial basis:", poly_amp)
plot_amplification_polynomial(gamma, epsilon, poly_amp)
return poly_amp
DEGREE_AMP = 7
GAMMA = 2
poly_amp = get_amplification_polynomial(GAMMA, DEGREE_AMP, plot=True)
phases_amp = get_qsvt_phases(poly_amp, plot=True)
Polynomial in monomial basis: 0.0 + 1.7541331417527408·x¹ + 0.0·x² + 4.248403900299232·x³ + 0.0·x⁴ -
16.095040821329768·x⁵ + 0.0·x⁶ + 9.09251379092624·x⁷
Then apply the phases on the magnus block encoding:
class LongSliceBE(QStruct):
magnus: MagnusBE
qsvt_amplification_aux: QBit
@qfunc
def long_slice_evolution(a: CReal, b: CReal, qbe: LongSliceBE):
    amplification_polynomial = get_amplification_polynomial(gamma=GAMMA, degree=DEGREE_AMP)
qsvt(
get_qsvt_phases(amplification_polynomial),
magnus_projector,
magnus_projector,
lambda x: short_time_magnus(a, b, x),
qbe.magnus,
qbe.qsvt_amplification_aux,
)
def long_slice_predicate(qbe: LongSliceBE):
return logical_and(magnus_predicate(qbe.magnus), qbe.qsvt_amplification_aux == 0)
Long-time Evolution
The last step is to sequentially apply the block encodings of the timeslices. To obtain a quantum variable that is \(|0\rangle\) when all the block encodings have been applied to the state, a counter is used: it starts at the number of slices and is decremented after each in-block step. A further amplitude amplification step would be possible using the counter; however, we won't do it here.
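Here is a hypothetical classical analogue of the counter bookkeeping, for intuition only:
# classical sketch (hypothetical): counter == 0 iff all steps stayed in-block
counter = NUM_LONG_SLICES
for in_block in [True, True, True, True]:  # one flag per long timestep
    counter -= int(in_block)
print(counter == 0)  # True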
import numpy as np
from classiq.execution import *
class FullBE(QStruct):
time_slice: LongSliceBE
    counter: QNum[int(np.ceil(np.log2(NUM_LONG_SLICES + 1)))]
@qfunc
def long_time_integrator_step(a: CReal, b: CReal, qbe_full: FullBE):
long_slice_evolution(a, b, qbe_full.time_slice)
# if in block, decrement the counter
control(
long_slice_predicate(qbe_full.time_slice),
lambda: inplace_add(-1, qbe_full.counter),
)
@qfunc
def long_time_integrator(
T: CReal, num_slices: CInt, qbe_full: FullBE # start from time 0
):
    qbe_full.counter ^= num_slices  # initialize the counter to num_slices
repeat(
num_slices,
lambda i: long_time_integrator_step(
i * T / num_slices, (i + 1) * T / num_slices, qbe_full
),
)
@qfunc
def main(qbe: Output[FullBE]):
allocate(qbe.size, qbe)
# initial condition: uniform distribution
hadamard_transform(qbe.time_slice.magnus.time_dependent.index)
long_time_integrator(END_TIME, NUM_LONG_SLICES, qbe)
preferences = Preferences(optimization_level=0, timeout_seconds=500)
execution_preferences = ExecutionPreferences(num_shots=1000000)
qmod = create_model(
main,
    preferences=preferences,
execution_preferences=execution_preferences,
out_file="time_marching",
)
qprog = synthesize(qmod)
show(qprog)
res = execute(qprog).get_sample_result()
Opening: https://platform.classiq.io/circuit/b49040bd-929d-4cd4-81e0-c993d7eb3e21?version=0.63.0
Compare to the naive case: without uniform amplification
Here we don't use the amplification step. We should see that the measured amplitudes will be much smaller than in the amplified case.
@qfunc
def long_time_integrator_step_naive(a: CReal, b: CReal, qbe_full: FullBE):
short_time_magnus(a, b, qbe_full.time_slice.magnus)
# if in block, decrement the counter
control(
magnus_predicate(qbe_full.time_slice.magnus),
lambda: inplace_add(-1, qbe_full.counter),
)
@qfunc
def long_time_integrator_naive(
T: CReal, num_slices: CInt, qbe_full: FullBE # start from time 0
):
    qbe_full.counter ^= num_slices  # initialize the counter to num_slices
repeat(
num_slices,
lambda i: long_time_integrator_step_naive(
i * T / num_slices, (i + 1) * T / num_slices, qbe_full
),
)
@qfunc
def main(qbe: Output[FullBE]):
allocate(qbe.size, qbe)
# initial condition: uniform distribution
hadamard_transform(qbe.time_slice.magnus.time_dependent.index)
long_time_integrator_naive(END_TIME, NUM_LONG_SLICES, qbe)
qmod_naive = create_model(
    main, preferences=preferences, execution_preferences=execution_preferences
)
qprog_naive = synthesize(qmod_naive)
show(qprog_naive)
res_naive = execute(qprog_naive).get_sample_result()
Opening: https://platform.classiq.io/circuit/8a6c1af8-0cd1-4e54-9807-2cdf0f08a56a?version=0.63.0
Post-processing
def post_process_res_statevector(result):
filtered_samples = [
s
for s in result.parsed_state_vector
if s.state["qbe"]["counter"] == 0 and np.abs(s.amplitude) > 1e-6
]
global_phase = np.exp(1j * np.angle(filtered_samples[0].amplitude))
amplitudes = np.zeros(2**DIM_SIZE)
for sample in filtered_samples:
index = sample.state["qbe"]["time_slice"]["magnus"]["time_dependent"]["index"]
amplitudes[index] = np.real(sample.amplitude / global_phase)
return amplitudes
def post_process_res_samples(result):
filtered_samples = [s for s in result.parsed_counts if s.state["qbe"].counter == 0]
probs = np.zeros(2**DIM_SIZE)
for sample in filtered_samples:
index = sample.state["qbe"].time_slice.magnus.time_dependent.index
probs[index] += sample.shots / result.num_shots
    # probabilities lose the sign of the amplitude; taking the square root is valid
    # here since the solution's entries are all positive
    return np.sqrt(probs)
amplitudes_amplified = post_process_res_samples(res)
print("amplified amplitudes:", amplitudes_amplified)
amplitudes_naive = post_process_res_samples(res_naive)
print("naive amplitudes:", amplitudes_naive)
print("classical:", classical_final)
amplified amplitudes: [0.18801862 0.03464102 0.00994987 0.01857418]
naive amplitudes: [0.01131371 0.00244949 0. 0.00141421]
classical: [2.48301135 0.49650918 0.18886598 0.33261086]
And indeed we can see that the naive amplitudes are an order of magnitude smaller than in the amplified case: the ratio is roughly \(2^4 = 16\), exactly as expected for 4 timesteps with a redundant prefactor of 2 each.
Compare classical and quantum results at the final step:
Verify that the classical and quantum solutions agree (up to normalization):
expected = classical_final / np.linalg.norm(classical_final)
sampled = amplitudes_amplified / np.linalg.norm(amplitudes_amplified)
assert np.linalg.norm(sampled - expected) < 0.1
References
[1]: Fang, Di, Lin, Lin, and Tong, Yu. "Time-marching based quantum solvers for time-dependent linear differential equations." Quantum 7, 955 (2023).
[2]: Kieferová, M., Scherer, A., and Berry, D. W. "Simulating the dynamics of time-dependent Hamiltonians with a truncated Dyson series." Phys. Rev. A 99(4), Apr. 2019.
[3]: Magnus Expansion (Wikipedia)
[4]: Berry, Dominic W., et al. "Exponential improvement in precision for simulating sparse Hamiltonians." Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC), 2014.
[5]: Gilyén, A., Su, Y., Low, G. H., and Wiebe, N. "Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics." Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), pages 193–204, 2019.