Simulating physical and chemical systems was among the original motivations for quantum computing, as first envisioned by Richard Feynman in 1982, and remains one of its most impactful applications. Beyond its foundational goal, this quantum routine now serves as a key subroutine in many higher-level quantum algorithms, such as Quantum Linear Systems Solvers (HHL), and applications such as Quantum Gibbs State Sampling. Time-independent Hamiltonian simulation refers to the task of approximately implementing the unitary evolution operator $e^{-iHt}$ for a Hermitian matrix $H$. When access to the Hamiltonian is provided via block-encoding, its time evolution can be realized by applying an appropriate polynomial transformation of the encoded operator, within a desired precision $\epsilon$. This polynomial transformation can be achieved via the Quantum Singular Value Transformation (QSVT) [1], Generalized QSP (GQSP) [2], and a direct block-encoding of Chebyshev polynomials [3] (referred to as the Qubitization technique hereafter). These methods represent the most powerful family of known algorithms for simulating the dynamics of quantum systems whose Hamiltonians can be efficiently block-encoded, having optimal scaling in the time $t$ and error $\epsilon$.

Qubitization provides a direct approach that avoids classical preprocessing but generally requires more qubits, may involve amplitude amplification, and includes controlled operations over the block-encoding unitary. QSVT, in contrast, omits such controlled operations, uses two auxiliary qubits, and requires a classical preprocessing stage to determine the signal-processing angles. The GQSP method, like Qubitization, employs controlled block-encoding calls and, like QSVT, relies on classical angle computation, but it requires only one auxiliary qubit and eliminates the need for amplification.
  • Input:
    • A Hermitian operator (Hamiltonian) $H$, given through an oracle that implements a block-encoding of $H/\alpha$, where $\alpha \ge \|H\|$ and $\|\cdot\|$ is the spectral norm.
    • The evolution time $t$.
    • The target simulation error $\epsilon$.
  • Output: A unitary $U$ that approximates the time evolution $e^{-iHt}$, such that $\|U - e^{-iHt}\| < \epsilon$.
Complexity: All three methods require $O\!\left(\alpha t + \frac{\log \epsilon^{-1}}{\log\left(e + \log(\epsilon^{-1})/\alpha t\right)}\right)$ calls to the block-encoding unitary $U_H$. The methods differ in their classical preprocessing routines, depth, and number of auxiliary qubits.
Keywords: Hamiltonian Simulation, Block Encoding, Quantum Singular Value Transformation (QSVT), Qubitization, Generalized Quantum Signal Processing, Oracle/Query complexity.
A block-encoded Hamiltonian refers to its embedding within a larger unitary matrix.

Definition: A $(s, m, \epsilon)$-encoding of a $2^n \times 2^n$ matrix $H$ refers to completing it into a $2^{n+m} \times 2^{n+m}$ unitary matrix $U_{(s,m,\epsilon)-H}$:
$$U_{(s,m,\epsilon)-H} = \begin{pmatrix} H/s & * \\ * & * \end{pmatrix},$$
with some functional error $\left\|\left(U_{(s,m,\epsilon)-H}\right)_{0:2^n-1,\,0:2^n-1} - H/s\right\| \le \epsilon$.

The idea is that typically we would like to apply operations on non-unitary matrices. Since quantum circuits can only implement unitary operations, block-encoding embeds a generally non-unitary matrix into a larger unitary one. The scaling factor $s$ ensures that the overall operator remains unitary. For the most basic use of block-encoding, see the first item in the technical notes at the end of this notebook.

In all three methods presented in this notebook, we assume we have an exact $(s, m, 0)$-encoding of some Hamiltonian as an input, and the output of each method implements an approximate encoding of its Hamiltonian evolution:
$$U_{(s,m,0)-H} \rightarrow U_{(\tilde{s},\tilde{m},\epsilon)-\exp(-iHt)} = \begin{pmatrix} \exp(-iHt)/\tilde{s} & * \\ * & * \end{pmatrix}.$$
This is achieved by implementing a block-encoding of a polynomial approximation of a function of the Hamiltonian, $P(H)\approx \frac{1}{\tilde{s}}e^{-iHt}$, with $\tilde{s}$ being some scaling factor. Note that to get the approximated unitary $e^{-iHt}$, we have to apply amplitude amplification to bring $\tilde{s}\rightarrow 1$ (see, for example, the Oblivious Amplitude Amplification notebook) or apply some post-selection. The amplification step is beyond the scope of the current example, and we simply employ projected statevector simulation.
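As a quick illustration of the definition, the sketch below builds a $(s,1,0)$-encoding of a toy $2\times 2$ Hermitian matrix with a single block qubit, using the standard construction $U = \big[\begin{smallmatrix} A & \sqrt{I-A^2} \\ \sqrt{I-A^2} & -A \end{smallmatrix}\big]$ for $A = H/s$ (the matrix and scale here are hypothetical examples, not the notebook's Hamiltonian):

```python
import numpy as np
from scipy.linalg import sqrtm

# Toy Hermitian matrix to encode (a hypothetical example)
H = np.array([[0.5, 0.2], [0.2, -0.3]])
s = 1.1 * np.linalg.norm(H, 2)          # any s >= ||H|| works
A = H / s

# A standard (s, 1, 0)-encoding with a single block qubit:
# U = [[A, sqrt(I - A^2)], [sqrt(I - A^2), -A]] is unitary, since the
# Hermitian A commutes with sqrt(I - A^2).
B = sqrtm(np.eye(2) - A @ A)
U = np.block([[A, B], [B, -A]])

assert np.allclose(U @ U.conj().T, np.eye(4))   # U is unitary
assert np.allclose(U[:2, :2], H / s)            # top-left block is H/s
```

This one-block-qubit construction is also convenient for numerical checks later in the notebook, since the resulting $U$ is Hermitian.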
This notebook demonstrates how to implement Hamiltonian simulation with all three methods: GQSP, Qubitization, and QSVT. Each method is explained and defined independently. We start with some preliminary steps that are common to all three methods. In addition, the last section presents a summary that compares the properties of the three methods.

Preliminaries

import time

import matplotlib.pyplot as plt
import numpy as np
import scipy

from classiq import *

Setting a Specific Hamiltonian to Evolve

We start by setting some specific hyperparameters for our problem. These can be easily modified to treat different use cases. We set a specific block-encoded Hamiltonian to evolve, taking a simple example of a Hamiltonian given as a sum of Pauli strings, and use the lcu_pauli function to block-encode it. The idea of the Linear Combination of Unitaries (LCU) technique, which also appears in the QSVT and Qubitization methods, is the following:

Definition: LCU refers to a $(\bar{\alpha}, m, 0)$-block-encoding of a matrix given as a sum of unitaries:
$$U_{(\bar{\alpha},m,0)-A} = \begin{pmatrix} A/\bar{\alpha} & * \\ * & * \end{pmatrix}, \text{ for } A = \sum^{L-1}_{i=0} \alpha_i U_i, \quad \alpha_i \ge 0,$$
with $\bar{\alpha} \equiv \sum^{L-1}_{i=0}\alpha_i$ and $m = \lceil\log_2(L)\rceil$. The LCU is implemented with the so-called "prepare" and "select" operations. The former prepares a quantum state that corresponds to the probabilities $\alpha_i/\bar{\alpha}$, and the latter acts as a quantum switch operation. See the second item in the Technical Details at the end of this notebook, and the LCU tutorial, for more details.

We now set the input parameters for the specific problem, which include:
  1. The evolution time $t$ and the error $\epsilon$.
  2. The Hamiltonian $H$ and its block-encoding quantum function.
EVOLUTION_TIME = 22
EPS = 1e-7

HAMILTONIAN = (
    0.4 * Pauli.I(0)
    + 0.1 * Pauli.Z(1)
    + 0.05 * Pauli.X(0) * Pauli.X(1)
    + 0.2 * Pauli.Z(0) * Pauli.Z(1)
)
print(f"The Hamiltonian to evolve: {HAMILTONIAN}")
Output:

The Hamiltonian to evolve: 0.4 + 0.05*Pauli.X(0)*Pauli.X(1) + 0.2*Pauli.Z(0)*Pauli.Z(1) + 0.1*Pauli.Z(1)
  

The Hamiltonian is given as $H = \sum^{L-1}_{i=0} \alpha_i U_i$, where the $U_i$ are Pauli strings. The LCU of the Hamiltonian corresponds to the unitary:
$$U_H = \begin{pmatrix} \frac{1}{s}\sum^{L-1}_{i=0} \alpha_i U_i & * \\ * & * \end{pmatrix}, \text{ with } s = \sum_i \alpha_i.$$
Let us derive some additional parameters for the implementation, such as the block-encoding size and the scaling factor $s$.
data_size = HAMILTONIAN.num_qubits
block_size = (
    (len(HAMILTONIAN.terms) - 1).bit_length() if len(HAMILTONIAN.terms) != 1 else 1
)
be_scaling = np.sum(np.abs([term.coefficient for term in HAMILTONIAN.terms]))
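The prepare/select picture can be sanity-checked classically. The sketch below rebuilds the same four Pauli terms in plain NumPy (the kron ordering and the placement of the block register as the leading factor are arbitrary choices made for this check) and verifies that the top-left block of $(P^\dagger\otimes I)\,\mathrm{SELECT}\,(P\otimes I)$ is $H/s$:

```python
import numpy as np

# Pauli matrices; the kron ordering (qubit 1 as the leading factor) is an
# arbitrary convention chosen for this classical check.
I2, X, Z = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, -1.0])
paulis = [np.kron(I2, I2), np.kron(Z, I2), np.kron(X, X), np.kron(Z, Z)]
alphas = np.array([0.4, 0.1, 0.05, 0.2])
s = alphas.sum()                               # the LCU scaling factor, 0.75
H = sum(a * P for a, P in zip(alphas, paulis))

# PREPARE: any unitary whose first column is sqrt(alpha_i / s); we complete it
# via QR (an overall sign on P cancels in P^dagger . SELECT . P).
c = np.sqrt(alphas / s)
P, _ = np.linalg.qr(np.column_stack([c, np.eye(4)[:, 1:]]))

# SELECT: apply U_i controlled on the block register (block register leading)
S = np.zeros((16, 16))
for i, Ui in enumerate(paulis):
    S[4 * i:4 * (i + 1), 4 * i:4 * (i + 1)] = Ui

W = np.kron(P.conj().T, np.eye(4)) @ S @ np.kron(P, np.eye(4))
assert np.allclose(W[:4, :4] * s, H)           # top-left block encodes H/s
```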
Next, we define the block-encoding quantum function, and a Quantum Struct for its variable:
class BlockEncodedState(QStruct):
    data: QNum[data_size]
    block: QNum[block_size]


@qfunc
def be_hamiltonian(state: BlockEncodedState):
    lcu_pauli(HAMILTONIAN * (1 / be_scaling), state.data, state.block)
Finally, we set the initial state to evolve and calculate classically the expected evolved state for verifying the quantum methods.
state_to_evolve = np.random.rand(2**data_size)
state_to_evolve = (state_to_evolve / np.linalg.norm(state_to_evolve)).tolist()
matrix = pauli_operator_to_matrix(HAMILTONIAN)
expected_state = scipy.linalg.expm(-1j * matrix * EVOLUTION_TIME) @ state_to_evolve

Setting Up a State Vector Simulator for Execution

Working with block-encoding typically requires post-selection on the block variables being in the state $|0\rangle$. The success probability of this process can be amplified via a Quantum Amplitude Amplification routine. In this notebook, instead, we simply work with a statevector simulator. We define a function that gets execution results and returns a projected statevector.

## fix the execution preferences for this notebook
execution_preferences = ExecutionPreferences(
    num_shots=1,
    backend_preferences=ClassiqBackendPreferences(
        backend_name=ClassiqSimulatorBackendNames.SIMULATOR_STATEVECTOR
    ),
)

def get_projected_state_vector(res) -> np.ndarray:
    """
    This function returns a reduced statevector from execution results.

Expects a 'data' variable, and a 'block' variable to be filtered out when not in the |0> state.
    """
    state_size = 2 ** len(res.output_qubits_map["data"])
    proj_statevector = np.zeros(state_size).astype(complex)

    df = res.dataframe
    # Filter only the successful states.
    filtered_st = df[(df.block == 0) & (np.abs(df.amplitude) > 1e-12)]
    proj_statevector[filtered_st.data] = filtered_st.amplitude
    return proj_statevector

Defining a Utility Function for State Overlap

Below we define a classical post-processing function used to verify the quantum results by comparing them to the expected classical result.
def compare_quantum_classical_states(
    expected_state: np.ndarray, resulted_state: np.ndarray, post_selection_factor: float
):
    """
    Aligns global phase, renormalizes, and computes overlap between the expected classical state
    and the one resulting from the quantum program.

Since we work with a projected statevector, we need to insert the post-selection by hand.

Parameters
    ----------
    expected_state : array_like of complex, the classical (reference) statevector.
    resulted_state : array_like of complex, the simulated statevector, projected onto the block=0 space.
    post_selection_factor : float, the post-selection success probability of the block=0, to be applied uniformly to `resulted_state`.

Returns
    -------
    renormalized_state : the `resulted_state` after global-phase alignment and multiplication by `post_selection_factor`.
    overlap : The absolute value of the normalized inner product between `renormalized_state` and `expected_state`.
    """

    relative_phase = np.angle(expected_state[0] / resulted_state[0])
    resulted_state = resulted_state * np.exp(
        1j * relative_phase
    )  # rotate according to a global phase

    renormalized_state = post_selection_factor * resulted_state
    overlap = (
        np.vdot(renormalized_state, expected_state)
        / np.linalg.norm(renormalized_state)
        / np.linalg.norm(expected_state)
    )

    return renormalized_state, abs(overlap)

The Jacobi–Anger Expansion

The core of all three methods is the polynomial approximation of the evolution; this transformation relies on the Jacobi-Anger expansion [4]. The most general form of the Jacobi-Anger expansion gives:
$$e^{it\cos(x)} = \sum_{k=-\infty}^{\infty} i^k J_k(t)\, e^{ikx}, \quad (1)$$
$$e^{it\sin(x)} = \sum_{k=-\infty}^{\infty} J_k(t)\, e^{ikx}, \quad (2)$$
from which we can also get the expansions of the sine and cosine functions in terms of Chebyshev polynomials:
$$\cos(xt) = J_0(t) + 2\sum_{k=1}^{d/2} (-1)^k J_{2k}(t)\, T_{2k}(x), \quad (3)$$
$$\sin(xt) = 2\sum_{k=0}^{d/2} (-1)^k J_{2k+1}(t)\, T_{2k+1}(x), \quad (4)$$
where $J_i(x)$ and $T_i(x)$ are the Bessel function and Chebyshev polynomial of order $i$, respectively. The infinite series can be truncated at some degree $d$, giving an approximation of the functions whose relation to the error bound $\epsilon$ is
$$d = O\!\left(t + \frac{\log(1/\epsilon)}{\log\!\Big(e+\frac{\log(1/\epsilon)}{t}\Big)}\right). \quad (5)$$
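Equations (3)-(5) can also be checked directly with SciPy's Bessel functions, independently of the classiq helpers; the values $t=5$ and $d=40$ below are hypothetical choices, with $d$ comfortably above the bound of Eq. (5):

```python
import numpy as np
from scipy.special import jv

t, eps = 5.0, 1e-10
d = 40   # truncation degree, comfortably above the Eq. (5) bound for t = 5

# Chebyshev coefficients of cos(t x) and sin(t x) from Eqs. (3)-(4)
cos_c = np.zeros(d + 1)
sin_c = np.zeros(d + 1)
cos_c[0] = jv(0, t)
for k in range(1, d // 2 + 1):
    cos_c[2 * k] = 2 * (-1) ** k * jv(2 * k, t)
for k in range((d - 1) // 2 + 1):
    sin_c[2 * k + 1] = 2 * (-1) ** k * jv(2 * k + 1, t)

xs = np.linspace(-1, 1, 1001)
err_cos = np.max(np.abs(np.polynomial.Chebyshev(cos_c)(xs) - np.cos(t * xs)))
err_sin = np.max(np.abs(np.polynomial.Chebyshev(sin_c)(xs) - np.sin(t * xs)))
assert err_cos < eps and err_sin < eps
```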
We can examine the approximation for a specific example:
from classiq.applications.qsp.qsp import (
    poly_jacobi_anger_cos,
    poly_jacobi_anger_degree,
    poly_jacobi_anger_sin,
)

degree = poly_jacobi_anger_degree(EPS, EVOLUTION_TIME)
cos_coeffs = poly_jacobi_anger_cos(degree, EVOLUTION_TIME)
sin_coeffs = poly_jacobi_anger_sin(degree, EVOLUTION_TIME)

xs = np.linspace(0, 1, 100)
cos_approx = np.polynomial.Chebyshev(cos_coeffs)(xs)
sin_approx = np.polynomial.Chebyshev(sin_coeffs)(xs)

plt.plot(xs, np.cos(EVOLUTION_TIME * xs), "-r", linewidth=3, label="cos")
plt.plot(xs, cos_approx, "--k", linewidth=2, label="approx. cos")
plt.plot(xs, np.sin(EVOLUTION_TIME * xs), "-y", linewidth=3, label="sin")
plt.plot(xs, sin_approx, "--b", linewidth=2, label="approx sin")
plt.ylabel(r"$\cos(x)\,\,\, , \sin(x)$", fontsize=16)
plt.xlabel(r"$x$", fontsize=16)
plt.yticks(fontsize=16)
plt.xticks(fontsize=16)
plt.legend(loc="lower left", fontsize=14)
plt.xlim(-0.1, 1.1);
Output:

(Plot of the exact cosine and sine curves and their truncated Jacobi-Anger approximations.)

Defining a Walk Operator

This part is relevant to the Qubitization and GQSP methods. These two methods assume that the block-encoding unitary $U_H$ is also Hermitian; a generalization is given in the third item of the Technical Details section. Given the block-encoding $U_{(s,m,0)-H}$, we can define the following unitary operator (usually called the Szegedy quantum walk operator [5]):
$$W \equiv \Pi_{|0\rangle_m} U_{(s,m,0)-H},$$
where $\Pi_{|0\rangle_m}$ is a reflection operator about the block state $|0\rangle_m$. We start by defining a walk operator for our specific example.
For the reflection, we implement a my_reflect_about_zero function (mirroring the reflect_about_zero quantum function from the Classiq open library).
from classiq.qmod.symbolic import pi


@qfunc
def my_reflect_about_zero(qba: QNum):
    control(qba == 0, lambda: phase(pi))
    phase(pi)


@qfunc
def walk_operator(
    be_qfunc: QCallable[BlockEncodedState], state: BlockEncodedState
) -> None:
    be_qfunc(state)
    my_reflect_about_zero(state.block)
This unitary operator has the following important properties (see Section 7 in Ref. [6]):
  1. Its powers correspond to a $(1,m,0)$-encoding of the Chebyshev polynomials:
$$W^k = \begin{pmatrix} T_k(H/s) & * \\ * & * \end{pmatrix} = U_{(1,m,0)-T_k(H/s)}, \quad (6)$$
with $T_k$ being the $k$-th Chebyshev polynomial.
  2. Its spectrum has a simple relation to the spectrum of the block-encoded Hamiltonian:
$$\text{eigenvalues: } e^{\pm i \arccos(\lambda/s)}, \quad \text{eigenvectors: } |\varphi^{\pm}_{\lambda}\rangle \equiv \frac{1}{\sqrt{2}}\left(|v_{\lambda}\rangle |0\rangle_m \pm i|\perp_{\lambda}\rangle\right), \quad (7)$$
where $|v_\lambda\rangle$ is an eigenstate of the Hamiltonian $H$ with eigenvalue $\lambda$.
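Both properties can be verified numerically on a toy Hermitian block-encoding built by hand (one block qubit, block register leading in the kron ordering; the $2\times 2$ Hamiltonian is a hypothetical example with $\|H\|<1$, so $s=1$):

```python
import numpy as np
from scipy.linalg import sqrtm

# Toy Hermitian block-encoding with one block qubit (block register leading)
H = np.array([[0.5, 0.2], [0.2, -0.3]])
B = sqrtm(np.eye(2) - H @ H)
U = np.block([[H, B], [B, -H]])                 # Hermitian and unitary
R = np.diag([1.0, 1.0, -1.0, -1.0])             # reflection about the block |0> state
W = R @ U                                       # the walk operator

# Property (6): W^k block-encodes the k-th Chebyshev polynomial of H/s
W3 = np.linalg.matrix_power(W, 3)
assert np.allclose(W3[:2, :2], 4 * H @ H @ H - 3 * H)   # T_3(x) = 4x^3 - 3x

# Property (7): the eigenphases of W are +/- arccos(lambda / s)
lam = np.linalg.eigvalsh(H)
expected = np.concatenate([np.exp(1j * np.arccos(lam)), np.exp(-1j * np.arccos(lam))])
eigs = np.linalg.eigvals(W)
assert np.allclose(np.sort(np.angle(eigs)), np.sort(np.angle(expected)))
```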

Verifying the Block-Encoding

As a final sanity check of the preliminary definitions, and before moving to the more complex Hamiltonian simulation implementation, let us quickly verify the Hamiltonian block-encoding. For this, we define a model that applies $U_H$ to the state we would like to evolve, and verify that the resulting state, after post-selection, gives $(H/\bar{\alpha})\vec{b}$.
@qfunc
def main(data: Output[QNum[data_size]], block: Output[QNum[block_size]]):
    state = BlockEncodedState()
    allocate(state)

    inplace_prepare_amplitudes(state_to_evolve, 0.0, state.data)
    be_hamiltonian(state)
    bind(state, [data, block])


qprog_be = synthesize(main)
show(qprog_be)
Output:

Quantum program link: https://platform.classiq.io/circuit/3A4nDcULEnRYUFXSyuMm9qCtn82
  

with ExecutionSession(qprog_be, execution_preferences=execution_preferences) as es:
    res_be = es.sample()
We use the predefined function for the post-selection, and the utility function for examining the result with respect to the expected one:
state_result_be = get_projected_state_vector(res_be)
expected_state_be = matrix @ state_to_evolve

renormalized_be, overlap_be = compare_quantum_classical_states(
    expected_state_be, state_result_be, be_scaling
)
print("expected state:", expected_state_be)
print("resulting state after rescaling:", renormalized_be)
assert np.linalg.norm(renormalized_be - expected_state_be) < EPS
print("=" * 40)
print("overlap between expected and resulting state:", overlap_be)
Output:
expected state: [0.11042346+0.j 0.24170338+0.j 0.10507277+0.j 0.03828092+0.j]
  resulting state after rescaling: [0.11042346+5.81038846e-18j 0.24170338-1.54545508e-16j
   0.10507277-3.55300537e-16j 0.03828092-4.18445705e-18j]
  ========================================
  overlap between expected and resulting state: 0.9999999999999998
  



Hamiltonian Simulation with GQSP

Comment: The current implementation assumes that the block-encoding unitary is also a Hermitian matrix (see the third item in the Technical Notes below).

The GQSP method applies a polynomial transformation to a unitary; see the GQSP notebook. The idea is to work with the walk operator, applying to it the polynomial transformation
$$P(z) = \sum_{k=-d}^{d} i^k J_k(-st)\, z^{k},$$
which approximates $e^{-ist\cos(\theta)}$ according to the Jacobi-Anger expansion; see Eq. (1) above. To see why this works, we use the second property of the walk operator, Eq. (7), to write
$$W = \sum_{\lambda} e^{i \arccos(\lambda/s)} |\varphi^{+}_{\lambda}\rangle \langle \varphi^{+}_{\lambda}| + e^{-i \arccos(\lambda/s)} |\varphi^{-}_{\lambda}\rangle \langle \varphi^{-}_{\lambda}|,$$
where the $\lambda$ are the eigenvalues of the Hamiltonian $H$. The polynomial transformation acts on each eigenvalue and gives
$$P(W) = \sum_{\lambda} P(e^{i \arccos(\lambda/s)}) |\varphi^{+}_{\lambda}\rangle \langle \varphi^{+}_{\lambda}| + P(e^{-i \arccos(\lambda/s)})|\varphi^{-}_{\lambda}\rangle \langle \varphi^{-}_{\lambda}| = \sum_{\lambda} e^{-i \lambda t}\left( |\varphi^{+}_{\lambda}\rangle \langle \varphi^{+}_{\lambda}| + |\varphi^{-}_{\lambda}\rangle \langle \varphi^{-}_{\lambda}|\right).$$
When this unitary is applied to some state $|\psi\rangle|0\rangle_m$ we get, approximately,
$$P(W)|\psi\rangle|0\rangle_m \approx e^{-iHt}|\psi\rangle|0\rangle_m.$$
Applying GQSP requires calculating a series of GQSP angles, which rotate an extra qubit. For a stable convergence of this classical preprocess, we need to take a scaled polynomial $\beta_{\rm gqsp} P(z)$ with $\beta_{\rm gqsp}<1$. Below we take $\beta_{\rm gqsp}=0.99$. We use the gqsp function from the open library and the classical gqsp_phases function.

The resulting unitary of the GQSP routine is a $(\beta_{\rm gqsp}^{-1}, m+1,\epsilon)$-encoding of the Hamiltonian simulation:
$$U_{(\beta_{\rm gqsp}^{-1}, m+1,\epsilon)-\exp(-iHt)} = \begin{pmatrix} \beta_{\rm gqsp}\exp(-iHt) & * \\ * & * \end{pmatrix},$$
where $m$ is the block size of the Hamiltonian block-encoding.
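The chain of identities above can be reproduced classically: the sketch below rebuilds the toy walk operator from the previous section, evaluates the Laurent polynomial $P$ on it with SciPy Bessel coefficients, and checks that the data block of $P(W)$ matches $e^{-iHt}$ (the $2\times 2$ Hamiltonian, $s$, $t$, and the truncation degree are hypothetical choices):

```python
import numpy as np
from scipy.linalg import expm, sqrtm
from scipy.special import jv

# Toy Hermitian block-encoding and walk operator, as in the previous section
H = np.array([[0.5, 0.2], [0.2, -0.3]])
s, t, d = 1.0, 1.0, 25
B = sqrtm(np.eye(2) - H @ H)
U = np.block([[H, B], [B, -H]])               # Hermitian block-encoding of H/s
W = np.diag([1.0, 1.0, -1.0, -1.0]) @ U       # walk operator: reflection times U

# Evaluate P(z) = sum_{k=-d..d} i^k J_k(-s t) z^k on W; negative powers use
# W^{-1} = W^dagger, since W is unitary.
PW = np.zeros((4, 4), dtype=complex)
for k in range(-d, d + 1):
    Wk = np.linalg.matrix_power(W if k >= 0 else W.conj().T, abs(k))
    PW += (1j) ** k * jv(k, -s * t) * Wk

# The data block of P(W) approximates exp(-iHt), as derived above
assert np.allclose(PW[:2, :2], expm(-1j * H * t), atol=1e-8)
```

Note that $P(W)$ also leaves the block-0 subspace (approximately) invariant, which is what makes the post-selection succeed with amplitude $\approx 1$ here.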

Implementation

We start by finding the GQSP angles, timing the classical preprocessing step.
from classiq.applications.qsp.qsp import (
    gqsp_phases,
    poly_jacobi_anger_error,
    poly_jacobi_anger_exp_cos,
)

GQSP_SCALE = 0.99
t0 = time.perf_counter()
gqsp_degree = poly_jacobi_anger_degree(EPS, EVOLUTION_TIME * be_scaling)

jacobi_anger_poly_expcos = GQSP_SCALE * poly_jacobi_anger_exp_cos(
    gqsp_degree, -EVOLUTION_TIME * be_scaling
)
jacobi_anger_phases_expcos = gqsp_phases(jacobi_anger_poly_expcos)
classical_preprocess_time_gqsp = time.perf_counter() - t0
We define a Quantum Struct for the GQSP block-encoding:
class GQSPBlock(QStruct):
    block_ham: QNum[block_size]
    block_gqsp: QBit


class GQSPState(QStruct):
    data: QNum[data_size]
    block: GQSPBlock
Next, we construct a quantum function that gets the block-encoding function as a parameter. A negative power is needed, since the approximation of $e^{-it\cos(x)}$ in Eq. (1) is a Laurent polynomial that includes negative powers as well.
@qfunc
def gqsp_hamiltonian_evolution(
    be_qfunc: QCallable[BlockEncodedState],
    state: GQSPState,
):
    gqsp(
        u=lambda: walk_operator(be_qfunc, [state.data, state.block.block_ham]),
        aux=state.block.block_gqsp,
        phases=jacobi_anger_phases_expcos,
        negative_power=gqsp_degree,
    )
The code in the rest of this section builds a model that applies the gqsp_hamiltonian_evolution function on the randomly prepared vector (ψ,0)(\vec{\psi},\vec{0}), synthesizes it, executes the resulting quantum program, and verifies the results.
@qfunc
def main(data: Output[QNum[data_size]], block: Output[QNum[block_size + 1]]):
    state = GQSPState()
    allocate(state)

    inplace_prepare_amplitudes(state_to_evolve, 0.0, state.data)
    gqsp_hamiltonian_evolution(be_hamiltonian, state)

    bind(state, [data, block])

# Synthesize
qprog_gqsp = synthesize(main)
show(qprog_gqsp)
Output:

Quantum program link: https://platform.classiq.io/circuit/3A4nNfZoX1XEDnoL6sA1naEPVx0
  

# Execute and construct the projected statevector
with ExecutionSession(qprog_gqsp, execution_preferences) as es:
    result_gqsp = es.sample()

state_result_gqsp = get_projected_state_vector(result_gqsp)
exp_scaling_factor_gqsp = 1 / GQSP_SCALE
For the verification, we call the pre-defined function for calculating the overlap:
renormalized_state_gqsp, overlap_gqsp = compare_quantum_classical_states(
    expected_state, state_result_gqsp, exp_scaling_factor_gqsp
)
print("expected state:", expected_state)
print("resulting state after rescaling:", renormalized_state_gqsp)
assert np.linalg.norm(renormalized_state_gqsp - expected_state) < EPS
print("=" * 40)
print("overlap between expected and resulting state:", overlap_gqsp)
Output:
expected state: [-0.15737099-0.01308622j  0.72273008-0.32779191j -0.02601136-0.5850341j
   -0.04346684+0.02111774j]
  resulting state after rescaling: [-0.15737099-0.01308622j  0.72273008-0.32779191j -0.02601136-0.58503411j
   -0.04346684+0.02111774j]
  ========================================
  overlap between expected and resulting state: 1.0000000000000002
  



Hamiltonian Simulation with QSVT

The QSVT is a general technique for applying block-encodings of matrix polynomials via quantum signal processing; refer to Ref. [1] and the QSVT notebook for more information on this generic and important subject. The QSVT supports polynomials with a well-defined parity. Thus, we can apply two QSVT blocks, one approximating the $\cos(xt)$ function and one approximating the $\sin(xt)$ function, and then construct an LCU to get the block-encoding of the sum $e^{-ixt} = \cos(xt) - i\sin(xt)$. For a realization of such a model on real hardware, see Ref. [7].

The QSVT routine requires pre-calculation of rotation angles on an extra qubit. For the stability of this classical preprocess, it is essential to work with $\beta_{\cos} \cos(xt)$ and $\beta_{\sin} \sin(xt)$, where $\beta_{\cos}, \beta_{\sin} < 1$. Below we take $\beta_{\cos} = \beta_{\sin} = 0.99$. To apply the LCU of the even and odd QSVT routines, we call the qsvt_lcu function from the open library, which performs a non-trivial, optimized select operation for the two QSVT calls, as described in the figure below. Overall, the LCU of the two unitaries, $U_{(\beta_{\cos}^{-1},m+1,\epsilon)-\cos(Ht)} - iU_{(\beta_{\sin}^{-1},m+1,\epsilon)-\sin(Ht)}$, gives a $(2\beta_{\cos}^{-1}, m+2,\epsilon)$-encoding of the Hamiltonian simulation:
$$U_{(2\beta_{\cos}^{-1}, m+2,\epsilon)-\exp(-iHt)} = \begin{pmatrix} \frac{\beta_{\cos}}{2}\exp(-iHt) & * \\ * & * \end{pmatrix},$$
where $m$ is the block size of the Hamiltonian block-encoding.

(Figure: the combined select operation for the two QSVT calls.)
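The $\cos - i\sin$ combination can be verified at the matrix level: the sketch below hand-builds single-block-qubit encodings of $\cos(Ht)$ and $\sin(Ht)$ for a toy Hamiltonian and applies the $(1/2, -i/2)$ LCU with one extra qubit (the matrix, time, and encoding construction are hypothetical examples, not the qsvt_lcu circuit itself):

```python
import numpy as np
from scipy.linalg import cosm, expm, sinm, sqrtm

# Toy Hamiltonian and evolution time; cos(Ht) and sin(Ht) have spectral
# norm <= 1, so each admits a one-block-qubit encoding.
H = np.array([[0.5, 0.2], [0.2, -0.3]])
t = 1.3

def encode(M):
    """[[M, sqrt(I - M^2)], [sqrt(I - M^2), -M]] block-encodes the Hermitian M."""
    B = sqrtm(np.eye(len(M)) - M @ M)
    return np.block([[M, B], [B, -M]])

U_cos, U_sin = encode(cosm(H * t)), encode(sinm(H * t))

# LCU with coefficients (1/2, -i/2): PREPARE is a Hadamard on one extra qubit,
# and SELECT applies U_cos on |0> and (-i) * U_sin on |1> of that qubit.
Hd = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
S = np.block([[U_cos, np.zeros((4, 4))], [np.zeros((4, 4)), -1j * U_sin]])
W = np.kron(Hd, np.eye(4)) @ S @ np.kron(Hd, np.eye(4))

# Top-left data block: (cos(Ht) - i sin(Ht)) / 2 = exp(-iHt) / 2
assert np.allclose(W[:2, :2], expm(-1j * H * t) / 2)
```

The factor of $1/2$ here is exactly the extra scaling that the $(2\beta_{\cos}^{-1}, m+2, \epsilon)$-encoding above accounts for.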

Implementation

We start by calculating the rotation angles for the sine and cosine polynomial approximations, including a scaling factor of $0.99$.
from classiq.applications.qsp import qsvt_phases

t0 = time.perf_counter()

qsvt_degree = poly_jacobi_anger_degree(EPS, EVOLUTION_TIME * be_scaling)
COS_SCALE = SIN_SCALE = 0.99
poly_even = COS_SCALE * poly_jacobi_anger_cos(qsvt_degree, EVOLUTION_TIME * be_scaling)
phases_cos = qsvt_phases(poly_even)

poly_odd = SIN_SCALE * poly_jacobi_anger_sin(qsvt_degree, EVOLUTION_TIME * be_scaling)
phases_sin = qsvt_phases(poly_odd)

classical_preprocess_time_qsvt = time.perf_counter() - t0

assert np.abs(len(phases_sin) - len(phases_cos)) <= 1
Next, we use Classiq's prepare_select function, which allows a flexible implementation of the select operation in LCU. In our case we take this operation to be the qsvt_lcu function. The coefficients $(1/2, -i/2)$ are chosen to get the desired $e^{-ixt}$ polynomial.
class QSVTBlock(QStruct):
    block_ham: QNum[block_size]
    block_qsvt: QBit
    block_lcu: QBit


class QSVTState(QStruct):
    data: QNum[data_size]
    block: QSVTBlock


@qfunc
def qsvt_hamiltonian_evolution(
    phases_cos: list[float],
    phases_sin: list[float],
    be_qfunc: QCallable[BlockEncodedState],
    state: QSVTState,
):
    def projector_cnot(q: QBit):
        q ^= state.block.block_ham == 0

    prepare_select(
        coefficients=[1 / 2, -1j / 2],
        select=lambda block_lcu: qsvt_lcu(
            phases_cos,
            phases_sin,
            projector_cnot,
            projector_cnot,
            lambda: be_qfunc([state.data, state.block.block_ham]),
            state.block.block_qsvt,
            block_lcu,
        ),
        block=state.block.block_lcu,
    )
The code in the rest of this section builds a model that applies the qsvt_hamiltonian_evolution function on the randomly prepared vector (ψ,0)(\vec{\psi},\vec{0}), synthesizes it, executes the resulting quantum program, and verifies the results.
@qfunc
def main(data: Output[QNum[data_size]], block: Output[QNum[block_size + 2]]):

    state = QSVTState()
    allocate(state)
    inplace_prepare_amplitudes(state_to_evolve, 0.0, state.data)
    qsvt_hamiltonian_evolution(phases_cos, phases_sin, be_hamiltonian, state)
    bind(state, [data, block])

qprog_qsvt = synthesize(main, constraints=Constraints(optimization_parameter="width"))
show(qprog_qsvt)
Output:

Quantum program link: https://platform.classiq.io/circuit/3A4nV1PhPZ44fmiQvSQi0UGXewO
  

with ExecutionSession(qprog_qsvt, execution_preferences) as es:
    results_qsvt = es.sample()

state_result_qsvt = get_projected_state_vector(results_qsvt)
exp_scaling_factor_qsvt = 2 * (1 / COS_SCALE)
For the verification, we call the pre-defined function for calculating the overlap:
renormalized_state_qsvt, overlap_qsvt = compare_quantum_classical_states(
    expected_state, state_result_qsvt, exp_scaling_factor_qsvt
)
print("expected state:", expected_state)
print("resulting state after rescaling:", renormalized_state_qsvt)
assert np.linalg.norm(renormalized_state_qsvt - expected_state) < EPS
print("=" * 40)
print("overlap between expected and resulting state:", overlap_qsvt)
Output:
expected state: [-0.15737099-0.01308622j  0.72273008-0.32779191j -0.02601136-0.5850341j
   -0.04346684+0.02111774j]
  resulting state after rescaling: [-0.15737099-0.01308622j  0.72273008-0.32779191j -0.02601136-0.5850341j
   -0.04346684+0.02111774j]
  ========================================
  overlap between expected and resulting state: 1.0000000000000002
  



Hamiltonian Simulation with Qubitization

Comment: The current implementation assumes that the block-encoding unitary is also a Hermitian matrix (see the third item in the Technical Notes below).

From the first property of the walk operator, Eq. (6), we can simply combine the Jacobi-Anger expansion with the LCU technique to get the block-encoding of the Hamiltonian simulation [8]. Namely, we have an $\epsilon$-approximation $\exp(-iHt) \approx \sum^d_{k=0} \beta_{k} T_{k}(H/s)$, for which we can perform the following encoding:
$$U_{(\bar{\beta},\tilde{m},\epsilon)-\exp(-iHt)} = \begin{pmatrix} \exp(-iHt)/\bar{\beta} & * \\ * & * \end{pmatrix} = \begin{pmatrix} \frac{1}{\bar{\beta}}\sum^d_{k=0} \beta_{k} T_{k}(H/s) & * \\ * & * \end{pmatrix},$$
which is obtained as an LCU of the unitaries $W^k = U_{(1,m,0)-T_k(H/s)}$ with coefficients $\beta_k/\bar{\beta}$. Here $\tilde{m} = m + \lceil \log_2(d+1) \rceil$, where $m$ is the block size of the Hamiltonian block-encoding (recalling that the $W^k$ are block-encodings themselves, with block size $m$).

Implementation

First, we use the predefined functions to calculate the coefficients $\beta_i$ approximating the sine and cosine functions. Recall that we must rescale the time by the block-encoding scaling factor of $H$.
t0 = time.perf_counter()

cheb_degree = poly_jacobi_anger_degree(EPS, EVOLUTION_TIME * be_scaling)
poly_cos = poly_jacobi_anger_cos(cheb_degree, EVOLUTION_TIME * be_scaling)
poly_sin = poly_jacobi_anger_sin(cheb_degree, EVOLUTION_TIME * be_scaling)

L = max(len(poly_sin), len(poly_cos))
odd = np.pad(poly_sin, (0, L - len(poly_sin)))
even = np.pad(poly_cos, (0, L - len(poly_cos)))
# we put a (-) sign as we want to simulate e^(-iHt) and not e^(iHt)
exp_coeffs = even - 1j * odd

classical_preprocess_time_chec_lcu = time.perf_counter() - t0
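The combined Chebyshev series can be sanity-checked standalone, recomputing the coefficients directly from SciPy's Bessel functions rather than the classiq helpers (the effective time and degree below are illustrative values, chosen to match $t \cdot s = 22 \times 0.75$):

```python
import numpy as np
from scipy.special import jv

# Illustrative values: t_eff ~ EVOLUTION_TIME * be_scaling, with ample degree d
t_eff, d = 16.5, 60

# Chebyshev coefficients of cos(t x) and sin(t x) from Eqs. (3)-(4)
cos_c = np.zeros(d + 1)
sin_c = np.zeros(d + 1)
cos_c[0] = jv(0, t_eff)
for k in range(1, d // 2 + 1):
    cos_c[2 * k] = 2 * (-1) ** k * jv(2 * k, t_eff)
for k in range((d - 1) // 2 + 1):
    sin_c[2 * k + 1] = 2 * (-1) ** k * jv(2 * k + 1, t_eff)
exp_c = cos_c - 1j * sin_c      # the same cos - i*sin combination as exp_coeffs

xs = np.linspace(-1, 1, 2001)
approx = np.polynomial.Chebyshev(exp_c)(xs)
assert np.max(np.abs(approx - np.exp(-1j * t_eff * xs))) < 1e-8
```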
We need to build an LCU of the unitaries $\{W^k\}$, with the following parameters:
exp_block_size = (len(exp_coeffs) - 1).bit_length() if len(exp_coeffs) != 1 else 1
print(f"The block size of the block-encoded Hamiltonian evolution is: {exp_block_size}")
exp_be_scaling = np.sum(np.abs(exp_coeffs))
print(
    f"The scaling factor for the block-encoded Hamiltonian evolution is: {exp_be_scaling}"
)
Output:

The block size of the block-encoded Hamiltonian evolution is: 6
  The scaling factor for the block-encoded Hamiltonian evolution is: 5.626479051283921
  

In principle, we could use the lcu function from the open library. However, since the unitaries are just a series of powers of the same unitary, we can devise a better design for the select operator, as depicted in the figure below (similar to the construction of the QPE algorithm). Note that in this construction, the select operator runs over all powers $0, 1, \dots, 2^{l}-1$, with $l = \lceil \log_2(d+1) \rceil$.

(Figure: the select operator as a ladder of powers $W^{2^j}$, each controlled on one block qubit.)

We build this specified select in the following function:
@qfunc
def select_powered_unitaries(u: QCallable, block: QArray):
    repeat(block.len, lambda i: control(block[i], lambda: power(2**i, lambda: u())))
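The equivalence between a direct select over $\{W^k\}$ and this ladder of controlled powers $W^{2^j}$ can be checked with plain NumPy (toy single-qubit rotation, two block qubits, block register leading; all values are hypothetical):

```python
import numpy as np

# Toy single-qubit unitary and a 2-qubit block register (block register leading)
theta = 0.7
W = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
l, dim = 2, 2

# Direct construction: SELECT = sum_i |i><i| (x) W^i
S_direct = np.zeros((2**l * dim, 2**l * dim))
for i in range(2**l):
    S_direct[dim * i:dim * (i + 1), dim * i:dim * (i + 1)] = np.linalg.matrix_power(W, i)

# QPE-like construction: for each block qubit j, apply W^(2^j) controlled on bit j
S_powers = np.eye(2**l * dim)
for j in range(l):
    gate = np.eye(2**l * dim)
    for i in range(2**l):
        if (i >> j) & 1:    # bit j of the block index controls W^(2^j)
            gate[dim * i:dim * (i + 1), dim * i:dim * (i + 1)] = np.linalg.matrix_power(W, 2**j)
    S_powers = gate @ S_powers

assert np.allclose(S_direct, S_powers)
```

The controlled gates commute (each is diagonal in the block index), so their ordering in the product is irrelevant, just as in the repeat loop above.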
Finally, we define a Quantum Struct for the Qubitization-based block-encoding, and build an lcu_cheb function that applies the LCU. We use Classiq's prepare_select function, which allows a flexible implementation of the select operation in LCU. In our case we take this operation to be the select_powered_unitaries function.
class QubitizationBlock(QStruct):
    block_ham: QNum[block_size]
    block_exp: QArray[exp_block_size]


class QubitizationState(QStruct):
    data: QNum[data_size]
    block: QubitizationBlock


@qfunc
def lcu_cheb(
    coefs: list[float], be_qfunc: QCallable[BlockEncodedState], state: QubitizationState
):
    prepare_select(
        coefficients=coefs,
        select=lambda lcu_block: select_powered_unitaries(
            lambda: walk_operator(be_qfunc, [state.data, state.block.block_ham]),
            lcu_block,
        ),
        block=state.block.block_exp,
    )
The code in the rest of this section builds a model that applies the lcu_cheb function on the randomly prepared vector $(\vec{\psi},\vec{0})$, synthesizes it, executes the resulting quantum program, and verifies the results.
@qfunc
def main(
    data: Output[QNum[data_size]], block: Output[QNum[block_size + exp_block_size]]
):
    state = QubitizationState()
    allocate(state)
    inplace_prepare_amplitudes(state_to_evolve, 0.0, state.data)

    lcu_cheb(
        exp_coeffs,
        be_hamiltonian,
        state,
    )
    bind(state, [data, block])
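As a purely classical sanity check of the identity underlying this construction: since $W^k$ block-encodes $T_k(H/\alpha)$, the LCU with the Jacobi–Anger coefficients approximates $e^{-iHt}$. The sketch below verifies this numerically with a made-up $2\times 2$ Hermitian matrix and illustrative $t$ and truncation degree (not the notebook's values):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# Made-up 2x2 Hermitian matrix and evolution time (illustration only)
H_demo = np.array([[1.0, 0.5], [0.5, -0.3]])
t_demo = 3.0
alpha_demo = np.linalg.norm(H_demo, 2)  # spectral norm, so ||H/alpha|| <= 1

# Chebyshev coefficients of exp(-1j * (t*alpha) * x) on [-1, 1],
# via discrete orthogonality at the Chebyshev nodes
deg_demo, n_nodes = 40, 164
theta = np.pi * (np.arange(n_nodes) + 0.5) / n_nodes
fvals = np.exp(-1j * t_demo * alpha_demo * np.cos(theta))
cs = np.array(
    [
        (2.0 - (k == 0)) / n_nodes * np.sum(fvals * np.cos(k * theta))
        for k in range(deg_demo + 1)
    ]
)

# sum_k c_k T_k(H/alpha), evaluated through the eigendecomposition of H
evals, evecs = np.linalg.eigh(H_demo)
approx = evecs @ np.diag(chebval(evals / alpha_demo, cs)) @ evecs.conj().T
exact = evecs @ np.diag(np.exp(-1j * t_demo * evals)) @ evecs.conj().T
print(np.linalg.norm(approx - exact))  # tiny truncation error
```

The quantum circuit realizes the same sum, with the $\ell_1$ norm of the coefficients appearing as the block-encoding scaling factor.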
Synthesize the quantum model and execute it:
qprog_cheb_lcu = synthesize(main, preferences=Preferences(optimization_level=1))
show(qprog_cheb_lcu)
Output:

Quantum program link: https://platform.classiq.io/circuit/3A4nbSHmU95GRpyZEgy7MJRiGnI
  

with ExecutionSession(qprog_cheb_lcu, execution_preferences) as es:
    results_cheb_lcu = es.sample()

state_result_cheb_lcu = get_projected_state_vector(results_cheb_lcu)
exp_scaling_factor_cheb_lcu = exp_be_scaling
For the verification, we call the pre-defined function for calculating the overlap:
renormalized_state_cheb_lcu, overlap_cheb_lcu = compare_quantum_classical_states(
    expected_state, state_result_cheb_lcu, exp_scaling_factor_cheb_lcu
)
print("expected state:", expected_state)
print("resulting state after rescaling:", renormalized_state_cheb_lcu)
assert np.linalg.norm(renormalized_state_cheb_lcu - expected_state) < EPS
print("=" * 40)
print("overlap between expected and resulting state:", overlap_cheb_lcu)
Output:
expected state: [-0.15737099-0.01308622j  0.72273008-0.32779191j -0.02601136-0.5850341j
   -0.04346684+0.02111774j]
  resulting state after rescaling: [-0.15737099-0.01308622j  0.72273008-0.32779191j -0.02601136-0.5850341j
   -0.04346684+0.02111774j]
  ========================================
  overlap between expected and resulting state: 1.0000000000000002
  



Summary

def get_qprog_data(qprog):
    return qprog.data.width, qprog.transpiled_circuit.count_ops["cx"]


widths = []
cx_counts = []

for qprog in [qprog_gqsp, qprog_qsvt, qprog_cheb_lcu]:
    width, cx_count = get_qprog_data(qprog)
    widths.append(width)
    cx_counts.append(cx_count)

exp_scaling = [
    exp_scaling_factor_gqsp,
    exp_scaling_factor_qsvt,
    exp_scaling_factor_cheb_lcu,
]

eff_cx_counts = [
    int(np.ceil(counts * scale)) for counts, scale in zip(cx_counts, exp_scaling)
]

exp_block_sizes = [block_size + 1, block_size + 2, block_size + exp_block_size]

methods = ["GQSP", "QSVT", "Cheb. LCU"]

preprocess_times = [
    classical_preprocess_time_gqsp,
    classical_preprocess_time_qsvt,
    classical_preprocess_time_chec_lcu,
]

state_distance = [
    np.linalg.norm(expected_state - state)
    for state in [
        renormalized_state_gqsp,
        renormalized_state_qsvt,
        renormalized_state_cheb_lcu,
    ]
]

import pandas as pd

df = pd.DataFrame(
    {
        "methods": methods,
        "block size": exp_block_sizes,
        "scaling factor ": exp_scaling,
        "width": widths,
        "cx counts": cx_counts,
        "effective cx counts": eff_cx_counts,
        "preprocess time [sec]": preprocess_times,
        "accuracy": state_distance,
    }
)
df
| methods | block size | scaling factor | width | cx counts | effective cx counts | preprocess time [sec] | accuracy |
|---|---|---|---|---|---|---|---|
| GQSP | 3 | 1.010101 | 7 | 8364 | 8449 | 0.077102 | 9.714870e-11 |
| QSVT | 4 | 2.020202 | 7 | 1953 | 3946 | 0.117531 | 2.969317e-11 |
| Cheb. LCU | 8 | 5.626479 | 12 | 6551 | 36860 | 0.000985 | 2.973734e-11 |
As we can see, QSVT gives the best results in terms of CX counts. However, its scaling factor means that in practice its depth/CX count must be doubled in comparison to the GQSP result: for a scaling factor of 2 we need to apply 1 iteration of oblivious amplitude amplification, whereas a factor of 4 would require calling it twice. In other words, measuring the block at state 0 is 2 times less likely in the QSVT method compared to the GQSP one. This overhead is taken into account in the "effective CX counts" column of the table. Even so, QSVT gives the best results in terms of gate count, at the cost of one extra qubit. The Chebyshev LCU approach (Qubitization) is less efficient in all parameters, except that it requires no heavy classical preprocessing for calculating angles. We note that all three methods have the same asymptotic scaling; the differences appearing in the table are due to the detailed implementations of the current example.

References

[1]: Martyn, J. M., Rossi, Z. M., Tan, A. K., & Chuang, I. L. Grand unification of quantum algorithms. PRX Quantum 2, 040203 (2021).
[2]: Motlagh, D., & Wiebe, N. Generalized quantum signal processing. PRX Quantum 5, 020368 (2024).
[3]: Berry, D. W., Childs, A. M., & Kothari, R. Hamiltonian simulation with nearly optimal dependence on all parameters. In Proceedings of the 56th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 792–809 (2015).
[4]: Jacobi–Anger Expansion (Wikipedia).
[5]: Szegedy, M. Quantum speed-up of Markov chain based algorithms. In 45th Annual IEEE Symposium on Foundations of Computer Science, pp. 32–41 (2004).
[6]: Lin, L. Lecture notes on quantum algorithms for scientific computation. arXiv:2201.08309 [quant-ph] (2022).
[7]: Kikuchi, Y., McKeever, C., Coopmans, L., et al. Realization of quantum signal processing on a noisy quantum computer. npj Quantum Information 9, 93 (2023).
[8]: Childs, A. M., Kothari, R., & Somma, R. D. Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM Journal on Computing 46, 1920–1950 (2017).

Technical Notes

Recap on Block-Encoding matrices

The basic use of the block-encoding unitary is the following. Let us say we have an $(s, m, 0)$-encoding of a matrix $A$:

$$U_{(s, m, 0)-A} = \begin{pmatrix} A/s & * \\ * & * \end{pmatrix},$$

where the dimension of $A$ is $2^n\times 2^n$. Applying this unitary to the state

$$|\psi\rangle_n|0\rangle_m =|\psi\rangle_n \otimes \begin{pmatrix} 1 \\ 0\\ \vdots \\ 0 \end{pmatrix} = \begin{pmatrix} |\psi\rangle_n \\ 0\\ \vdots \\ 0 \end{pmatrix}$$

yields

$$U_{(s, m, 0)-A} \left(|\psi\rangle_n|0\rangle_m \right) = \begin{pmatrix} \frac{1}{s}A|\psi\rangle_n \\ 0\\ \vdots \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ *\\ \vdots \\ * \end{pmatrix} = \frac{1}{s}A|\psi\rangle_n|0\rangle_m +\sum_{l\neq 0} c_l |\phi_l\rangle_n|l\rangle_m .$$

Thus, if we measure $|0\rangle_m$ on the second variable, we know the first variable is in the state $\frac{1}{s}A|\psi\rangle_n$. In terms of measurement, we can post-select on $|0\rangle_m$ to obtain the desired state. We can amplify the success amplitude of this operation, given by $\left\|\frac{1}{s}A|\psi\rangle_n\right\|$, with Quantum Amplitude Amplification.
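For a Hermitian $A$ with $\|A/s\|\le 1$, one explicit way to complete $A/s$ into a unitary is $U=\begin{pmatrix} A/s & \sqrt{I-(A/s)^2} \\ \sqrt{I-(A/s)^2} & -A/s \end{pmatrix}$, which is unitary because $A/s$ commutes with the square root. A short NumPy sketch with a made-up matrix (an illustration, not the notebook's construction):

```python
import numpy as np

# Made-up 2x2 Hermitian matrix and subnormalization
A_demo = np.array([[0.4, 0.3], [0.3, -0.2]])
s_demo = 1.0  # any s >= ||A|| works; here ||A|| < 1 already

# sqrt(I - (A/s)^2) via the eigendecomposition (valid since A is Hermitian)
evals, evecs = np.linalg.eigh(A_demo / s_demo)
root = evecs @ np.diag(np.sqrt(1.0 - evals**2)) @ evecs.conj().T

# Assemble the block-encoding unitary U = [[A/s, root], [root, -A/s]]
U_demo = np.block([[A_demo / s_demo, root], [root, -A_demo / s_demo]])
assert np.allclose(U_demo @ U_demo.conj().T, np.eye(4))  # U is unitary

# Applying U to the stacked state (psi; 0) leaves (1/s) A psi in the top block
psi_demo = np.array([0.6, 0.8])
out = U_demo @ np.concatenate([psi_demo, np.zeros(2)])
assert np.allclose(out[:2], (A_demo / s_demo) @ psi_demo)
```

Here the block variable is the slow (most significant) index, matching the block-matrix layout above.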

Recap on LCU

The task of LCU is to block-encode a sum of unitary operators:

$$U_{(\bar{\alpha},m,0)-A} =\begin{pmatrix} A/\bar{\alpha} & * \\ * & * \end{pmatrix}, \text{ for } A = \sum^{L-1}_{i=0} \alpha_i U_i, \quad \alpha_i\geq 0.$$

This operation is implemented by the so-called prepare and select operations. The prepare function is simply a state-preparation function on the block variable:

$$\mathrm{PREPARE} \equiv \mathcal{I} \otimes \left(\sum^{L-1}_{i=0}\sqrt{\alpha_i/\bar{\alpha}}\, |i\rangle \langle 0| +\sum^{L-1}_{i=0}\sum^{L-1}_{j=1}g_{ij}|i\rangle \langle j|\right),$$

where the $g_{ij}$ are some uninteresting coefficients, and $\mathcal{I}$ is the identity operation acting on the data space. The select operation acts as a quantum switch, applying the operation $U_i$ on the data variable if the block variable is in the state $|i\rangle$:

$$\mathrm{SELECT} \equiv \sum_{i}U_i \otimes |i\rangle \langle i|.$$

The LCU operation is achieved by applying PREPARE, then SELECT, then $\mathrm{PREPARE}^{\dagger}$, i.e., the operator product $\mathrm{PREPARE}^{\dagger}\cdot \mathrm{SELECT}\cdot \mathrm{PREPARE}$. One can check that

$$\begin{align} &\left(\mathcal{I} \otimes \left(\sum^{L-1}_{i=0}\sqrt{\alpha_i/\bar{\alpha}}\, |0\rangle \langle i| +\sum^{L-1}_{i=0}\sum^{L-1}_{j=1}g^{*}_{ij}|j\rangle \langle i|\right)\right) \left( \sum_{i'}U_{i'} \otimes |i'\rangle \langle i'| \right) \left(\mathcal{I} \otimes \left(\sum^{L-1}_{i''=0}\sqrt{\alpha_{i''}/\bar{\alpha}}\, |i''\rangle \langle 0| +\sum^{L-1}_{i''=0}\sum^{L-1}_{j''=1}g_{i''j''}|i''\rangle \langle j''|\right)\right) =\\ &= \left(\sum^{L-1}_{i=0}(\alpha_i/\bar{\alpha})\, U_{i} \right) \otimes |0\rangle \langle 0| + \sum_{(i,j)\neq(0,0)}G_{ij}\otimes|i\rangle \langle j|, \end{align}$$

with $G_{ij}$ some uninteresting operators.
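A minimal numerical check of the prepare–select construction (a sketch with made-up coefficients and random unitaries; all names here are illustrative): build a unitary whose first column is $\sqrt{\alpha_i/\bar{\alpha}}$, form the block-diagonal select, apply PREPARE, then SELECT, then $\mathrm{PREPARE}^{\dagger}$, and verify that the top-left block equals $A/\bar{\alpha}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up LCU data: coefficients alpha_i >= 0 and random 2x2 unitaries U_i
alphas = np.array([0.5, 1.0, 0.25, 0.75])
abar = alphas.sum()


def rand_unitary(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q


us = [rand_unitary(2) for _ in alphas]

# PREPARE: any unitary whose first column is sqrt(alpha_i / abar)
first_col = np.sqrt(alphas / abar)
M = np.column_stack([first_col, rng.normal(size=(4, 3))])
prep, _ = np.linalg.qr(M)
prep *= np.sign(prep[0, 0]) * np.sign(first_col[0])  # QR may flip the sign
assert np.allclose(prep[:, 0], first_col)

# SELECT = sum_i |i><i| (x) U_i  (block variable is the slow index here)
select_mat = np.zeros((8, 8), dtype=complex)
for i, u in enumerate(us):
    select_mat[2 * i : 2 * i + 2, 2 * i : 2 * i + 2] = u

# Circuit order: PREPARE, then SELECT, then PREPARE^dagger
lcu = np.kron(prep.conj().T, np.eye(2)) @ select_mat @ np.kron(prep, np.eye(2))

# The block at |0><0| (top-left 2x2) equals A / abar
A_sum = sum(a * u for a, u in zip(alphas, us))
assert np.allclose(lcu[:2, :2], A_sum / abar)
```

Any completion of the first column to a unitary works, since only the $|0\rangle$ column and row of PREPARE enter the top-left block.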

Generalizing to non-Hermitian UHU_H

The GQSP and Qubitization methods assume that the block-encoding unitary is also Hermitian. This assumption underlies the two properties of the walk operator, Eqs. (6) and (7), on which those two methods are based. In the case of a non-Hermitian block-encoding unitary, the cited property of the walk operator $W$ does not hold. However, a similar property holds for $\tilde{W}\equiv U_{(s,m,0)-H}^T\,\Pi_{|0\rangle_m}\, U_{(s,m,0)-H}\,\Pi_{|0\rangle_m}$ (see Sec. 7.4 in Ref. [6]). We can work with this operator, and with its equivalent properties that generalize Eqs. (6) and (7).