Hamiltonian Simulation with Block-Encoding

In this tutorial we demonstrate how to implement Hamiltonian simulation for block-encoded Hamiltonians, using two different techniques: the Quantum Singular Value Transformation (QSVT) and Qubitization. We specialize to a Linear Combination of Unitaries (LCU) block-encoding of the Hamiltonian; however, the implementation of the Hamiltonian simulation can easily be adapted to other block-encoding implementations.

The tutorial is organized as follows:

  1. A short introduction that includes some basic, relevant definitions.
  2. Definition of several classical and quantum functions that are useful for the Hamiltonian block-encoding.
  3. Definition of a specific Hamiltonian.
  4. Implementation of Hamiltonian simulation using Qubitization.
  5. Implementation of Hamiltonian simulation using QSVT.

The last two sections are independent of each other.

import matplotlib.pyplot as plt
import numpy as np
import scipy

from classiq import *
from classiq.execution import (
    ClassiqBackendPreferences,
    ClassiqSimulatorBackendNames,
    ExecutionPreferences,
)
from classiq.qmod.symbolic import pi

Introduction

Block-Encoding

Definition: A \((s, m, \epsilon)\)-encoding of a \(2^n\times 2^n\) matrix \(H\) refers to completing it into a \(2^{n+m}\times 2^{n+m}\) unitary matrix \(U_{(s,m,\epsilon)-H}\):

\[U_{(s,m,\epsilon)-H} = \begin{pmatrix} H/s & * \\ * & * \end{pmatrix},\]

with some functional error \(\left|\left(U_{(s,m,\epsilon)-H}\right)_{0:2^n-1,0:2^n-1}-H/s \right|\leq \epsilon\). For the most basic usage of block-encoding, see the first item in the technical notes at the end of this tutorial.

Definition: LCU refers to an \((\bar{\alpha}, m, 0)\)-encoding of a matrix given as a sum of unitaries:

\[U_{(\bar{\alpha},m,0)-A} =\begin{pmatrix} A/\bar{\alpha} & * \\ * & * \end{pmatrix}, \text{ for } A = \sum^{L-1}_{i=0} \alpha_i U_i, \,\,\,\,\, \alpha_i\geq 0\]

with \(\bar{\alpha}\equiv\sum^{L-1}_{i=0}\alpha_i\) and \(m= \lceil\log_2(L) \rceil\). See the LCU tutorial for more details.
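
To make the definitions concrete, the following is a minimal classical sketch (for intuition only; it is not part of the Classiq flow used later) that builds the LCU unitary for the toy sum \(A = 0.5\,X + 0.5\,Z\) from a state-preparation ("prepare") matrix and a controlled-select matrix, and checks that its top-left block equals \(A/\bar{\alpha}\):

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
alpha = [0.5, 0.5]
alpha_bar = sum(alpha)  # the normalization factor

# "prepare" on one block qubit: |0> -> sum_i sqrt(alpha_i/alpha_bar)|i>
prepare = np.array([[np.sqrt(0.5), -np.sqrt(0.5)], [np.sqrt(0.5), np.sqrt(0.5)]])
# "select": apply U_i on the data qubit, controlled on the block being |i>
select = np.block([[X, np.zeros((2, 2))], [np.zeros((2, 2)), Z]])

prepare_full = np.kron(prepare, np.eye(2))  # block qubit is the most significant
u_lcu_toy = prepare_full.conj().T @ select @ prepare_full
print(np.round(u_lcu_toy[:2, :2], 6))  # equals (0.5 * X + 0.5 * Z) / alpha_bar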

In this tutorial we start with an exact \((s, m, 0)\)-encoding of some Hamiltonian, and implement an approximate encoding of its Hamiltonian evolution

\[U_{(\tilde{s},\tilde{m},\epsilon)-\exp{(iHt)}} = \begin{pmatrix} \exp{(iHt)}/\tilde{s} & * \\ * & * \end{pmatrix},\]

using QSVT and Qubitization approaches. Both implementations are based on a polynomial approximation of the function \(f(x)=e^{ix}\): in particular, we block-encode \(\cos(Ht)\) and \(\sin(Ht)\) and then construct an LCU for \(e^{iHt}=\cos(Ht)+i\sin(Ht)\). The LCU methodology is therefore employed at least twice: once for block-encoding \(H\), and again for block-encoding the sum of the sine and cosine terms.

The Jacobi–Anger Expansion

The Jacobi–Anger expansion approximates the sine and cosine functions with Chebyshev polynomials as follows [1]:

\[\begin{eqnarray} \cos(xt) &=& J_0(t) + 2\sum^{d}_{k=1} (-1)^k J_{2k}(t)\, T_{2k}(x), \\ \sin(xt) &=& 2\sum^{d}_{k=0} (-1)^k J_{2k+1}(t)\, T_{2k+1}(x), \end{eqnarray}\]

where \(J_i(t)\) and \(T_i(x)\) are the Bessel function and the Chebyshev polynomial of order \(i\), respectively, and the degree \(d\) is related to the approximation error \(\epsilon\) as

\[d = O\left(t + \frac{\log(1/\epsilon)}{\log\left(e+\frac{\log(1/\epsilon)}{t}\right)}\right).\]

We start by defining a function that takes \(\epsilon\) and the evolution time \(t\), and returns the Chebyshev coefficients of the sine and cosine functions, using the scipy package.

from scipy.special import eval_chebyt, jv


def get_cheb_coef(epsilon, t):
    # polynomial degree according to the bound d = O(t + log(1/eps)/log(e + log(1/eps)/t))
    poly_degree = int(
        np.ceil(
            t
            + np.log(epsilon ** (-1)) / np.log(np.exp(1) + np.log(epsilon ** (-1)) / t)
        )
    )
    # even Chebyshev coefficients, approximating cos(xt)
    cos_coef = [jv(0, t)] + [
        2 * jv(2 * k, t) * (-1) ** k for k in range(1, poly_degree // 2 + 1)
    ]
    # odd Chebyshev coefficients, approximating sin(xt)
    sin_coef = [
        -2 * jv(2 * k - 1, t) * (-1) ** k for k in range(1, poly_degree // 2 + 1)
    ]
    return cos_coef, sin_coef

We can visualize the approximation for a specific example

EVOLUTION_TIME = 10
EPS = 0.1

cos_coef, sin_coef = get_cheb_coef(EPS, EVOLUTION_TIME)

xs = np.linspace(0, 1, 100)
approx_cos = sum([cos_coef[k] * eval_chebyt(2 * k, xs) for k in range(len(cos_coef))])
approx_sin = sum(
    [sin_coef[k] * eval_chebyt(2 * k + 1, xs) for k in range(len(sin_coef))]
)
plt.plot(xs, np.cos(EVOLUTION_TIME * xs), "-r", linewidth=3, label="cos")
plt.plot(xs, approx_cos, "--k", linewidth=2, label="approx. cos")
plt.plot(xs, np.sin(EVOLUTION_TIME * xs), "-y", linewidth=3, label="sin")
plt.plot(xs, approx_sin, "--b", linewidth=2, label="approx. sin")
plt.ylabel(r"$\cos(x)\,\,\, , \sin(x)$", fontsize=16)
plt.xlabel(r"$x$", fontsize=16)
plt.yticks(fontsize=16)
plt.xticks(fontsize=16)
plt.legend(loc="lower left", fontsize=14)
plt.xlim(-0.1, 1.1)
Figure: the Jacobi–Anger approximations (dashed) of \(\cos(xt)\) and \(\sin(xt)\) (solid) for \(t=10\) and \(\epsilon=0.1\).

Building LCU for Pauli Strings

We focus on Hamiltonians that are given in the Pauli basis, i.e., as a sum of unitaries

\[H = \sum^{L-1}_{i=0} \alpha_i U_i,\]

where \(U_i\) are Pauli strings. Below we define some quantum and classical functions relevant for LCU.

Classical Pre-process Functions

The LCU is implemented with the so-called "prepare" and "select" operations. The former prepares a quantum state with probabilities \(\alpha_i /\sum_i\alpha_i\); the latter applies the unitary \(U_i\) controlled on the prepare variable being in state \(|i\rangle\). We define a function that takes the list of \(L\) coefficients and returns the probabilities to be loaded as part of the LCU, padded with zeros to a power of two.

def get_normalized_lcu_coef(lcu_coef):

    normalization_factor = sum(lcu_coef)
    prepare_prob = [c / normalization_factor for c in lcu_coef]
    coef_size = int(np.ceil(np.log2(len(prepare_prob))))
    prepare_prob += [0] * (2**coef_size - len(prepare_prob))

    print("The size of the block encoding:", coef_size)
    print("The normalized coefficients:", prepare_prob)
    print("The normalization factor:", normalization_factor)

    return normalization_factor, coef_size, prepare_prob

Quantum Functions

Below we define a quantum function lcu_paulis that implements an LCU of Pauli strings. Its arguments are:

  • pauli_terms_list: the list of Pauli strings

  • probs: the normalized coefficients \(\alpha_i/\sum\alpha_i\).

  • data: the quantum variable on which the Hamiltonian operates.

  • block: the quantum "prepare" variable for the block encoding.

We define and use a subroutine apply_pauli_term that applies a single Pauli string. For convenience, we restrict to the case where all the LCU coefficients are positive (see the second technical note at the end of this notebook).

@qfunc
def apply_pauli_term(pauli_string: PauliTerm, x: QArray[QBit]):
    repeat(
        count=x.len,
        iteration=lambda index: switch(
            pauli_string.pauli[index],
            [
                lambda: IDENTITY(x[pauli_string.pauli.len - index - 1]),
                lambda: X(x[pauli_string.pauli.len - index - 1]),
                lambda: Y(x[pauli_string.pauli.len - index - 1]),
                lambda: Z(x[pauli_string.pauli.len - index - 1]),
            ],
        ),
    )


@qfunc
def lcu_paulis(
    pauli_terms_list: CArray[PauliTerm],
    probs: CArray[CReal],
    block: QNum,
    data: QArray[QBit],
):
    within_apply(
        lambda: inplace_prepare_state(probs, 0.0, block),
        lambda: repeat(
            count=pauli_terms_list.len,
            iteration=lambda i: control(
                block == i, lambda: apply_pauli_term(pauli_terms_list[i], data)
            ),
        ),
    )

Classical Post-process Functions

Working with block-encodings typically requires post-selecting the block variables on the state 0 (see the first technical note at the end of this tutorial). The success probability of this process can be amplified via a Quantum Amplitude Amplification routine. This is beyond the scope of this tutorial; instead, we simply work with a statevector simulator. We define a function that takes execution results and returns the projected state vector.

# fix the execution preferences for this tutorial
Execution_Prefs = ExecutionPreferences(
    num_shots=1,
    backend_preferences=ClassiqBackendPreferences(
        backend_name=ClassiqSimulatorBackendNames.SIMULATOR_STATEVECTOR
    ),
)


def get_projected_state_vector(
    execution_result, measured_var: str, projections: dict, state_size
) -> np.ndarray:
    """
    This function returns a reduced statevector from execution results.
    measured_var: the name of the reduced variable
    projections: on which values of the other variables to project, e.g., {"ind": 1}
    state_size: measured state vector size
    """

    def _get_var(var: str, state):
        keys = var.split(".")
        val = state
        for k in keys:
            val = val[k]
        return val

    res = execution_result[0].value
    proj_statevector = np.zeros(state_size).astype(complex)
    for sample in res.parsed_state_vector:
        if all(
            _get_var(key, sample.state) == projections[key]
            for key in projections.keys()
        ):
            proj_statevector[
                int(_get_var(measured_var, sample.state))
            ] += sample.amplitude
    return proj_statevector

Taking a Specific Example

For simplicity, we now take a specific Hamiltonian with which we continue to the next sections.

\[H = 0.4 \cdot I\otimes I +0.1\cdot I\otimes Z + 0.05 \cdot X \otimes X + 0.2\cdot Z\otimes Z\]

HAMILTONIAN = [
    PauliTerm(pauli=[Pauli.I, Pauli.I], coefficient=0.4),
    PauliTerm(pauli=[Pauli.I, Pauli.Z], coefficient=0.1),
    PauliTerm(pauli=[Pauli.X, Pauli.X], coefficient=0.05),
    PauliTerm(pauli=[Pauli.Z, Pauli.Z], coefficient=0.2),
]

Next, we need to calculate the normalized coefficients for the LCU, and the corresponding normalization factor:

lcu_pauli_coef = [p.coefficient for p in HAMILTONIAN]
normalization_ham, lcu_size_ham, prepare_probs_ham = get_normalized_lcu_coef(
    lcu_pauli_coef
)
The size of the block encoding: 2
The normalized coefficients: [0.5333333333333333, 0.13333333333333333, 0.06666666666666667, 0.26666666666666666]
The normalization factor: 0.75

Verification of the Hamiltonian Block Encoding

Before moving to the more complex Hamiltonian simulation implementation, it is useful to verify our Hamiltonian block-encoding. For this, we define a model in which we apply \(U_H\) to a random state vector of size \(2^n\cdot 2^m\) of the form \((\vec{b},\vec{0})\), and verify that the resulting state, after post-selection, is \((H/\bar{\alpha})\vec{b}\).

data_size = len(HAMILTONIAN[0].pauli)
b = np.random.rand(2**data_size)
b = (b / np.linalg.norm(b)).tolist()


@qfunc
def main(data: Output[QNum], block: Output[QNum]):
    allocate(lcu_size_ham, block)
    prepare_amplitudes(b, 0.0, data)
    lcu_paulis(HAMILTONIAN, prepare_probs_ham, block, data)


qmod = create_model(main, execution_preferences=Execution_Prefs)
qprog = synthesize(qmod)
show(qprog)
res = execute(qprog).result()

We use our pre-defined function for the post-selection:

state_result = get_projected_state_vector(res, "data", {"block": 0.0}, len(b))

To compare with the expected result, we define classical functions that help us obtain the matrix in the standard basis:

PAULI_MATRICES_DICT = {
    Pauli.I: np.array([[1, 0], [0, 1]], dtype=np.complex128),
    Pauli.Z: np.array([[1, 0], [0, -1]], dtype=np.complex128),
    Pauli.X: np.array([[0, 1], [1, 0]], dtype=np.complex128),
    Pauli.Y: np.array([[0, -1j], [1j, 0]], dtype=np.complex128),
}


def pauli_list_to_mat(pauli_list: list) -> np.ndarray:
    real_matrix = 0
    for term in pauli_list:
        single_str = PAULI_MATRICES_DICT[term.pauli[0]]
        for pauli in term.pauli[1:]:
            single_str = np.kron(single_str, PAULI_MATRICES_DICT[pauli])
        real_matrix += term.coefficient * single_str

    assert np.allclose(
        np.transpose(np.conjugate(real_matrix)), real_matrix
    ), "matrix not Hermitian"
    assert np.allclose(np.imag(real_matrix), 0.0), "matrix is not real valued"
    return np.real(real_matrix)


matrix = pauli_list_to_mat(HAMILTONIAN)

The quantum state returned from execution is given up to a global phase. To compare with the expected state, we run the following lines:

expected_state = matrix / normalization_ham @ b
relative_phase = np.angle(expected_state[0] / state_result[0])
state_result = state_result * np.exp(1j * relative_phase)
print("The resulting state:", np.real(state_result))
print("The expected state:", expected_state)
assert np.allclose(np.real(state_result), expected_state)
The resulting state: [0.23733692 0.12441376 0.16368752 0.34381012]
The expected state: [0.23733692 0.12441376 0.16368752 0.34381012]

Hamiltonian Simulation with Qubitization

Given our block-encoding \(U_{(s,m,0)-H}\), we can define the following operator (usually called the Szegedy quantum walk operator [5]):

\[W\equiv -\Pi_{|0\rangle_m} U_{(s,m,0)-H},\]

where \(\Pi_{|0\rangle_m}\) is the reflection operator about the block state. This unitary has the important property that its powers correspond to \((1,m,0)\)-encodings of the Chebyshev polynomials:

\[W^k = \begin{pmatrix} T_k(H) & * \\ * & * \end{pmatrix}=U_{(1,m,0)-T_k(H)},\]

with \(T_k\) being the \(k\)-th Chebyshev polynomial. For more details see Sec. 7 in Ref. [2] (the property holds only for Hermitian block-encodings; for a generalization see the third technical note below). We can thus simply combine the Jacobi–Anger expansion with the LCU technique to obtain the block-encoding for the Hamiltonian simulation. Namely, we have an \(\epsilon\)-approximation \(e^{ixt}\approx \sum^d_{k=0} \beta_{k} T_{k}(x)\), for which we can perform the following encoding

\[U_{(\bar{\beta},\tilde{m},\epsilon)-\exp{(iHt)}} = \begin{pmatrix} \exp{(iHt)}/\bar{\beta} & * \\ * & * \end{pmatrix}= \begin{pmatrix} \frac{1}{\bar{\beta}}\sum^d_{k=0} \beta_{k} U_{(1,m,0)-T_k(H)} & * \\ * & * \end{pmatrix}= \begin{pmatrix} \frac{1}{\bar{\beta}}\sum^d_{k=0} \beta_{k} T_{k}(H) & * \\ * & * \end{pmatrix} =\begin{pmatrix} \frac{1}{\bar{\beta}}\sum^d_{k=0} \beta_k W^k & * \\ * & * \end{pmatrix},\]

where \(\tilde{m}=m+\lceil \log_2(d+1) \rceil\) (recall that the powers \(W^k\) are block-encodings themselves, each with block size \(m\)).
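
Before building the quantum model, it is instructive to check this Chebyshev property numerically. Below is a small classical sketch (for intuition only, not part of the quantum model) that reuses prepare_probs_ham, lcu_size_ham, normalization_ham, PAULI_MATRICES_DICT, and matrix from above; the prepare unitary is taken as an arbitrary completion of the column \(\sqrt{p_i}\), and the walk operator is written as \(W=(2\Pi-I)U\) (global-phase conventions may differ from the quantum implementation below).

dim_block = 2**lcu_size_ham
dim_data = matrix.shape[0]

# "prepare" as a matrix: any unitary whose first column is sqrt(prepare_probs_ham)
col = np.sqrt(np.array(prepare_probs_ham))
prep, _ = np.linalg.qr(np.column_stack([col, np.eye(dim_block)[:, 1:]]))
if prep[:, 0] @ col < 0:  # undo a possible sign flip from the QR decomposition
    prep = -prep

# "select" as a matrix: apply the i-th Pauli string when the block variable is |i>
select = np.zeros((dim_block * dim_data, dim_block * dim_data), dtype=complex)
for i in range(dim_block):
    proj_i = np.zeros((dim_block, dim_block))
    proj_i[i, i] = 1
    if i < len(HAMILTONIAN):
        u_i = PAULI_MATRICES_DICT[HAMILTONIAN[i].pauli[0]]
        for p in HAMILTONIAN[i].pauli[1:]:
            u_i = np.kron(u_i, PAULI_MATRICES_DICT[p])
    else:
        u_i = np.eye(dim_data)
    select += np.kron(proj_i, u_i)

# Hermitian LCU block-encoding U = prepare^dagger * select * prepare
prep_full = np.kron(prep, np.eye(dim_data))
u_lcu = prep_full.conj().T @ select @ prep_full

# walk operator W = (2|0><0|_m - I) U, and the Chebyshev check for k = 3
reflection = 2 * np.kron(
    np.diag([1] + [0] * (dim_block - 1)), np.eye(dim_data)
) - np.eye(dim_block * dim_data)
walk = reflection @ u_lcu

k = 3
evals, evecs = np.linalg.eigh(matrix / normalization_ham)
t_k_ham = evecs @ np.diag(eval_chebyt(k, evals)) @ evecs.conj().T
assert np.allclose(np.linalg.matrix_power(walk, k)[:dim_data, :dim_data], t_k_ham)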

First, let us define a walk-operator for our specific example. For the reflection we can use the reflect_about_zero quantum function from Classiq's open library.

@qfunc
def my_walk_operator(block: QArray[QBit], data: QArray[QBit]) -> None:
    lcu_paulis(HAMILTONIAN, prepare_probs_ham, block, data)
    reflect_about_zero(block)
    RY(2 * pi, block[0])  # for the minus sign

Next, we use our pre-defined function to calculate the coefficients \(\beta_i\) for approximating the sine and cosine functions. Recall that we want to approximate \(e^{iHt}\), whereas our Hamiltonian is encoded with a normalization factor \(\bar{\alpha}\). Therefore, we must rescale the time by this factor.

normalized_time = normalization_ham * EVOLUTION_TIME

cos_coef, sin_coef = get_cheb_coef(EPS, normalized_time)

combined_sin_cos_coef = []
for k in range(len(cos_coef) - 1):
    combined_sin_cos_coef.append(cos_coef[k])
    combined_sin_cos_coef.append(sin_coef[k])
combined_sin_cos_coef.append(cos_coef[-1])
if len(sin_coef) == len(cos_coef):
    combined_sin_cos_coef.append(sin_coef[-1])

Given the coefficients, we now calculate the probabilities for the LCU prepare variable. These must be positive numbers. Moreover, the odd terms, which correspond to the sine function, should carry a factor of \(i\). These factors are absorbed into the unitary operations \(W^k\). To take this into account, for each term we define a "generalized sign" \(\sigma_k\) such that \(\beta_k = e^{i\frac{\pi}{2}\sigma_k} |\beta_k|\), with \(\sigma_k \in \{0,1,2,3\}\) corresponding to the factors \(1,i,-1,-i\), respectively.

signs_cheb_coef = np.sign(combined_sin_cos_coef).tolist()
# sign +1 maps to sigma=0 (even terms) or 1 (odd terms); sign -1 maps to 2 or 3
generalized_signs = [
    (1 - signs_cheb_coef[s]) + (s) % 2 for s in range(len(signs_cheb_coef))
]
positive_cheb_lcu_coef = np.abs(combined_sin_cos_coef)
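
As a quick numeric sanity check of this convention (not required for the algorithm itself), we can verify that \(e^{i\frac{\pi}{2}\sigma_k}|\beta_k|\) reproduces each coefficient, including the extra factor of \(i\) on the odd (sine) terms:

# target coefficients: the signed Chebyshev coefficients, with an i on odd terms
target_coef = [c * 1j ** (k % 2) for k, c in enumerate(combined_sin_cos_coef)]
reconstructed_coef = [
    np.exp(1j * np.pi / 2 * generalized_signs[k]) * positive_cheb_lcu_coef[k]
    for k in range(len(generalized_signs))
]
assert np.allclose(reconstructed_coef, target_coef)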

Now that we have our positive coefficients, we can use the pre-defined pre-process function to get the probabilities for the LCU:

normalization_exp, lcu_size_exp, prepare_probs_exp = get_normalized_lcu_coef(
    positive_cheb_lcu_coef
)
The size of the block encoding: 4
The normalized coefficients: [0.06646302105481754, 0.06750041778530919, 0.11492593070021927, 0.12879424749209276, 0.011890532706545038, 0.14147748237907418, 0.17674611046555386, 0.14131629436581197, 0.08704430568396189, 0.04437822442664004, 0.019463432939974195, 0, 0, 0, 0, 0]
The normalization factor: 4.007336014122894

We define a quantum function for a generic LCU of Chebyshev polynomials, which takes the prepare probabilities, the generalized signs, and a walk operator as a quantum callable.

@qfunc
def lcu_cheb(
    coef: CArray[CReal],
    generalized_signs: CArray[CInt],
    walk_operator: QCallable[QNum, QArray],
    walk_block: QNum,
    walk_data: QArray,
    cheb_block: QNum,
):

    within_apply(
        lambda: inplace_prepare_state(coef, 0.0, cheb_block),
        lambda: repeat(
            generalized_signs.len,
            lambda k: control(
                cheb_block == k,
                lambda: (
                    U(0, 0, 0, pi / 2 * generalized_signs[k], walk_data[0]),
                    power(k, lambda: walk_operator(walk_block, walk_data)),
                ),
            ),
        ),
    )

The code in the rest of this section builds a model that applies the lcu_cheb function on the randomly prepared vector \((\vec{b},\vec{0})\); synthesizes it; executes the resulting quantum program; and verifies the results.

@qfunc
def main(ham_block: Output[QNum], data: Output[QNum], exp_block: Output[QNum]):
    allocate(lcu_size_exp, exp_block)
    allocate(lcu_size_ham, ham_block)
    prepare_amplitudes(b, 0.0, data)
    lcu_cheb(
        prepare_probs_exp,
        generalized_signs,
        lambda x, y: my_walk_operator(x, y),
        ham_block,
        data,
        exp_block,
    )


qmod = create_model(main, execution_preferences=Execution_Prefs)
write_qmod(qmod, "hamiltonian_simulation_qubitization", decimal_precision=12)

We synthesize the quantum model and execute it

qprog = synthesize(qmod)
show(qprog)
results = execute(qprog).result()
state_result = get_projected_state_vector(
    results, "data", {"exp_block": 0.0, "ham_block": 0.0}, len(b)
)
expected_state = (
    1 / normalization_exp * scipy.linalg.expm(1j * matrix * EVOLUTION_TIME) @ b
)
relative_phase = np.angle(expected_state[0] / state_result[0])
state_result = state_result * np.exp(
    1j * relative_phase
)  # rotate according to a global phase
print("expected state:", expected_state)
print("resulting state:", state_result)
assert np.linalg.norm(state_result - expected_state) < EPS
expected state: [ 0.04908573+0.08310984j  0.08368183+0.13370508j -0.13560328-0.02883374j
  0.03020667-0.08912324j]
resulting state: [ 0.04862036+0.0823219j   0.08185711+0.12983253j -0.13585603-0.02546379j
  0.03211653-0.0924592j ]
print(
    "overlap between expected and resulting state:",
    np.abs(np.vdot(state_result, expected_state))
    * normalization_exp
    / np.linalg.norm(state_result),
)
overlap between expected and resulting state: 0.9996884609316635

Hamiltonian Simulation with QSVT

QSVT is a general technique for block-encoding matrix polynomials via quantum signal processing. We refer to Ref. [3] and the QSVT notebook for more information on this generic and important subject. In the QSVT approach we repeatedly apply two operations: the block-encoding \(U_H\), and rotated reflections around the block space applied via an extra qubit. The values of the rotation angles can be found using the pyqsp package.

QSVT gives a block-encoding of polynomials with well-defined parity. Thus, we have to apply two QSVT blocks, one approximating the \(\cos(xt)\) function and one approximating the \(\sin(xt)\) function, and finally construct an LCU to block-encode the sum \(e^{ixt} = \cos(xt)+i\sin(xt)\). For a realization of such a model on real hardware, see Ref. [4].

We start by calculating the rotation angles for the sine and cosine polynomial approximations. As in the Qubitization method, we need to rescale the evolution time \(t\). In addition, pyqsp adds a factor of \(1/2\) to the sine and cosine polynomials.

import pyqsp
from pyqsp.angle_sequence import Polynomial, QuantumSignalProcessingPhases

pg_cos = pyqsp.poly.PolyCosineTX()
pcoefs_cos, scale_cos = pg_cos.generate(
    epsilon=EPS, tau=normalized_time, return_scale=True
)
poly_cos = Polynomial(pcoefs_cos)

pg_sin = pyqsp.poly.PolySineTX()
pcoefs_sin, scale_sin = pg_sin.generate(
    epsilon=EPS, tau=normalized_time, return_scale=True
)
poly_sin = Polynomial(pcoefs_sin)
12.104183283573647
R=6
[PolyCosineTX] rescaling by 0.5.
12.104183283573647
R=6
[PolySineTX] rescaling by 0.5.
ang_seq_cos = QuantumSignalProcessingPhases(
    poly_cos,
    signal_operator="Wx",
    method="laurent",
    measurement="x",
    tolerance=0.00001,  # relaxing the tolerance to get convergence
)

ang_seq_sin = QuantumSignalProcessingPhases(
    poly_sin,
    signal_operator="Wx",
    method="laurent",
    measurement="x",
    tolerance=0.00001,  # relaxing the tolerance to get convergence
)


# change W(x) to R(x), as the phases are given in the W(x) convention
def convert_phases_to_Rx_convention(ang_seq):

    phases = np.array(ang_seq)
    phases[1:-1] = phases[1:-1] - np.pi / 2
    phases[0] = phases[0] - np.pi / 4
    phases[-1] = phases[-1] + (2 * (len(phases) - 1) - 1) * np.pi / 4

    # verify conventions. minus is due to exp(-i*phi*z) in qsvt in comparison to qsp
    phases = -2 * phases

    return phases.tolist()

The lists of angles to use within the qsvt function are given by:

phases_cos = convert_phases_to_Rx_convention(ang_seq_cos)
phases_sin = convert_phases_to_Rx_convention(ang_seq_sin)

Next, we use Classiq's qsvt function to get the QSVT of the sine and cosine block encoding. This open library function has the following arguments:

  • phases: the series of rotations according to the quantum signal processing.

  • proj_cnot_1, proj_cnot_2: the projection operations for the block space; in our case these are identical, defined below as identify_block.

  • u: the block-encoding quantum function; in our case, the lcu_paulis function.

  • qvar: the combined variables of the block encoding function.

  • qsvt_aux: the extra qubit for the QSVT rotations.

We define a qsvt function for our specific Hamiltonian and approximation. Passing it the series of angles phases_cos or phases_sin generates the block-encodings:

\[U_{(2,m+1,\epsilon)-\cos(Ht)} = \begin{pmatrix} \cos(Ht)/2 & * \\ * & * \end{pmatrix}, \,\,\,\,\,\, U_{(2,m+1,\epsilon)-\sin(Ht)} = \begin{pmatrix} \sin(Ht)/2 & * \\ * & * \end{pmatrix}\]

where the extra qubit and the factor of \(1/2\) in the encoding come from the QSVT.

class BlockEncodedState(QStruct):
    block: QNum[lcu_size_ham, False, 0]
    data: QNum[np.ceil(np.log2(len(b))), False, 0]


@qfunc
def identify_block(state: BlockEncodedState, qubit: QBit):
    qubit ^= state.block == 0


@qfunc
def block_encode_hamiltonian(state: BlockEncodedState):
    lcu_paulis(
        HAMILTONIAN,
        prepare_probs_ham,
        state.block,
        state.data,
    )


# defining a qsvt for the specific example
@qfunc
def my_qsvt(phases: CArray[CReal], qsvt_aux: QBit, state: BlockEncodedState):
    qsvt(
        phase_seq=phases,
        proj_cnot_1=identify_block,
        proj_cnot_2=identify_block,
        u=block_encode_hamiltonian,
        qvar=state,
        aux=qsvt_aux,
    )

Finally, to get the desired Hamiltonian simulation, we apply an LCU with the two unitaries:

\[U_{(2,m+1,\epsilon)-\cos(Ht)} + iU_{(2,m+1,\epsilon)-\sin(Ht)}.\]

The normalized LCU coefficients are thus \((1/2,1/2)\), giving a \((4, m+2,\epsilon)\)-encoding of the Hamiltonian simulation:

\[U_{(4, m+2,\epsilon)-\exp{(iHt)}} = \begin{pmatrix} \frac{1}{4}\exp{(iHt)} & * \\ * & * \end{pmatrix}.\]

The code in the rest of this section builds a model that applies this block-encoding on the randomly prepared \((\vec{b},\vec{0})\); synthesizes it; executes the resulting quantum program; and verifies the results.

@qfunc
def main(
    qsvt_aux: Output[QBit], block_exp: Output[QBit], state: Output[BlockEncodedState]
):
    allocate(1, qsvt_aux)
    allocate(1, block_exp)
    allocate(state.size, state)
    inplace_prepare_amplitudes(b, 0.0, state.data)
    within_apply(
        lambda: H(block_exp),
        lambda: (
            control(
                block_exp == 0,  # cosine
                lambda: my_qsvt(phases_cos, qsvt_aux, state),
            ),
            control(
                block_exp == 1,  # sine
                lambda: (
                    U(0, 0, 0, pi / 2, qsvt_aux),  # for the i factor
                    my_qsvt(phases_sin, qsvt_aux, state),
                ),
            ),
        ),
    )


qmod = create_model(main, execution_preferences=Execution_Prefs)
write_qmod(qmod, "hamiltonian_simulation_qsvt", decimal_precision=12)
qprog = synthesize(qmod)
show(qprog)
results = execute(qprog).result()
state_result = get_projected_state_vector(
    results,
    "state.data",
    {"state.block": 0.0, "qsvt_aux": 0.0, "block_exp": 0.0},
    len(b),
)
expected_state = 1 / 4 * scipy.linalg.expm(1j * matrix * EVOLUTION_TIME) @ b
relative_phase = np.angle(expected_state[0] / state_result[0])
state_result = state_result * np.exp(
    1j * relative_phase
)  # rotate according to a global phase
print("expected state:", expected_state)
print("resulting state:", state_result)
assert np.linalg.norm(state_result - expected_state) < EPS
expected state: [ 0.04917575+0.08326226j  0.0838353 +0.13395029j -0.13585198-0.02888662j
  0.03026207-0.08928669j]
resulting state: [ 0.0491616 +0.0832383j   0.08385896+0.1338651j  -0.13595705-0.02883753j
  0.03031341-0.08926023j]
print(
    "overlap between expected and resulting state:",
    np.abs(np.vdot(state_result, expected_state)) * 4 / np.linalg.norm(state_result),
)
overlap between expected and resulting state: 0.9999998243682983

Technical Notes

  1. The basic usage of the block-encoding unitary: say we have an \((s, m, 0)\)-encoding of a matrix \(A\)
\[U_{(s, m, 0)-A} = \begin{pmatrix} A/s & * \\ * & * \end{pmatrix},\]

where the dimension of \(A\) is \(2^n\times 2^n\). If we apply this unitary on the state

\[|\psi\rangle_n|0\rangle_m =|\psi\rangle_n \otimes \begin{pmatrix} 1 \\ 0\\ \vdots \\ 0 \end{pmatrix} = \begin{pmatrix} |\psi\rangle_n \\ 0\\ \vdots \\ 0 \end{pmatrix},\]

we get

\[U_{(s, m, 0)-A} \left(|\psi\rangle_n|0\rangle_m \right) = \begin{pmatrix} \frac{1}{s}A|\psi\rangle_n \\ 0\\ \vdots \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ *\\ \vdots \\ * \end{pmatrix} = \frac{1}{s}A|\psi\rangle_n|0\rangle_m +\sum_{l\neq 0} c_l |\phi\rangle_n|l\rangle_m .\]

Thus, if we measure \(|0\rangle_m\) on the second variable, we know the first variable is in the state \(\frac{1}{s}A|\psi\rangle_n\) (up to normalization). In terms of measurement, we post-select on \(|0\rangle_m\) to obtain the desired state. The success probability of this operation, given by \(\left\|\frac{1}{s}A|\psi\rangle_n\right\|^2\), can be amplified with Quantum Amplitude Amplification.
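
For example, this logic can be checked numerically with the classical matrices built in the Qubitization section sketch (reusing u_lcu, dim_data, matrix, normalization_ham, and b from above; again, this is a sketch for intuition, not part of the quantum flow):

psi = np.concatenate([b, np.zeros(u_lcu.shape[0] - len(b))])  # the state (b, 0, ..., 0)
out = u_lcu @ psi
projected = out[:dim_data]  # the |0>_m block of the output state
print("post-selection success probability:", np.linalg.norm(projected) ** 2)
assert np.allclose(projected, matrix / normalization_ham @ np.array(b))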

  2. Generalization to Pauli strings with non-positive coefficients: We assumed that all the coefficients \(\alpha_i\) in the Hamiltonian decomposition into Pauli strings are positive. This assumption can easily be relaxed: non-positive or complex coefficients can be absorbed into the definition of the \(U_i\). Thus, one can modify get_normalized_lcu_coef to return the "generalized signs" and add phases to the Pauli strings accordingly, as was done in the Qubitization method above.

  3. Generalization to non-Hermitian block-encodings: In this tutorial we used a Hamiltonian block-encoding that is Hermitian, that is, the unitary \(U_{(s,m,0)-H}\) itself is Hermitian. For a non-Hermitian block-encoding unitary, the cited property of the walk operator \(W\) does not hold. However, a similar property holds for \(\tilde{W}\equiv U_{(s,m,0)-H}^\dagger\Pi_{|0\rangle_m} U_{(s,m,0)-H}\Pi_{|0\rangle_m}\); see Sec. 7.4 in Ref. [2].

[1]: Jacobi–Anger expansion.

[2]: Lin, L. "Lecture Notes on Quantum Algorithms for Scientific Computation." arXiv:2201.08309 [quant-ph].

[3]: Martyn, J. M., Rossi, Z. M., Tan, A. K., and Chuang, I. L. "Grand Unification of Quantum Algorithms." PRX Quantum 2, 040203 (2021).

[4]: Kikuchi, Y., Mc Keever, C., Coopmans, L., et al. "Realization of Quantum Signal Processing on a Noisy Quantum Computer." npj Quantum Inf 9, 93 (2023).

[5]: Szegedy, M. "Quantum Speed-up of Markov Chain Based Algorithms." In 45th Annual IEEE Symposium on Foundations of Computer Science, pages 32–41 (2004).