Hamiltonian Simulation with Quantum Signal Processing and Qubitization
Simulating physical and chemical systems was among the original motivations for quantum computing, as first envisioned by Richard Feynman in 1982, and remains one of its most impactful applications. Beyond this foundational goal, Hamiltonian simulation now serves as a key subroutine in many higher-level quantum algorithms, such as Quantum Linear Systems Solvers (HHL), and applications such as Quantum Gibbs State Sampling.

Time-independent Hamiltonian simulation refers to the task of approximately implementing the unitary evolution operator $e^{-iHt}$ for a Hermitian matrix $H$. When access to the Hamiltonian is provided via block-encoding, its time evolution can be realized by applying an appropriate polynomial transformation of the encoded operator, within a desired precision $\epsilon$. This polynomial transformation can be achieved via the Quantum Singular Value Transformation (QSVT) [1], Generalized QSP (GQSP) [2], or a direct block-encoding of Chebyshev polynomials [3] (referred to as the Qubitization technique hereafter). These methods represent the most powerful family of known algorithms for simulating the dynamics of quantum systems whose Hamiltonians can be efficiently block-encoded, having optimal scaling in the time $t$ and the error $\epsilon$.

Qubitization provides a direct approach that avoids classical preprocessing but generally requires more qubits, may involve amplitude amplification, and includes controlled operations over the block-encoding unitary. QSVT, in contrast, omits such controlled operations, uses two auxiliary qubits, and requires a classical preprocessing stage to determine the signal-processing angles. The GQSP method, like Qubitization, employs controlled block-encoding calls and, like QSVT, relies on classical angle computation, but it requires only one auxiliary qubit and eliminates the need for amplification.
Input:
A Hermitian operator (Hamiltonian) $H$, given through an oracle that implements a block-encoding of $H/\alpha$, where $\alpha \geq \|H\|$ and $\|\cdot\|$ is the spectral norm.
The evolution time t.
The target simulation error ϵ.
Output: A unitary $U$ that approximates the time evolution $e^{-iHt}$, such that $\|U - e^{-iHt}\| < \epsilon$ ($\|\cdot\|$ is the spectral norm).
Complexity: All three methods require $O\!\left(\alpha t + \frac{\log(1/\epsilon)}{\log\left(e + \log(1/\epsilon)/(\alpha t)\right)}\right)$ calls to the block-encoding unitary $U_H$. The methods differ in their classical preprocessing routines, depth, and number of auxiliary qubits.

Keywords: Hamiltonian Simulation, Block Encoding, Quantum Singular Value Transformation (QSVT), Qubitization, Generalized Quantum Signal Processing, Oracle/Query Complexity.
A block-encoded Hamiltonian refers to its embedding within a larger unitary matrix.

Definition: A $(s,m,\epsilon)$-encoding of a $2^n \times 2^n$ matrix $H$ refers to completing it into a $2^{n+m} \times 2^{n+m}$ unitary matrix $U_{(s,m,\epsilon)-H}$:
$$
U_{(s,m,\epsilon)-H} = \begin{pmatrix} H/s & * \\ * & * \end{pmatrix},
$$
with some functional error $\left\| \left(U_{(s,m,\epsilon)-H}\right)_{0:2^n-1,\,0:2^n-1} - H/s \right\| \leq \epsilon$.

The idea is that typically we would like to apply operations on non-unitary matrices. Since quantum circuits can only implement unitary operations, block-encoding embeds a generally non-unitary matrix into a larger unitary one. The scaling factor $s$ ensures that the overall operator remains unitary. For the most basic use of block-encoding, see the first item in the technical notes at the end of this notebook.

In all three methods presented in this notebook, we assume an exact $(s,m,0)$-encoding of some Hamiltonian as an input, where the output of each method implements an approximate encoding of its Hamiltonian evolution:
$$
U_{(s,m,0)-H} \rightarrow U_{(\tilde{s},\tilde{m},\epsilon)-\exp(-iHt)} = \begin{pmatrix} \exp(-iHt)/\tilde{s} & * \\ * & * \end{pmatrix}.
$$
This is achieved by implementing a block-encoding for the polynomial approximation of a function of the Hamiltonian, $P(H) \approx \frac{1}{\tilde{s}} e^{-iHt}$, with $\tilde{s}$ being some scaling factor. Note that to get the approximated unitary of $e^{-iHt}$, we have to apply amplitude amplification for $\tilde{s} \rightarrow 1$ (see, for example, the Oblivious Amplitude Amplification notebook), or apply some post-selection. The amplification step is beyond the scope of the current example, and we simply employ projected statevector simulation.
This notebook demonstrates how to implement Hamiltonian simulation with all three methods: GQSP, Qubitization, and QSVT. Each method is explained and defined independently. We start with some preliminary steps that are common to all three methods. In addition, the last section presents a summary that compares the properties of the three methods.
We start by setting some specific hyperparameters for our problem. These can easily be modified to treat different use cases.
We set a specific block-encoded Hamiltonian to evolve, taking a simple example of a Hamiltonian given as a sum of Pauli strings, and use the lcu_paulis function to block-encode it. The idea of the Linear Combination of Unitaries (LCU) technique, which also appears in the QSVT and Qubitization methods, is the following:

Definition: LCU refers to an $(\bar{\alpha},m,0)$-block-encoding of a matrix given as a sum of unitaries:
$$
U_{(\bar{\alpha},m,0)-A} = \begin{pmatrix} A/\bar{\alpha} & * \\ * & * \end{pmatrix}, \quad \text{for } A = \sum_{i=0}^{L-1}\alpha_i U_i, \quad \alpha_i \geq 0,
$$
with $\bar{\alpha} \equiv \sum_{i=0}^{L-1}\alpha_i$ and $m = \lceil \log_2(L) \rceil$.

The LCU is implemented with the so-called "prepare" and "select" operations. The former prepares a quantum state that corresponds to the probabilities $\alpha_i/\sum_i \alpha_i$, and the latter acts as a quantum switch operation. See the second item in the Technical Details at the end of this notebook, and the LCU tutorial, for more details.

We now set the input parameters for the specific problem, which include:
The time evolution t and the error ϵ.
The Hamiltonian H and its block-encoding quantum function.
The Hamiltonian to evolve: 0.4 + 0.05*Pauli.X(0)*Pauli.X(1) + 0.2*Pauli.Z(0)*Pauli.Z(1) + 0.1*Pauli.Z(1)
$$
H = \sum_{i=0}^{L-1}\alpha_i U_i,
$$
where the $U_i$ are Pauli strings. The LCU of the Hamiltonian corresponds to the unitary
$$
U_H = \begin{pmatrix} \frac{1}{s}\sum_{i=0}^{L-1}\alpha_i U_i & * \\ * & * \end{pmatrix}, \quad \text{with } s = \sum_i \alpha_i.
$$
Let us derive some additional parameters for the implementation, such as the block-encoding size and the scaling factor $s$.
```python
data_size = HAMILTONIAN.num_qubits
block_size = (
    (len(HAMILTONIAN.terms) - 1).bit_length() if len(HAMILTONIAN.terms) != 1 else 1
)
be_scaling = np.sum(np.abs([term.coefficient for term in HAMILTONIAN.terms]))
```
Next, we define the block-encoding quantum function and a Quantum Struct for its variables.
Working with block-encoding typically requires post-selection of the block variables being at the state $|0\rangle$.
The success probability of this process can be amplified via a Quantum Amplitude Amplification routine. In this notebook, instead, we simply work with a statevector simulator. We define a function that gets execution results and returns a projected state vector.
```python
# fix the execution preferences for this notebook
execution_preferences = ExecutionPreferences(
    num_shots=1,
    backend_preferences=ClassiqBackendPreferences(
        backend_name=ClassiqSimulatorBackendNames.SIMULATOR_STATEVECTOR
    ),
)
```
```python
def get_projected_state_vector(res) -> np.ndarray:
    """
    Returns a reduced statevector from execution results.

    Expects a 'data' variable, and a 'block' variable to be filtered out
    when not in the |0> state.
    """
    state_size = 2 ** len(res.output_qubits_map["data"])
    proj_statevector = np.zeros(state_size).astype(complex)
    df = res.dataframe
    # Filter only the successful states.
    filtered_st = df[(df.block == 0) & (np.abs(df.amplitude) > 1e-12)]
    proj_statevector[filtered_st.data] = filtered_st.amplitude
    return proj_statevector
```
Below we define a classical post-processing function that will be used to verify the quantum results by comparing them to the expected classical result.
```python
def compare_quantum_classical_states(
    expected_state: np.ndarray,
    resulted_state: np.ndarray,
    post_selection_factor: float,
):
    """
    Aligns global phase, renormalizes, and computes the overlap between the
    expected classical state and the one resulting from the quantum program.
    Since we work with a projected statevector, we need to insert the
    post-selection by hand.

    Parameters
    ----------
    expected_state : array_like of complex, the classical (reference) statevector.
    resulted_state : array_like of complex, the simulated statevector, projected
        onto the block=0 space.
    post_selection_factor : float, the post-selection success probability of
        block=0, to be applied uniformly to `resulted_state`.

    Returns
    -------
    renormalized_state : the `resulted_state` after global-phase alignment and
        multiplication by `post_selection_factor`.
    overlap : the absolute value of the normalized inner product between
        `renormalized_state` and `expected_state`.
    """
    relative_phase = np.angle(expected_state[0] / resulted_state[0])
    resulted_state = resulted_state * np.exp(
        1j * relative_phase
    )  # rotate according to a global phase
    renormalized_state = post_selection_factor * resulted_state
    overlap = (
        np.vdot(renormalized_state, expected_state)
        / np.linalg.norm(renormalized_state)
        / np.linalg.norm(expected_state)
    )
    return renormalized_state, abs(overlap)
```
The core of all three methods is the polynomial approximation of the evolution; this transformation relies on the Jacobi-Anger expansion [4]. In its Chebyshev form, the Jacobi-Anger expansion gives
$$
e^{itx} = J_0(t) + 2\sum_{k=1}^{\infty} i^k J_k(t)\, T_k(x) = \left.\sum_{k=-\infty}^{\infty} i^k J_k(t)\, e^{ik\theta}\right|_{x=\cos\theta}, \tag{1}
$$
where $J_k(x)$ and $T_k(x)$ are the Bessel function and Chebyshev polynomial of order $k$, respectively. The infinite series can be truncated at some degree $d$, giving an approximation for the functions, whose relation to the error bound $\epsilon$ is
$$
d = O\!\left(t + \frac{\log(1/\epsilon)}{\log\left(e + \log(1/\epsilon)/t\right)}\right). \tag{5}
$$
We can examine the approximation for a specific example:
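The truncation behavior can be sketched classically with numpy and scipy; the evolution time $t$ and truncation degree $d$ below are illustrative choices:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

t = 5.0  # illustrative evolution time
d = 16   # truncation degree of the expansion

x = np.linspace(-1, 1, 201)

# Truncated Jacobi-Anger expansion of e^{-itx} in the Chebyshev basis:
# e^{-itx} ~= J_0(-t) + 2 * sum_{k=1..d} i^k J_k(-t) T_k(x)
approx = np.full(x.shape, jv(0, -t), dtype=complex)
for k in range(1, d + 1):
    T_k = np.cos(k * np.arccos(x))  # Chebyshev polynomial T_k on [-1, 1]
    approx += 2 * (1j**k) * jv(k, -t) * T_k

max_err = np.max(np.abs(approx - np.exp(-1j * t * x)))
print(f"max |approx - exact| on [-1, 1]: {max_err:.2e}")
```

Because the Bessel functions $J_k(t)$ decay super-exponentially once $k$ exceeds $|t|$, a modest degree $d$ already gives a very small uniform error on $[-1,1]$, in line with Eq. (5).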
This part is relevant to the Qubitization and GQSP methods. These two methods assume that the block-encoding unitary $U_H$ is also Hermitian; a generalization is given in the third item of the Technical Details section. Given the block-encoding $U_{(s,m,0)-H}$, we can define the following unitary operator (usually called the Szegedy quantum walk operator [5]):
$$
W \equiv \Pi_{|0\rangle^m} U_{(s,m,0)-H},
$$
where $\Pi_{|0\rangle^m}$ is a reflection operator about the block state $|0\rangle^m$.
We start with defining a walk operator for our specific example.
For the reflection, we use the reflect_about_zero quantum function from the Classiq open library.
This unitary function has the following important properties (see Section 7 in Ref. [6]):

1. Its powers correspond to a $(1,m,0)$-encoding of the Chebyshev polynomials:
$$
W^k = \begin{pmatrix} T_k(H/s) & * \\ * & * \end{pmatrix} = U_{(1,m,0)-T_k(H/s)}, \tag{6}
$$
with $T_k$ being the $k$-th Chebyshev polynomial.
2. Its spectrum has a nice relation to the spectrum of the block-encoded Hamiltonian:
$$
\text{eigenvalues: } e^{\pm i \arccos(\lambda/s)}, \quad \text{with eigenvectors: } |\varphi_{\lambda}^{\pm}\rangle \equiv \frac{1}{\sqrt{2}}\left(|v_{\lambda}\rangle|0\rangle^m \pm i|\!\perp_{\lambda}\rangle\right), \tag{7}
$$
where $|v_{\lambda}\rangle$ is an eigenstate of the Hamiltonian $H$ with eigenvalue $\lambda$.
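The spectral relation of Eq. (7) can be checked numerically. The sketch below assumes, purely for illustration, a generic Hermitian completion $U = \begin{pmatrix} A & B \\ B & -A \end{pmatrix}$ with $A = H/s$ and $B = \sqrt{I - A^2}$ as the block-encoding (not the notebook's LCU construction):

```python
import numpy as np

rng = np.random.default_rng(7)

# A random 2x2 Hermitian "Hamiltonian" and a scaling factor s >= ||H||
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (M + M.conj().T) / 2
s = 1.5 * np.linalg.norm(H, 2)

# A Hermitian (s, 1, 0)-encoding: U = [[A, B], [B, -A]] with A = H/s and
# B = sqrt(I - A^2); A and B commute, making U both unitary and Hermitian
A = H / s
evals, V = np.linalg.eigh(A)
B = V @ np.diag(np.sqrt(1 - evals**2)) @ V.conj().T
U = np.block([[A, B], [B, -A]])

# Walk operator W = (reflection about block |0>) * U, as in the text
R = np.kron(np.diag([1.0, -1.0]), np.eye(2))  # block qubit is most significant
W = R @ U

# Eq. (7): the eigenphases of W come in pairs +/- arccos(lambda / s)
walk_phases = np.sort(np.abs(np.angle(np.linalg.eigvals(W))))
expected = np.sort(np.repeat(np.arccos(evals), 2))
err = np.max(np.abs(walk_phases - expected))
print(f"max deviation from +/- arccos(lambda/s): {err:.2e}")
```

Each Hamiltonian eigenvalue $\lambda$ contributes a two-dimensional invariant subspace on which $W$ acts as a rotation by $\arccos(\lambda/s)$, which is exactly what the printed deviation verifies.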
As a final sanity check of the preliminary definitions, and before moving to the more complex Hamiltonian simulation implementation, let us quickly verify the Hamiltonian block-encoding. For this, we define a model that applies $U_H$ to the state we would like to evolve, and verify that the resulting state, after post-selection, gives $(H/\bar{\alpha})\,b$.
Comment: The current implementation assumes that the block-encoding unitary is also a Hermitian matrix (see the third item in the Technical Notes below).

The GQSP method applies a polynomial transformation on a unitary; see the GQSP notebook. The idea is to work with the walk operator, applying to it the polynomial transformation
$$
P(z) = \sum_{k=-d}^{d} i^k J_k(-st)\, z^k,
$$
which approximates $e^{-ist\cos(\theta)}$ according to the Jacobi-Anger expansion, see Eq. (1) above. To see why this works, we use the second property of the walk operator, Eq. (7), to write
$$
W = \sum_{\lambda} e^{i\arccos(\lambda/s)} |\varphi_{\lambda}^{+}\rangle\langle\varphi_{\lambda}^{+}| + e^{-i\arccos(\lambda/s)} |\varphi_{\lambda}^{-}\rangle\langle\varphi_{\lambda}^{-}|,
$$
where the $\lambda$ are the eigenvalues of the Hamiltonian $H$. The polynomial transformation works on each eigenvalue and gives
$$
P(W) = \sum_{\lambda} P\!\left(e^{i\arccos(\lambda/s)}\right) |\varphi_{\lambda}^{+}\rangle\langle\varphi_{\lambda}^{+}| + P\!\left(e^{-i\arccos(\lambda/s)}\right) |\varphi_{\lambda}^{-}\rangle\langle\varphi_{\lambda}^{-}| = \sum_{\lambda} e^{-i\lambda t}\left(|\varphi_{\lambda}^{+}\rangle\langle\varphi_{\lambda}^{+}| + |\varphi_{\lambda}^{-}\rangle\langle\varphi_{\lambda}^{-}|\right).
$$
When this unitary is applied to some state $|\psi\rangle|0\rangle^m$ we get, approximately,
$$
P(W)|\psi\rangle|0\rangle^m \approx e^{-iHt}|\psi\rangle|0\rangle^m.
$$
Applying a GQSP requires calculating a series of GQSP angles, which rotate an extra qubit. For a stable convergence of this classical preprocessing, we need to rescale the polynomial to $\beta_{\rm gqsp} P(z)$ with some factor $\beta_{\rm gqsp} < 1$. Below we take $\beta_{\rm gqsp} = 0.99$.
We use the gqsp function from the open library and the classical gqsp_phases. The resulting unitary of the GQSP routine is a $(\beta_{\rm gqsp}^{-1}, m+1, \epsilon)$-encoding of the Hamiltonian simulation:
$$
U_{(\beta_{\rm gqsp}^{-1},\, m+1,\, \epsilon)-\exp(-iHt)} = \begin{pmatrix} \beta_{\rm gqsp}\exp(-iHt) & * \\ * & * \end{pmatrix},
$$
where $m$ is the block size of the Hamiltonian block-encoding.
Next, we construct a quantum function that gets the block-encoding function as a parameter. A negative power is needed, since the approximation of $e^{-ist\cos(\theta)}$ in Eq. (1) is a Laurent polynomial that includes negative powers as well.
The code in the rest of this section builds a model that applies the gqsp_hamiltonian_evolution function on the randomly prepared vector (ψ,0), synthesizes it, executes the resulting quantum program, and verifies the results.
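Before running the quantum model, the polynomial identity $P(W)|\psi\rangle|0\rangle^m \approx e^{-iHt}|\psi\rangle|0\rangle^m$ can be cross-checked classically. The sketch below assumes a generic Hermitian completion as the block-encoding (an illustrative stand-in for the LCU oracle; $t$ and $d$ are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import jv

rng = np.random.default_rng(3)

# Random Hermitian H and a Hermitian (s, 1, 0)-encoding U = [[A, B], [B, -A]]
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (M + M.conj().T) / 2
s = 1.2 * np.linalg.norm(H, 2)
A = H / s
evals, V = np.linalg.eigh(A)
B = V @ np.diag(np.sqrt(1 - evals**2)) @ V.conj().T
U = np.block([[A, B], [B, -A]])
W = np.kron(np.diag([1.0, -1.0]), np.eye(2)) @ U  # walk operator

t, d = 2.0, 30
# Laurent polynomial P(z) = sum_{k=-d}^{d} i^k J_k(-s t) z^k, evaluated at W
PW = sum(
    (1j**k) * jv(k, -s * t) * np.linalg.matrix_power(W, k)
    for k in range(-d, d + 1)
)

psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
state = np.kron([1.0, 0.0], psi)                       # |0>_block |psi>_data
target = np.kron([1.0, 0.0], expm(-1j * H * t) @ psi)  # |0>_block e^{-iHt}|psi>
err = np.linalg.norm(PW @ state - target)
print(f"|| P(W)|psi>|0> - e^(-iHt)|psi>|0> || = {err:.2e}")
```

Note that the negative powers $W^{-k}$ are realized here by `matrix_power` with a negative exponent, mirroring the need for inverse (negative-power) calls to the block-encoding in the quantum construction.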
The QSVT is a general technique for applying block-encodings of matrix polynomials via quantum signal processing. Refer to Ref. [1] and the QSVT notebook for more information about this generic and important subject. The QSVT supports polynomials with a well-defined parity. Thus, we apply two QSVT blocks, one approximating the $\cos(xt)$ function and one approximating the $\sin(xt)$ function, and then construct an LCU to get the block-encoding of the sum $e^{-ixt} = \cos(xt) - i\sin(xt)$. For a realization of such a model on real hardware, see Ref. [7].

The QSVT routine requires a pre-calculation of rotation angles on an extra qubit. For the stability of this classical preprocessing, it is essential to work with $\beta_{\cos}\cos(xt)$ and $\beta_{\sin}\sin(xt)$, where $\beta_{\cos}, \beta_{\sin} < 1$. Below we take $\beta_{\cos} = \beta_{\sin} = 0.99$.

To apply the LCU of the even and odd QSVT routines, we call the qsvt_lcu function from the open library, which performs a non-trivial, optimized select operation for the two QSVT calls, as described in the figure below. Overall, the LCU of the two unitaries, $U_{(\beta_{\cos}^{-1},\, m+1,\, \epsilon)-\cos(Ht)} - i\, U_{(\beta_{\sin}^{-1},\, m+1,\, \epsilon)-\sin(Ht)}$, gives a $(2\beta_{\cos}^{-1}, m+2, \epsilon)$-encoding of the Hamiltonian simulation:
$$
U_{(2\beta_{\cos}^{-1},\, m+2,\, \epsilon)-\exp(-iHt)} = \begin{pmatrix} \frac{\beta_{\cos}}{2}\exp(-iHt) & * \\ * & * \end{pmatrix},
$$
where $m$ is the block size of the Hamiltonian block-encoding.
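The parity split that motivates the two QSVT blocks can be verified classically: $\cos(xt)$ contains only even Chebyshev components and $\sin(xt)$ only odd ones, and their recombination reproduces $e^{-ixt}$. A minimal numpy/scipy sketch ($t$ and $d$ are illustrative choices):

```python
import numpy as np
from scipy.special import jv

t, d = 3.0, 20
x = np.linspace(-1, 1, 101)

# Even part: cos(tx) = J_0(t) + 2 * sum_{even k >= 2} (-1)^{k/2} J_k(t) T_k(x)
cos_approx = np.full_like(x, jv(0, t))
for k in range(2, d + 1, 2):
    cos_approx += 2 * (-1) ** (k // 2) * jv(k, t) * np.cos(k * np.arccos(x))

# Odd part: sin(tx) = 2 * sum_{odd k} (-1)^{(k-1)/2} J_k(t) T_k(x)
sin_approx = np.zeros_like(x)
for k in range(1, d + 1, 2):
    sin_approx += 2 * (-1) ** ((k - 1) // 2) * jv(k, t) * np.cos(k * np.arccos(x))

# Recombine as in the LCU step: e^{-itx} = cos(tx) - i sin(tx)
err = np.max(np.abs((cos_approx - 1j * sin_approx) - np.exp(-1j * t * x)))
print(f"max error: {err:.2e}")
```

These even and odd coefficient sets are exactly the real and imaginary parts of the Jacobi-Anger coefficients in Eq. (1), which is why each half has a definite parity suitable for a single QSVT block.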
Next, we use Classiq's prepare_select function, which allows a flexible implementation of the select operation in LCU. In our case we take this operation to be the qsvt_lcu function. The coefficients $(1/2, -i/2)$ are chosen to get the desired $e^{-ixt}$ polynomial.
The code in the rest of this section builds a model that applies the qsvt_hamiltonian_evolution function on the randomly prepared vector (ψ,0), synthesizes it, executes the resulting quantum program, and verifies the results.
Comment: The current implementation assumes that the block-encoding unitary is also a Hermitian matrix (see the third item in the Technical Notes below).

From the first property of the walk operator, Eq. (6), we can simply combine the Jacobi-Anger expansion with the LCU technique to get the block-encoding of the Hamiltonian simulation [8]. Namely, we have an $\epsilon$-approximation $\exp(-iHt) \approx \sum_{k=0}^{d}\beta_k T_k(H/s)$, for which we can perform the following encoding:
$$
U_{(\bar{\beta},\tilde{m},\epsilon)-\exp(-iHt)} = \begin{pmatrix} \exp(-iHt)/\bar{\beta} & * \\ * & * \end{pmatrix} \approx \begin{pmatrix} \frac{1}{\bar{\beta}}\sum_{k=0}^{d}\beta_k T_k(H/s) & * \\ * & * \end{pmatrix},
$$
which, recalling that $W^k = U_{(1,m,0)-T_k(H/s)}$, is an LCU of the walk-operator powers, $\frac{1}{\bar{\beta}}\sum_{k=0}^{d}\beta_k W^k$. Here $\tilde{m} = m + \lceil\log_2(d+1)\rceil$, where $m$ is the block size of the Hamiltonian block-encoding (the $W^k$ are block-encodings themselves, with block size $m$).
First, we use the predefined functions to calculate the coefficients $\beta_k$ for approximating the sine and cosine functions. Recall that we must rescale the time by the block-encoding scaling factor of $H$.
```python
t0 = time.perf_counter()
cheb_degree = poly_jacobi_anger_degree(EPS, EVOLUTION_TIME * be_scaling)
poly_cos = poly_jacobi_anger_cos(cheb_degree, EVOLUTION_TIME * be_scaling)
poly_sin = poly_jacobi_anger_sin(cheb_degree, EVOLUTION_TIME * be_scaling)
L = max(len(poly_sin), len(poly_cos))
odd = np.pad(poly_sin, (0, L - len(poly_sin)))
even = np.pad(poly_cos, (0, L - len(poly_cos)))
# we put a (-) sign as we want to simulate e^(-iHt) and not e^(iHt)
exp_coeffs = even - 1j * odd
classical_preprocess_time_chec_lcu = time.perf_counter() - t0
```
We need to build an LCU of the unitaries {Wk}, with the following parameters:
```python
exp_block_size = (len(exp_coeffs) - 1).bit_length() if len(exp_coeffs) != 1 else 1
print(f"The block size of the block-encoded Hamiltonian evolution is: {exp_block_size}")
exp_be_scaling = np.sum(np.abs(exp_coeffs))
print(
    f"The scaling factor for the block-encoded Hamiltonian evolution is: {exp_be_scaling}"
)
```
Output:
The block size of the block-encoded Hamiltonian evolution is: 6
The scaling factor for the block-encoded Hamiltonian evolution is: 5.626479051283921
In principle, we could use the lcu function from the open library; however, since the unitaries are just a series of powers of the same unitary, we can devise a better design for the select operator, as depicted in the figure below (similar to the construction of the QPE algorithm). Note that in this construction, the select operator runs over all powers $0, 1, \dots, 2^l-1$, with $l = \lceil\log_2(d+1)\rceil$. We build this specialized select in the following function:
Finally, we define a Quantum Struct for the Qubitization based block encoding, and build an lcu_cheb function that applies the LCU. We use Classiq’s prepare_select function, which allows a flexible implementation of the select operation in LCU. In our case we take this operation to be the select_powered_unitaries function.
The code in the rest of this section builds a model that applies the lcu_cheb function on the randomly prepared vector (ψ,0), synthesizes it, executes the resulting quantum program, and verifies the results.
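The QPE-like select design can be sanity-checked classically: with $l$ control qubits, ${\rm SELECT} = \sum_k |k\rangle\langle k| \otimes W^k$ factorizes into $l$ controlled powers $W^{2^j}$, one per control bit. A numpy sketch with a random unitary standing in for the walk operator (an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 2x2 unitary standing in for the walk operator W
W, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

l = 3  # number of control qubits; powers run over 0, 1, ..., 2^l - 1
dim = 2**l

# Direct construction of SELECT = sum_k |k><k| (x) W^k
select_direct = np.zeros((dim * 2, dim * 2), dtype=complex)
for k in range(dim):
    proj = np.zeros((dim, dim))
    proj[k, k] = 1.0
    select_direct += np.kron(proj, np.linalg.matrix_power(W, k))

# QPE-like construction: product of controlled-W^(2^j), one per control bit j
select_qpe = np.eye(dim * 2, dtype=complex)
for j in range(l):
    ctrl = np.zeros((dim * 2, dim * 2), dtype=complex)
    for k in range(dim):
        proj = np.zeros((dim, dim))
        proj[k, k] = 1.0
        # apply W^(2^j) only when bit j of the control value k is set
        op = np.linalg.matrix_power(W, 2**j) if (k >> j) & 1 else np.eye(2)
        ctrl += np.kron(proj, op)
    select_qpe = ctrl @ select_qpe

diff = np.max(np.abs(select_direct - select_qpe))
print(f"max |direct - QPE-like|: {diff:.2e}")
```

For each control value $k$, the product applies $W^{2^j}$ exactly for the set bits $j$ of $k$, giving $W^{\sum_j 2^j} = W^k$; this is why only $l$ controlled powers are needed instead of $2^l$ multi-controlled operations.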
As we can see, the QSVT gives better results in terms of CX counts; however, its larger scaling factor means that in practice, its depth/CX counts must be doubled in comparison to the GQSP result. For a scaling factor of 2 we need to apply one iteration of Oblivious Amplitude Amplification, whereas for a factor of 4 we need to call it twice. In other words, measuring the block at state 0 is 2 times less likely in the QSVT method compared to the GQSP one. This issue is taken into account in the "effective CX counts" measure presented in the table. Even with this fact, the QSVT gives better results in terms of gate count, using one extra qubit. The Chebyshev LCU approach (Qubitization) is less effective in all parameters, except that it requires no heavy preprocessing for calculating angles. We note that all three methods have the same asymptotic scaling, and the differences appearing in the tables are due to the detailed implementation of the current example.
The basic use of the block-encoding unitary is the following. Say we have a $(s,m,0)$-encoding of a matrix $A$,
$$
U_{(s,m,0)-A} = \begin{pmatrix} A/s & * \\ * & * \end{pmatrix},
$$
where the dimension of $A$ is $2^n \times 2^n$. Applying this unitary to the state
$$
|\psi\rangle_n |0\rangle_m = |\psi\rangle_n \otimes \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
$$
yields
$$
U_{(s,m,0)-A}\left(|\psi\rangle_n |0\rangle_m\right) = \frac{1}{s} A|\psi\rangle_n |0\rangle_m + \sum_{l\neq 0} c_l |\phi_l\rangle_n |l\rangle_m.
$$
Thus, if we measure $|0\rangle_m$ on the second variable, we know the first variable is in the state $\frac{1}{s}A|\psi\rangle_n$ (up to normalization). In terms of measurement, we can post-select on $|0\rangle_m$ to obtain the desired state. We can amplify the success of this operation, whose amplitude is $\left\|\frac{1}{s}A|\psi\rangle_n\right\|$, with Quantum Amplitude Amplification.
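This projection arithmetic can be sketched in numpy. The completion used below, built from the SVD of $A/s$, is one standard way to realize a $(s,1,0)$-encoding and is chosen here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A generic (non-unitary) 2x2 matrix A and a scaling s >= ||A||
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
s = 1.5 * np.linalg.norm(A, 2)
At = A / s

# One standard (s, 1, 0)-encoding, built from the SVD of A/s:
# U = [[A/s, sqrt(I - (A/s)(A/s)^dag)], [sqrt(I - (A/s)^dag (A/s)), -(A/s)^dag]]
u_, sv, vh = np.linalg.svd(At)
U = np.block(
    [
        [At, u_ @ np.diag(np.sqrt(1 - sv**2)) @ u_.conj().T],
        [vh.conj().T @ np.diag(np.sqrt(1 - sv**2)) @ vh, -At.conj().T],
    ]
)
assert np.allclose(U @ U.conj().T, np.eye(4))  # sanity check: U is unitary

# Apply U to |0>_block |psi>_data and project the block onto |0>
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
out = U @ np.kron([1.0, 0.0], psi)
projected = out[:2]  # block qubit is most significant: first two amplitudes

err = np.linalg.norm(projected - At @ psi)
print(f"||projected - (A/s)|psi>|| = {err:.2e}")
```

The norm of `projected` is the post-selection amplitude discussed above; the smaller $\|A/s\|$ is, the less likely the $|0\rangle_m$ outcome becomes, which is exactly what amplitude amplification compensates for.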
The task of LCU is to block-encode a sum of unitaries:
$$
U_{(\bar{\alpha},m,0)-A} = \begin{pmatrix} A/\bar{\alpha} & * \\ * & * \end{pmatrix}, \quad \text{for } A = \sum_{i=0}^{L-1}\alpha_i U_i, \quad \alpha_i \geq 0.
$$
This operation is implemented by the so-called prepare and select operations. The prepare function is simply a state-preparation function on the block variable:
$$
{\rm PREPARE} \equiv I \otimes \left( \sum_{i=0}^{L-1}\sqrt{\alpha_i/\bar{\alpha}}\, |i\rangle\langle 0| + \sum_{i=0}^{L-1}\sum_{j=1}^{L-1} g_{ij}\, |i\rangle\langle j| \right),
$$
where the $g_{ij}$ are some uninteresting coefficients, and $I$ is the identity operation on the data space. The select operation acts as a quantum switch, applying the operation $U_i$ on the data variable if the block variable is at state $|i\rangle$:
$$
{\rm SELECT} \equiv \sum_i U_i \otimes |i\rangle\langle i|.
$$
The LCU operation is achieved by applying PREPARE, then SELECT, then ${\rm PREPARE}^{\dagger}$, i.e., the matrix product ${\rm PREPARE}^{\dagger}\cdot{\rm SELECT}\cdot{\rm PREPARE}$. One can check that
$$
\left( I \otimes \left( \sum_{i=0}^{L-1}\sqrt{\alpha_i/\bar{\alpha}}\, |0\rangle\langle i| + \sum_{i=0}^{L-1}\sum_{j=1}^{L-1} g^*_{ij}\, |j\rangle\langle i| \right)\right)
\left( \sum_{i'} U_{i'} \otimes |i'\rangle\langle i'| \right)
\left( I \otimes \left( \sum_{i''=0}^{L-1}\sqrt{\alpha_{i''}/\bar{\alpha}}\, |i''\rangle\langle 0| + \sum_{i''=0}^{L-1}\sum_{j''=1}^{L-1} g_{i''j''}\, |i''\rangle\langle j''| \right)\right)
= \left( \sum_{i=0}^{L-1}\frac{\alpha_i}{\bar{\alpha}}\, U_i \right) \otimes |0\rangle\langle 0| + \sum_{(i,j)\neq(0,0)} G_{ij} \otimes |i\rangle\langle j|,
$$
with $G_{ij}$ some uninteresting operators.
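The PREPARE/SELECT algebra above can be verified directly with small matrices. The sketch below uses illustrative Pauli coefficients and a Householder reflection as one possible PREPARE (any unitary with the correct first column works):

```python
import numpy as np

# LCU of Pauli unitaries: A = 0.5*I + 0.3*X + 0.2*Z (illustrative coefficients)
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
alphas = np.array([0.5, 0.3, 0.2, 0.0])  # padded to a power of two
unitaries = [I2, X, Z, I2]
alpha_bar = alphas.sum()

# PREPARE: any unitary whose first column is sqrt(alpha_i / alpha_bar);
# here a Householder reflection mapping |0> to that column
col = np.sqrt(alphas / alpha_bar)
v = col - np.eye(4)[:, 0]
PREP = np.eye(4) - 2 * np.outer(v, v) / (v @ v)

# SELECT = sum_i |i><i| (x) U_i  (block register taken as most significant)
SEL = sum(np.kron(np.diag(np.eye(4)[i]), unitaries[i]) for i in range(4))

# Circuit order PREPARE, SELECT, PREPARE^dag == matrix product P^dag S P
LCU = np.kron(PREP.conj().T, I2) @ SEL @ np.kron(PREP, I2)

A = 0.5 * I2 + 0.3 * X + 0.2 * Z
err = np.linalg.norm(LCU[:2, :2] - A / alpha_bar)
print(f"||top-left block - A/alpha_bar|| = {err:.2e}")
```

The top-left $2\times 2$ block of the product is $\sum_i (\alpha_i/\bar{\alpha}) U_i = A/\bar{\alpha}$, as the derivation above predicts.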
The GQSP and Qubitization methods assume that the block-encoding unitary is also Hermitian. This assumption establishes the two properties of the walk operator, Eqs. (6) and (7), on which those two methods are based. In the case of a non-Hermitian unitary block-encoding, the cited property of the walk operator $W$ does not hold. However, a similar property holds for $\tilde{W} \equiv U^{\dagger}_{(s,m,0)-H}\, \Pi_{|0\rangle^m}\, U_{(s,m,0)-H}\, \Pi_{|0\rangle^m}$ (see Sec. 7.4 in Ref. [6]). We can work with this operator, and with its equivalent properties that generalize Eqs. (6) and (7).