Quantum Phase Estimation (QPE) for Solving Molecular Energies
Quantum Phase Estimation (QPE) is a key algorithm in quantum computing: it estimates the phase (eigenvalue) associated with an eigenvector of a unitary operation. For a given Hamiltonian \(H\) and an eigenvector \(|\psi\rangle\) of \(H\), the output of the algorithm is a phase \(\epsilon\) such that
\(U|\psi\rangle = e^{2\pi i\epsilon}|\psi\rangle, \qquad U = e^{2\pi iH}.\)
Therefore, by measuring the accumulated phase, the QPE algorithm allows us to calculate the energies associated with the chosen initial state. When using QPE for chemistry problems, one commonly searches for the lowest energy of a given molecule. Since the molecule can be written as a Hamiltonian (a Hermitian matrix representing the energetics of the structure), one only needs to feed in the ground eigenvector to obtain the minimal energy value with QPE. However, obtaining the ground state is itself not a trivial problem. To overcome this, it is sufficient to use a state with a large overlap with the ground state.
We define a state \(|v\rangle\), which is chosen as the algorithm's initial state. Let \(\{|\psi_i\rangle\}\) denote the set of (unknown) eigenvectors of \(H\). In general, any vector can be written as a superposition over a complete basis, thus
\({|v\rangle} = \sum_i a_i{|\psi_i\rangle}\)
and
\(U{|v\rangle} = \sum_i a_i e^{2\pi i\epsilon_i}{|\psi_i\rangle}\).
where \(\epsilon_i\) are the eigenvalues of \(H\), i.e., the spectrum of energies of the molecule. By executing with enough shots, one obtains this set of \(\epsilon_i\), i.e., a subset of the Hamiltonian's eigenvalues. As we are specifically interested in \(\epsilon_0\), the ground-state energy of \(H\), it is important that the overlap between \(|\psi_0\rangle\) and \(|v\rangle\) is large, so that the probability of measuring \(\epsilon_0\) is high, i.e.,
\(P(\epsilon_0) = |\langle v|\psi_0\rangle|^2 > \zeta\).
How large should \(\zeta\) be? After execution, we obtain a set of sampled energies \(\{\epsilon_i\}\). If we run 1000 shots and \(P(\epsilon_0)>1\%\), we expect to sample \(\epsilon_0\) about 10 times.
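For intuition, here is a small numerical sketch (using a hypothetical 2×2 toy Hamiltonian, unrelated to the molecules below) of how the overlap translates into expected sampling counts:
import numpy as np

# A hypothetical 2x2 toy Hamiltonian, just to illustrate the overlap argument
H_toy = np.array([[0.0, 0.3], [0.3, 1.0]])
eigvals, eigvecs = np.linalg.eigh(H_toy)
ground_state = eigvecs[:, 0]  # |psi_0>, the lowest-energy eigenvector

v = np.array([1.0, 0.0])  # a simple initial-state guess |v>
overlap = np.abs(v.conj() @ ground_state) ** 2  # P(epsilon_0) = |<v|psi_0>|^2

shots = 1000
print(f"P(epsilon_0) = {overlap:.3f}")
print(f"Expected ground-state samples out of {shots} shots: {overlap * shots:.0f}")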
A common choice for the initial state \(|v\rangle\) is the Hartree-Fock (HF) state, which typically has a large overlap with the ground state. Other initial-state guesses may fit as well or even better; choosing the right initial state is something of an art and an active field of research.
For further reading about QPE we recommend [1].
What are the benefits of using the QPE algorithm for finding a molecule's ground state?
The two most prominent methods for finding molecular ground-state energies are the Variational Quantum Eigensolver (VQE) and QPE. Both promise better scalability than their classical counterparts as molecules become more complex, i.e., as the number of electrons, and hence the number of degrees of freedom of the physical problem, grows.
The number of parameters in VQE is closely tied to the number of electrons. This creates an inherent difficulty in reaching high-precision results via sampled statistical estimators, and the optimization may even fail to converge for very large systems. In contrast, the number of parameters in QPE is a flexible quantity that is directly related to the desired resolution of the problem, and it is not bound to the number of electrons.
Furthermore, it is known that advanced quantum algorithms based on QPE can perform electronic-structure calculations in sub-exponential time with accuracy that rivals exact diagonalization methods, simultaneously achieving a level of accuracy, efficiency, and generality that is believed to be impossible for classical algorithms. For these reasons, VQE is applicable in the near-term (NISQ) era, while QPE is suited for fault-tolerant hardware.
In this tutorial, we follow the QPE algorithm steps:

- Define a molecule and convert it into a Hamiltonian.
- Prepare the Hamiltonian for QPE, including normalization and trimming of negligible terms.
- Construct a quantum model: initialize the state to the HF state and leverage the qpe_flexible function.
- Execute the circuit to find the relevant phases, and analyze the results to find the ground state.
## Imports
import matplotlib.pyplot as plt
import numpy as np
from classiq import *
# for chemistry
from classiq.applications.chemistry import Molecule, MoleculeProblem
from classiq.applications.combinatorial_helpers.pauli_helpers.pauli_utils import (
pauli_operator_to_hamiltonian,
)
Defining a molecule with Classiq
In this tutorial we choose to work on the LiH molecule:
# build your molecule
molecule_LiH = Molecule(atoms=[("H", (0.0, 0.0, 0.0)), ("Li", (0.0, 0.0, 1.596))])
# Example of some other molecules:
molecule_H2 = Molecule(atoms=[("H", (0.0, 0.0, 0)), ("H", (0.0, 0.0, 0.735))])
molecule_O2 = Molecule(atoms=[("O", (0.0, 0.0, 0)), ("O", (0.0, 0.0, 1.16))])
molecule_H2O = Molecule(
atoms=[("O", (0.0, 0.0, 0.0)), ("H", (0, 0.586, 0.757)), ("H", (0, 0.586, -0.757))]
)
molecule_BeH2 = Molecule(
atoms=[("Be", (0.0, 0.0, 0.0)), ("H", (0, 0, 1.334)), ("H", (0, 0, -1.334))]
)
molecule = molecule_LiH
# define your molecule problem
gs_problem = MoleculeProblem(
molecule=molecule,
basis="sto3g",
mapping="jordan_wigner", #'bravyi_kitaev'
z2_symmetries=True,
freeze_core=True,
)
operator = gs_problem.generate_hamiltonian()
gs_problem = gs_problem.update_problem(operator.num_qubits)
mol_hamiltonian = pauli_operator_to_hamiltonian(operator.pauli_list)
problem_size = len(mol_hamiltonian[0].pauli)
print(
f"The Hamiltonian is defined on {problem_size} qubits, and contains {len(mol_hamiltonian)} Pauli strings"
)
The Hamiltonian is defined on 6 qubits, and contains 231 Pauli strings
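To get a feel for its structure, we can peek at the largest terms (a small inspection snippet relying only on the pauli and coefficient attributes used later in this notebook, and assuming the Pauli enum exposes the standard .name attribute):
# Sort the Pauli terms by coefficient magnitude and print the five largest
largest_terms = sorted(mol_hamiltonian, key=lambda term: np.abs(term.coefficient), reverse=True)[:5]
for term in largest_terms:
    pauli_string = "".join(p.name for p in term.pauli)
    print(f"{pauli_string}: {term.coefficient}")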
Finally, we calculate the ground state energy as a reference solution to our quantum solver (note that the Hamiltonian obtained from the Molecule object excludes the nuclear repulsion energy).
mat = hamiltonian_to_matrix(mol_hamiltonian)
w, v = np.linalg.eig(mat)
classical_sol = np.real(min(w))
print(f"Expected energy (without nuclear repulsion energy): {classical_sol} Ha")
Expected energy (without nuclear repulsion energy): -1.0789776863285236 Ha
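Since the Hamiltonian matrix is Hermitian, an equivalent way to obtain the same reference value is via the Hermitian eigensolver np.linalg.eigvalsh; this optional cross-check is not part of the original flow:
# eigvalsh returns real eigenvalues in ascending order for a Hermitian matrix
classical_sol_check = np.linalg.eigvalsh(mat)[0]
print(f"Cross-check with eigvalsh: {classical_sol_check} Ha")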
Preparing the Molecule for QPE
Trimming the Hamiltonian
As we can see, the Hamiltonian may contain a large number of terms. In many cases we can compress the Hamiltonian by trimming small terms.
coeffs = [term.coefficient for term in mol_hamiltonian]
plt.semilogy(np.sort(np.abs(coeffs))[::-1], "o")
plt.ylabel(r"$|\alpha_i|$", fontsize=16)
plt.xlabel(r"$i$", fontsize=16)
plt.tick_params(axis="both", labelsize=16)
plt.title(
r"Sorted coefficients size for Hamiltonian $H = \sum \alpha_i P_i$ ($P_i$ Pauli string)"
);
We define some threshold and trim the Hamiltonian accordingly:
def trim_hamiltonian(hamiltonian, threshold):
return [
PauliTerm(pauli=term.pauli, coefficient=term.coefficient)
for term in hamiltonian
if np.abs(term.coefficient) > threshold
]
THRESHOLD = 0.03
trimmed_mol_hamiltonian = trim_hamiltonian(mol_hamiltonian, THRESHOLD)
print(f"Length of trimmed Hamiltonian: {len(trimmed_mol_hamiltonian)}")
Length of trimmed Hamiltonian: 49
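As a rough sanity check on the trimming, the sum of the absolute values of the dropped coefficients upper-bounds the spectral norm of the perturbation, and hence (by Weyl's inequality) the shift of any eigenvalue; this estimate is not part of the original flow:
dropped_weight = sum(
    np.abs(term.coefficient)
    for term in mol_hamiltonian
    if np.abs(term.coefficient) <= THRESHOLD
)
print(f"Upper bound on the eigenvalue shift due to trimming: {dropped_weight} Ha")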
Normalizing the Hamiltonian for QPE
Since we are working with QPE, the ground-state energy is inferred as a phase; we therefore normalize the Hamiltonian such that its eigenvalues lie in \(\left[-\frac{1}{2},\frac{1}{2}\right)\). This is done by finding an upper bound \(\tilde{\lambda}_{\max}\) on the maximal absolute eigenvalue and dividing the Hamiltonian by \(2\tilde{\lambda}_{\max}\). A simple bound is given by the sum of the absolute values of the Pauli coefficients of the Hamiltonian.
def normalize_hamiltonian(hamiltonian):
approx_lambda_max = sum(np.abs(term.coefficient) for term in hamiltonian)
normalization = 2 * approx_lambda_max
normalized_mol_hamiltonian = [
PauliTerm(pauli=term.pauli, coefficient=term.coefficient / (normalization))
for term in hamiltonian
]
return normalization, normalized_mol_hamiltonian
normalization, normalized_mol_hamiltonian = normalize_hamiltonian(
trimmed_mol_hamiltonian
)
print(f"The normalization value of the Hamiltonian is {normalization}")
The normalization value of the Hamiltonian is 17.61546220154492
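As a quick sanity check, the 1-norm of the normalized coefficients should be exactly 1/2, which guarantees that all eigenvalues lie in \([-1/2, 1/2]\):
normalized_coeff_sum = sum(np.abs(term.coefficient) for term in normalized_mol_hamiltonian)
print(f"Sum of |coefficients| after normalization: {normalized_coeff_sum}")  # expected: 0.5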
In our QPE model we define the phase variable as a fixed-point quantum number in \([-1/2,1/2)\): QNum[QPE_SIZE, SIGNED, QPE_SIZE]. Thus, we need to post-process the algorithm's result by multiplying back the normalization factor (see the Measurement and Analysis section below).
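For readers unfamiliar with this convention, the following pure-Python sketch illustrates how an n-bit string is interpreted as a signed (two's-complement) fixed-point number in \([-1/2, 1/2)\). Classiq's parsed results perform this decoding automatically, so the snippet is for illustration only (the bit ordering here is a simplifying assumption):
def decode_signed_fixed_point(bits: str) -> float:
    """Interpret an n-bit string as a two's-complement integer scaled by 2**(-n)."""
    n = len(bits)
    value = int(bits, 2)
    if value >= 2 ** (n - 1):  # negative branch of two's complement
        value -= 2**n
    return value / 2**n

# Examples with 3 bits: values land on a grid of spacing 1/8 inside [-1/2, 1/2)
print([decode_signed_fixed_point(b) for b in ["000", "011", "100", "111"]])  # [0.0, 0.375, -0.5, -0.125]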
Designing the Quantum Model
Defining a powered Hamiltonian simulation
We now create a quantum model of the QPE algorithm using the Classiq platform. In particular, we use the open-library qpe_flexible function (see the corresponding notebook as well).
We need to approximate the Hamiltonian simulation \(e^{2\pi i H}\). For this we use Classiq's built-in implementation of Suzuki Trotter formulas. For a given Suzuki Trotter order \(o\), we can specify a repetitions parameter \(r\), which controls the level of approximation. The literature provides lower bounds on \(r\) as a function of the operator error \(\epsilon\) (defined via the diamond norm [3]). For example, Eq. (14) in Ref. [2] states that the Suzuki Trotter formula of order 2 approximates \(e^{i \sum_m \alpha_m P_m t}\) up to an error \(\epsilon\), given \(r\) repetitions satisfying

\[ r \geq \sqrt{\frac{2^5\, \gamma_2\, t^3}{3\epsilon}}, \qquad (1) \]
where \(\gamma_2 \equiv \sum_{l,m,n} |\alpha_m\alpha_n\alpha_l| \left\|\left[P_l,\left[P_m, P_n\right]\right]\right\|_\infty\). In particular, we can see that the number of repetitions grows as \(t^{3/2}\).
In QPE, we apply a powered Hamiltonian simulation,

\[ U^{p} = \left(e^{2\pi i H}\right)^{p} = e^{2\pi i H p}, \qquad p = 2^0, 2^1, \dots, 2^{n-1}, \qquad (2) \]

where \(n\) is the size of the phase register. Each power should be approximated with Suzuki Trotter with appropriate order and repetitions parameters, keeping the same error per QPE iteration. We can thus use the bound above to define a powered Suzuki Trotter qfunc for our specific molecule.
First, we define some classical auxiliary functions that help us evaluate the right-hand side of Eq. (1):
import itertools
def multiply_single_qubit(pa, pb):
"""
Multiply two single-qubit Pauli operators pa*pb.
Returns (phase, pc) where:
- phase in {1, -1, 1j, -1j}
- pc is the resulting single-qubit Pauli.
"""
table = {
(Pauli.I, Pauli.I): (1, Pauli.I),
(Pauli.I, Pauli.X): (1, Pauli.X),
(Pauli.I, Pauli.Y): (1, Pauli.Y),
(Pauli.I, Pauli.Z): (1, Pauli.Z),
(Pauli.X, Pauli.I): (1, Pauli.X),
(Pauli.X, Pauli.X): (1, Pauli.I),
(Pauli.X, Pauli.Y): (1j, Pauli.Z), # X*Y = iZ
(Pauli.X, Pauli.Z): (-1j, Pauli.Y), # X*Z = -iY
(Pauli.Y, Pauli.I): (1, Pauli.Y),
(Pauli.Y, Pauli.X): (-1j, Pauli.Z), # Y*X = -iZ
(Pauli.Y, Pauli.Y): (1, Pauli.I),
(Pauli.Y, Pauli.Z): (1j, Pauli.X), # Y*Z = iX
(Pauli.Z, Pauli.I): (1, Pauli.Z),
(Pauli.Z, Pauli.X): (1j, Pauli.Y), # Z*X = iY
(Pauli.Z, Pauli.Y): (-1j, Pauli.X), # Z*Y = -iX
(Pauli.Z, Pauli.Z): (1, Pauli.I),
}
return table[(pa, pb)]
def multiply_pauli_lists(pA_list, pB_list):
"""
Multiply two multi-qubit Pauli lists A and B.
Returns PauliTerm(pauli, coefficient), where:
- coefficient in {1, -1, 1j, -1j}
- pauli is a Pauli list representing AB.
"""
if len(pA_list) != len(pB_list):
raise ValueError("Pauli lists must have the same length.")
phase_total = 1 + 0j # start with complex 1
product = []
for pA, pB in zip(pA_list, pB_list):
phase_local, pC = multiply_single_qubit(pA, pB)
phase_total *= phase_local
product.append(pC)
return PauliTerm(pauli=product, coefficient=phase_total)
def commutator_infinity_norm(pA_list, pB_list):
"""
Compute the infinity norm of the commutator [A,B]. The commutation relation between two Pauli lists
can be inferred by counting the number of qubit positions i where A[i] != B[i],
ignoring positions where A[i] or B[i] equals Pauli.I.
Then:
- If the count is even, A and B commute => norm = 0
- If the count is odd, A and B anticommute => norm = 2
"""
if len(pA_list) != len(pB_list):
raise ValueError("Pauli strings must have the same length.")
# Count positions where A and B are both in {X,Y,Z}, differ, and none is I
difference_count = 0
for pA, pB in zip(pA_list, pB_list):
if pA != Pauli.I and pB != Pauli.I and pA != pB:
difference_count += 1
return 2 * (difference_count % 2)
def calculate_gamma_2(hamiltonian):
"""
Compute the $\gamma_2$ value appearing in the bound for Suzuki Trotter of order 2
"""
gamma_2 = 0
for triplet in itertools.combinations(range(len(hamiltonian)), 3):
terms = [hamiltonian[index] for index in triplet]
factor = np.abs(
(terms[0].coefficient) * (terms[1].coefficient) * (terms[2].coefficient)
)
inner_commutator = commutator_infinity_norm(terms[1].pauli, terms[0].pauli)
if inner_commutator != 0:
outer_commutator = 2 * commutator_infinity_norm(
terms[2].pauli,
multiply_pauli_lists(terms[0].pauli, terms[1].pauli).pauli,
)
gamma_2 += factor * outer_commutator
return gamma_2
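As a quick sanity check of these helpers, we can evaluate \(\gamma_2\) on a small, hand-checkable example (a hypothetical two-qubit, three-term Hamiltonian with unit coefficients, unrelated to the molecule):
toy_hamiltonian = [
    PauliTerm(pauli=[Pauli.X, Pauli.I], coefficient=1.0),
    PauliTerm(pauli=[Pauli.Y, Pauli.I], coefficient=1.0),
    PauliTerm(pauli=[Pauli.X, Pauli.Z], coefficient=1.0),
]
# [X⊗I, Y⊗I] = 2i Z⊗I, and Z⊗I anticommutes with X⊗Z,
# so the nested commutator has infinity norm 4
print(calculate_gamma_2(toy_hamiltonian))  # expected: 4.0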
In QPE, the powers of the Hamiltonian simulation grow exponentially with the size of the phase variable. Let us examine the number of repetitions needed per QPE iteration, according to the bound above, for a QPE of size 7.
QPE_SIZE = 7
qpe_powers = 2 ** np.arange(QPE_SIZE)
print(
f"""The powers of the Hamiltonian simulation along a QPE routine of size {QPE_SIZE}:
{qpe_powers}"""
)
The powers of the Hamiltonian simulation along a QPE routine of size 7:
[ 1 2 4 8 16 32 64]
These powers enter as an evolution coefficient for the Hamiltonian simulation (see Eq. (2) above). Using the theoretical bound we find
EPS = 0.1
gamma_2_LiH = calculate_gamma_2(normalized_mol_hamiltonian)
theoretical_r0 = np.sqrt(2**5 * gamma_2_LiH / (3 * EPS)) * (2 * np.pi) ** (3 / 2)
print(
f"""The theoretical bounds for the repetitions for QPE size {QPE_SIZE}, keeping an error {EPS} per QPE iteration are:
{np.ceil(theoretical_r0*qpe_powers**(3/2))}"""
)
The theoretical bounds for the repetitions for QPE size 7, keeping an error 0.1 per QPE iteration are:
[ 4. 11. 30. 85. 240. 677. 1915.]
We note that applying a naive QPE, i.e., assuming a single unitary approximated with Suzuki Trotter, \(e^{iHt} \approx {\rm ST}(H, o, r, t)\), and simply taking its powers, gives \(\left(e^{iHt}\right)^p \approx \left({\rm ST}(H, o, r, t)\right)^{p} = {\rm ST}(H, o, pr, pt)\).
print(
f"""The repetitions for QPE size {QPE_SIZE}, taking a naive QPE, per QPE iteration:
{np.ceil(theoretical_r0*qpe_powers)}"""
)
The repetitions for QPE size 7, taking a naive QPE, per QPE iteration:
[ 4. 8. 15. 30. 60. 120. 240.]
While this naive QPE results in a shallower circuit than taking repetitions according to the theoretical bound (owing to the smaller repetition values), it is unclear whether it keeps the same operator error per phase bit.
In practice, the bounds given in the literature are quite loose. In this tutorial we therefore take a more experimental approach: we assume that the scaling of the bound with the evolution time \(t\) is as in Eq. (1), but take a smaller prefactor.
experimental_r0 = 0.05
print(
f"""The experimental repetitions for QPE size {QPE_SIZE}, per QPE iteration are:
{np.ceil(experimental_r0*qpe_powers**(3/2))}"""
)
The experimental repetitions for QPE size 7, per QPE iteration are:
[ 1. 1. 1. 2. 4. 10. 26.]
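As a rough proxy for the overall circuit depth, we can compare the total number of Trotter repetitions, summed over all QPE iterations, for the three choices above (a simple back-of-the-envelope comparison, not part of the original flow):
total_theoretical = int(np.ceil(theoretical_r0 * qpe_powers ** (3 / 2)).sum())
total_naive = int(np.ceil(theoretical_r0 * qpe_powers).sum())
total_experimental = int(np.ceil(experimental_r0 * qpe_powers ** (3 / 2)).sum())
print(
    f"Total Trotter repetitions - theoretical bound: {total_theoretical}, "
    f"naive powering: {total_naive}, experimental choice: {total_experimental}"
)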
Now, we use this approach to define our powered Suzuki Trotter function, for the specific Hamiltonian at hand.
from classiq.qmod.symbolic import ceiling as ceiling_qmod, pi
@qfunc
def powered_st2_for_LiH(p: CInt, state: QArray[QBit]):
suzuki_trotter(
pauli_operator=normalized_mol_hamiltonian,
evolution_coefficient=-2 * pi * p,
order=2,
repetitions=ceiling_qmod(experimental_r0 * p ** (3 / 2)),
qbv=state,
)
Defining and synthesizing the phase estimation model
@qfunc
def main(
state: Output[QArray[QBit]], phase: Output[QNum[QPE_SIZE, SIGNED, QPE_SIZE]]
) -> None:
allocate(problem_size, state)
molecule_hartree_fock(molecule_problem_to_qmod(gs_problem), state)
allocate(QPE_SIZE, phase)
qpe_flexible(lambda p: powered_st2_for_LiH(p, state), phase)
qmod = create_model(
main,
preferences=Preferences(timeout_seconds=600),
out_file="qpe_for_molecules",
)
qprog = synthesize(qmod)
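Optionally, the synthesized quantum program can be inspected in the Classiq visualizer:
show(qprog)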
Measurement and Analysis
First we execute on the default simulator:
res = execute(qprog).result_value()
Next, we draw a histogram of the energies by taking the output of the phase variable and multiplying back the normalization factor.
phase_counts = res.parsed_counts_of_outputs("phase")
num_shots = res.num_shots
energy_results = {
sample.state["phase"] * normalization: sample.shots / num_shots
for sample in phase_counts
}
plt.plot(energy_results.keys(), energy_results.values(), "o")
max_prob_energy = max(energy_results, key=energy_results.get)
print(f"\nEnergy with maximal probability: {max_prob_energy} Ha")
print(f"Precision: {(2**(-QPE_SIZE))* normalization} Ha")
print(f"Classical solution: {classical_sol} Ha")
plt.xlabel("Energy (Ha)", fontsize=16)
plt.ylabel("P(Energy)", fontsize=16)
plt.tick_params(axis="both", labelsize=16)
plt.title("Energy Histogram from QPE");
Energy with maximal probability: -1.1009663875965574 Ha
Precision: 0.13762079844956968 Ha
Classical solution: -1.0789776863285236 Ha
Recall that we are looking for a signal from the smallest eigenvalue, under the assumption that the initial state has some overlap with the ground state. Below we estimate the energy as the first (leftmost) peak of the histogram whose probability is larger than ASSUMED_OVERLAP * 0.4 (0.4 corresponds to the case ASSUMED_OVERLAP = 1, since even for an exact eigenstate the probability of measuring the nearest phase value in QPE is at least \(4/\pi^2 \approx 0.4\)).
Note that this is a rough and simplistic analysis of the QPE result. One can employ more sophisticated spectral-analysis tools, such as Gaussian mixtures. Additional assumptions, such as the spacing between adjacent eigenvalues or the number of overlapping eigenstates, can facilitate the analysis further.
from scipy.signal import find_peaks
ASSUMED_OVERLAP = 0.05
def estimate_energy(data_dict, assumed_overlap):
max_prob = assumed_overlap * 0.4
data = tuple(data_dict.items())
data_sorted = sorted(
data, key=lambda x: x[0]
) # sort the data according to the energy value
probs_sorted = [data[1] for data in data_sorted]
maxima = find_peaks(probs_sorted, height=max_prob)[0]
print(f"Number of maxima: {maxima.size}")
if maxima.size == 0 and np.all(np.array(probs_sorted) <= max_prob):
print(
"""No probabilities above threshold were found, try to increase the assumed_overlap.
Returning energy with max probability"""
)
return max(data_dict, key=data_dict.get)
elif maxima.size == 0: # strictly increasing or decreasing function
return max(data_dict, key=data_dict.get)
else:
print(
f"maxima over the threshold at {[data_sorted[maxima[k]][0] for k in range(maxima.size)]} Ha"
)
return data_sorted[maxima[0]][0]
measured_energy = estimate_energy(energy_results, ASSUMED_OVERLAP)
print(f"\nLowest eigenvalue: {measured_energy} Ha")
print(f"Precision: {(2**(-QPE_SIZE))* normalization} Ha")
print(f"Classical solution: {classical_sol} Ha")
Number of maxima: 1
maxima over the threshold at [-1.1009663875965574] Ha
Lowest eigenvalue: -1.1009663875965574 Ha
Precision: 0.13762079844956968 Ha
Classical solution: -1.0789776863285236 Ha
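Finally, a small consistency check (not part of the original analysis): the measured energy should agree with the classical reference to within roughly the QPE resolution (the trimming of small Hamiltonian terms also contributes to the deviation).
resolution = 2 ** (-QPE_SIZE) * normalization
deviation = abs(measured_energy - classical_sol)
print(f"Deviation from classical solution: {deviation:.4f} Ha (QPE resolution: {resolution:.4f} Ha)")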
References
[1]: Michael A. Nielsen and Isaac L. Chuang. 2011. Quantum Computation and Quantum Information: 10th Anniversary Edition, Cambridge University Press, New York, NY, USA.
[2]: M. Hagan and N. Wiebe. Composite Quantum Simulations, Quantum 7, 1881 (2023).