Overview

Classical shadow tomography [1], introduced by Huang, Kueng, and Preskill (2020), is a measurement protocol for predicting many properties of an unknown quantum state from a small number of measurements. Rather than reconstructing the full density matrix, it builds a compact classical estimator, the classical shadow, that supports efficient prediction of a large set of observables. The algorithm treats the following problem:
  • Input: Access to copies of an unknown $n$-qubit state $\rho$, and a target set of $M$ observables $\{O_1, \dots, O_M\}$.
  • Output: Estimates $\hat{o}_i \approx \langle O_i \rangle = \mathrm{tr}(O_i \rho)$ for every $i$, to additive error $\varepsilon$, with probability at least $1-\delta$.
Complexity: The sample cost, i.e., the number of required measurements (copies of $\rho$), scales as $\mathcal{O}(\log(M/\delta) \max_i \|O_i\|^2_{\text{shadow}}/\epsilon^2)$. This scaling is logarithmic in the number of target observables $M$. Here $\|O_i\|_{\text{shadow}}$ is the shadow norm, which depends on both the method used to construct the classical shadow and the observables of interest $\{O_i\}$. When Pauli measurements are used to build the classical shadow, the squared shadow norm scales as $\mathcal{O}(4^k)$ for $k$-local Pauli observables; when the protocol uses random-Clifford measurements, it scales as the squared Hilbert-Schmidt norm, $\mathcal{O}(\|O\|_\mathrm{HS}^2)$. In comparison, the naive approach of measuring each observable separately would require (by Hoeffding's inequality) $\mathcal{O}(M\log(M/\delta)/\epsilon^2)$ measurements.
Keywords: Quantum state tomography, Randomized measurements, Shadow norm, Median-of-means estimator.
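To make the scaling concrete, here is a quick back-of-the-envelope comparison of the two sample costs above (constants suppressed; the squared shadow norm of a $k$-local Pauli observable is taken as $3^k$; the numbers are purely illustrative):

```python
import math

# Rough sample-cost comparison (constants dropped):
#   shadow: log(M / delta) * 3^k / eps^2
#   naive:  M * log(M / delta) / eps^2
eps, delta, k = 0.1, 0.01, 2
for M in (10, 1_000, 100_000):
    shadow = math.log(M / delta) * 3**k / eps**2
    naive = M * math.log(M / delta) / eps**2
    print(f"M={M:>6}: shadow ~ {shadow:,.0f}, naive ~ {naive:,.0f}")
```

The shadow cost grows only logarithmically in $M$, while the naive cost grows (super-)linearly.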

Introduction

Predicting properties of unknown quantum states is an essential task in quantum computing. By conducting many measurements in different bases on copies of the state, a method called Quantum State Tomography [1] fully reconstructs the quantum state. Naturally, such complete characterization requires a number of state copies that scales exponentially with the number of qubits $n$ and quadratically with the inverse error, $\mathcal{O}(2^{2n}/\epsilon^2)$. As a result, this procedure is unfeasible for a system of more than a few qubits. Nevertheless, for many applications a complete characterization of the state, or equivalently, evaluation of the expectation values of an exponential number of observables, is unnecessary. When restricting to $M$ observables, one can characterize only a part of the Hilbert space and achieve better complexity. Aaronson introduced shadow tomography [2], which constructs a compact classical representation of the quantum state, coined the classical shadow. The shadow allows predicting many observables simultaneously without completely reconstructing the state, using $\widetilde{\mathcal{O}}\left(\varepsilon^{-4} \cdot \log^4 M \cdot n\right)$ copies of the state. However, this approach still requires a lot of quantum memory to store the copies and long circuits to make the predictions. Based on this idea, Huang et al. introduced classical shadow tomography [3], a procedure that requires only $\mathcal{O}(\log(M))$ copies of the state, independent of the dimension of the system, to approximate the classical shadow. The snapshots can also be stored efficiently in classical memory. This means one can estimate exponentially many observables using only logarithmically many measurements in the number of observables. The estimates can be used to compute fidelities, entanglement entropies, or find entanglement witnesses.
The procedure consists of two steps:
  1. Data acquisition - construct snapshots of the quantum state and assemble them into a classical representation of the state, i.e., the classical shadow.
  2. Prediction - given $M$ observables $O_1, \dots, O_M$, estimate their expectation values $\langle O_i \rangle = \mathrm{tr}(O_i\rho)$.
The Classical Shadow Procedure (from Ref. [1]):
Figure: classical-shadow tomography pipeline - random unitaries, measurements, classical representation, prediction.

Procedure

Data Acquisition

One step of the data acquisition procedure consists of the following:
  • Select a random unitary transformation $U$ from a predefined, tomographically complete ensemble.
  • Apply the transformation to the quantum state of interest:
$\rho \rightarrow U\rho U^\dagger$
  • Measure the transformed quantum state in the computational basis and obtain a bit string $b \in \{0, 1\}^n$.
  • Store the *classical snapshot* of the state by classically applying the reverse operation, $U^\dagger |b\rangle \langle b| U$, and storing it (efficiently) in classical memory.
The average mapping (over the unitary choice and measurement result) from $\rho$ to its snapshot can be viewed as a measurement channel: $$\mathcal{M}(\rho) = \mathbb{E}[U^\dagger |b\rangle \langle b| U] = \mathbb{E}_{U\sim\mathcal{U}}\sum_{b\in \{0,1\}^n} \langle b| U \rho U^\dagger|b \rangle\, U^\dagger |b\rangle \langle b|U~~.$$ 'Average mapping' means that the true state is obtained from the estimated state in expectation: $\mathbb{E}[\hat{\rho}] = \rho$. When the unitary ensemble being averaged over is tomographically complete, so that the quantum state can be fully reconstructed from the measurements, the measurement channel can be inverted. This allows recovering the original state: $$\rho = \mathbb{E}[\mathcal{M}^{-1}(U^\dagger |b \rangle \langle b| U)]~~.$$ To take advantage of this, the procedure is repeated $N$ times, yielding an array called the *classical shadow* of $\rho$: $$\textbf{S}(\rho; N) = \{\hat{\rho}_1,\dots, \hat{\rho}_N\}~~, \quad \hat{\rho}_i = \mathcal{M}^{-1}(U_i^\dagger |b_i\rangle \langle b_i| U_i)~~\text{for all}~~i\in[1,N]~~.$$
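For the single-qubit random-Pauli ensemble, this unbiasedness can be checked exactly in a few lines of numpy: averaging the inverted snapshots $3U^\dagger|b\rangle\langle b|U - \mathbb{I}$ over the three measurement bases and the Born-weighted outcomes recovers $\rho$. This is a standalone numerical sketch; the state `rho` below is an arbitrary choice, not part of the notebook's code.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
I2 = np.eye(2)
unitaries = [H, H @ S, I2]  # measure in the X, Y, Z bases

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])  # arbitrary valid state

# Exact expectation (no sampling): average the inverted snapshot
# 3 U^dag |b><b| U - I over the uniform basis choice and Born-weighted outcomes.
avg = np.zeros((2, 2), dtype=complex)
for U in unitaries:
    for b in range(2):
        proj = np.zeros((2, 2)); proj[b, b] = 1.0
        p = np.real(np.trace(U @ rho @ U.conj().T @ proj))  # Born probability
        avg += (p / 3) * (3 * U.conj().T @ proj @ U - I2)

assert np.allclose(avg, rho)  # E[rho_hat] = rho: the inverted channel is unbiased
```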

Prediction

Although the observables can be predicted using an empirical mean, the authors use a median-of-means approach in order to mitigate outliers:
  • Divide the shadow into $K$ equally sized parts, and take the mean of each one to construct $K$ estimators of $\rho$:
$$\hat{\rho}_{(k)} = \frac{1}{\lfloor N/K \rfloor} \sum_{i=(k-1)\lfloor N/K \rfloor + 1}^{k \lfloor N/K \rfloor} \hat{\rho}_i \tag{1}$$
  • Then, for each observable ($i = 1$ to $M$), take the median of the means, where $\hat{O_i}$ is the estimate of the observable:
$$\hat{O_i}(N, K) = \mathrm{median}\{\mathrm{tr}(O_i \hat{\rho}_{(1)}), \dots, \mathrm{tr}(O_i \hat{\rho}_{(K)})\} \tag{2}$$
Huang et al. proved that the classical shadow protocol allows predicting $M$ target functions $\mathrm{tr}(O_i \rho)$ within error $\epsilon$ using $$\mathcal{O}(\log(M) \max_i \|O_i\|^2_{\text{shadow}}/\epsilon^2) \tag{3}$$ samples, where the shadow norm $\|O_i\|^2_{\text{shadow}}$ affects both the sample complexity and the accuracy of prediction. An important ingredient in the protocol is the ability to sample random unitaries $U$. Formally, one needs to sample from the Haar measure; however, sampling exactly from this measure requires a circuit size exponential in the number of qubits. In practice, one therefore uses other unitary ensembles, such as the Clifford group, which efficiently reproduces the low-order moments of Haar randomness (formally, the Clifford group is an exact unitary 3-design). We consider two ensembles:
  1. Random Pauli measurements: each qubit is rotated by an independently sampled single-qubit Clifford gate, $U = U_1 \otimes \dots \otimes U_n$.
This is equivalent to random Pauli measurements on each qubit. In this case the inverse measurement channel factorizes across the qubits and reads $$\mathcal{M}^{-1}\bigl(U^{\dagger}|\hat b\rangle\langle\hat b|U\bigr) = \bigotimes_{j=1}^{n} \bigl(3\,U_j^{\dagger}|\hat b_j\rangle\langle\hat b_j|U_j - \mathbb{I}\bigr)~~, \tag{4}$$ and the shadow norm of a $k$-local Pauli observable satisfies $$\|O\|_{\mathrm{shadow}}^{2} = 3^{k} \qquad \text{(bounded by } 4^{k}\,\|O\|_{\infty}^{2} \text{ in general)}~~,$$ i.e., it scales exponentially with the locality (weight) of the observable. This channel can be implemented with a circuit of depth one, but the sample complexity (number of measurements) scales exponentially with the weight of the observables.
  2. Random Clifford measurements: the full $n$-qubit Clifford group $\mathrm{Cl}(2^n)$.
Here the average measurement channel is the depolarizing channel [4], $\mathcal{M}(\rho) = \mathcal{D}_{1/(2^n+1)}(\rho)$, and the inverse of the measurement channel reads $$\mathcal{M}^{-1}\bigl(U^{\dagger}|\hat b\rangle\langle\hat b|U\bigr) = (2^{n}+1)\,U^{\dagger}|\hat b\rangle\langle\hat b|U - \mathbb{I}~~.$$ The shadow norm is equivalent to the Hilbert-Schmidt norm of the traceless part of the observable, $O_0 = O - \mathrm{tr}(O)\,\mathbb{I}/2^{n}$, with the two-sided bound $$\mathrm{tr}\bigl(O_0^{2}\bigr) \le \|O_0\|_{\mathrm{shadow}}^{2} \le 3\,\mathrm{tr}\bigl(O_0^{2}\bigr)~~,$$ so the shadow norm is independent of the system size $n$ (up to the constant 3) - efficient for global observables such as fidelity. Random $n$-qubit Clifford circuits thus estimate global observables efficiently, but require many more ($\sim n^2/\log(n)$) entangling gates to implement. In both cases, the snapshots can be stored efficiently using the stabilizer formalism. For pedagogical clarity, this notebook represents Clifford unitaries as dense $2^{n}\times 2^{n}$ matrices, so the inverse-channel application and observable estimator cost $\mathcal{O}(2^{2n})$ per snapshot (after the row-of-$U$ optimization used below). The same transformations can be carried out in $\mathcal{O}(n^{2})$ per snapshot via the symplectic-tableau representation of Clifford circuits due to Aaronson and Gottesman [5] - the regime in which the protocol's $\mathcal{O}(\log M)$ sample-complexity advantage becomes a real wall-clock speedup. To demonstrate the classical shadow procedure, we will build a classical shadow, fully reconstruct a state, and predict an observable.
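The median-of-means estimator of Eqs. (1)-(2) is easy to sketch on classical data. Below is a generic numpy illustration (not Classiq code); the heavy-tailed samples stand in for noisy single-snapshot estimates of an observable whose true value is 1.0.

```python
import numpy as np

def median_of_means(samples, k):
    """Split samples into k equal chunks, average each chunk, take the median."""
    n = (len(samples) // k) * k  # drop the remainder, as in Eq. (1)
    chunks = np.asarray(samples[:n]).reshape(k, -1)
    return float(np.median(chunks.mean(axis=1)))

rng = np.random.default_rng(0)
# Heavy-tailed single-snapshot estimates fluctuating around a true value of 1.0.
data = 1.0 + rng.standard_t(df=3, size=1000)
print(f"plain mean:      {np.mean(data):+.3f}")
print(f"median-of-means: {median_of_means(data, k=10):+.3f}")
```

Because the median ignores the few chunks whose means are dragged off by outliers, the estimator concentrates much faster than the plain mean for heavy-tailed snapshot distributions.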

Building a Classical Shadow with Classiq

In this example, we construct a classical shadow of the $\Phi^-$ Bell state, using both the random-Pauli and the random-Clifford measurement channels. We begin with the random-Pauli measurement channel in the first section and then analyze the random-Clifford channel. For each measurement channel, we first introduce the quantum functions required to implement the quantum part of the shadow protocol, and then present the classical processing functions that construct the classical shadow. Using the shadows, the full state is reconstructed and compared to the ground truth. State reconstruction, even with a classical shadow, is generally inefficient; nevertheless, it demonstrates the strength of the method. Next, we use the classical shadow representation to estimate the expectation value of $O = Z\otimes Z$, and finally, we evaluate the error bound, which lower-bounds the number of measurements (sample complexity) required to achieve an accurate estimation with high probability. The Pauli measurement channel (with $400$ samples) produced a closer state reconstruction, but did not achieve an accurate estimate of the expectation value of $Z\otimes Z$. In contrast, the Clifford measurement channel (with only $50$ samples) achieved machine precision for the expectation-value estimate, but produced a state with a larger distance from the target Bell state (this result is not coincidental: the Bell state is a stabilizer state and $Z\otimes Z$ is one of its $+1$ stabilizers).
import numpy as np

from classiq import *

np.random.seed(552)
NUM_QUBITS = 2

Classical Shadow with Pauli Measurement Channel

We begin by preparing the $\Phi^{-}$ Bell state and then rotating each qubit so that it corresponds to a randomly chosen Pauli measurement basis. The tomographically complete set $\{H, HS, I\}$ corresponds to measurements in the $X$, $Y$, $Z$ bases, respectively. A unitary ensemble of tensor products of single-qubit Clifford circuits is easy to implement on NISQ hardware and performs well when estimating local observables, but scales poorly for global ones (see the discussion above). Rather than baking each random basis choice into a fresh circuit at synthesis time, which forces one synthesis per snapshot and dominates the wall-clock time, we express the per-qubit rotation as $R_Y(\theta)\,R_Z(\phi)$ and pass $(\theta_j,\phi_j)$ as classical execution-time parameters. The program is then synthesized once and executed with a single batched `sample` call that submits all snapshots together. The mapping from basis index to angles is:
| basis | gate (up to global phase) | $\theta$ | $\phi$ |
|---|---|---|---|
| $X$ | $H$ | $\pi/2$ | $\pi$ |
| $Y$ | $HS$ | $\pi/2$ | $3\pi/2$ |
| $Z$ | $I$ | $0$ | $0$ |
The angles $\theta$ and $\phi$ are chosen from this set. The mapping is encoded in the BASIS_ANGLES numpy array.
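The table can be verified numerically: $R_Y(\theta)\,R_Z(\phi)$ with these angles reproduces $H$, $HS$, and $I$ up to a global phase. The check below is a standalone numpy sketch using the textbook $R_Y$, $R_Z$ matrix conventions (the helper names are illustrative, not Classiq API):

```python
import numpy as np

def ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
targets = {"X": H, "Y": H @ S, "Z": np.eye(2)}
angles = {"X": (np.pi / 2, np.pi), "Y": (np.pi / 2, 3 * np.pi / 2), "Z": (0.0, 0.0)}

def equal_up_to_phase(a, b):
    # For 2x2 unitaries, |tr(a^dag b)| = 2 iff a = (phase) * b.
    return np.isclose(abs(np.trace(a.conj().T @ b)), 2.0)

for basis in "XYZ":
    theta, phi = angles[basis]
    assert equal_up_to_phase(ry(theta) @ rz(phi), targets[basis])
print("all basis-change rotations verified")
```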
BASIS_ANGLES = np.array(
    [
        [np.pi / 2, np.pi],
        [np.pi / 2, 3 * np.pi / 2],
        [0.0, 0.0],
    ],
    dtype=float,
)

@qfunc
def main(
    thetas: CArray[CReal, NUM_QUBITS],
    phis: CArray[CReal, NUM_QUBITS],
    qarr: Output[QArray],
) -> None:
    """
    Prepares the |Phi^-> Bell state and applies the parametric per-qubit
    basis-change rotation.

The rotation angles are bound at execution time,
    one parameter set per snapshot, so a single synthesized program is
    reused across the entire shadow.

The circuit realizes a measurement in the X, Y, or Z basis (up to global phase).

Because the angles are bound at execution time, the circuit is independent
    of the random basis choice and only needs to be synthesized once.
    """
    prepare_bell_state(1, qarr)
    # performing a basis change by applying RY and RZ rotations with the angles specified in thetas and phis
    for i in range(NUM_QUBITS):
        RZ(phis[i], qarr[i])
        RY(thetas[i], qarr[i])

qprog = synthesize(main)
show(qprog)
Below, calculate_shadow_pauli() uses the parametric program synthesized above, samples the random measurement bases, and submits all snapshots in a single sample call. The snapshots contain the measurement results (bits), and the unitary operations are stored as basis identifiers in the ids list.
def calculate_shadow_pauli(num_snapshots, num_qubits):
    """
    Computes a Pauli-shadow of size `num_snapshots` from a single synthesized
    program.

The per-qubit basis-change angles are sampled in Python and
    submitted in one `sample` call.

    Args:
        num_snapshots (int): Size of the shadow, also the number of measurements.
        num_qubits (int): Number of qubits in the system.

    Returns:
        snapshots (list[list[int]]): per-snapshot measurement bits
            (`num_qubits` long each).
        ids (list[list[int]]): per-snapshot basis identifiers in {0, 1, 2}
            (X, Y, Z) for each qubit.
    """
    # 1) Sample the random measurement bases in Python (no need to bake them
    #    into separate circuits).
    basis_id_array = np.random.randint(3, size=(num_snapshots, num_qubits))
    ids = [[int(b) for b in row] for row in basis_id_array]

    # 2) Translate each random basis row into RY/RZ angles via BASIS_ANGLES.
    param_batch = [
        {
            "thetas": BASIS_ANGLES[basis_id_array[i], 0].tolist(),
            "phis": BASIS_ANGLES[basis_id_array[i], 1].tolist(),
        }
        for i in range(num_snapshots)
    ]

    # 3) Execute every snapshot in one cloud round-trip via `sample`. With a
    #    list of parameter sets, `sample` returns one DataFrame per element;
    #    with `num_shots=1`, each frame has exactly one row whose `qarr`
    #    entry is the measured bitstring as a list of ints.
    result_dfs = sample(
        qprog,
        parameters=param_batch,
        num_shots=1,
        random_seed=int(np.random.randint(1e6)),
    )
    snapshots = [list(df["qarr"].iloc[0]) for df in result_dfs]

    return snapshots, ids

num_snapshots = 400
snapshots, ids = calculate_shadow_pauli(num_snapshots, NUM_QUBITS)
print(f"first ten snapshots: {snapshots[:10]}")
print(f"first ten ids: {ids[:10]}")
Output:

Submitting job to simulator
  Job: https://platform.classiq.io/jobs/75d525ba-3ae8-48e2-85e7-26c946edd6fe
  

Output:
first ten snapshots: [[1, 1], [1, 0], [1, 1], [0, 0], [0, 0], [0, 1], [1, 0], [1, 1], [0, 1], [0, 1]]
  first ten ids: [[2, 0], [2, 1], [2, 2], [2, 2], [0, 2], [2, 0], [0, 0], [2, 0], [1, 2], [0, 1]]
  

This is the end of the quantum part of the procedure - the snapshots are all we need to extract information about the state. We will now demonstrate the following uses of these snapshots:
  • Full tomography: reconstruct the state completely (not just certain pieces of information) as an instructive example.
This application doesn't offer an advantage over quantum state tomography, as it requires exponential resources.
  • Estimating observables: estimate the expectation values of observables by reconstructing only parts of the state.
This is one of the applications where classical shadows offer an advantage.

State Reconstruction

The two functions below perform a full state reconstruction, given enough snapshots. To reconstruct the state, we take the measurement results in snapshots and apply the inverse of the operations stored in ids. Since each of our possible unitaries is a tensor product of randomly selected single-qubit Clifford gates, we can apply the inverse channel of Eq. (4): $$\hat{\rho} = \bigotimes_{j=1}^{n} \bigl(3 U_j^\dagger |b_j\rangle \langle b_j| U_j - \mathbb{I}\bigr)~~.$$ In snapshot_reconstruction_pauli() below, we apply this equation to get the density matrix of the reconstructed state. reconstruction_pauli() averages these density matrices to obtain an accurate estimate of the initial state $\rho$.
def snapshot_reconstruction_pauli(snapshot, snapshot_ids, num_qubits):
    """
    Reconstructs the state density matrix from one snapshot.

Helper for `reconstruction_pauli()`.

    Args:
        snapshot (list[int]): Element of snapshots; `num_qubits` long.
        snapshot_ids (list[int]): Element of the `ids` list returned by `calculate_shadow_pauli`.
        num_qubits (int): Number of qubits in the system.

    Returns:
        final_state (np.ndarray): Snapshot density matrix reconstruction.

    """
    # Density matrices of computational basis states
    zero_state = np.array([[1, 0], [0, 0]])
    one_state = np.array([[0, 0], [0, 1]])

    # S, H, I matrices
    S_matrix = np.array([[1, 0], [0, 1j]], dtype=complex)
    H_matrix = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])
    I_matrix = np.array([[1, 0], [0, 1]])

    # Unitaries applied in circuit.
    unitaries = [H_matrix, H_matrix @ S_matrix, I_matrix]

    final_state = [1]
    for i in range(num_qubits):
        state = zero_state if snapshot[i] == 0 else one_state
        U = unitaries[int(snapshot_ids[i])]

        # Applying Eq. (S44) from Huang, et al.
        local_state = 3 * (U.conjugate().transpose() @ state @ U) - I_matrix
        final_state = np.kron(final_state, local_state)

    return final_state

def reconstruction_pauli(in_snapshots, in_ids, num_qubits):
    """
    Performs a full reconstruction of the density matrix of the quantum state
    prepared in `main()`.

    Args:
        in_snapshots (list): List of snapshots returned by `calculate_shadow_pauli` (list of ints in {0, 1}).
        in_ids (list): List of unitary identifiers returned by `calculate_shadow_pauli` (list of ints in {0, 1, 2}).
        num_qubits (int): Number of qubits in the system.

    Returns:
        reconstructed_state (np.ndarray): Full state density matrix reconstruction (2d numpy complex array).
    """
    # Average over snapshots.
    num_snapshots = len(in_snapshots)
    reconstructed_state = np.zeros((2**num_qubits, 2**num_qubits), dtype=complex)
    for i in range(num_snapshots):
        result = snapshot_reconstruction_pauli(in_snapshots[i], in_ids[i], num_qubits)
        reconstructed_state += result

    return reconstructed_state / num_snapshots

reconstructed_state = reconstruction_pauli(snapshots, ids, NUM_QUBITS)
print(f"Full state density matrix: \n {np.round(reconstructed_state, decimals=6)}\n")

bell_state_density_matrix = np.array(
    [[0.5, 0, 0, -0.5], [0, 0, 0, 0], [0, 0, 0, 0], [-0.5, 0, 0, 0.5]]
)
print(f"Bell State Density Matrix: \n {bell_state_density_matrix}\n")
Output:

Full state density matrix: 
   [[ 0.424375+0.j       -0.005625+0.075j    -0.03    +0.001875j
    -0.5175  -0.016875j]
   [-0.005625-0.075j     0.090625+0.j       -0.01125 -0.050625j
    -0.01875 +0.001875j]
   [-0.03    -0.001875j -0.01125 +0.050625j  0.004375+0.j
     0.050625-0.00375j ]
   [-0.5175  +0.016875j -0.01875 -0.001875j  0.050625+0.00375j
     0.480625+0.j      ]]

  Bell State Density Matrix: 
   [[ 0.5  0.   0.  -0.5]
    [ 0.   0.   0.   0. ]
    [ 0.   0.   0.   0. ]
    [-0.5  0.   0.   0.5]]
  

With 400 snapshots the output density matrix looks similar to the $\Phi^-$ Bell state density matrix. To quantitatively compare the reconstructed state to the true state, the cell below computes the distance (Frobenius norm) between the two states.
print(f"Comparing to: \n {bell_state_density_matrix}\n")
difference = bell_state_density_matrix - reconstructed_state
distance = np.linalg.norm(difference, "fro")
print(f"Distance: {np.round(distance, decimals=6)}")
Output:

Comparing to: 
   [[ 0.5  0.   0.  -0.5]
    [ 0.   0.   0.   0. ]
    [ 0.   0.   0.   0. ]
    [-0.5  0.   0.   0.5]]

  Distance: 0.199679
  

The accuracy of the reconstruction increases with the number of snapshots. Test different values of num_snapshots to see for yourself; the execution time scales linearly with the number of snapshots. Although completely reconstructing a state using the classical shadow procedure is an instructive example, the procedure doesn't offer much of an advantage here. Classical shadows shine when estimating many observables.
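To watch this convergence without re-running the quantum program, the Pauli-shadow loop can be simulated entirely in numpy for the Bell state. This is a standalone sketch that bypasses circuit execution; the seed and snapshot counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
I2 = np.eye(2)
unitaries = [H, H @ S, I2]  # X, Y, Z measurement bases

psi = np.array([1, 0, 0, -1]) / np.sqrt(2)  # |Phi^-> Bell state
rho = np.outer(psi, psi.conj())

def simulate_shadow(num_snapshots):
    """Average of inverted snapshots, sampling bases and outcomes classically."""
    acc = np.zeros((4, 4), dtype=complex)
    for _ in range(num_snapshots):
        ids = rng.integers(3, size=2)
        U = np.kron(unitaries[ids[0]], unitaries[ids[1]])
        probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
        b = rng.choice(4, p=probs / probs.sum())  # Born-rule outcome
        snap = np.array([[1.0]], dtype=complex)
        for j, bit in enumerate([(b >> 1) & 1, b & 1]):
            proj = np.zeros((2, 2)); proj[bit, bit] = 1.0
            Uj = unitaries[ids[j]]
            snap = np.kron(snap, 3 * Uj.conj().T @ proj @ Uj - I2)  # Eq. (4)
        acc += snap
    return acc / num_snapshots

for n in (100, 5000):
    d = np.linalg.norm(rho - simulate_shadow(n), "fro")
    print(f"N={n}: Frobenius distance {d:.3f}")
```

The distance shrinks roughly as $1/\sqrt{N}$, mirroring what you see when increasing num_snapshots in the notebook.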

Estimating Observables

In this example, we assume that our observables are tensor products of the Pauli operators $I, X, Y, Z$. That is, $O = \bigotimes_{j=1}^n \sigma_j$, where $n$ is the number of qubits. Recall that to estimate an observable, we compute the median of observable estimators, each obtained from the mean of the estimated states in one median-of-means division (Eq. (2)). For each $\mathrm{tr}(O \hat{\rho}_i)$, we must compute the estimated states from the snapshots in the shadow. For this, we apply Eq. (4): $$\mathrm{tr}(O \hat{\rho}_i) = \mathrm{tr}\left(\bigotimes_{j=1}^{n} \sigma_j \bigl(3 U_j^\dagger |\hat{b}_j\rangle \langle \hat{b}_j| U_j - \mathbb{I}\bigr)\right) = \prod_{j=1}^{n} \mathrm{tr}\left(\sigma_j \bigl(3 U_j^\dagger |\hat{b}_j\rangle \langle \hat{b}_j| U_j - \mathbb{I}\bigr)\right)~~.$$ When the basis matches the observable, e.g. $\sigma_j = X$ and we measured in the $X$ basis (applied $H$ in the circuit), the trace evaluates to $3$ when $|\hat{b}_j\rangle = |0\rangle$ and $-3$ when $|\hat{b}_j\rangle = |1\rangle$. If $\sigma_j = I$, the trace evaluates to $1$. Otherwise, the trace evaluates to $0$, making the whole product $0$.
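The single-qubit trace rule is easy to confirm numerically. The check below is a standalone numpy sketch; `trace_term` is a hypothetical helper, not part of the notebook's code.

```python
import numpy as np

# Check: tr(sigma (3 U^dag |b><b| U - I)) is +-3 when the measurement basis
# matches sigma, 1 for sigma = I, and 0 otherwise.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])

def trace_term(sigma, U, b):
    proj = np.zeros((2, 2)); proj[b, b] = 1.0
    return np.real(np.trace(sigma @ (3 * U.conj().T @ proj @ U - I2)))

assert np.isclose(trace_term(X, H, 0), 3.0)   # matching basis, outcome |0>
assert np.isclose(trace_term(X, H, 1), -3.0)  # matching basis, outcome |1>
assert np.isclose(trace_term(I2, H, 0), 1.0)  # identity factor
assert np.isclose(trace_term(X, I2, 0), 0.0)  # mismatched basis (Z measurement)
```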
def estimate_observable_pauli(in_snapshots, in_ids, observable, div, num_qubits):
    """
    Estimates the expectation value of an observable using a classical shadow.

    Args:
        in_snapshots (list[int]): List of snapshots returned by `calculate_shadow_pauli`.
        in_ids (list[int]): List of unitary identifiers returned by `calculate_shadow_pauli`.
        observable (SparsePauliOp): Observable to be estimated.
        div (int): Number of divisions for median-of-means.
        num_qubits (int): Number of qubits in the system.

    Returns:
        expval (float): Estimated expectation value.
    """
    total = 0.0
    in_snapshots = np.array(in_snapshots)
    in_ids = np.array(in_ids)
    num_snapshots = len(in_snapshots)

    # Loop takes care of observables that are sums of Pauli terms.
    for element in observable.terms:
        means = []
        ops = []  # operators in observable

        # Pauli.I = 0, Pauli.X = 1, Pauli.Y = 2, Pauli.Z = 3
        for i in range(num_qubits):
            if i < len(element.paulis):
                # Subtract 1 to stay consistent with the format of `ids`,
                # where X, Y, Z correspond to 0, 1, 2 (and I maps to -1).
                ops.append(element.paulis[i].pauli - 1)
            else:
                ops.append(-1)  # Add I operator if no operator present.

        ops = np.array(ops)

        for i in range(0, num_snapshots, num_snapshots // div):
            # Divide in_snapshots and in_ids into div chunks.
            in_snapshots_div = in_snapshots[i : i + num_snapshots // div]
            in_ids_div = in_ids[i : i + num_snapshots // div]

            prods = []
            for j in range(len(in_snapshots_div)):
                # Terms that will be in the product.
                terms = np.ones_like(ops, dtype=float)

                # Create masks for identity, matching, and non-matching operators.
                non_id_mask = ops != -1
                match_mask = ops == in_ids_div[j]
                mismatch_mask = (~match_mask) & non_id_mask

                # Handle matches: +3 for outcome 0, -3 for outcome 1.
                terms[match_mask & (in_snapshots_div[j] == 0)] = 3.0
                terms[match_mask & (in_snapshots_div[j] == 1)] = -3.0

                # Handle mismatches: the whole product vanishes.
                terms[mismatch_mask] = 0.0
                prods.append(np.prod(terms))

            means.append(np.mean(prods))
        total += np.median(means) * element.coefficient
    return total
In this example, we estimate the expectation value of the observable $Z \otimes Z$. Due to the low number of snapshots (as we will see in the next section), the estimate may not be very accurate, and will vary between different sets of snapshots.
observable = Pauli.Z(0) * Pauli.Z(1)
estimate = estimate_observable_pauli(
    snapshots, ids, observable, div=2, num_qubits=NUM_QUBITS
)
print(f"Estimated <ZZ> (Pauli shadow): {estimate:.4f}")
print(f"Error relative to ground truth value of 1.0: {1.0 - estimate:.4f}")
Output:

Estimated <ZZ> (Pauli shadow): 0.8100
  Error relative to ground truth value of 1.0: 0.1900
  

To compute the number of snapshots needed to accurately predict $M$ observables with error $\epsilon$ and success probability $1-\delta$, we use Eq. (S13) from Ref. [3]: $$K = 2\log\left(\frac{2M}{\delta}\right) \quad \text{and} \quad N = \frac{34}{\epsilon^2} \max_{1\le i \le M} \left\| O_i - \frac{\mathrm{tr}(O_i)}{2^n}\,\mathbb{I} \right\|_{\text{shadow}}^2~~. \tag{5}$$ For the random-Pauli ensemble, the shadow norm of a $k$-local Pauli string $P$ is $$\|P\|_{\text{shadow}}^2 = 3^{k} \qquad \text{(bounded by } 4^{k}\,\|P\|_\infty^2 \text{ in general)},$$ so the locality factor $3^{k}$ dominates the sample cost. For a sum of Pauli terms $O = \sum_a c_a P_a$, the triangle inequality gives the upper bound $$\|O\|_{\text{shadow}} \le \sum_a |c_a|\,3^{\,\mathrm{weight}(P_a)/2}~~.$$ $NK$ snapshots are sufficient to accurately predict $M$ observables, using the median-of-means algorithm with $K$ divisions, such that $$|\hat{o}_i(N, K) - \mathrm{tr}(O_i \rho)| \le \epsilon \quad \forall\, 1 \le i \le M \quad \text{with probability} \ge 1-\delta~~.$$ The error_bound_pauli() function below implements this equation for an ensemble of Pauli basis measurements: it computes the per-term locality $k$ and uses the shadow-norm bound $3^{k}$ above. For other ensembles the shadow norm takes a different form (e.g. the Hilbert-Schmidt bound for the random-Clifford ensemble).
def error_bound_pauli(error, observables, delta):
    """
    Calculates the minimum shadow size that guarantees a given error and
    probability of failure for a unitary ensemble consisting of Pauli basis
    measurements.

Implements Eq. (5) with the random-Pauli shadow-norm bound for a sum of
    Pauli terms,

        || sum_a c_a P_a ||_shadow^2  <=  ( sum_a |c_a| * 3^(weight(P_a)/2) )^2,

    where `weight(P_a)` is the number of non-identity factors in the Pauli
    string `P_a`.

The 3^k locality factor is essential: without it the bound
    underestimates the required samples by 3^k for a k-local observable.

    Args:
        error (float): Desired error.
        observables (list[SparsePauliOp]): List of SparsePauliOps.
        delta (float): Probability of failure.

    Returns:
        samples (tuple(int, int)): (N*K, K) -- total snapshots required and
        the number of median-of-means divisions K.
    """
    M = len(observables)
    K = 2 * np.log(2 * M / delta)

    shadow_norms = []
    for obs in observables:
        bound = 0.0
        for term in obs.terms:
            weight = sum(1 for p in term.paulis if p.pauli != 0)
            bound += abs(term.coefficient) * np.sqrt(3**weight)
        shadow_norms.append(bound**2)

    N = 34 * max(shadow_norms) / error**2

    return int(np.ceil(N * K)), int(K)

observables = [observable]
required_snapshots = error_bound_pauli(0.2, observables, 0.01)
print(
    f"Number of samples required: {required_snapshots[0]}, Number of divisions: {required_snapshots[1]}"
)
Output:

Number of samples required: 81065, Number of divisions: 10
  

The error bound predicts that to estimate $Z \otimes Z$ with error $0.2$ and failure probability $0.01$ we need on the order of $8 \times 10^{4}$ snapshots: the locality factor $3^{2}=9$ multiplies the per-observable sample count compared to the operator-norm-only bound. With only $400$ snapshots split into div=2 chunks of $200$ shots each, the standard error is $\sqrt{3^{2}/400}\approx 0.15$.

Classical shadow with random Clifford measurements

We now repeat the protocol with the second ensemble described in the introduction: the full $n$-qubit Clifford group $\mathrm{Cl}(2^n)$. Instead of applying an independent single-qubit Clifford on each qubit, we sample a single uniformly random element $U \in \mathrm{Cl}(2^n)$, apply it globally to the Bell state, and measure in the computational basis. As discussed earlier, the average measurement channel is the depolarizing channel $\mathcal{M}(\rho) = \mathcal{D}_{1/(2^n+1)}(\rho)$, and the inverse-channel snapshot reads $$\hat{\rho} = \mathcal{M}^{-1}\bigl(U^\dagger|\hat b\rangle\langle\hat b|U\bigr) = (2^{n} + 1)\,U^\dagger|\hat b\rangle\langle\hat b|U - \mathbb{I}~~.$$ This Clifford track is independent of the Pauli-shadow workflow above. Two practical changes from the Pauli protocol:
  1. We sample the random Clifford classically as a depth-20 sequence drawn from the generator set \{H,\ S,\ \mathrm{CNOT}\} (applied to randomly chosen qubits). This depth is past the 3-design mixing time (the circuit depth required to approximate a random Clifford unitary). The same gate sequence is then traced into the Classiq circuit using the native H, S, and CX qfuncs.
  2. We record only the sampled gate sequence per snapshot in clifford_sequences — a list of (label, q1, q2) tuples. The matrix U is built on demand inside the reconstruction / observable-estimator routines via sequence_to_unitary. Note: as mentioned at the beginning, in contrast to the present implementation (see sequence_to_unitary), there is no need to create the unitary matrices corresponding to the random Clifford circuits explicitly; they can be handled efficiently (\mathcal{O}(n^2)) using the stabilizer formalism [5].
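To illustrate the stabilizer-formalism alternative mentioned in the note, here is a minimal standalone sketch (not part of the notebook's workflow; all names are ours) that represents each of the n stabilizer generators as x/z bit vectors with a sign bit and updates them per gate using the CHP rules of Ref. [5]:

```python
import numpy as np


def apply_gate(stabs, gate):
    """Update stabilizer generators (x, z, r) in place for one Clifford gate,
    following the tableau update rules of Aaronson & Gottesman [5]."""
    label, q1, q2 = gate
    for s in stabs:
        x, z = s["x"], s["z"]
        if label == "H":
            s["r"] ^= x[q1] & z[q1]
            x[q1], z[q1] = z[q1], x[q1]
        elif label == "S":
            s["r"] ^= x[q1] & z[q1]
            z[q1] ^= x[q1]
        else:  # "CX" with control q1, target q2
            s["r"] ^= x[q1] & z[q2] & (x[q2] ^ z[q1] ^ 1)
            x[q2] ^= x[q1]
            z[q1] ^= z[q2]


def run(sequence, num_qubits):
    """Start from |0...0> (stabilized by Z_i) and apply the gate sequence."""
    stabs = [
        {"x": np.zeros(num_qubits, dtype=int),
         "z": np.eye(num_qubits, dtype=int)[i].copy(),
         "r": 0}
        for i in range(num_qubits)
    ]
    for gate in sequence:
        apply_gate(stabs, gate)
    return stabs


# H on qubit 0 then CX(0, 1) prepares a Bell state, stabilized by XX and ZZ:
for s in run([("H", 0, -1), ("CX", 0, 1)], 2):
    print(list(s["x"]), list(s["z"]), s["r"])
```

Each gate touches only one or two columns of each generator, which is the source of the polynomial scaling quoted above.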
# Single-qubit gate matrices used only for the lazy matrix materialization below.
_H1 = (1.0 / np.sqrt(2.0)) * np.array([[1, 1], [1, -1]], dtype=complex)
_S1 = np.array([[1, 0], [0, 1j]], dtype=complex)
_I1 = np.eye(2, dtype=complex)


def _embed_1q(g: np.ndarray, q: int, num_qubits: int) -> np.ndarray:
    """Embed a single-qubit gate `g` at qubit `q` in the n-qubit Hilbert space.

Tensor order matches `qarr`: qubit 0 is the MSB / leftmost factor."""
    out = np.array([[1.0]], dtype=complex)
    for i in range(num_qubits):
        out = np.kron(out, g if i == q else _I1)
    return out


def _cnot_matrix(ctrl: int, tgt: int, num_qubits: int) -> np.ndarray:
    """Dense CNOT matrix on `(ctrl, tgt)` in the full n-qubit space, MSB convention."""
    dim = 2**num_qubits
    u = np.zeros((dim, dim), dtype=complex)
    for x in range(dim):
        bits = [(x >> (num_qubits - 1 - i)) & 1 for i in range(num_qubits)]
        new_bits = bits[:]
        if bits[ctrl] == 1:
            new_bits[tgt] = 1 - new_bits[tgt]
        new_x = sum(new_bits[i] << (num_qubits - 1 - i) for i in range(num_qubits))
        u[new_x, x] = 1.0
    return u


def sample_clifford_circuit(num_qubits: int, depth: int = 20):
    """Sample a depth-`depth` random gate sequence from {H, S, CNOT}.

    Returns:
        seq (list[tuple]): each entry is `(label, q1, q2)` with
            `label in {"H", "S", "CX"}` and `q2 = -1` for single-qubit gates.

    For n = 2, a depth of ~20 is past the 3-design mixing time of random
    Clifford circuits, so the resulting distribution is close enough to Haar
    on the moments we need.
    """
    seq = []
    for _ in range(depth):
        kind = np.random.randint(0, 3)
        if kind < 2:
            q = int(np.random.randint(0, num_qubits))
            label = "H" if kind == 0 else "S"
            seq.append((label, q, -1))
        else:
            c, t = np.random.choice(num_qubits, size=2, replace=False)
            seq.append(("CX", int(c), int(t)))
    return seq


def sequence_to_unitary(seq, num_qubits: int) -> np.ndarray:
    """Materialize a `(label, q1, q2)` gate sequence as the dense
    `2**num_qubits x 2**num_qubits` Clifford unitary.
    """
    U = np.eye(2**num_qubits, dtype=complex)
    for label, q1, q2 in seq:
        if label == "H":
            U = _embed_1q(_H1, q1, num_qubits) @ U
        elif label == "S":
            U = _embed_1q(_S1, q1, num_qubits) @ U
        else:
            U = _cnot_matrix(q1, q2, num_qubits) @ U
    return U

def calculate_shadow_clifford(num_snapshots: int, num_qubits: int):
    """Collect `num_snapshots` single-shot snapshots under random Cl(2^n) unitaries.

    Each iteration synthesizes a fresh program (a new random Clifford gate
    sequence is sampled at trace-time), then samples one shot via `sample`.

    The sampled gate sequence is captured in a closure-local `sequences`
    list rather than a global, so successive calls don't interfere.

    Args:
        num_snapshots (int): Number of single-shot snapshots to collect.
        num_qubits (int): Number of qubits in the system.

    Returns:
        snapshots (list[list[int]]): per-snapshot measured bitstrings.
        sequences (list[list[tuple]]): per-snapshot Clifford gate sequence
            (each a list of `(label, q1, q2)` tuples) — same length and
            order as `snapshots`.
    """
    sequences = []

    @qfunc
    def clifford_application(state: QArray) -> None:
        """Sample a random Cl(2^n) gate sequence and apply it directly using
        Classiq's native `H`, `S`, and `CX` qfuncs.

The sampled sequence is
        appended to the enclosing `sequences` list (no module-level state)."""
        seq = sample_clifford_circuit(num_qubits)
        sequences.append(seq)
        for label, q1, q2 in seq:
            if label == "H":
                H(state[q1])
            elif label == "S":
                S(state[q1])
            else:
                CX(state[q1], state[q2])

    @qfunc
    def main(qarr: Output[QArray]) -> None:
        """Bell state, followed by a random global Clifford applied as native gates."""
        prepare_bell_state(1, qarr)
        clifford_application(qarr)

    snapshots = []
    for i in range(num_snapshots):
        qprog = synthesize(main)
        if i == 0:
            show(qprog)  # render the very first synthesised circuit
        df = sample(
            qprog,
            num_shots=1,
            random_seed=int(np.random.randint(1e6)),
        )
        snapshots.append(list(df["qarr"].iloc[0]))
    return snapshots, sequences


# We use a smaller shadow than the Pauli case (50 vs 200) to decrease runtime.
num_snapshots_clifford = 50
snapshots_clifford, clifford_sequences = calculate_shadow_clifford(
    num_snapshots_clifford, NUM_QUBITS
)
print(f"Collected {len(snapshots_clifford)} Clifford-shadow snapshots")
print(f"first 5 outcomes:           {snapshots_clifford[:5]}")
print(f"first sampled gate sequence: {clifford_sequences[0]}")
Output:

Submitting job to simulator
  

Output:

Quantum program link: https://platform.classiq.io/circuit/3DWxa5FD0hw8W3k5BJ45gKBRH7x
  

Output:

Job: https://platform.classiq.io/jobs/fa7e8de9-6286-476a-919c-b7c230c1561a
  

Output:

Collected 50 Clifford-shadow snapshots
  first 5 outcomes:           [[1, 0], [1, 1], [1, 0], [0, 1], [1, 0]]
  first sampled gate sequence: [('S', 0, -1), ('H', 0, -1), ('S', 1, -1), ('S', 1, -1), ('S', 0, -1), ('H', 1, -1), ('CX', 0, 1), ('CX', 0, 1), ('S', 0, -1), ('H', 1, -1), ('S', 0, -1), ('H', 0, -1), ('S', 0, -1), ('S', 0, -1), ('CX', 1, 0), ('H', 1, -1), ('H', 1, -1), ('S', 0, -1), ('S', 0, -1), ('S', 0, -1)]
  

Reconstruction with the Clifford shadow

The inverse-channel snapshot for the Clifford ensemble is given by \hat\rho_t \;=\; (2^{\,n} + 1)\, U_t^{\dagger}\, |\hat b_t\rangle\!\langle\hat b_t|\,U_t \;-\; \mathbb{I}. Here, U_t is a 2^n \times 2^n unitary which generally does not factorize into n single-qubit gates, so we build |\hat b_t\rangle\!\langle\hat b_t| in the full 2^n-dimensional Hilbert space and apply the inverse channel. Averaging across the snapshots then reconstructs \rho.
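The inverse-channel formula can be checked numerically without any sampling: for a trace-one state, the average channel is \mathcal{D}_p(\rho) = p\rho + (1-p)\,\mathbb{I}/2^n with p = 1/(2^n+1), and applying \mathcal{M}^{-1}(X) = (2^n+1)X - \mathbb{I} to the channel output recovers \rho exactly. A standalone sketch (variable names are ours):

```python
import numpy as np

n = 2
dim = 2**n

# Random density matrix: rho = A A^dagger / tr(A A^dagger).
rng = np.random.default_rng(0)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Depolarizing channel with p = 1/(2^n + 1): the Clifford-shadow average channel.
p = 1.0 / (dim + 1)
channel_out = p * rho + (1 - p) * np.eye(dim) / dim

# Inverse channel: M^{-1}(X) = (2^n + 1) X - I (for trace-one input).
recovered = (dim + 1) * channel_out - np.eye(dim)

print(np.allclose(recovered, rho))  # True
```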
def snapshot_reconstruction_clifford(snapshot, U, num_qubits):
    """Single-snapshot Clifford-shadow reconstruction.

Uses the fact that ``|b><b|`` is rank-1, so
    ``U^dagger |b><b| U = conj(v) outer v`` where ``v = U[b, :]`` is a single
    row of the Clifford unitary.

    Args:
        snapshot (list[int]): Measurement outcome, a list of `num_qubits` bits in
            `{0, 1}` ordered with qubit 0 as MSB (matching the convention used
            by `qarr` in the Pauli-shadow data acquisition).
        U (np.ndarray): Dense `2**num_qubits x 2**num_qubits` complex matrix of
            the random Clifford unitary that was applied to the state just
            before measurement.
        num_qubits (int): Number of qubits in the system.

    Returns:
        np.ndarray: The single-snapshot density-matrix estimator
        `(2**n + 1) * U^dagger |b><b| U - I`, of shape
        `(2**num_qubits, 2**num_qubits)`.
    """
    dim = 2**num_qubits
    # Convert bit list to basis index (qubit 0 = MSB by convention used in `qarr`).
    b_idx = sum(int(snapshot[i]) << (num_qubits - 1 - i) for i in range(num_qubits))
    v = U[b_idx, :]
    return (dim + 1) * np.outer(v.conjugate(), v) - np.eye(dim, dtype=complex)


def reconstruction_clifford(in_snapshots, in_sequences, num_qubits):
    """Average the snapshot reconstructions across all Clifford-shadow samples.

    `in_sequences[i]` is the gate sequence for snapshot `i`; the corresponding
    `2**n x 2**n` unitary is materialized inline via `sequence_to_unitary`.
    """
    rho_hat = np.zeros((2**num_qubits, 2**num_qubits), dtype=complex)
    for snap, seq in zip(in_snapshots, in_sequences):
        U = sequence_to_unitary(seq, num_qubits)
        rho_hat += snapshot_reconstruction_clifford(snap, U, num_qubits)
    return rho_hat / len(in_snapshots)


reconstructed_state_clifford = reconstruction_clifford(
    snapshots_clifford, clifford_sequences, NUM_QUBITS
)
print("Clifford-shadow reconstruction:")
print(np.round(reconstructed_state_clifford, decimals=4))
print("\nBell state target:")
print(bell_state_density_matrix)

distance_clifford = np.linalg.norm(
    bell_state_density_matrix - reconstructed_state_clifford, "fro"
)
print(f"\nFrobenius distance (Clifford shadow): {distance_clifford:.4f}")
Output:

Clifford-shadow reconstruction:
  [[ 0.475-0.j    -0.025-0.1j    0.1  -0.125j -0.8  -0.075j]
   [-0.025+0.1j   -0.175-0.j    -0.   +0.025j  0.05 +0.075j]
   [ 0.1  +0.125j -0.   -0.025j  0.075-0.j    -0.025-0.1j  ]
   [-0.8  +0.075j  0.05 -0.075j -0.025+0.1j    0.625+0.j   ]]

  Bell state target:
  [[ 0.5  0.   0.  -0.5]
   [ 0.   0.   0.   0. ]
   [ 0.   0.   0.   0. ]
   [-0.5  0.   0.   0.5]]

  Frobenius distance (Clifford shadow): 0.5958
  

Estimation of Observables

For the random n-qubit Clifford measurement channel the inverse channel does not factorize across qubits, and the per-snapshot estimator reads \hat{\rho}_{i} \;=\; (2^{n}+1)\, U_{i}^{\dagger}\,|\hat{b}_{i}\rangle\!\langle\hat{b}_{i}|\,U_{i} \;-\; \mathbb{I}. Substituting into \text{tr}(O\hat{\rho}_{i}) gives \text{tr}(O\,\hat{\rho}_{i}) \;=\; (2^{n}+1)\, \bigl(U_{i}\, O\, U_{i}^{\dagger}\bigr)_{\hat b_{i},\,\hat b_{i}} \;-\; \text{tr}(O), so each snapshot's contribution is a single diagonal entry of the rotated observable. As before, we average within each median-of-means chunk and take the median across the K chunks (Eqs. (1)–(2)).
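The diagonal-entry identity above can be verified on random inputs before trusting the estimator: for any unitary U, basis index b, and Hermitian O, the trace against the full snapshot matrix equals the single rotated diagonal entry. A self-contained check (all names are ours):

```python
import numpy as np

n = 2
dim = 2**n
rng = np.random.default_rng(1)

# Random Hermitian observable and random unitary (QR of a complex Ginibre matrix).
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
O = B + B.conj().T
U, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
b = 3  # a fixed computational-basis outcome

# Full snapshot matrix: rho_hat = (2^n + 1) U^dag |b><b| U - I.
e_b = np.zeros(dim)
e_b[b] = 1.0
rho_hat = (dim + 1) * (U.conj().T @ np.outer(e_b, e_b) @ U) - np.eye(dim)
lhs = np.trace(O @ rho_hat).real

# Shortcut: a single diagonal entry of the rotated observable.
rhs = (dim + 1) * (U @ O @ U.conj().T)[b, b].real - np.trace(O).real

print(np.isclose(lhs, rhs))  # True
```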
def estimate_observable_clifford(
    in_snapshots, in_sequences, observable, div, num_qubits
):
    """
    Estimates <O> = tr(O rho) from a Clifford classical shadow with
    median-of-means.

Each snapshot estimator is rho_hat_i = (2^n + 1) U_i^dagger |b_i><b_i| U_i 

- I,
    so tr(O rho_hat_i) = (2^n + 1) (U_i O U_i^dagger)[b_i, b_i] - tr(O).

    Args:
        in_snapshots (list[list[int]]): Bits per snapshot, num_qubits long each.
        in_sequences (list[list[tuple]]): Sampled Clifford gate sequences,
            one per snapshot. Each is materialized to a 2^n x 2^n unitary
            inline via `sequence_to_unitary`.
        observable (SparsePauliOp): Observable to estimate.
        div (int): Number of median-of-means divisions K.
        num_qubits (int): Number of qubits in the system.

    Returns:
        float: estimated expectation value.
    """
    dim = 2**num_qubits
    N = len(in_snapshots)
    chunk = N // div

    O = hamiltonian_to_matrix(observable)
    trace_O = np.trace(O).real

    means = []
    for k in range(div):
        snaps_k = in_snapshots[k * chunk : (k + 1) * chunk]
        seq_k = in_sequences[k * chunk : (k + 1) * chunk]

        vals = []
        for snap, seq in zip(snaps_k, seq_k):
            U = sequence_to_unitary(seq, num_qubits)
            b_idx = sum(int(snap[i]) << (num_qubits - 1 - i) for i in range(num_qubits))
            v = U[b_idx, :]
            diag = (v @ O @ v.conjugate()).real
            vals.append((dim + 1) * diag - trace_O)
        means.append(np.mean(vals))

    return float(np.median(means))


estimate_clifford = estimate_observable_clifford(
    snapshots_clifford, clifford_sequences, observable, div=10, num_qubits=NUM_QUBITS
)
print(f"Estimated <ZZ> (Clifford shadow): {estimate_clifford:.1f}")
print(f"Error relative to ground truth value of 1.0: {(1.0 - estimate_clifford)}")
Output:

Estimated <ZZ> (Clifford shadow): 1.0
  Error relative to ground truth value of 1.0: 9.992007221626409e-16
  

We achieve machine precision in estimating the expectation value.

Error bound — Clifford ensemble

The sample-complexity bound (Eq. (5)) holds with the Clifford-ensemble shadow norm. Using the upper bound from the introduction, \| O - {\textstyle\frac{\text{tr}(O)}{2^{n}}}\,\mathbb{I} \|^{2}_{\text{shadow}} \;\le\; 3\,\text{tr}\!\left( O_{0}^{\,2}\right), \qquad O_{0} = O - \tfrac{\text{tr}(O)}{2^{n}}\,\mathbb{I}, we obtain a sufficient sample count N \;=\; \frac{34}{\epsilon^{2}}\,\max_{1\le i\le M}\, 3\,\text{tr}\!\left( O_{0,i}^{\,2}\right), \qquad K \;=\; 2\log\!\Bigl(\tfrac{2M}{\delta}\Bigr). Unlike the Pauli ensemble, this does not scale with the locality of the observable, only with the Hilbert–Schmidt norm of its traceless part, so global observables (e.g. fidelities) are tractable.
def error_bound_clifford(error, observables, delta):
    """
    Calculates the minimum Clifford-shadow size guaranteeing additive error
    `error` on every estimated observable with probability `>= 1 - delta`.

    Implements Eq. (5) with the Clifford-ensemble shadow-norm upper bound
        || O - tr(O)/2**n * I ||^2_shadow  <=  3 * tr( (O - tr(O)/2**n * I)^2 ).

    Args:
        error (float): Desired additive error per observable.
        observables (list[SparsePauliOp]): Observables to estimate.
        delta (float): Probability of failure.

    Returns:
        samples (tuple(int, int)): (N*K, K) — total snapshots required and the
        number of median-of-means divisions.
    """
    observable_ops = [hamiltonian_to_matrix(term) for term in observables]

    M = len(observable_ops)
    K = 2 * np.log(2 * M / delta)

    shadow_norms = []
    for op in observable_ops:
        n = int(np.log2(op.shape[0]))
        traceless = op - np.trace(op) / 2**n * np.eye(2**n)
        hs_sq = float(np.real(np.trace(traceless.conj().T @ traceless)))
        shadow_norms.append(3 * hs_sq)

    N = 34 * max(shadow_norms) / error**2

    return int(np.ceil(N * K)), int(K)


required_snapshots_clifford = error_bound_clifford(0.2, [observable], 0.01)
print(
    f"Bound predicts {required_snapshots_clifford[0]} Clifford samples are required to achieve 0.2 error\n"
    f"median-of-means divisions: {required_snapshots_clifford[1]}"
)
Output:

Bound predicts 108086 Clifford samples are required to achieve 0.2 error
  median-of-means divisions: 10
  

In practice, for the Bell state and the chosen observable, we obtain an excellent estimate of the observable with far fewer samples than the error bound predicts.

Summary

We have reconstructed full states and estimated observables using the classical shadow procedure on Classiq. However, the procedure can accomplish much more, such as estimating fidelity, finding entanglement witnesses, and estimating entanglement entropy, as shown in Ref. [3].

References

[1] J. Haah, A. W. Harrow, Z. Ji, X. Wu, and N. Yu, "Sample-Optimal Tomography of Quantum States," IEEE Transactions on Information Theory, vol. 63, no. 9, pp. 5628-5641, Sept. 2017. https://doi.org/10.1109/TIT.2017.2719044. arXiv.
[2] Aaronson, S. (2018). Shadow tomography of quantum states. arXiv.
[3] Huang, H.-Y., Kueng, R. & Preskill, J. Predicting many properties of a quantum system from very few measurements. Nat. Phys. 16, 1050–1057 (2020). https://doi.org/10.1038/s41567-020-0932-7. arXiv.
[4] Quantum depolarizing channel. Wikipedia. https://en.wikipedia.org/wiki/Quantum_depolarizing_channel
[5] Aaronson, S., & Gottesman, D. (2004). Improved Simulation of Stabilizer Circuits. Physical Review A, 70(5), 052328. https://doi.org/10.1103/PhysRevA.70.052328. arXiv.
[6] Wiersema, R., & Doolittle, B. (2021, June). Classical shadows. PennyLane Demos. Xanadu. https://pennylane.ai/qml/demos/tutorial_classical_shadows