
State Preparation

Most quantum applications start with preparing a state in a quantum register. For example, in finance the state may represent the price distribution of some assets. In chemistry, it may be an initial guess for the ground state of a molecule, and in quantum machine learning, a feature vector to analyze.

The state preparation function creates a quantum circuit that outputs either a probability distribution \(p_{i}\) or a vector of real amplitudes \(a_{i}\) in the computational basis, with \(i\) denoting the corresponding basis state. The amplitudes are given as a list of floating-point numbers. The probabilities are given either as a list of positive numbers or as a mixture of Gaussian distributions. This is the resulting wave function for probabilities:

\[ \left|\psi\right\rangle = \sum_{i}\sqrt{p_{i}} \left|i\right\rangle, \]

and this is for amplitudes:

\[ \left|\psi\right\rangle = \sum_{i}a_{i} \left|i\right\rangle. \]
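
For example, preparing the two-element probability list \((0.25,\ 0.75)\) on a single qubit yields

\[ \left|\psi\right\rangle = \sqrt{0.25}\left|0\right\rangle + \sqrt{0.75}\left|1\right\rangle \approx 0.5\left|0\right\rangle + 0.87\left|1\right\rangle. \]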

In general, state preparation is hard. Only a very small portion of the Hilbert space can be prepared efficiently (in \(O(\mathrm{poly}(n))\) steps) on a quantum circuit. Therefore, in practice, an approximation is often used to lower the complexity. The approximation is specified by an error metric and an error range.

For a circuit consisting of error-free gates, there are several options for the error metric: KL (Kullback–Leibler divergence), L1, L2, MAX_PROBABILITY, LOSS_OF_FIDELITY, TOTAL_VARIATION, HELLINGER, and BHATTACHARYYA.

For amplitude preparation, you can only use the \(L_p\) norms.
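
For example, the KL metric corresponds to the Kullback–Leibler divergence between the target distribution \(p\) and the prepared distribution \(\tilde{p}\) (written here with the target as the reference distribution):

\[ D_{\mathrm{KL}}\left(p \,\|\, \tilde{p}\right) = \sum_{i} p_{i}\log\frac{p_{i}}{\tilde{p}_{i}}. \]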

The higher the specified error tolerance, the smaller the output circuit. If you do not specify an error, the Classiq engine attempts to build the exact circuit.

The state preparation algorithm can be tuned depending on whether the probability distribution is sparse or dense. The synthesis engine will automatically select the parameterization based on the given constraints and optimization level.

Syntax

Function: StatePreparation

Parameters:

  • probabilities: [pmf, GaussianMixture, List, Tuple, ndarray]

    or amplitudes: [ List, Tuple, ndarray]

  • error_metric: Optional[Dict[metric name, NonNegativeFloatRange]]. The metric name is one of KL, L1, L2, MAX_PROBABILITY, LOSS_OF_FIDELITY, TOTAL_VARIATION, HELLINGER, or BHATTACHARYYA, and NonNegativeFloatRange is specified by its "lower_bound" and "upper_bound" fields.
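
For example, the following snippet prepares an eight-element probability list with a KL error bound of 0.01: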
{
  "function": "StatePreparation",
  "function_params": {
    "probabilities": [0.05, 0.11, 0.13, 0.23, 0.27, 0.12, 0.03, 0.06],
    "error_metric": {
      "KL": {
        "upper_bound": 0.01
      }
    }
  }
}

Example 1: Loading a Probability Mass Function (PMF)

{
  "functions": [
    {
      "name": "main",
      "body": [
        {
          "function": "StatePreparation",
          "function_params": {
              "probabilities": [0.05, 0.11, 0.13, 0.23, 0.27, 0.12, 0.03, 0.06],
              "error_metric": { "KL": { "upper_bound": 0.01 } }
          }
        }
      ]
    }
  ],
  "constraints": { "max_depth": 91 }
}
from classiq import Model, synthesize, show
from classiq.model import Constraints
from classiq.builtin_functions import StatePreparation

probabilities = (0.05, 0.11, 0.13, 0.23, 0.27, 0.12, 0.03, 0.06)
params = StatePreparation(
    probabilities=probabilities,
    error_metric={"KL": {"upper_bound": 0.01}},
)

constraints = Constraints(max_depth=91)
model = Model(constraints=constraints)
model.StatePreparation(params)
quantum_program = synthesize(model.get_model())

show(quantum_program)

This example generates a circuit whose output state probabilities are an approximation to the given PMF. That is, the probability of measuring the state \(|000\rangle\) is \(0.05\), of measuring \(|001\rangle\) is \(0.11\), and so on, up to \(|111\rangle\) with probability \(0.06\). The error metric is Kullback–Leibler.

example_1.png

To execute the circuit, you may run the following code:

{
  "functions": [
    {
      "name": "main",
      "body": [
        {
          "function": "StatePreparation",
          "function_params": {
            "probabilities": {
              "pmf": [
                0.05,
                0.11,
                0.13,
                0.23,
                0.27,
                0.12,
                0.03,
                0.06
              ]
            },
            "error_metric": {
              "KL": {
                "upper_bound": 0.01
              }
            }
          }
        }
      ]
    }
  ],
  "classical_functions": [
    {
      "name": "cmain",
      "body": [
        {
          "name": "result",
          "var_type": {
            "kind": "histogram"
          }
        },
        {
          "invoked_expression": {
            "function": "sample",
            "target_function": "main"
          },
          "assigned_variable": "result"
        },
        {
          "saved_variable": "result"
        }
      ]
    }
  ],
  "constraints": {
    "max_depth": 91
  }
}
from classiq import Model, synthesize
from classiq.model import Constraints
from classiq.builtin_functions import StatePreparation
from classiq import execute

probabilities = (0.05, 0.11, 0.13, 0.23, 0.27, 0.12, 0.03, 0.06)
params = StatePreparation(
    probabilities=probabilities,
    error_metric={"KL": {"upper_bound": 0.01}},
)

constraints = Constraints(max_depth=91)
model = Model(constraints=constraints)
model.StatePreparation(params)
model.sample()
quantum_program = synthesize(model.get_model())

res = execute(quantum_program)

counts = sorted(res[0].value.counts_by_qubit_order(lsb_right=True).items())
num_shots = sum(count[1] for count in counts)
print(
    f"probabilities are:\n{dict([(bit_string, count/num_shots) for bit_string, count in counts])}"
)

The results are these values:

probabilities are:
{'000': 0.0495, '001': 0.1075, '010': 0.13425, '011': 0.2325, '100': 0.2795, '101': 0.1155, '110': 0.0465, '111': 0.03475}

Example 2: Loading a Gaussian Mixture

{
  "functions": [
    {
      "name": "main",
      "body": [
        {
          "function": "StatePreparation",
          "function_params": {
            "probabilities": {
              "gaussian_moment_list": [
                { "mu": 1, "sigma": 1 },
                { "mu": 3, "sigma": 1 },
                { "mu": -3, "sigma": 1 }
              ],
              "num_qubits": 8
            },
            "error_metric": { "L2": { "upper_bound": 0.023 } }
          }
        }
      ]
    }
  ],
  "constraints": { "max_depth": 91 },
  "classical_functions": [
    {
      "name": "cmain",
      "body": [
        {
          "name": "result",
          "var_type": {
            "kind": "histogram"
          }
        },
        {
          "invoked_expression": {
            "function": "sample",
            "target_function": "main"
          },
          "assigned_variable": "result"
        },
        {
          "saved_variable": "result"
        }
      ]
    }
  ],
  "constraints": {
    "max_depth": 91
  }
}
from classiq import Model, synthesize
from classiq.model import Constraints
from classiq.builtin_functions import StatePreparation
from classiq.builtin_functions.state_preparation import (
    GaussianMixture,
    GaussianMoments,
)
from classiq import execute

params = StatePreparation(
    probabilities=GaussianMixture(
        gaussian_moment_list=(
            GaussianMoments(mu=1, sigma=1),
            GaussianMoments(mu=3, sigma=1),
            GaussianMoments(mu=-3, sigma=1),
        ),
        num_qubits=8,
    ),
    error_metric={"L2": {"upper_bound": 0.023}},
)

constraints = Constraints(max_depth=91)
model = Model(constraints=constraints)
model.StatePreparation(params)
model.sample()
quantum_program = synthesize(model.get_model())

results = execute(quantum_program)

This example generates and executes a circuit whose output state probabilities correspond to a Gaussian mixture. GaussianMixture contains a list of Gaussian functions that together describe the total distribution, and the num_qubits field determines the number of qubits over which the distribution is sampled. Each Gaussian function is described by its mean mu and standard deviation sigma.

The Classiq engine calculates the underlying PMF as follows:

  1. Truncates the support of the Gaussian mixture at 5 sigma beyond the outermost Gaussian on each side.
  2. Divides the support into an equally spaced grid containing \(2^{8}+1\) points (here 8 is the value of num_qubits).
  3. Calculates the PMF by taking the difference between the CDF values of consecutive grid points.

The error metric is L2. Note that, because of the selected error bound, four qubits (those corresponding to the least significant bits) do not undergo any operation. A tighter error bound would result in a circuit operating on more qubits.
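
The following Python snippet is a minimal sketch of this calculation. It is not the Classiq implementation; it assumes equal weights for the mixture components and uses scipy.stats.norm for the Gaussian CDF:

import numpy as np
from scipy.stats import norm

# (mu, sigma) pairs of the Gaussian mixture from this example
moments = [(1, 1), (3, 1), (-3, 1)]
num_qubits = 8

# 1. Truncate the support 5 sigma beyond the outermost Gaussians.
lower = min(mu - 5 * sigma for mu, sigma in moments)
upper = max(mu + 5 * sigma for mu, sigma in moments)

# 2. Equally spaced grid with 2^num_qubits + 1 points.
grid = np.linspace(lower, upper, 2**num_qubits + 1)

# Mixture CDF, assuming equal component weights.
cdf = sum(norm.cdf(grid, loc=mu, scale=sigma) for mu, sigma in moments) / len(moments)

# 3. PMF as the difference between consecutive CDF values,
#    renormalized after the truncation.
pmf = np.diff(cdf)
pmf /= pmf.sum()

print(len(pmf), pmf.sum())  # 256 grid intervals, probabilities summing to 1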

example_2.png

This is the resulting plot and the code to generate it using the SDK.

from matplotlib import pyplot as plt

counts = results[0].value.counts_by_qubit_order(lsb_right=True)
sorted_counts = dict(sorted(counts.items()))

bit_strings, counts = sorted_counts.keys(), sorted_counts.values()
num_shots = sum(counts)

plt.title("Gaussian Mixtures graph")
plt.xlabel("State")
plt.ylabel("Measurement Probability")
plt.plot(
    [int(bit_str, 2) for bit_str in bit_strings],
    [count / num_shots for count in counts],
)
plt.show()

example_2_graph.png

Example 3: Preparing Amplitudes

{
  "functions": [
    {
      "name": "main",
      "body": [
        {
          "function": "StatePreparation",
          "function_params": {
            "amplitudes": [
              -0.5400617248673217,
              -0.3857583749052298,
              -0.23145502494313788,
              -0.07715167498104598,
              0.07715167498104593,
              0.23145502494313777,
              0.38575837490522974,
              0.5400617248673217
            ],
            "error_metric": {
              "L2": {
                "upper_bound": 0.1
              }
            }
          }
        }
      ]
    }
  ],
  "constraints": { "max_depth": 120 }
}
from classiq import Model, synthesize
from classiq.model import Constraints
from classiq.builtin_functions import StatePreparation
import numpy as np

amp = np.linspace(-1, 1, 8)
amp = amp / np.linalg.norm(amp)

params = StatePreparation(amplitudes=amp, error_metric={"L2": {"upper_bound": 0.1}})

constraints = Constraints(max_depth=120)
model = Model(constraints=constraints)
model.StatePreparation(params)
circuit = synthesize(model.get_model())

This example loads a normalized linear space between -1 and 1; the amplitudes are normalized so that they describe a valid quantum state. The loaded state has an accuracy of at least 90 percent under the L2 norm (the L2 error is bounded by 0.1).

Note

When amplitudes are loaded, do not pass probabilities as an argument. StatePreparation must get either probabilities or amplitudes but not both.

example_4.png