Introduction
In the Number Partitioning Problem [1] we need to find how to partition a set of integers into two subsets with equal sums. If no such partition exists, we ask instead for a partition in which the difference between the sums is minimal.

Mathematical formulation
Given a set of numbers $S = \{s_1, \dots, s_N\}$, a partition is defined as two subsets $S_1, S_2 \subseteq S$, with $S_1 \cup S_2 = S$ and $S_1 \cap S_2 = \emptyset$. In the Number Partitioning Problem we need to determine a partition such that $\left|\sum_{s \in S_1} s - \sum_{s \in S_2} s\right|$ is minimal. A partition can be represented by a binary vector $x$ of size $N$, where we assign $x_i = 0$ or $1$ for $s_i$ being in $S_1$ or $S_2$, respectively. The quantity we ask to minimize is $\left|\sum_{i=1}^{N} s_i (1 - 2x_i)\right|$. In practice we will minimize the square of this expression.
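As a quick sanity check of the objective (the numbers below are a made-up example, not part of the original notebook), a perfect partition gives a squared difference of zero:

```python
# Hypothetical example: S = {3, 1, 4, 2, 2}; x[i] = 0 puts s_i in S1, x[i] = 1 in S2.
numbers = [3, 1, 4, 2, 2]
x = [0, 0, 1, 1, 0]  # S1 = {3, 1, 2} (sum 6), S2 = {4, 2} (sum 6)

# The quantity to minimize: the squared difference between the two subset sums.
diff = sum(s * (1 - 2 * xi) for s, xi in zip(numbers, x))
print(diff**2)  # prints 0 -> a perfect partition
```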
Solving with the Classiq platform

We go through the steps of solving the problem with the Classiq platform, using the QAOA algorithm [2]. The solution is based on defining a Pyomo model for the optimization problem we would like to solve.

Building the Pyomo model
We proceed by defining the Pyomo model that will be used on the Classiq platform, using the mathematical formulation defined above:
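A minimal sketch of such a model, assuming the same illustrative set of numbers as in the example above (the notebook's actual input and variable names may differ):

```python
import pyomo.environ as pyo

nums = [3, 1, 4, 2, 2]  # assumed example input

model = pyo.ConcreteModel("number_partitioning")
model.x = pyo.Var(range(len(nums)), domain=pyo.Binary)  # x[i] = 0/1 -> subset S1/S2

# Objective: the squared difference between the two subset sums,
# (sum_i s_i * (1 - 2*x_i))^2, as derived in the mathematical formulation.
model.cost = pyo.Objective(
    expr=sum(s * (1 - 2 * model.x[i]) for i, s in enumerate(nums)) ** 2,
    sense=pyo.minimize,
)
```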
Setting Up the Classiq Problem Instance
In order to solve the Pyomo model defined above, we use the CombinatorialProblem Python class.
Under the hood it translates the Pyomo model to a quantum model of the QAOA algorithm, with a cost Hamiltonian derived from the Pyomo objective and constraints. We can choose the number of layers of the QAOA ansatz using the num_layers argument, and the penalty_factor, which is the coefficient of the constraints term in the cost Hamiltonian.
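A sketch of creating the problem instance; the import path and the exact constructor signature are assumptions based on the argument names mentioned above, and should be checked against the installed Classiq SDK:

```python
from classiq.applications.combinatorial_optimization import CombinatorialProblem

# `model` is the Pyomo model defined above. The keyword names follow the text;
# the exact constructor signature is an assumption.
combi_problem = CombinatorialProblem(
    pyo_model=model,
    num_layers=3,       # number of QAOA layers in the ansatz
    penalty_factor=10,  # coefficient of the constraints term in the cost Hamiltonian
)
```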
Synthesizing the QAOA Circuit and Solving the Problem
We can now synthesize and view the QAOA circuit (ansatz) used to solve the optimization problem:
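A sketch of this step; the get_qprog method name is an assumption for how the problem object exposes the synthesized QAOA program, while show is Classiq's standard circuit visualization utility:

```python
from classiq import show

# Synthesize the QAOA ansatz for the problem (method name assumed).
qprog = combi_problem.get_qprog()

# Open the synthesized circuit in the Classiq visualizer.
show(qprog)
```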
We now solve the problem by calling the optimize method of the CombinatorialProblem object.
For the classical optimization part of the QAOA algorithm we define the maximum number of classical iterations (maxiter) and the $\alpha$-parameter (quantile) for running CVaR-QAOA, an improved variation of the QAOA algorithm [3]:
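A sketch of the call; the maxiter and quantile keyword names are taken from the text, while the chosen values and the return value are illustrative assumptions:

```python
# Run the classical optimization loop of CVaR-QAOA.
# maxiter bounds the classical iterations; quantile is the CVaR alpha-parameter.
optimized_params = combi_problem.optimize(maxiter=60, quantile=0.7)
```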

Optimization Results
We can also examine the statistics of the algorithm. In order to get samples with the optimized parameters, we call the sample method:
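A sketch of this step, assuming sample accepts the optimized parameters and returns a pandas DataFrame with the columns shown in the table that follows:

```python
# Sample the ansatz with the optimized parameters (signature assumed)
# and display the lowest-cost solutions.
optimization_result = combi_problem.sample(optimized_params)
print(optimization_result.sort_values(by="cost").head())
```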
| | solution | probability | cost |
|---|---|---|---|
| 0 | {'x': [1, 0, 0, 1, 0, 1, 0, 1, 0, 1]} | 0.018066 | 1 |
| 88 | {'x': [1, 1, 0, 0, 0, 1, 1, 0, 0, 1]} | 0.002441 | 1 |
| 263 | {'x': [0, 0, 1, 1, 1, 1, 0, 0, 0, 1]} | 0.000977 | 1 |
| 285 | {'x': [1, 1, 1, 1, 1, 0, 0, 1, 0, 0]} | 0.000977 | 1 |
| 287 | {'x': [0, 1, 0, 1, 1, 0, 1, 0, 0, 1]} | 0.000977 | 1 |
