Max Colorable Induced Subgraph Problem¶
Background¶
Given a graph $G = (V,E)$ and a number of colors $K$, find the largest induced subgraph that can be colored using up to $K$ colors.
A coloring is legal if:
- each vertex $v_i$ is assigned a color $k_i \in \{0, 1, ..., K-1\}$
- adjacent vertices have different colors: for every $v_i, v_j$ such that $(v_i, v_j) \in E$, $k_i \neq k_j$.
An induced subgraph of a graph $G = (V,E)$ is a graph $G'=(V', E')$ such that $V' \subseteq V$ and $E' = \{(v_1, v_2) \in E\ |\ v_1, v_2 \in V'\}$.
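To make the definitions concrete, here is a minimal sketch (plain networkx; the helper is_legal_coloring is a hypothetical name, not part of this demo's code) that checks whether a color assignment on a subset of vertices is a legal K-coloring of the induced subgraph:
import networkx as nx
def is_legal_coloring(graph, coloring, K):
    # coloring: dict mapping a subset of nodes to colors 0, ..., K-1
    if any(color not in range(K) for color in coloring.values()):
        return False
    # adjacent vertices inside the induced subgraph must get different colors
    subgraph = graph.subgraph(coloring.keys())
    return all(coloring[u] != coloring[v] for u, v in subgraph.edges())
# example: a triangle cannot be 2-colored, but any two of its vertices can
triangle = nx.cycle_graph(3)
print(is_legal_coloring(triangle, {0: 0, 1: 1, 2: 0}, K=2))  # False - edge (0, 2) conflicts
print(is_legal_coloring(triangle, {0: 0, 1: 1}, K=2))  # True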
Necessary Packages¶
In this demo, besides the classiq package, we'll use the following packages:
%%capture
! pip install 'networkx[default]'
! pip install pyomo
! pip install matplotlib
Define the optimization problem¶
import networkx as nx
import numpy as np
import pyomo.environ as pyo
def define_max_k_colorable_model(graph, K):
    model = pyo.ConcreteModel()
    nodes = list(graph.nodes())
    colors = range(0, K)
    # x[color, node] is 1 if the node is assigned this color, 0 otherwise
    model.x = pyo.Var(colors, nodes, domain=pyo.Binary)
    x_variables = np.array(list(model.x.values()))
    adjacency_matrix = nx.convert_matrix.to_numpy_array(graph, nonedge=0)
    adjacency_matrix_block_diagonal = np.kron(np.eye(K), adjacency_matrix)
    # constraint that 2 nodes sharing an edge mustn't have the same color
    model.conflicting_color_constraint = pyo.Constraint(
        expr=x_variables @ adjacency_matrix_block_diagonal @ x_variables == 0
    )
    # each node is assigned at most one color
    @model.Constraint(nodes)
    def each_node_is_colored_once_or_zero(model, node):
        return sum(model.x[color, node] for color in colors) <= 1
    def is_node_colored(node):
        is_colored = np.prod([(1 - model.x[color, node]) for color in colors])
        return 1 - is_colored
    # maximize the number of colored nodes, i.e., the size of the induced subgraph
    model.value = pyo.Objective(
        expr=sum(is_node_colored(node) for node in nodes), sense=pyo.maximize
    )
    return model
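A note on the objective: because each node is assigned at most one color, is_node_colored(node) equals 1 exactly when some color bit of that node is set, since $1 - \prod_{c}(1 - x_{c,v})$ vanishes only if all $x_{c,v}$ are zero. A quick numeric check of this encoding for $K=2$ (plain Python, outside the Pyomo model):
for bits in [(0, 0), (0, 1), (1, 0)]:
    is_colored = 1 - (1 - bits[0]) * (1 - bits[1])
    print(bits, "->", is_colored)  # 0 only when both color bits are 0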
Initialize the model with parameters¶
graph = nx.erdos_renyi_graph(6, 0.5, seed=7)
nx.draw_kamada_kawai(graph, with_labels=True)
NUM_COLORS = 2
coloring_model = define_max_k_colorable_model(graph, NUM_COLORS)
Print the resulting Pyomo model¶
coloring_model.pprint()
Setting Up the Classiq Problem Instance¶
In order to solve the Pyomo model defined above, we use the Classiq combinatorial optimization engine. For the quantum part of the QAOA algorithm (QAOAConfig), we define the number of repetitions (num_layers):
from classiq import construct_combinatorial_optimization_model
from classiq.applications.combinatorial_optimization import OptimizerConfig, QAOAConfig
qaoa_config = QAOAConfig(num_layers=8)
For the classical optimization part of the QAOA algorithm, we define the maximum number of classical iterations (max_iteration) and the $\alpha$-parameter (alpha_cvar) for running CVaR-QAOA, an improved variation of the QAOA algorithm [3]:
optimizer_config = OptimizerConfig(max_iteration=20, alpha_cvar=0.7)
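As a reminder, alpha_cvar controls how many of the measured samples enter the cost estimate: CVaR-QAOA averages only over the best $\alpha$-fraction of shots instead of over all of them [3]. The following sketch illustrates the idea only; it is not Classiq's internal implementation, and it assumes a minimization convention for the cost:
import numpy as np
def cvar_cost(costs, probabilities, alpha=0.7):
    # average cost over the best (lowest-cost) alpha-fraction of the samples
    order = np.argsort(costs)
    sorted_costs = np.asarray(costs, dtype=float)[order]
    sorted_probs = np.asarray(probabilities, dtype=float)[order]
    kept = np.cumsum(sorted_probs) <= alpha
    kept[0] = True  # always keep at least the single best sample
    return np.average(sorted_costs[kept], weights=sorted_probs[kept])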
Lastly, we load the model, based on the problem and algorithm parameters, which we can use to solve the problem:
qmod = construct_combinatorial_optimization_model(
    pyo_model=coloring_model,
    qaoa_config=qaoa_config,
    optimizer_config=optimizer_config,
)
We also set the quantum backend we want to execute on:
from classiq import set_execution_preferences
from classiq.execution import ClassiqBackendPreferences, ExecutionPreferences
backend_preferences = ExecutionPreferences(
    backend_preferences=ClassiqBackendPreferences(backend_name="aer_simulator")
)
qmod = set_execution_preferences(qmod, backend_preferences)
with open("max_induced_k_color_subgraph.qmod", "w") as f:
    f.write(qmod)
Synthesizing the QAOA Circuit and Solving the Problem¶
We can now synthesize and view the QAOA circuit (ansatz) used to solve the optimization problem:
from classiq import show, synthesize
qprog = synthesize(qmod)
show(qprog)
We now solve the problem using the generated circuit by calling the execute method:
from classiq import execute
res = execute(qprog).result()
We can check the convergence of the run:
from classiq.execution import VQESolverResult
vqe_result = res[1].value
vqe_result.convergence_graph
Optimization Results¶
We can also examine the statistics of the algorithm:
import pandas as pd
optimization_result = pd.DataFrame.from_records(res[0].value)
optimization_result.sort_values(by="cost", ascending=False).head(5)
And the histogram:
optimization_result.hist("cost", weights=optimization_result["probability"])
Let us plot the best solution:
import matplotlib.pyplot as plt
best_solution = optimization_result.solution[optimization_result.cost.idxmax()]
one_hot_solution = np.array(best_solution).reshape([NUM_COLORS, len(graph.nodes)])
integer_solution = np.argmax(one_hot_solution, axis=0)
colored_nodes = np.array(graph.nodes)[one_hot_solution.sum(axis=0) != 0]
colors = integer_solution[colored_nodes]
pos = nx.kamada_kawai_layout(graph)
nx.draw(graph, pos=pos, with_labels=True, alpha=0.3, node_color="k")
nx.draw(graph.subgraph(colored_nodes), pos=pos, node_color=colors, cmap=plt.cm.rainbow)
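As a quick sanity check (using only the variables already defined above), we can verify that no edge of the chosen induced subgraph connects two nodes of the same color:
node_to_color = dict(zip(colored_nodes, colors))
conflicting_edges = [
    (u, v)
    for u, v in graph.subgraph(colored_nodes).edges()
    if node_to_color[u] == node_to_color[v]
]
print("number of colored nodes:", len(colored_nodes))
print("conflicting edges:", conflicting_edges)  # expected to be empty for a legal coloring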
Classical optimizer results¶
Lastly, we can compare to the classical solution of the problem:
from pyomo.common.errors import ApplicationError
from pyomo.opt import SolverFactory
solver = SolverFactory("couenne")
result = None
try:
    result = solver.solve(coloring_model)
except ApplicationError:
    print("Solver might not have exited normally. Try again")
coloring_model.display()
if result:
    classical_solution = [
        pyo.value(coloring_model.x[i, j])
        for i in range(NUM_COLORS)
        for j in range(len(graph.nodes))
    ]
    one_hot_solution = np.array(classical_solution).reshape(
        [NUM_COLORS, len(graph.nodes)]
    )
    integer_solution = np.argmax(one_hot_solution, axis=0)
    colored_nodes = np.array(graph.nodes)[one_hot_solution.sum(axis=0) != 0]
    colors = integer_solution[colored_nodes]
    pos = nx.kamada_kawai_layout(graph)
    nx.draw(graph, pos=pos, with_labels=True, alpha=0.3, node_color="k")
    nx.draw(
        graph.subgraph(colored_nodes), pos=pos, node_color=colors, cmap=plt.cm.rainbow
    )
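If the classical solver ran successfully, we can also compare its objective value with the best cost sampled by QAOA (a small comparison using the variables defined above; it assumes result is not None and that the QAOA results dataframe is still in scope):
if result:
    print("best sampled QAOA cost:", optimization_result.cost.max())
    print("classical objective value:", pyo.value(coloring_model.value))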