QUBO Tutorial Version1
Abstract
Recent years have witnessed the remarkable discovery that the Quadratic Unconstrained Binary
Optimization (QUBO) model unifies a wide variety of combinatorial optimization problems, and
moreover is the foundation of adiabatic quantum computing and a subject of study in
neuromorphic computing. Through these connections, QUBO models lie at the heart of
experimentation carried out with quantum computers developed by D-Wave Systems and
neuromorphic computers developed by IBM and are actively being explored for their research
and practical applications by Google and Lockheed Martin in the commercial realm and by Los
Alamos National Laboratory, Oak Ridge National Laboratory and Lawrence Livermore National
Laboratory in the public sector. Computational experience is being amassed by both the classical
and the quantum computing communities that highlights not only the potential of the QUBO
model but also its effectiveness as an alternative to traditional modeling and solution
methodologies.
This tutorial discloses the basic features of the QUBO model that give it the power and flexibility
to encompass the range of applications that have thrust it into prominence. We show how many
different types of constraints arising in practice can be embodied within the “unconstrained”
QUBO formulation in a very natural manner using penalty functions, yielding exact model
representations in contrast to the approximate representations produced by customary uses of
penalty functions. Each step of generating such models is illustrated in detail by simple
numerical examples, to highlight the convenience of using QUBO models in numerous settings.
We also describe recent innovations for solving QUBO models that offer a rich potential for
integrating classical and quantum computing and for applying these models in machine learning.
1 ECEE, College of Engineering and Applied Science, University of Colorado, Boulder, CO 80302 USA, [email protected]
2 College of Business, University of Colorado at Denver, Denver, CO 80217 USA, [email protected]
Table of Contents:
Section 1: Introduction
• Overview and Basic Formulation
Section 2: Illustrative Examples and Definitions
• Examples
• Definitions
Section 3: Natural QUBO Formulations
• The Number Partitioning Problem
• The Max Cut Problem
Section 4: Creating QUBO Models Using Known Penalties
• Frequently Used Penalties
• The Minimum Vertex Cover Problem
• The Set Packing Problem
• The Max 2-Sat Problem
Section 5: Creating QUBO Models Using a General Purpose Approach
• General Transformation
• The Set Partitioning Problem
• The Graph Coloring Problem
• The General 0/1 Linear Model
• The Quadratic Assignment Problem
• The Quadratic Knapsack Problem
Section 6: Connections with Quantum Computing and Machine Learning
• Quantum Computing QUBO Developments
• Unsupervised Machine Learning with QUBO
• Supervised Machine Learning with QUBO
• Machine Learning to Improve QUBO Solution Processes
Section 7: Concluding Remarks
Bibliography
Acknowledgments
Section 1: Introduction
The field of Combinatorial Optimization (CO) is one of the most important areas in the general
field of optimization, with important applications found in every industry, including both the
private and public sectors. It is also one of the most active research areas pursued by the
research communities of Operations Research, Computer Science and Analytics as they work to
design and test new methods for solving real world CO problems.
Generally, these problems are concerned with making wise choices in settings where a large
number of yes/no decisions must be made and each set of decisions yields a corresponding
objective function value – like a cost or profit value. Finding good solutions in these settings is
extremely difficult. The traditional approach is for the analyst to develop a solution algorithm
that is tailored to the mathematical structure of the problem at hand. While this approach has
produced good results in certain problem settings, it has the disadvantage that the diversity of
applications arising in practice requires the creation of a diversity of solution techniques, each
with limited application outside their original intended use.
By contrast, a two-step process of first re-casting an original model into the form of a QUBO model and then solving it with appropriate software enables the QUBO model to become a unifying
framework for combinatorial optimization. The alternative path that results for effectively
modeling and solving many important problems is a new development in the field of
combinatorial optimization. The significance of this situation is enhanced by the fact that the
QUBO model can be shown to be equivalent to the Ising model that plays a prominent role in
physics. Consequently, the broad range of optimization problems solved effectively by state-of-
the-art QUBO solution methods are joined by an important domain of problems arising in
physics applications.
The materials provided in the sections that follow illustrate the process of reformulating
important optimization problems as QUBO models through a series of explicit examples.
Collectively these examples highlight the application breadth of the QUBO model. We disclose
the unexpected advantages of modeling a wide range of problems in a form that differs from the
linear models classically adopted in the optimization community. As part of this, we provide
techniques that can be used to recast a variety of problems that may not seem at first to fit within
an unconstrained binary optimization structure, and perhaps existing in a classical mathematical
form, into an equivalent QUBO model. We also discuss the underpinnings of today’s leading
QUBO solution methods and the links they provide between classical and quantum computing.
As pointed out in Kochenberger and Glover (2006), the QUBO model embraces the following
important optimization problems:
P-Median Problems
Maximum Clique Problems
SAT Problems
Details of such applications are elaborated more fully in Kochenberger et al. (2014).
In the following development we will show approaches that make it possible to model these and
many other types of problems in the QUBO framework. In the concluding section we
additionally provide information about recent developments linking QUBO to machine learning
and quantum computing.
We now give a formal definition of the QUBO model whose significance will be made clearer by
numerical examples that give a sense of the diverse array of practical QUBO applications.
QUBO: minimize y = x^t Q x
where x is a vector of binary decision variables and Q is a square matrix of constants.
It is common to assume that the Q matrix is symmetric or in upper triangular form, which can be
achieved without loss of generality simply as follows:
5
Upper triangular form: For all i and j with j > i, replace qij by qij + qji. Then replace all qij for j < i by 0. (If the matrix is already symmetric, this just doubles the qij values above the main diagonal, and then sets all values below the main diagonal to 0.)
Note: In the examples given in the following sections, we will work with the full, symmetric Q
matrix rather than adopting the “upper triangular form.”
1. The function to be minimized is a quadratic function in binary variables with a linear part −5x1 − 3x2 − 8x3 − 6x4 and a quadratic part 4x1x2 + 8x1x3 + 2x2x3 + 10x3x4.
4. The Q matrix for this example is presented in a form that is symmetric about the main diagonal, without needing to modify the coefficients by the approach shown in Section 1.
5. Other than the 0/1 restrictions on the decision variables, QUBO is an unconstrained
model with all problem data being contained in the Q matrix. These characteristics make
the QUBO model particularly attractive as a modeling framework for combinatorial
optimization problems, offering a novel alternative to classically constrained
representations.
6. The solution to the model in (3) above is: y = −11, x1 = x4 = 1, x2 = x3 = 0.
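Because the example has only four variables, the model can be checked by exhaustive enumeration. The sketch below (illustrative only) builds the full symmetric Q matrix for the function above and recovers the solution reported in item 6:

```python
from itertools import product

# Full symmetric Q for y = -5x1 - 3x2 - 8x3 - 6x4
#                        + 4x1x2 + 8x1x3 + 2x2x3 + 10x3x4:
# each quadratic coefficient is split evenly between q_ij and q_ji.
Q = [[-5, 2, 4, 0],
     [ 2, -3, 1, 0],
     [ 4, 1, -8, 5],
     [ 0, 0, 5, -6]]

def qubo_value(Q, x):
    """Evaluate y = x^t Q x for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Enumerate all 2^4 binary vectors and keep the best.
best_x, best_y = min(((x, qubo_value(Q, x)) for x in product((0, 1), repeat=4)),
                     key=lambda t: t[1])
print(best_x, best_y)  # (1, 0, 0, 1) -11
```

Exhaustive search is of course practical only for tiny instances; it is used here and in later examples purely to verify the QUBO constructions.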
Remarks:
• As already noted, the stipulation that Q is symmetric about the main diagonal does not
limit the generality of the model.
• Likewise, casting the QUBO model as a minimization problem does not limit generality.
A well-known observation permits a maximization problem to be solved by minimizing
the negative of its objective function (and the negative of the minimized objective
function value gives the optimum value for the maximization problem).
• As previously emphasized, a variety of optimization problems can naturally be
formulated and solved as an instance of the QUBO model. In addition, many other
problems that don’t appear to be related to QUBO problems can be re-formulated as a
QUBO model. We illustrate this special feature of the QUBO model in the sections that
follow.
Section 3: Natural QUBO Formulations
As mentioned earlier, several important problems fall naturally into the QUBO class. To
illustrate such cases, we provide two examples of important applications whose formulations
naturally take the form of a QUBO model.
The Number Partitioning problem has numerous applications cited in the Reference
section of these notes. A common version of this problem involves partitioning a set of numbers
into two subsets such that the subset sums are as close to each other as possible. We model this
problem as a QUBO instance as follows:
Consider a set of numbers S = {s1, s2, s3, ..., sm}. Let xj = 1 if sj is assigned to subset 1, and xj = 0 otherwise. Then the sum for subset 1 is given by

sum1 = Σ(j=1 to m) sj xj

and the sum for subset 2 is given by

sum2 = Σ(j=1 to m) sj − Σ(j=1 to m) sj xj.

The difference in the sums is then

diff = Σ(j=1 to m) sj − 2 Σ(j=1 to m) sj xj = c − 2 Σ(j=1 to m) sj xj

where c = Σ(j=1 to m) sj is a constant. We minimize the difference by minimizing its square:

diff² = ( c − 2 Σ(j=1 to m) sj xj )² = c² + 4 x^t Q x

where

qii = si ( si − c ),  qij = qji = si sj (for i ≠ j)
Dropping the additive and multiplicative constants, our QUBO optimization problem becomes:
QUBO: min y = x^t Q x
where the Q matrix is constructed with qii and qij as defined above.
By the development above, we have c2 = 27,556 and the equivalent QUBO problem is
min y = x^t Q x with
Solving QUBO gives x = (0, 0, 0, 1, 1, 0, 0, 1) for which y = −6889, yielding perfectly matched sums that equal 83. The development employed here can be expanded to address other forms
of the number partitioning problem, including problems where the numbers must be partitioned
into three or more subsets.
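The specific numbers of the example above are not reproduced here, but the construction itself is easy to sketch. The code below (illustrative, using a small hypothetical set of numbers) builds Q from qii = si(si − c) and qij = qji = si sj, and confirms that diff² = c² + 4·x^t Q x:

```python
from itertools import product

s = [3, 1, 1, 2, 2, 1]           # hypothetical instance; c = sum(s) = 10
c = sum(s)
n = len(s)

# Q from the number-partitioning development:
# q_ii = s_i (s_i - c),  q_ij = q_ji = s_i s_j  (i != j)
Q = [[s[i] * (s[i] - c) if i == j else s[i] * s[j] for j in range(n)]
     for i in range(n)]

def qubo_value(Q, x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best_x, best_y = min(((x, qubo_value(Q, x)) for x in product((0, 1), repeat=n)),
                     key=lambda t: t[1])

sum1 = sum(si * xi for si, xi in zip(s, best_x))
diff_sq = c * c + 4 * best_y      # diff^2 = c^2 + 4 x^t Q x
print(best_y, sum1, diff_sq)      # a perfect split gives diff_sq == 0
```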
The Max Cut problem is one of the most famous problems in combinatorial optimization.
Given an undirected graph G(V,E) with a vertex set V and an edge set E, the Max Cut problem
seeks to partition V into two sets such that the number of edges between the two sets
(considered to be severed by the cut) is as large as possible.
We can model this problem by introducing binary variables satisfying x j = 1 if vertex j is in one
set and xj = 0 if it is in the other set. Viewing a cut as severing edges joining the two sets, leaving the endpoints of the severed edges in different vertex sets, the quantity xi + xj − 2xixj identifies whether the edge (i, j) is in the cut. That is, if (xi + xj − 2xixj) is equal to 1, then exactly one of xi and xj equals 1, which implies edge (i, j) is in the cut. Otherwise, (xi + xj − 2xixj) is equal to zero and the edge is not in the cut.

Thus, the problem of maximizing the number of edges in the cut can be formulated as

max y = Σ((i,j) ∈ E) ( xi + xj − 2xixj )
Sticking with our definition of QUBO in minimization form, we write our model as

min y = Σ((i,j) ∈ E) ( −xi − xj + 2xixj )

which is an instance of

QUBO: min y = x^t Q x
The linear terms determine the elements on the main diagonal of Q and the quadratic terms
determine the off-diagonal elements.
Numerical Example: To illustrate the Max Cut problem, consider an undirected graph with 5 vertices and 6 edges, where the edges join the vertex pairs (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), and (4, 5).

Explicitly taking into account all edges in the graph gives the following formulation:

min y = −2x1 − 2x2 − 3x3 − 3x4 − 2x5 + 2x1x2 + 2x1x3 + 2x2x4 + 2x3x4 + 2x3x5 + 2x4x5
or

QUBO: min y = x^t Q x
      −2   1   1   0   0
       1  −2   0   1   0
Q =    1   0  −3   1   1
       0   1   1  −3   1
       0   0   1   1  −2
Solving this QUBO model gives x = (0, 1, 1, 0, 0) . Hence vertices 2 and 3 are in one set and
vertices 1, 4, and 5 are in the other, with a maximum cut value of 5.
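A sketch of this construction, using the edge list read off the Q matrix above, confirms the reported solution by enumeration:

```python
from itertools import product

# Edges of the 5-vertex example (0-based indices).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
n = 5

# Build Q directly from the minimization form:
# each edge (i, j) contributes -1 to q_ii and q_jj and +1 to q_ij and q_ji.
Q = [[0] * n for _ in range(n)]
for i, j in edges:
    Q[i][i] -= 1
    Q[j][j] -= 1
    Q[i][j] += 1
    Q[j][i] += 1

def qubo_value(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best_x, best_y = min(((x, qubo_value(x)) for x in product((0, 1), repeat=n)),
                     key=lambda t: t[1])
cut = sum(1 for i, j in edges if best_x[i] != best_x[j])
print(best_x, best_y, cut)   # (0, 1, 1, 0, 0) -5 5: maximum cut value is 5
```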
In the above examples, the problem characteristics led directly to an optimization problem in
QUBO form. As previously remarked, many other problems require “re-casting” to create the
desired QUBO form. We introduce a widely-used form of such re-casting in the next section.
Section 4: Creating QUBO Models Using Known Penalties
The “natural form” of a QUBO model illustrated thus far contains no constraints other than those
requiring the variables to be binary. However, by far the largest number of problems of interest
include additional constraints that must be satisfied as the optimizer searches for good solutions.
For certain types of constraints, quadratic penalties useful for creating QUBO models are known
in advance and readily available to be used in transforming a given constrained problem into a
QUBO model. Examples of such penalties for some commonly encountered constraints are
given in the table below. Note that in the table, all variables are intended to be binary and the
parameter P is a positive, scalar penalty value. This value must be chosen sufficiently large to
assure the penalty term is indeed equivalent to the classical constraint, but in practice an
acceptable value for P is usually easy to specify. We discuss this matter more thoroughly later.
Classical Constraint              Equivalent Penalty
x + y ≤ 1                         P(xy)
x + y ≥ 1                         P(1 − x − y + xy)
x + y = 1                         P(1 − x − y + 2xy)
x ≤ y                             P(x − xy)
x1 + x2 + x3 ≤ 1                  P(x1x2 + x1x3 + x2x3)
x = y                             P(x + y − 2xy)

Table of a few known constraint/penalty pairs
To illustrate the main idea, consider a traditionally constrained problem of the form:
Min y = f(x)

st

x1 + x2 ≤ 1

where x1 and x2 are binary variables. Note that this constraint allows either or neither x variable to be chosen; it explicitly precludes both from being chosen (i.e., both cannot be set to 1).
From the 1st row in the table above, we see that a quadratic penalty that corresponds to our
constraint is
Px1x2
where P is a positive scalar. For P chosen sufficiently large, the unconstrained problem
minimize y = f ( x) + Px1x2
has the same optimal solution as the original constrained problem. If f(x) is linear or quadratic,
then this unconstrained model will be in the form of a QUBO model. In our present example,
any optimizer trying to minimize y will tend to avoid solutions having both x1 and x2 equal to 1, since otherwise a large positive amount would be added to the objective function. That is, the objective function incurs a penalty corresponding to infeasible solutions.
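A tiny enumeration makes the penalty's effect concrete. The objective f(x) = −3x1 − 2x2 below is hypothetical, chosen only for illustration:

```python
from itertools import product

# Hypothetical objective f(x) = -3x1 - 2x2 (values chosen for illustration),
# with the constraint x1 + x2 <= 1 enforced by the penalty P*x1*x2.
P = 10

def penalized(x1, x2):
    return -3 * x1 - 2 * x2 + P * x1 * x2

values = {(x1, x2): penalized(x1, x2) for x1, x2 in product((0, 1), repeat=2)}
best = min(values, key=values.get)
print(values[(1, 1)])   # 5: the infeasible point -5 + 10 is heavily penalized
print(best)             # (1, 0): the best solution satisfying x1 + x2 <= 1
```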
In section 3.2 we saw how the QUBO model could be used to represent the famous Max
Cut problem. Here we consider another well-known optimization problem on graphs called the
Minimum Vertex Cover problem. Given an undirected graph with a vertex set V and an edge set
E, a vertex cover is a subset of the vertices (nodes) such that each edge in the graph is incident
to at least one vertex in the subset. The Minimum Vertex Cover problem seeks to find a cover
with a minimum number of vertices in the subset.
A standard optimization model for MVC can be formulated as follows. Let x j = 1 if vertex j is
in the cover (i.e., in the subset) and xj = 0 otherwise. Then the standard constrained, linear 0/1 model is:

Minimize y = Σ(j ∈ V) xj

subject to

xi + xj ≥ 1 for all (i, j) ∈ E
Note the constraints ensure that at least one of the endpoints of each edge will be in the cover
and the objective function seeks to find the cover using the least number of vertices. Note also
that we have a constraint for each edge in the graph, meaning that even for modest sized graphs
we can have many constraints. Each constraint will alternatively be imposed by adding a penalty
to the objective function in the equivalent QUBO model.
Referring to our table above, we see that the constraints in the standard MVC model can be
represented by a penalty of the form P(1 − x − y + xy). Thus, an unconstrained alternative to the constrained model is:

Minimize y = Σ(j ∈ V) xj + P Σ((i,j) ∈ E) (1 − xi − xj + xixj)

where P again represents a positive scalar penalty. In turn, we can write this as minimize x^t Q x plus a constant term. Dropping the additive constant, which has no impact on the optimization, we have an optimization problem in the form of a QUBO model.
Remark: A common extension of this problem allows a weight w j to be associated with each
vertex j. Following the development above, the QUBO model for the Weighted Vertex Cover
problem is given by:
Minimize y = Σ(j ∈ V) wj xj + P Σ((i,j) ∈ E) (1 − xi − xj + xixj)
Numerical Example
Consider the graph of section 3.2 again but this time we want to determine a minimum vertex
cover.
For this graph with n = 6 edges and m = 5 nodes, the model becomes:
Minimize y = x1 + x2 + x3 + x4 + x5 +
P(1 − x1 − x2 + x1 x2 ) +
P(1 − x1 − x3 + x1 x3 ) +
P(1 − x2 − x4 + x2 x4 ) +
P(1 − x3 − x4 + x3 x4 ) +
P(1 − x3 − x5 + x3 x5 ) +
P(1 − x4 − x5 + x4 x5 )
Arbitrarily choosing P to be equal to 8 and dropping the additive constant (6P = 48) gives our
QUBO model
QUBO: min x^t Q x

with the Q matrix given by
      −15   4   4   0   0
        4 −15   0   4   0
Q =     4   0 −23   4   4
        0   4   4 −23   4
        0   0   4   4 −15
Note that we went from a constrained model with 5 variables and 6 constraints to an
unconstrained QUBO model in the same 5 variables. Solving this QUBO model gives:
xt Qx = −45 at x = (0,1,1,0,1) for which y = 48 − 45 = 3 , meaning that a minimum cover is given
by nodes 2, 3, and 5. It’s easy to check that at this solution, all the penalty functions are equal to
0.
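The construction above can be verified mechanically. The sketch below builds Q from the edge list of the example graph (as read off the Q matrix of Section 3.2) with P = 8, checks it against the matrix shown, and recovers the reported minimum by enumeration:

```python
from itertools import product

# Edges of the example graph (0-based indices), penalty P = 8 as in the text.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
n, P = 5, 8

# y = sum_j x_j + P * sum_{(i,j) in E} (1 - x_i - x_j + x_i x_j).
# Dropping the additive constant 6P, the symmetric Q matrix has
# q_jj = 1 - P*deg(j) and q_ij = q_ji = P/2 for each edge (i, j).
Q = [[0] * n for _ in range(n)]
for j in range(n):
    Q[j][j] = 1
for i, j in edges:
    Q[i][i] -= P
    Q[j][j] -= P
    Q[i][j] += P // 2
    Q[j][i] += P // 2

assert Q[0] == [-15, 4, 4, 0, 0]   # matches the matrix shown in the text

def qubo_value(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best_x, best_y = min(((x, qubo_value(x)) for x in product((0, 1), repeat=n)),
                     key=lambda t: t[1])
print(best_x, best_y)   # (0, 1, 1, 0, 1) -45; y = 48 - 45 = 3 vertices
```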
As we have indicated, the reformulation process for many problems requires the introduction of
a scalar penalty P for which a numerical value must be given. These penalties are not unique,
meaning that many different values can be successfully employed. For a particular problem, a
workable value is typically set based on domain knowledge and on what needs to be
accomplished. Often, we use the same penalty for all constraints but there is nothing wrong with
having different penalties for different constraints if there is a good reason to differentially treat
various constraints. If a constraint must absolutely be satisfied, i.e., a “hard” constraint, then P
must be large enough to preclude a violation. Some constraints, however, are “soft”, meaning
that it is desirable to satisfy them but slight violations can be tolerated. For such cases, a more
moderate penalty value will suffice.
A penalty value that is too large can impede the solution process as the penalty terms overwhelm
the original objective function information, making it difficult to distinguish the quality of one
solution from another. On the other hand, a penalty value that is too small jeopardizes the search
for feasible solutions. Generally, there is a ‘Goldilocks region’ of considerable size that contains
penalty values that work well. A little preliminary thought about the model can yield a ballpark
estimate of the original objective function value. Taking P to be some percentage (75% to
150%) of this estimate is often a good place to start. In the end, solutions generated can always
be checked for feasibility, leading to changes in penalties and further rounds of the solution
process as needed to zero in on an acceptable solution.
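The effect of an inadequate penalty can be seen on the vertex cover example above. The sketch below (illustrative) compares P = 8 with a deliberately undersized P = 0.25, for which the empty, infeasible solution wins:

```python
from itertools import product

# Minimum Vertex Cover example from above (0-based edge indices).
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
n = 5

def penalized_objective(x, P):
    cover_size = sum(x)
    violations = sum(1 - x[i] - x[j] + x[i] * x[j] for i, j in edges)
    return cover_size + P * violations

def solve(P):
    return min(product((0, 1), repeat=n), key=lambda x: penalized_objective(x, P))

def is_cover(x):
    return all(x[i] or x[j] for i, j in edges)

# An adequate penalty yields a feasible (optimal) cover...
assert is_cover(solve(8))
# ...while a penalty that is too small lets infeasibility win:
# with P = 0.25 the empty set costs 6 * 0.25 = 1.5 < 3, so no cover is chosen.
print(solve(0.25), is_cover(solve(0.25)))
```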
The Set Packing problem is a well-known optimization problem in binary variables, with the general model

max y = Σ(j=1 to n) wj xj

st

Σ(j=1 to n) aij xj ≤ 1 for i = 1, ..., m

where the aij are 0/1 coefficients, the wj are weights, and the xj variables are binary. Using the
penalties of the form shown in the first and fifth rows of the table given earlier, we can easily
construct a quadratic penalty corresponding to each of the constraints in the traditional model.
Then by subtracting the penalties from the objective function, we have an unconstrained representation of the problem in the form of a QUBO model. In keeping with our preference for minimization, we minimize the negative of this unconstrained objective.
Numerical Example
max y = x1 + x2 + x3 + x4

st

x1 + x3 + x4 ≤ 1
x1 + x2 ≤ 1
Here all the objective function coefficients, the wj values, are equal to 1. Using the penalties from the first and fifth rows of our table with P = 6, the equivalent model is

QUBO: min x^t Q x

with

      −1   3   3   3
Q =    3  −1   0   0
       3   0  −1   3
       3   0   3  −1
Solving the QUBO model gives y = −2 at x = (0, 1, 1, 0). Note that at this solution, all four penalty terms are equal to zero, so both constraints of the original model are satisfied.
Remark: Set packing problems with thousands of variables and constraints have been efficiently reformulated and solved using the QUBO reformulation illustrated in this example.
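As a check, the sketch below rebuilds the Q matrix from the two constraints (using P = 6, the value implied by the off-diagonal entries of 3) and solves by enumeration:

```python
from itertools import product

# Constraints of the example: x1 + x3 + x4 <= 1 and x1 + x2 <= 1.
# Pairs of (0-based) variables that may not both equal 1:
conflicts = [(0, 2), (0, 3), (2, 3), (0, 1)]
w = [1, 1, 1, 1]        # objective weights
n, P = 4, 6             # P = 6 reproduces the matrix shown in the text

# Minimization form: q_jj = -w_j, and q_ij = q_ji = P/2 per conflicting pair.
Q = [[0] * n for _ in range(n)]
for j in range(n):
    Q[j][j] = -w[j]
for i, j in conflicts:
    Q[i][j] += P // 2
    Q[j][i] += P // 2

assert Q == [[-1, 3, 3, 3], [3, -1, 0, 0], [3, 0, -1, 3], [3, 0, 3, -1]]

def qubo_value(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best_x, best_y = min(((x, qubo_value(x)) for x in product((0, 1), repeat=n)),
                     key=lambda t: t[1])
print(best_y)   # -2: two items can be packed; (0, 1, 1, 0) attains this value
```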
Satisfiability problems, in their various guises, have applications in many different settings.
Often these problems are represented in terms of clauses, in conjunctive normal form, consisting
of several true/false literals. The challenge is to determine the literals so that as many clauses as
possible are satisfied.
For our optimization approach, we’ll represent the literals as 0/1 values and formulate models
that can be re-cast into the QUBO framework and solved with QUBO solvers. To illustrate the
approach, we consider the category of satisfiability problems known as Max 2-Sat problems.
For Max 2-Sat, each clause consists of two literals and a clause is satisfied if either or both
literals are true. There are three possible types of clauses for this problem, each with a traditional
constraint that must be satisfied if the clause is to be true. In turn, each of these three constraints
has a known quadratic penalty given in our previous table.
The three clause types along with their traditional constraints and associated penalties are:
1. No negations: Example (xi ∨ xj)
Traditional constraint: xi + xj ≥ 1
Quadratic Penalty: (1 − xi − xj + xixj)

2. One negation: Example (xi ∨ x̄j)
Traditional constraint: xi + x̄j ≥ 1
Quadratic Penalty: (xj − xixj)

3. Two negations: Example (x̄i ∨ x̄j)
Traditional constraint: x̄i + x̄j ≥ 1
Quadratic Penalty: (xixj)
(Note that xj = 1 or 0 denotes whether literal j is true or false. The notation x̄j, the complement of xj, is equal to (1 − xj).)
For each clause type, if the traditional constraint is satisfied, the corresponding penalty is equal
to zero, while if the traditional constraint is not satisfied, the quadratic penalty is equal to 1.
Given this one-to-one correspondence, we can approach the problem of maximizing the number
of clauses satisfied by equivalently minimizing the number of clauses not satisfied. This
perspective, as we will see, gives us a QUBO model.
For a given Max 2-Sat instance then, we can add the quadratic penalties associated with the
problem clauses to get a composite penalty function which we want to minimize. Since the
penalties are all quadratic, this penalty function takes the form of a QUBO model,
min y = xt Qx . Moreover, if y turns out to be equal to zero when minimizing the QUBO
model, this means we have a solution that satisfies all of the clauses; if y turns out to equal 5,
that means we have a solution that satisfies all but 5 of the clauses; and so forth.
This modeling and solution procedure is illustrated by the following example with 4 variables
and 12 clauses where the penalties are determined by the clause type.
Adding the individual clause penalties together gives our QUBO model
min y = 3 + x1 − 2x4 − x2x3 + x2x4 + 2x3x4
or,
min y = 3 + x^t Q x
where the Q matrix is given by
       1    0    0    0
Q =    0    0  −1/2  1/2
       0  −1/2   0    1
       0   1/2   1   −2
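Enumerating this small model confirms the correspondence between y and unsatisfied clauses. The sketch below (using the Q matrix above) finds that the best solution leaves exactly one of the 12 clauses unsatisfied:

```python
from itertools import product

# Q matrix of the Max 2-Sat example (fractions as floats); constant term = 3.
Q = [[1, 0, 0, 0],
     [0, 0, -0.5, 0.5],
     [0, -0.5, 0, 1],
     [0, 0.5, 1, -2]]

def unsatisfied(x):
    """y = 3 + x^t Q x counts the clauses left unsatisfied."""
    n = len(x)
    return 3 + sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best_x, best_y = min(((x, unsatisfied(x)) for x in product((0, 1), repeat=4)),
                     key=lambda t: t[1])
print(best_x, best_y)   # (0, 0, 0, 1) 1.0 -> 11 of the 12 clauses satisfied
```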
Section 5: Creating QUBO Models: A General Purpose Approach
In this section, we illustrate how to construct an appropriate QUBO model in cases where a
QUBO formulation doesn’t arise naturally (as we saw in Section 3) or where usable penalties are not known in advance (as we saw in Section 4). It turns out that for these more general cases, we can always “discover” usable penalties by adopting the procedure outlined below.
For this purpose, consider the general 0/1 optimization problem of the form:
min y = x^t C x
s.t. Ax = b, x binary
This model accommodates both quadratic and linear objective functions, since the linear case results when C is a diagonal matrix (observing that xj² = xj when xj is a 0/1 variable). Under
the assumption that A and b have integer components, problems with inequality constraints can
always be put in this form by including slack variables and then representing the slack variables
by a binary expansion. (For example, this would introduce a slack variable s to convert the inequality 4x1 + 5x2 − x3 ≤ 6 into 4x1 + 5x2 − x3 + s = 6, and since clearly s ≤ 7 (in case x3 = 1), s could be represented by the binary expansion s1 + 2s2 + 4s3 where s1, s2, and s3 are additional binary variables. If it is additionally known that not both x1 and x2 can be 0, then s can be at most 3 and can be represented by the expansion s1 + 2s2. A fuller treatment of
slack variables is given subsequently.) These constrained quadratic optimization models are
converted into equivalent unconstrained QUBO models by converting the constraints Ax = b
(representing slack variables as x variables) into quadratic penalties to be added to the objective
function, following the same re-casting as we illustrated in section 4.
y = x^t C x + P (Ax − b)^t (Ax − b)
  = x^t C x + x^t D x + c
  = x^t Q x + c
where the matrix D and the additive constant c result directly from the matrix multiplication
indicated. Dropping the additive constant, the equivalent unconstrained version of the constrained problem becomes

QUBO: min y = x^t Q x
Remarks:
1. A suitable value of the penalty scalar P, as we commented earlier, can always be chosen so that the optimal solution to QUBO is the optimal solution to the original constrained
problem. Solutions obtained can always be checked for feasibility to confirm whether or
not appropriate penalty choices have been made.
2. For ease of reference, the preceding procedure that transforms the general problem into
an equivalent QUBO model will be called Transformation # 1. The mechanics of
Transformation #1 can be employed whenever we need to convert linear constraints of
the form Ax = b into usable quadratic penalties in our efforts to re-cast a given problem
with equality constraints into the QUBO form.
3. Note that the additive constant, c, does not impact the optimization and can be ignored
during the optimization process. Once the QUBO model has been solved, the constant c
can be used to recover the original objective function value. Alternatively, the original
objective function value can always be determined by using the optimal x j found when
QUBO is solved.
Transformation #1 is the “go to” approach in cases where appropriate quadratic penalty functions
are not known in advance. In general, it represents an approach that can be adopted for any
problem. Due to this generality, Transformation # 1 has proven to be an important modeling tool
in many problem settings.
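The mechanics of Transformation # 1 can be sketched directly: expanding P(Ax − b)^t(Ax − b) and folding the linear terms into the diagonal (using xj² = xj) gives Q = C + P(A^t A − 2 diag(A^t b)) and additive constant c = P b^t b. The code below (illustrative, exercised on a tiny hypothetical instance) implements this:

```python
# A sketch of Transformation #1: convert min x^t C x s.t. Ax = b (x binary)
# into QUBO form min x^t Q x + const, with
# Q = C + P*(A^t A - 2*diag(A^t b)) and const = P * b^t b.

def transformation_1(C, A, b, P):
    n = len(C)
    m = len(A)
    Q = [row[:] for row in C]                     # start from C
    for i in range(n):
        for j in range(n):
            Q[i][j] += P * sum(A[k][i] * A[k][j] for k in range(m))
        # linear penalty terms fold into the diagonal since x_i^2 = x_i
        Q[i][i] -= 2 * P * sum(A[k][i] * b[k] for k in range(m))
    const = P * sum(bk * bk for bk in b)
    return Q, const

# Tiny check (hypothetical data): min -x1 - 2x2  s.t.  x1 + x2 = 1.
C = [[-1, 0], [0, -2]]
A = [[1, 1]]
b = [1]
Q, const = transformation_1(C, A, b, P=10)

def value(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(2) for j in range(2)) + const

# The penalized values reproduce the constrained problem:
# feasible points keep their objective, infeasible points are pushed up.
print(value((0, 1)), value((1, 0)), value((0, 0)), value((1, 1)))  # -2 -1 10 7
```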
Before moving on to applications in this section, we want to single out for special recognition another constraint/penalty pair that we worked with before in Section 4:
(xi + xj ≤ 1) → P(xixj)
Constraints of this form appear in many important applications. Due to their importance and
frequency of use, we refer to this special case as Transformation #2. We’ll have occasion to use
this as well as Transformation # 1 later in this section.
The set partitioning problem (SPP) has to do with partitioning a set of items into subsets so that
each item appears in exactly one subset and the cost of the subsets chosen is minimized. This
problem appears in many settings including the airline and other industries and is traditionally
formulated in binary variables as
min y = Σ(j=1 to n) cj xj

st

Σ(j=1 to n) aij xj = 1 for i = 1, ..., m

where xj denotes whether or not subset j is chosen, cj is the cost of subset j, and the aij are 0/1 coefficients denoting whether item i appears in subset j.
Note that this model has the form of the general model given at the beginning of this section
where, in this case, the objective function matrix C is a diagonal matrix with all off-diagonal
elements equal to zero and the diagonal elements are given by the original linear objective
function coefficients. Thus, we can re-cast the model into a QUBO model directly by using Transformation # 1.
Numerical Example

min y = 3x1 + 2x2 + x3 + x4 + 3x5 + 2x6

subject to
x1 + x3 + x6 = 1
x2 + x3 + x5 + x6 = 1
x3 + x4 + x5 = 1
x1 + x2 + x4 + x6 = 1
In practice, Transformation # 1 could be implemented as a computer routine and employed to re-cast this problem into an equivalent instance of a QUBO model. For
this small example, however, we can proceed manually as follows: The conversion to an
equivalent QUBO model via Transformation # 1 involves forming quadratic penalties and adding
them to the original objective function. In general, the quadratic penalties to be added (for a minimization problem) are given by

P Σ(i=1 to m) ( Σ(j=1 to n) aij xj − bi )²

where the outer summation is taken over the m constraints.
Arbitrarily taking P to be 10, and recalling that xj² = xj since our variables are binary, this becomes
Dropping the additive constant (40), we then have our QUBO model
      −17  10  10  10   0  20
       10 −18  10  10  10  20
Q =    10  10 −29  10  20  20
       10  10  10 −19  10  10
        0  10  20  10 −17  10
       20  20  20  10  10 −28
Solving this QUBO formulation gives an optimal solution x1 = x5 = 1 (with all other variables
equal to 0) to yield y = 6 .
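As a check, the sketch below applies Transformation # 1 to this example (with the objective coefficients recovered from the diagonal of Q via qii = ci − Pki), reproduces the matrix above, and solves by enumeration:

```python
from itertools import product

# Set partitioning example: reconstruct Q via Transformation #1 with P = 10.
c = [3, 2, 1, 1, 3, 2]                  # objective coefficients
A = [[1, 0, 1, 0, 0, 1],                # x1 + x3 + x6 = 1
     [0, 1, 1, 0, 1, 1],                # x2 + x3 + x5 + x6 = 1
     [0, 0, 1, 1, 1, 0],                # x3 + x4 + x5 = 1
     [1, 1, 0, 1, 0, 1]]                # x1 + x2 + x4 + x6 = 1
b = [1, 1, 1, 1]
n, m, P = 6, 4, 10

# Q = diag(c) + P*(A^t A - 2 diag(A^t b)); additive constant m*P is dropped.
Q = [[P * sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
     for i in range(n)]
for i in range(n):
    Q[i][i] += c[i] - 2 * P * sum(A[k][i] for k in range(m))

assert Q[0] == [-17, 10, 10, 10, 0, 20]    # matches the matrix in the text

def value(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best_x, best_y = min(((x, value(x)) for x in product((0, 1), repeat=n)),
                     key=lambda t: t[1])
print(best_x, best_y + m * P)   # (1, 0, 0, 0, 1, 0) 6 -> x1 = x5 = 1, y = 6
```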
Remarks:
1. The QUBO approach to solving set partitioning problems has been successfully applied to large problem instances arising in practice.
2. The special nature of the set partitioning model allows an alternative to Transformation
#1 for constructing the QUBO model. Let k j denote the number of 1’s in the jth column
of the constraint matrix A and let rij denote the number of times variables i and j appear
in the same constraint. Then the diagonal elements of Q are given by qii = ci − Pki and the off-diagonal elements of Q are given by qij = qji = Prij. The additive constant is given by mP. These relationships make it easy to formulate the QUBO model for any set partitioning problem without explicitly carrying out the mechanics of Transformation # 1.
3. The set partitioning problem may be viewed as a form of clustering problem and arises in many practical settings. More generally, Transformation # 1 and Transformation # 2 can be used together to produce an equivalent QUBO model, as demonstrated next in the context of graph coloring.
Vertex coloring problems seek to assign colors to nodes of a graph in such a way that adjacent
nodes receive different colors. The K-coloring problem attempts to find such a coloring using
exactly K colors. A wide range of applications, ranging from frequency assignment problems to
printed circuit board design problems, can be represented by the K-coloring model.
To model the K-coloring problem, let xij = 1 if node i is assigned color j, and xij = 0 otherwise. Since each node must receive exactly one color, we have the assignment constraints

Σ(j=1 to K) xij = 1, i = 1, ..., n

where n is the number of nodes in the graph. A feasible coloring, in which adjacent nodes receive different colors, is assured by imposing the adjacency constraints

xip + xjp ≤ 1, p = 1, ..., K (for all adjacent nodes i and j)
This problem, then, can be re-cast in the form of a QUBO model by using Transformation # 1 on
the node assignment constraints and using Transformation # 2 on the adjacency constraints. This
problem does not have an objective function in its original formulation, meaning our focus is on
finding a feasible coloring using the K colors allowed. As a result, any positive value for the
penalty P will do. (The resulting QUBO model of course has an objective function given by the sum of the penalty terms; a feasible coloring is found precisely when this sum can be driven to zero.)
Numerical Example
Consider the problem of finding a feasible coloring of a graph with 5 nodes using K = 3 colors, where the adjacent node pairs (edges) are (1, 2), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), and (4, 5).
Given the discussion above, we see that the goal is to find a solution to the system:
xi1 + xi2 + xi3 = 1, i = 1, ..., 5

xip + xjp ≤ 1, p = 1, 2, 3 (for all adjacent nodes i and j)
In this traditional form, the model has 15 variables and 26 constraints. As suggested above, to
recast this problem into the QUBO form, we can use Transformation # 1 on the node assignment
equations and Transformation # 2 on the adjacency inequalities. One way to proceed here is to start with a 15-by-15 Q matrix where initially all the elements are equal to zero and then re-define the elements as the penalty terms are introduced. To clarify the approach, we’ll take these two sources of penalties one at a time. For ease of notation and to be consistent with earlier applications, we’ll first re-number the variables using a single subscript:

(x11, x12, x13, x21, x22, x23, x31, ..., x52, x53) = (x1, x2, x3, x4, x5, x6, x7, ..., x14, x15)
As we develop our QUBO model, we’ll use the variables with a single subscript.
First, we’ll consider the node assignment equations and the penalties we get from Transformation # 1. For node 4, for example, the penalty is P(x10 + x11 + x12 − 1)², which becomes P(−x10 − x11 − x12 + 2x10x11 + 2x10x12 + 2x11x12) + P. Similarly, for node 5 the penalty is P(x13 + x14 + x15 − 1)², which becomes P(−x13 − x14 − x15 + 2x13x14 + 2x13x15 + 2x14x15) + P. The penalties for nodes 1, 2, and 3 follow the same pattern. Taking P to equal 4 and inserting these penalties in the “developing” Q matrix gives the following matrix:
−4 4 4 0 0 0 0 0 0 0 0 0 0 0 0
4 −4 4 0 0 0 0 0 0 0 0 0 0 0 0
4 4 −4 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 −4 4 4 0 0 0 0 0 0 0 0 0
0 0 0 4 −4 4 0 0 0 0 0 0 0 0 0
0 0 0 4 4 −4 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 −4 4 4 0 0 0 0 0 0
0 0 0 0 0 0 4 −4 4 0 0 0 0 0 0
0 0 0 0 0 0 4 4 −4 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 −4 4 4 0 0 0
0 0 0 0 0 0 0 0 0 4 −4 4 0 0 0
0 0 0 0 0 0 0 0 0 4 4 −4 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 −4 4 4
0 0 0 0 0 0 0 0 0 0 0 0 4 −4 4
0 0 0 0 0 0 0 0 0 0 0 0 4 4 −4
Note the block diagonal structure. Many problems have patterns that can be exploited in developing the Q matrices needed for their QUBO representations. Looking for such patterns is often a useful first step in constructing a QUBO model.
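The block pattern above is easy to generate programmatically. A minimal sketch in Python with NumPy (our own illustration, using the single-subscript ordering in which node i owns three consecutive variables, 0-indexed in the code):

```python
import numpy as np

P = 4                      # penalty weight used in the text
n_nodes, K = 5, 3
Q = np.zeros((n_nodes * K, n_nodes * K))

# Each node assignment penalty P*(xa + xb + xc - 1)^2 expands (dropping the
# constant P) to -P on each diagonal entry and 2P on each cross term, which
# is split symmetrically as P into Q[a, b] and Q[b, a].
for node in range(n_nodes):
    block = [node * K + p for p in range(K)]
    for a in block:
        Q[a, a] -= P
        for b in block:
            if b != a:
                Q[a, b] += P
```

At any feasible assignment (exactly one color per node), x^t Q x evaluates to −P per node, i.e. −20 here.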
To complete our Q matrix, it's a simple matter of inserting the penalties representing the adjacency constraints into the above matrix. For these, we use the penalties of Transformation # 2, namely Pxixj, for each adjacent pair of nodes and each of the three allowed colors. We have 7 adjacent pairs of nodes and three colors, yielding a total of 21 adjacency constraints. Allowing for symmetry, we'll insert 42 penalties into the matrix, augmenting the penalties already in place. For example, for the constraint ensuring that nodes 1 and 2 cannot both have color #1, the penalty is Px1x4, implying that we insert the penalty value 2 (half of P = 4, since the coefficient is split symmetrically) in row 1 and column 4 of our matrix and also in column 1 and row 4. (Recall that we have re-labeled our variables such that
the original variables x1,1 and x2,1 are now variables x1 and x4.) Inserting the penalties for the remaining adjacency constraints in the same way gives the complete Q matrix:
Q =
−4 4 4 2 0 0 0 0 0 0 0 0 2 0 0
4 −4 4 0 2 0 0 0 0 0 0 0 0 2 0
4 4 −4 0 0 2 0 0 0 0 0 0 0 0 2
2 0 0 −4 4 4 2 0 0 2 0 0 2 0 0
0 2 0 4 −4 4 0 2 0 0 2 0 0 2 0
0 0 2 4 4 −4 0 0 2 0 0 2 0 0 2
0 0 0 2 0 0 −4 4 4 2 0 0 0 0 0
0 0 0 0 2 0 4 −4 4 0 2 0 0 0 0
0 0 0 0 0 2 4 4 −4 0 0 2 0 0 0
0 0 0 2 0 0 2 0 0 −4 4 4 2 0 0
0 0 0 0 2 0 0 2 0 4 −4 4 0 2 0
0 0 0 0 0 2 0 0 2 4 4 −4 0 0 2
2 0 0 2 0 0 0 0 0 2 0 0 −4 4 4
0 2 0 0 2 0 0 0 0 0 2 0 4 −4 4
0 0 2 0 0 2 0 0 0 0 0 2 4 4 −4
The above matrix incorporates all of the constraints of our coloring problem, yielding the model

QUBO: min y = x^t Qx

Solving this QUBO model produces a feasible coloring, with y = −20 (each of the five node assignment penalties contributes −P = −4 at feasibility, and the adjacency penalties vanish). Switching back to our original variables, this solution means that nodes 1 and 4 get color #2, with the remaining nodes receiving colors such that no two adjacent nodes share a color.
Remark: This approach to coloring problems has proven to be very effective for a wide variety of coloring instances.
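As a sanity check, this model is small enough (2^15 = 32,768 candidate solutions) to solve by exhaustive enumeration. A brute-force sketch, with the 7 adjacent node pairs read off the Q matrix above:

```python
import itertools
import numpy as np

P, K, n_nodes = 4, 3, 5
# Adjacent node pairs (0-indexed), as reflected in the Q matrix above
edges = [(0, 1), (0, 4), (1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]

Q = np.zeros((n_nodes * K, n_nodes * K))
for node in range(n_nodes):                  # node-assignment penalties
    block = [node * K + p for p in range(K)]
    for a in block:
        Q[a, a] -= P
        for b in block:
            if b != a:
                Q[a, b] += P
for i, j in edges:                           # adjacency penalties P*x_ip*x_jp
    for p in range(K):
        Q[i * K + p, j * K + p] += P / 2
        Q[j * K + p, i * K + p] += P / 2

xs = [np.array(x) for x in itertools.product((0, 1), repeat=n_nodes * K)]
best = min(xs, key=lambda x: x @ Q @ x)
# A feasible 3-coloring exists, so the minimum is -P per node = -20.
```

Any minimizer achieving y = −20 decodes to a feasible 3-coloring, since each node block can contribute at most −P and the adjacency penalties are nonnegative.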
Many important problems in industry and government can be modeled as 0/1 linear programs
with a mixture of constraint types. The general problem of this nature can be represented in
matrix form by
max cx
st
Ax = b
x binary
where slack variables are introduced as needed to convert inequality constraints into equalities.
Given a problem in this form, Transformation # 1 can be used to re-cast the problem into the QUBO form

max x0 = x^t Qx
st x binary

As discussed earlier, problems with inequality constraints can be handled by introducing slack variables, via a binary expansion, to create the system of constraints Ax = b. The maximization objective is then converted to a minimization and augmented with the quadratic penalty terms, as the following example illustrates.
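The construction in Transformation # 1 can be automated. The following sketch (the helper name qubo_from_lp is ours, not a standard routine) builds Q from c, A and b, using the fact that linear terms can be folded into the diagonal of Q because x_j^2 = x_j for binary x:

```python
import numpy as np

def qubo_from_lp(c, A, b, P):
    """Build Q for: max c.x subject to Ax = b, x binary, recast as
    min y = -c.x + P*||Ax - b||^2.  The additive constant P*(b.b) is
    dropped, so min x'Qx equals the true minimum minus that constant."""
    Q = P * (A.T @ A)                        # quadratic penalty terms
    Q += np.diag(-c - 2.0 * P * (A.T @ b))   # linear terms moved to the diagonal
    return Q

# Tiny illustration: max x1 + 2*x2  s.t.  x1 + x2 = 1.
# Minimizing x'Qx selects x = (0, 1), i.e. the higher-value variable.
Q = qubo_from_lp(np.array([1.0, 2.0]), np.array([[1.0, 1.0]]), np.array([1.0]), P=10.0)
```
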
Numerical Example
max 6x1 + 4x2 + 8x3 + 5x4 + 5x5
st
2x1 + 2x2 + 4x3 + 3x4 + 2x5 ≤ 7
1x1 + 2x2 + 2x3 + 1x4 + 2x5 = 4
3x1 + 3x2 + 2x3 + 4x4 + 4x5 ≥ 5
x ∈ {0, 1}
To recast this problem in the QUBO form, we convert the 1st and 3rd constraints to equations by including slack variables via a binary expansion. To do this, we first estimate upper bounds on the slack activities as a basis for determining how many binary variables will be required to represent the slack variables in the binary expansions. Typically, the upper bounds are determined simply by examining the constraints and estimating a reasonable value for how large the slack activity could be. For the problem at hand, we can refer to the slack variables for constraints 1 and 3 as s1 and s3 with

0 ≤ s1 ≤ 3, giving s1 = 1x6 + 2x7
0 ≤ s3 ≤ 6, giving s3 = 1x8 + 2x9 + 4x10
where x6, x7, x8, x9 and x10 are new binary variables. Note that these new variables will have objective function coefficients equal to zero. Including these slack variables gives the system Ax = b with

A =
2 2 4 3 2 1 2 0 0 0
1 2 2 1 2 0 0 0 0 0
3 3 2 4 4 0 0 −1 −2 −4

and b = (7, 4, 5).
We can now use Transformation # 1 to reformulate our problem as a QUBO instance. Changing to a minimization and adding a quadratic penalty term for each constraint gives

min y = −6x1 − 4x2 − 8x3 − 5x4 − 5x5
+ P(2x1 + 2x2 + 4x3 + 3x4 + 2x5 + 1x6 + 2x7 − 7)^2
+ P(1x1 + 2x2 + 2x3 + 1x4 + 2x5 − 4)^2
+ P(3x1 + 3x2 + 2x3 + 4x4 + 4x5 − 1x8 − 2x9 − 4x10 − 5)^2

Taking the penalty to be P = 10 and dropping the additive constant (here 90P = 900) gives the standard form

min y = x^t Qx
Solving this QUBO model gives the optimal solution

x1 = x4 = x5 = x9 = x10 = 1 (all other variables equal to 0)

for which y = −916. Note that the third constraint is loose, with slack activity s3 = 6. Adjusting for the additive constant of 900, and recalling that we started with a maximization problem, gives an objective function value of 916 − 900 = 16 for the original model. Alternatively, we could have simply evaluated the original objective function at the solution x1 = x4 = x5 = 1.
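With only 10 binary variables, the penalty formulation can be verified by direct enumeration. A brute-force sketch (additive constants kept, so the optimal y is −16, the negative of the maximization optimum):

```python
import itertools

P = 10
best_y, best_x = None, None
for x in itertools.product((0, 1), repeat=10):
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 = x
    y = (-6*x1 - 4*x2 - 8*x3 - 5*x4 - 5*x5
         + P * (2*x1 + 2*x2 + 4*x3 + 3*x4 + 2*x5 + x6 + 2*x7 - 7) ** 2
         + P * (x1 + 2*x2 + 2*x3 + x4 + 2*x5 - 4) ** 2
         + P * (3*x1 + 3*x2 + 2*x3 + 4*x4 + 4*x5 - x8 - 2*x9 - 4*x10 - 5) ** 2)
    if best_y is None or y < best_y:
        best_y, best_x = y, x
# best_y == -16 with x1 = x4 = x5 = x9 = x10 = 1, matching the text
# (-916 is recovered by subtracting the additive constant 90P = 900).
```
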
Remarks: Any problem with linear constraints and bounded integer variables can be converted through a binary expansion into the form min y = x^t Qx as illustrated here. In such applications, however, the elements of the Q matrix can, depending on the data, become unacceptably large, which can degrade numerical behavior and solver performance.
5.4 Quadratic Assignment Problem (QAP)

The QAP considers the problem of assigning n facilities to n locations. A flow matrix (fij) specifies the flow of materials between facilities i and j, and a distance matrix (dij) specifies the distance between sites i and j. The optimization problem is to find an assignment of facilities to locations to minimize the weighted flow across the system. Cost information can be explicitly introduced to yield a cost minimization model, as is common in some applications.

The decision variables are xij = 1 if facility i is assigned to location j; otherwise, xij = 0. Then the model is:
Minimize  Σ_{i=1}^n Σ_{j=1}^n Σ_{k=1}^n Σ_{l=1}^n fij dkl xik xjl

Subject to
Σ_{i=1}^n xij = 1,  j = 1, ..., n
Σ_{j=1}^n xij = 1,  i = 1, ..., n
xij ∈ {0, 1},  i, j = 1, ..., n
All QAP problems have n^2 variables, which often yields large models in practical settings.
This model has the general form presented at the beginning of this section and consequently
Transformation # 1 can be used to convert any QAP problem into a QUBO instance.
Numerical Example
Consider a small example with n = 3 facilities and 3 locations, with flow and distance matrices respectively given as follows:

flow:        distance:
0 5 2        0 8 15
5 0 3        8 0 13
2 3 0        15 13 0
It is convenient to re-label the variables using only a single subscript, as we did previously in the graph coloring example, replacing

(x11, x12, x13, x21, x22, x23, x31, x32, x33) by (x1, x2, x3, x4, x5, x6, x7, x8, x9)

Given the flow and distance matrices, our QAP model becomes:

min 80x1x5 + 150x1x6 + 32x1x8 + 60x1x9 + 80x2x4 + 130x2x6 + 60x2x7 + 52x2x9 + 150x3x4 + 130x3x5 + 60x3x7 + 52x3x8 + 48x4x8 + 90x4x9 + 78x5x9 + 78x6x8

subject to x1 + x2 + x3 = 1
x4 + x5 + x6 = 1
x7 + x8 + x9 = 1
x1 + x4 + x7 = 1
x2 + x5 + x8 = 1
x3 + x6 + x9 = 1
Converting the constraints into quadratic penalty terms and adding them to the objective function gives:
min y = 80x1x5 + 150x1x6 + 32x1x8 + 60x1x9 + 80x2x4 + 130x2x6 + 60x2x7 + 52x2x9
+ 150x3x4 + 130x3x5 + 60x3x7 + 52x3x8 + 48x4x8 + 90x4x9 + 78x5x9 + 78x6x8
+ P(x1 + x2 + x3 − 1)^2 + P(x4 + x5 + x6 − 1)^2 + P(x7 + x8 + x9 − 1)^2
+ P(x1 + x4 + x7 − 1)^2 + P(x2 + x5 + x8 − 1)^2 + P(x3 + x6 + x9 − 1)^2
Choosing a penalty value of P = 200 and dropping the additive constant (6P = 1200), this becomes the standard QUBO problem

QUBO: min y = x^t Qx

Solving QUBO gives y = −982 at x1 = x5 = x9 = 1 and all other variables equal to 0. Adjusting for the additive constant, we get the original objective function value of 1200 − 982 = 218.
Remark: A QUBO approach to solving QAP problems, as illustrated above, has been
successfully applied to problems with more than 30 facilities and locations.
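The 9-variable QAP model above can likewise be verified by enumeration. A brute-force sketch that evaluates the penalized objective directly (additive constants kept, so the optimal value equals the true cost 218):

```python
import itertools

P, n = 200, 3
F = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]      # flow matrix
D = [[0, 8, 15], [8, 0, 13], [15, 13, 0]]  # distance matrix

def cost(bits):
    # bits laid out facility-major: (x11, x12, x13, x21, ..., x33)
    x = lambda i, j: bits[n * i + j]
    flow_cost = sum(F[i][j] * D[k][l] * x(i, k) * x(j, l)
                    for i in range(n) for j in range(n)
                    for k in range(n) for l in range(n))
    pen = sum((sum(x(i, j) for j in range(n)) - 1) ** 2 for i in range(n)) \
        + sum((sum(x(i, j) for i in range(n)) - 1) ** 2 for j in range(n))
    return flow_cost + P * pen

best = min(itertools.product((0, 1), repeat=n * n), key=cost)
# best recovers the identity assignment x1 = x5 = x9 = 1 with cost 218.
```
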
5.5 Quadratic Knapsack
Knapsack problems, like the other problems presented earlier in this section, play a prominent
role in the field of combinatorial optimization, having widespread application in such areas as
project selection and capital budgeting. In such settings, a set of attractive potential projects is
identified and the goal is to identify a subset of maximum value (or profit) that satisfies the
budget limitations. The classic linear knapsack problem applies when the value of a project
depends only on the individual projects under consideration. The quadratic version of this
problem arises when there is an interaction between pairs of projects affecting the value
obtained.
For the general case with n projects, the Quadratic Knapsack Problem (QKP) is commonly
modeled as
max Σ_{i=1}^n Σ_{j=i}^n vij xi xj

st Σ_{j=1}^n aj xj ≤ b, x binary

where vij, aj and b denote, respectively, the value associated with choosing projects i and j, the resource requirement of project j, and the total resource budget (the diagonal coefficients vii give the values of the individual projects). Generalizations involving multiple knapsack constraints are found in a variety of application settings.
Numerical Example
Consider the example

max 2x1 + 5x2 + 2x3 + 4x4 + 8x1x2 + 6x1x3 + 10x1x4 + 2x2x3 + 6x2x4 + 4x3x4
st 8x1 + 6x2 + 5x3 + 3x4 ≤ 16, x binary

We re-cast this into the form of a QUBO model by first converting the constraint into an equation and then using the ideas embedded in Transformation # 1. Introducing a slack variable in the form of the binary expansion 1x5 + 2x6 (estimating the slack activity to be at most 3), we get the equality constraint

8x1 + 6x2 + 5x3 + 3x4 + 1x5 + 2x6 = 16
Changing to minimization and including the penalty term in the objective function gives the unconstrained quadratic model:

min y = −2x1 − 5x2 − 2x3 − 4x4 − 8x1x2 − 6x1x3 − 10x1x4 − 2x2x3 − 6x2x4 − 4x3x4 + P(8x1 + 6x2 + 5x3 + 3x4 + 1x5 + 2x6 − 16)^2

Choosing a penalty P = 10, dropping the additive constant (256P = 2560), and cleaning up the algebra gives the QUBO model
QUBO: min y = x^t Qx
Solving QUBO gives y = −2588 at x = (1,0,1,1,0,0) . Adjusting for the additive constant
and switching back to “maximization” gives the value 28 for the original objective function.
Remark: The QUBO approach to QKP has proven to be successful on problems with several
hundred variables and as many as five knapsack constraints.
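The knapsack example can also be checked by enumerating the six binary variables directly (a brute-force sketch; the instance data is restated explicitly so the code is self-contained, and the slack variables x5 and x6 carry zero value coefficients):

```python
import itertools

P = 10
lin = [2, 5, 2, 4]                          # linear value coefficients v_ii
cross = {(0, 1): 8, (0, 2): 6, (0, 3): 10,  # pairwise value coefficients v_ij
         (1, 2): 2, (1, 3): 6, (2, 3): 4}
a, budget = [8, 6, 5, 3], 16                # weights and resource budget

best_y, best_x = None, None
for x in itertools.product((0, 1), repeat=6):   # x[4], x[5] encode the slack
    value = sum(lin[i] * x[i] for i in range(4)) + \
            sum(v * x[i] * x[j] for (i, j), v in cross.items())
    slack = x[4] + 2 * x[5]
    y = -value + P * (sum(a[i] * x[i] for i in range(4)) + slack - budget) ** 2
    if best_y is None or y < best_y:
        best_y, best_x = y, x
# best_x == (1, 0, 1, 1, 0, 0) with best_y == -28; the reported y = -2588
# differs only by the dropped additive constant P*16^2 = 2560.
```
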
Section 6: Connections to Quantum Computing and Machine Learning
Quantum Computing QUBO Developments: -- As noted in Section 1, one of the most significant applications of QUBO emerges from the observation that it is equivalent to the famous Ising problem in physics. In common with the earlier demonstration that many NP-hard problems, such as graph and number partitioning, covering and set packing, satisfiability, matching, and spanning tree, can be converted into the QUBO form, Lucas (2014) more recently has observed that such problems can be converted into the Ising form. Ising problems replace x ∈ {0, 1}^n by x ∈ {−1, 1}^n and can be put in the QUBO form by defining xj' = (xj + 1)/2 and then redefining xj to be xj'.¹ Efforts to solve Ising problems are often carried out with annealing
approaches, motivated by the perspective in physics of applying annealing methods to find a
lowest energy state.
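The change of variables xj' = (xj + 1)/2 is mechanical. A sketch (the function name ising_to_qubo is ours) that maps an Ising objective with zero-diagonal coupling matrix J and field vector h to QUBO form, returning the additive constant separately since it does not affect the optimizer:

```python
import numpy as np

def ising_to_qubo(J, h):
    """Map min s.J.s + h.s over s in {-1,1}^n to min x.Q.x + const over
    x in {0,1}^n via s = 2x - 1.  Assumes J has a zero diagonal."""
    J = np.asarray(J, dtype=float)
    h = np.asarray(h, dtype=float)
    # s_i s_j = 4 x_i x_j - 2 x_i - 2 x_j + 1  and  s_i = 2 x_i - 1
    Q = 4.0 * J + np.diag(2.0 * h - 2.0 * (J.sum(axis=0) + J.sum(axis=1)))
    const = J.sum() - h.sum()
    return Q, const

# Two-spin illustration: min s1*s2 has optimum -1 at s = (1, -1) or (-1, 1);
# the QUBO image attains the same value after adding back the constant.
Q2, const2 = ising_to_qubo(np.array([[0.0, 1.0], [0.0, 0.0]]), np.zeros(2))
```
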
More effective methods for QUBO problems, and hence for Ising problems, are obtained using
modern metaheuristics. Among the best metaheuristic methods for QUBO are those based on
tabu search and path relinking as described in Glover (1996, 1997), Glover and Laguna (1997)
and adapted to QUBO in Wang et al. (2012, 2013).
A bonus from this development has been to create a link between QUBO problems and quantum
computing. A new type of quantum computer based on quantum annealing with an integrated
physical network structure of qubits known as a Chimera graph has incorporated ideas from
Wang et al. (2012) in its software and has been implemented on the D-Wave System. The ability
to obtain a quantum speedup effect for this system applied to QUBO problems has been
demonstrated in Boixo et al. (2014).
Additional advances incorporating methodology from Wang et al. (2012, 2013) are provided in
the D-Wave open source software system Qbsolv (2017) and in the supplementary QMASM
system by Pakin (2018). Recent QUBO quantum computing applications in the literature include
those for Graph Partitioning Problems (Mniszewski et al., 2016) and Maximum Clique Problems
(Chapuis et al., 2018). In another recent development, QUBO models are being studied using the
IBM neuromorphic computer at Lawrence Livermore National Laboratory, as reported in Alom
et al. (2017).
There has been some controversy about the relative merits of different quantum computing
frameworks. One of the most active debates concerns the promise of quantum gate systems, also
known as quantum circuit systems, versus the promise of adiabatic or quantum annealing
systems. Now an important new discovery by Yu et al. (2018) shows that these two systems offer
effectively the same potential for achieving the gains inherent in quantum computing processes,
with a mathematical demonstration that the quantum circuit algorithm can be transformed into a quantum adiabatic algorithm with the exact same time complexity. This has valuable
implications for the relevance of QUBO models in the D-Wave adiabatic system, disclosing that
analogous advances associated with QUBO models may ultimately be realized through quantum circuit systems. The study of the QUBO/Ising model in neuromorphic computing suggests the growing recognition of the universality and significance of this model.

¹ This adds a constant to (1), which is irrelevant for optimization.
One of the major initiatives currently underway is to unite quantum computing and classical
computing in order to exploit specific advantages unique to each. Here, too, QUBO models are
actively entering the picture. A new classical computing system, called Alpha-QUBO (2018), is
under development for the purpose of integrating classical and quantum computing to provide
more effective solutions to QUBO and QUBO-related problems.
Unsupervised Machine Learning with QUBO: -- One of the most salient forms of unsupervised
machine learning is the type represented by clustering. As remarked earlier, the QUBO set
partitioning model provides a very natural form of clustering, and hence offers a useful model for
unsupervised machine learning. Surprisingly, to date, very little exploration of this model has
been undertaken in the machine learning context. An exception is the recent use of clustering to
facilitate the solution of QUBO models in Samorani et al. (2018).
Machine Learning to Improve QUBO Solution Processes: -- Devising rules and strategies to
learn the implications of specific model instances has had a long history. Today it permeates the
field of mixed integer programming, for example, to identify relationships such as values or
bounds that can be assigned to variables, or inequalities that can constrain feasible spaces more
tightly. Cast under the name of pre-processing, such approaches have not traditionally been viewed
through the lens of machine learning, but it is evident that they qualify as a viable and important
example of the field.
Efforts to apply this type of machine learning to QUBO problems have proceeded more slowly.
A landmark paper in this regard is the work of Boros et al. (2008), which uses roof duality and a
max-flow algorithm to provide useful inferences. More recently, sets of logical tests were
developed to learn relationships among variables in QUBO applications in Glover et al. (2017),
which were successful in setting many variables a priori, leading to significantly smaller
problems. In about half of the problems in the test bed the learning approach achieved a 45%
reduction in size and exactly solved 10 problems. The rules also identified many significant
implied relationships between pairs of variables resulting in many simple logical inequalities.
A different type of learning approach proposed many years ago in Glover (1977) that uses
clustering in association with population-based metaheuristics was recently updated and
implemented with a path relinking algorithm for QUBO problems in Samorani et al. (2018).
Instead of using pre-processing to learn solution implications, this approach generates and
exploits clusters as the solution algorithm progresses and has proved particularly effective for
solving larger QUBO instances.
Other types of machine learning approaches also merit a closer look in the future for applications
with QUBO. Among these is the Programming by Optimization approach of Hoos (2012) and
the Integrative Population Analysis approach of Glover et al. (1998).
Section 7: Concluding Remarks

1. The use of logical analysis to identify relationships between variables in the work of Glover et al. (2017) can be implemented in the setting of quantum computing to combat the difficulties current quantum computing methods face in scaling effectively to large problems. Approximation methods based on such analysis can be used for decomposing and partitioning large QUBO problems, providing strategies relevant to a broad range of quantum computing applications.
2. The National Academies of Sciences, Engineering and Medicine have released a consensus study report on progress and prospects in quantum computing (2018) that discloses the relevance of marrying quantum and classical computing, which accords with the objectives of the Alpha-QUBO system (2018). As stated in the National Academies
report, “formulating an R&D program with the aim of developing commercial
applications for near-term quantum computing is critical to the health of the field. Such a
program would include … identification of algorithms for which hybrid classical-
quantum techniques using modest-size quantum subsystems can provide significant
speedup.” Studies devoted to the use of Alpha-QUBO in conjunction with quantum
computing initiatives at Los Alamos National Laboratory are investigating the
possibilities for achieving such speedup.
3. In both classical and quantum settings, the transformation to QUBO can sometimes be
aided considerably by first employing a change of variables. This is particularly useful in
settings where the original model is an edge-based graph model, as in clique partitioning
where the standard models can have millions of variables due to the number of edges in
the graph. A useful alternative is to introduce node-based variables, by replacing each
edge variable with the product of two node variables. Such a change converts a linear
model into a quadratic model with many fewer variables, since a graph normally has a
much smaller number of nodes than edges. The resulting quadratic model, then, can be
converted to a QUBO model by the methods illustrated earlier.
4. Problems with higher order polynomials arise in certain applications and can be re-cast
into a QUBO framework by employing a reduction technique. For example, consider a
problem with a cubic term x1x2x3 in binary variables. Replace the product x1x2 by a binary variable y1, and add a penalty to the objective function of the form P(x1x2 − 2x1y1 − 2x2y1 + 3y1). By this process, when the optimization drives the penalty
term to 0, which happens only when y1 = x1 x2 , we have reduced the cubic term to an
equivalent quadratic term ( y1x3 ) . This procedure can be used recursively to convert
higher order polynomials to quadratic models of the QUBO form.
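The stated properties of this reduction penalty can be checked exhaustively over the eight 0/1 cases (a small sketch; here P stands for any sufficiently large positive weight):

```python
from itertools import product

P = 10  # any penalty weight exceeding the magnitudes in the objective works

def reduction_penalty(x1, x2, y1):
    # The penalty term P*(x1*x2 - 2*x1*y1 - 2*x2*y1 + 3*y1) from the text
    return P * (x1 * x2 - 2 * x1 * y1 - 2 * x2 * y1 + 3 * y1)

for x1, x2, y1 in product((0, 1), repeat=3):
    pen = reduction_penalty(x1, x2, y1)
    if y1 == x1 * x2:
        assert pen == 0    # the penalty vanishes exactly when y1 = x1*x2
    else:
        assert pen >= P    # every violation is penalized by at least P
```
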
5. The general procedure of Transformation # 1 has similarities to the Lagrange Multiplier
approach of classical optimization. The key difference is that our scalar penalties (P) are
not “dual” variables to be determined by the optimization. Rather, they are parameters
set a priori to encourage the search process to avoid candidate solutions that are
infeasible. Moreover, the Lagrange Multiplier approach is not assured to yield a solution
that satisfies the problem constraints except in the special case of convex optimization, in
contrast to the situation with the QUBO model. To determine good values for Lagrange
multipliers (which in general only yield a lower bound instead of an optimum value for
the problem objective) recourse must be made to an additional type of optimization called
subgradient optimization, which QUBO models do not depend on.
6. Solving QUBO models: QUBO models belong to a class of problems known to be NP-hard. The practical meaning of this is that using exact solvers (like CPLEX or Gurobi) to find "optimal" solutions will most likely be unsuccessful except for very small problem instances. Realistically sized problems can run for days or even weeks when using exact methods without producing high quality solutions. As discussed in Section 6, to overcome this computational difficulty, QUBO models are typically solved by using modern metaheuristic methods (such as tabu search and path relinking), which are designed to find high quality but not necessarily optimal solutions in a modest amount of
computer time, and which are actively being adapted for creating QUBO solution
approaches in quantum computing. Continuing progress in the design and
implementation of such methods will have an impact across a wide range of practical
applications of optimization and machine learning.
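To make the last remark concrete, here is a deliberately minimal 1-flip tabu search sketch for min x^t Qx with symmetric Q (our own illustration; the solvers cited in Section 6 add path relinking, candidate lists, long-term memory and many other refinements):

```python
import random

def tabu_search_qubo(Q, iters=2000, tenure=7, seed=0):
    """Minimal 1-flip tabu search for min x'Qx with symmetric Q (list of lists)."""
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    y = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    best_x, best_y = x[:], y
    tabu = [0] * n                     # iteration until which flipping i is tabu
    for t in range(iters):
        best_i, best_delta = None, None
        for i in range(n):
            # Change in x'Qx from flipping bit i (uses x_i^2 = x_i)
            delta = (1 - 2 * x[i]) * (Q[i][i] + 2 * sum(Q[i][j] * x[j]
                                                        for j in range(n) if j != i))
            # Aspiration: a tabu move is allowed if it improves the best known y
            if t < tabu[i] and y + delta >= best_y:
                continue
            if best_delta is None or delta < best_delta:
                best_i, best_delta = i, delta
        if best_i is None:
            continue                   # all moves tabu; wait for tenure to expire
        x[best_i] = 1 - x[best_i]
        y += best_delta
        tabu[best_i] = t + tenure      # forbid immediately undoing the flip
        if y < best_y:
            best_x, best_y = x[:], y
    return best_x, best_y
```

The tabu list forbids re-flipping a recently changed variable for a fixed tenure, which is what allows the search to escape local minima that trap simple descent.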
Bibliography:
G. Chapuis, H. Djidjev, G. Hahn and G. Rizk (2018) "Finding Maximum Cliques on the D-Wave Quantum Annealer," to appear in Journal of Signal Processing Systems, DOI: 10.1007/s11265-018-1357-8.
F. Glover (1977) "Heuristics for Integer Programming Using Surrogate Constraints," Decision
Sciences, Vol. 8, No. 1, pp. 156-166.
F. Glover (1996) "Tabu Search and Adaptive Memory Programming - Advances, Applications
and Challenges," in Interfaces in Computer Science and Operations Research, Barr, Helgason
and Kennington (eds.) Kluwer Academic Publishers, Springer, pp. 1-75.
F. Glover (1997) “A Template for Scatter Search and Path Relinking,” in Artificial Evolution,
Lecture Notes in Computer Science, 1363, J.-K. Hao, E. Lutton, E. Ronald, M. Schoenauer and
D. Snyers, Eds. Springer, pp. 13-54.
F. Glover and M. Laguna (1997) Tabu Search, Kluwer Academic Publishers, Springer.
F. Glover, G. Kochenberger and Y. Wang (2018) "A new QUBO model for unsupervised machine learning," Research in progress.
F. Glover, M. Lewis and G. Kochenberger (2017) “Logical and inequality implications for
reducing the size and difficulty of quadratic unconstrained binary optimization problems,”
European Journal of Operational Research, Article in Press, DOI: 10.1016/j.ejor.2017.08.025.
F. Glover, J. Mulvey, D. Bai, and M. Tapia (1998) “Integrative Population Analysis for Better
Solutions to Large-Scale Mathematical Programs,” in Industrial Applications of Combinatorial
Optimization, G. Yu, Ed. Kluwer Academic Publishers, Springer, Boston, MA, pp. 212-237.
G. Kochenberger and F. Glover (2006) “A Unified Framework for Modeling and Solving
Combinatorial Optimization Problems: A Tutorial,” In: Multiscale Optimization Methods and
Applications, eds. W. Hager, S-J Huang, P. Pardalos, and O. Prokopyev, Springer, pp. 101-124.
G. Kochenberger, J-K. Hao, F. Glover, M. Lewis, Z. Lu, H. Wang, Y. Wang (2014) "The
Unconstrained Binary Quadratic Programming Problem: A Survey,” Journal of Combinatorial
Optimization, Vol. 28, Issue 1, pp. 58-81.
A. Lucas (2014) "Ising Formulations of Many NP Problems," Frontiers in Physics, vol. 2, article 5, arXiv:1302.5843.
K. L. Pudenz and D. A. Lidar (2013) "Quantum adiabatic machine learning," Quantum Information Processing, 12(5), pp. 2027-2070.
The National Academies of Sciences, Engineering and Medicine (2018) Consensus Study Report, "Quantum Computing: Progress and Prospects," https://www.nap.edu/catalog/25196/quantum-computing-progress-and-prospects.
Y. Wang, Z. Lu, F. Glover and J-K. Hao (2012) “Path relinking for unconstrained binary
quadratic programming,” European Journal of Operational Research 223(3): pp. 595-604.
Y. Wang, Z. Lu, F. Glover and J-K. Hao (2013) “Backbone guided tabu search for solving the
UBQP problem,” Journal of Heuristics, 19(4): 679-695.
H. Yu, Y. Huang and B. Wu (2018) "Exact Equivalence between Quantum Adiabatic Algorithm and Quantum Circuit Algorithm," arXiv:1706.07646v3 [quant-ph], DOI: 10.1088/0256-307X/35/11/110303.
Acknowledgements:
This tutorial was influenced by our collaborations on many papers over recent years with several
colleagues to whom we owe a major debt of gratitude. These key co-workers, listed in
alphabetical order, are: Bahram Alidaee, Dick Barr, Andy Badgett, Yu Du, Jin-Kao Hao, Mark
Lewis, Karen Lewis, Zhipeng Lu, Abraham Punnen, Cesar Rego, Yang Wang, Haibo Wang and
Qinghua Wu. Other collaborators whose work has inspired us are too numerous to mention.
Their names may be found listed as our coauthors on our home pages.