

Dual Simplex
Mihai Banciu
School of Management
Bucknell University
Lewisburg, PA 17837
[email protected]
January 2010

Abstract
The dual simplex algorithm is an attractive alternative as a solution method for
linear programming problems. Since the addition of new constraints to a problem
typically breaks primal feasibility but not dual feasibility, the dual simplex can be
deployed for rapid reoptimization, without the need to find new primal basic feasible
solutions. This is especially useful in integer programming, where the use of cutting
plane techniques requires the introduction of new constraints at various stages of the
branch-and-bound/cut/price algorithms. In this article, we give a detailed synopsis of
the dual simplex method, including its history and its relationship to the primal simplex
algorithm, as well as its properties, implementation challenges, and applications.

Consider the following linear programming problem P expressed in standard form:


[P] min cT x
subject to:
Ax = b
x≥0
where cT = (c1 , c2 , . . . , cn ) ∈ Rn is an n-dimensional cost vector, A = (aij ) ∈ Rm×n is a
matrix of constraint coefficients, b = (b1 , b2 , . . . , bm )T ∈ Rm is an m-dimensional right hand
side vector, and x = (x1 , x2 , . . . , xn ) ∈ Rn is an n-dimensional vector of decision variables.
We refer to the total cost z = cT x, a function from Rn to R, as the objective value of the problem P. Stated
in this form, P is called the primal problem, and the vector x∗ that minimizes the objective
function cT x∗ , while satisfying the equality condition Ax∗ = b, is called the optimal solution
to the problem P. Associated with the primal problem P, there exists a dual problem D,
which is formulated in the following way:
[D] max bT u
subject to:
AT u ≤ c
u free

1
Here, the vector u = (u1 , u2 , . . . , um ) ∈ Rm is called the vector of dual variables. Notice
that formulating the dual problem of D will yield the original primal problem P. The
relationship between problems P and D is established by the following two theorems (the
proofs of these and of any subsequent results presented in this encyclopedic entry can be found
in any introductory linear programming textbook; see, for example, [1] or [2], as well as section
1.1.1.8 of the encyclopedia):
Theorem 1 (Weak Duality). If x is a feasible solution of P, and u is a feasible solution
to D, then bT u ≤ cT x.
Theorem 2 (Strong Duality). If problem P has an optimal solution, then problem D also
has an optimal solution. Moreover, the respective optimal solution values are equal to one
another.
An immediate consequence of weak duality is that if the primal problem P is unbounded
from below, then its dual D must be infeasible, and correspondingly, if the dual problem D
is unbounded from above, then the primal P must be infeasible. Additionally, if the choice
of x and u is such that the inequality described in the weak duality theorem becomes an
equality, then x and u are the optimal solutions to P and D, respectively. Optimality of x
and u can also be verified by way of the complementary slackness conditions:
Theorem 3 (Complementary Slackness). Let x and u be feasible solutions to P and D,
respectively. Then, x and u are optimal solutions for each problem if and only if uT (Ax −
b) = 0 and (c − AT u)T x = 0.
Theorem 3 implies three things. First, observe that in problem P, the first complementary
slackness condition, i.e., uT (Ax − b) = 0, is automatically satisfied (since P is expressed in
standard form). Second, the theorem states that if any constraint of D is not tight, then the
primal variable corresponding to that constraint is 0; if the constraint is tight, then the
corresponding primal variable may take any non-negative value. Finally, the theorem provides
an easy way of translating primal solutions into corresponding dual solutions, and vice versa,
as well as a certificate for whether a proposed solution for P is, in fact, optimal. The following
example illustrates this last property.
Example (Complementary Slackness). Consider the following LP expressed in standard
form, together with its dual:

min 3x1 + 4x2 − x3                          max 8u1 + 22u2

subject to: 3x1 − 4x2 + 2x3 = 8             subject to: 3u1 + 4u2 ≤ 3
            4x1 − 2x2 + 5x3 = 22                       −4u1 − 2u2 ≤ 4
            x1 , x2 , x3 ≥ 0                            2u1 + 5u2 ≤ −1

Suppose we are presented with the solution x = (0, 0.25, 4.5) and wish to establish optimality
via complementary slackness. The first complementary slackness condition is automatically
satisfied because the problem is in standard form. Since x2 > 0 and x3 > 0, the
second complementary slackness condition gives −4u1 − 2u2 = 4 and 2u1 + 5u2 = −1,
which solves to u = (−1.125, 0.25). This solution is dual feasible, as it also satisfies the first
constraint of the dual problem (3u1 + 4u2 = −2.375 ≤ 3), and its objective value is −3.5. This
is the same as the cost of the primal solution; therefore, by the equality case of weak duality
noted above, x is an optimal solution to the original LP.

The following theorem establishes the Karush–Kuhn–Tucker conditions for a linear pro-
gramming problem. For more advanced discussions on the topic, consult section 1.1.1.8 of
the encyclopedia.

Theorem 4 (KKT optimality conditions for LP). Given a linear programming problem
P, the vector x is an optimal solution to P if and only if there exist vectors u and r such
that:

1. Ax = b, x ≥ 0 (primal feasibility)

2. AT u + r = c, r ≥ 0 (dual feasibility)

3. rT x = 0 (complementary slackness)

In Theorem 4, the vector r is simply the vector of slacks c − AT u from the dual
problem D. In the primal problem P, the vector r is referred to as the vector of reduced
costs, and it is usually denoted by c̄.
From a computational perspective, the importance of the theorem is that a computer
algorithm designed to solve problem P using an extreme vertex approach on the polyhedron
described by the constraints of P can work in one of two possible ways. The first approach
is to make sure that both primal feasibility and complementary slackness are preserved at
every iteration of the algorithm, while seeking dual feasibility only at the optimal solution.
The second approach is to ensure dual feasibility and complementary slackness during every
iteration, while seeking primal feasibility only at the optimal solution. The first approach
yielded George Dantzig’s (primal) simplex algorithm around 1947 [3, 4, 5], which is, arguably,
the cornerstone of the entire linear optimization literature. The second approach led Carlton
Lemke to the discovery of the dual simplex algorithm in 1954 [6].
While the simplex algorithm, in both its incarnations, has shown remarkably good perfor-
mance in practice, in that it usually requires O(m) iterations (pivots), there exist instances
where an exponential number of iterations may be required until an optimal solution is
found (see [7] for the first description of instances that lead to a performance on the or-
der of O(2n )). Hence, later theoretical efforts concentrated on a different line of attack on
the linear programming problem. These efforts were successful, first in 1979 when Leonid
Khachian [8] discovered the ellipsoid method for solving linear programming problems in
polynomial time, and later in 1984 with Narendra Karmarkar’s discovery of the first interior
point method [9], which created a very fertile research area in subsequent years (see section
1.1.3 of the encyclopedia for more information). In the following section we discuss the
theory and the implementation of the dual simplex algorithm.

1 The Dual Simplex Algorithm
Throughout this section, we assume familiarity with basic linear algebra concepts, such as
bases, spaces, and polyhedra, as well as an understanding of the (revised) simplex algorithm,
which is described in section 1.1.1.3 of this encyclopedia.
Assume that the coefficient matrix A is of full rank m (m < n), and that there exists a
basis B corresponding to the m linearly independent columns of A, and a matrix N that
contains the remaining n − m non-basic columns of A. We partition the solution vector x
into two components xB = B−1 b = (x1 , x2 , . . . , xm )T and xN = (xm+1 , xm+2 , . . . , xn )T = 0,
called the basic and non-basic solutions, respectively. In a similar fashion, we partition the
cost vector cT = [cB |cN ] into a basic and a non-basic component, that is, cB and cN are the
cost coefficients associated with xB and xN , respectively. Suppose, moreover, that the basis
B induces a dual feasible solution u, that is, AT u ≤ c and uT = cTB B−1 . It is easy to see
that the equality constraints of P are satisfied, since
Ax = [B|N] (B−1 b, 0)T = BB−1 b = b
and that complementary slackness is also satisfied, since

(c − AT u)T x = cT x − uT Ax = cTB B−1 b − cTB B−1 b = 0

The only ingredient missing for an optimal solution to P is the requirement that xB =
B−1 b ≥ 0. If this inequality happens to hold, then x is optimal for P, since all conditions
of Theorem 4 are met. On the other hand, if there exists at least one negative entry in
xB , then we perform a change of basis that eliminates this entry from B, thus reducing the
primal infeasibility. The procedure repeats until all entries in xB are non-negative. This is
the essence of the dual simplex algorithm.
Naturally, two questions arise: how is this change of basis actually performed, and is the
new solution obtained after the basis change actually an improvement over the old one?
The following sub-sections describe the actual flow of the dual simplex algorithm, along
with a discussion of several theoretical and practical implementation challenges.

1.1 Summary of the dual simplex method: description and geometry
The dual simplex method entails the following steps.
Step 0 (Initialization) Find a dual feasible basis B such that the associated reduced cost
vector c̄ = c − AT u is non-negative (in the sense that each vector entry is non-negative).
(We will later provide a quick procedure for identifying such a basis.)

Step 1 (Pricing) If xB = B−1 b ≥ 0, stop; the current solution is optimal. Otherwise,
select a pivot row i whose basic variable has value xi < 0. If there are multiple
candidates, select the most negative one. The variable xi will leave the basis.

Step 2 (Ratio test) If all constraint coefficients aij along the ith row are non-negative
(aij ≥ 0, ∀j), stop; the dual problem is unbounded, and therefore the primal problem is
infeasible. Otherwise, select the pivot column j by performing the following minimum
ratio test:

j = arg min { c̄k / aik : aik < 0 },

where k ranges over the non-basic columns. The corresponding variable xj will enter the basis.

Step 3 (Basis change) Pivot on element aij and go to step 1.

We will illustrate the dual simplex method by contrasting it with the primal simplex, on a
simple example, presented below. We will use the traditional simplex tableau exposition,
presented in the following fashion (see [10] for one of the first descriptions of the simplex
tableau, and [11] for a detailed description of this particular implementation form):

Table 1: The general form of a simplex tableau

       z   xB   xN                  RHS
  z    1   0    cTB B−1 N − cTN     cTB B−1 b
  xB   0   I    B−1 N               B−1 b

Note that in other treatments of linear programming (e.g., [1]), the reduced costs are de-
fined the other way around, that is, cTN − cTB B−1 N. Under this interpretation, the arguments
presented below still hold, but the signs are reversed; for example, the algorithm would
terminate when all reduced costs are non-negative (in a minimization example), rather than
non-positive, as we assume.

Example (Primal Simplex). Consider the following linear programming problem, together
with its corresponding dual:

min 2x1 + x2                       max 3u1 + 2u2

subject to: x1 + x2 ≥ 3            subject to: u1 + u2 ≤ 2
            x1 ≥ 2                             u1 ≤ 1
            x1 , x2 ≥ 0                        u1 , u2 ≥ 0

The following primal simplex tableaus solve the primal and the dual problem. For the primal
problem, we use the big-M method to form an initial basis, rather than the 2-phase method,
to keep everything in one tableau.
The optimal solution to the primal problem is x = (2, 1, 0, 0, 0, 0); the corresponding
optimal dual vector is u = (1, 1, 0, 0). The optimal value of the objective function is z = 5.
One can also quickly check the complementary slackness conditions and verify that they are
met.

We now examine the way in which the dual simplex algorithm operates. First, we
look at the primal tableau, with the starting basis given by the surplus variables x3 and x4

Table 2a: Primal simplex on primal problem

       z   x1       x2       x3    x4       x5       x6        RHS
  z    1   2M − 2   M − 1    −M    −M       0        0         5M
  x5   0   1        1        −1    0        1        0         3
  x6   0   1∗       0        0     −1       0        1         2
  z    1   0        M − 1    −M    M − 2    0        −2M + 2   M + 4
  x5   0   0        1∗       −1    1        1        −1        1
  x1   0   1        0        0     −1       0        1         2
  z    1   0        0        −1    −1       −M + 1   −M + 1    5
  x2   0   0        1        −1    1        1        −1        1
  x1   0   1        0        0     −1       0        1         2

Table 2b: Primal simplex on dual problem

       z   u1   u2   u3   u4   RHS
  z    1   3    2    0    0    0
  u3   0   1    1    1    0    2
  u4   0   1∗   0    0    1    1
  z    1   0    2    0    −3   −3
  u3   0   0    1∗   1    −1   1
  u1   0   1    0    0    1    1
  z    1   0    0    −2   −1   −5
  u2   0   0    1    1    −1   1
  u1   0   1    0    0    1    1

Table 3a: Initial basis for dual simplex


z x1 x2 x3 x4 RHS
z 1 -2 -1 0 0 0
x3 0 -1 −1∗ 1 0 -3
x4 0 -1 0 0 1 -2

(and thus we multiply both rows by -1 to obtain the basis given by the identity matrix I2 ).
Notice that this basis is primal infeasible, because of the negative terms in B−1 b.
We are at step 1 of the dual simplex method. We select x3 as the leaving variable
(−3 < −2). According to step 2, the problem is not infeasible (there are two negative entries
on the selected row). The entering variable is x2 , by the minimum ratio test:
min { −2/−1 , −1/−1 } = 1. The new tableau is obtained by subtracting row 1 from row 0,
and by dividing row 1 by −1:

Table 3b: First pivot using dual simplex


z x1 x2 x3 x4 RHS
z 1 -1 0 -1 0 3
x2 0 1 1 -1 0 3
x4 0 −1∗ 0 0 1 -2

The solution is still primal infeasible. The variable x4 leaves the basis, and x1 enters, by
the minimum ratio test. Row 2 gets subtracted from row 0, row 2 is added to row 1, and
row 2 is divided by -1 to get the new tableau:

Table 3c: Second pivot using dual simplex


z x1 x2 x3 x4 RHS
z 1 0 0 -1 -1 5
x2 0 0 1 -1 1 1
x1 0 1 0 0 -1 2

Since all terms in B−1 b are now non-negative and all entries in row 0 are non-positive, this
is the optimal basis. The algorithm terminates with the same solution as reported by the
primal simplex above. Since the dual problem maximizes the objective function, the increase
in z from one pivot to the next is natural.
Several observations are in order. First, notice the correspondence between the dual
simplex, as exhibited in Tables 3a through 3c, and the primal simplex method applied to the
dual problem, as shown in Table 2b. Through simple transposition manipulations, we can see
that the two tableaus are equivalent. This is important: from a mathematical point of view,
the dual simplex method used to solve a primal problem is equivalent to the primal simplex
method applied to the dual problem, as long as the dual is not converted into standard form.
(Converting the dual problem into standard form could lead to different tableaus.) Second,
from a geometrical perspective, the dual simplex iterates through primal infeasible (but
dual feasible!) basic solutions, until a solution that is both primal and dual feasible is found. In our
example, Table 3a is equivalent to the primal basic infeasible solution x = (0, 0, −3, −2)T . By
complementary slackness, the associated dual solution is u = (0, 0)T , which is basic feasible
in the dual problem. Figure 1, adapted from a similar example presented in [1], displays the
progression of the dual simplex method in terms of primal and dual basic feasible solutions.

Figure 1: The primal and the dual feasible spaces

In Figure 1, the path A-B-C traces, in both spaces, the pivot sequence of the dual simplex
applied to the example problem. In the primal space, bases A and B are infeasible, but at

optimality, C is feasible. In the dual space, the algorithm maintains feasibility at each base
A, B, and finally C. Naturally, the solution is improved during every iteration.

1.2 Implementation issues


1.2.1 Cycling
While iterating, it is possible for the dual simplex to arrive at a situation where the
reduced cost associated with the selected pivot column is zero, a situation first identified in
[12]. If this happens, then the reduced costs remain unchanged after pivoting, and therefore
the value of the dual objective function (cTB B−1 b) does not change either. As a result,
the dual simplex algorithm may cycle. The situation can be avoided by choosing a lexicographic
pivoting rule. Similar to the anticycling rules for the primal simplex
method, the rule states that one should choose the lexicographically smallest column, found
by dividing all entries in the consideration set (columns with aik < 0) by aik . If a tie exists
between several lexicographically smallest columns, one should choose the column with the
smallest index. The proof of the finiteness of the simplex algorithm (in either primal or
dual form) when using the lexicographic rule can be found in a number of reference texts
on linear programming, such as [1], [13], or [14]. Besides the lexicographic rule, Bland’s rule
[15] (choose as entering variable the column with the smallest index that corresponds to a
positive reduced cost, and break ties among row candidates by favoring the row with the
smallest index number) can also be easily adapted from the primal simplex algorithm and
implemented in the dual simplex, as sketched below.

1.2.2 Degeneracy
Another issue that can appear while implementing the algorithm is dual degeneracy. Dual
degeneracy appears whenever more than m reduced costs are 0, that is, whenever
at least one non-basic variable has a reduced cost equal to zero. Since cycling and
degeneracy are somewhat related, practical implementations of the dual simplex algorithm
try to resolve both issues simultaneously, if possible (see, for example, the COIN-OR project,
which provides, among other things, open-source implementations of simplex algorithms; a
good overview of the project can be found in [16] and [17]).

1.2.3 Finding an initial dual basic feasible solution


As we have mentioned in the outline of the dual simplex algorithm, the algorithm must
start with a basic dual feasible solution (BDFS), which sometimes is not readily available.
Assuming that in the primal problem the initial basis is given by the first m columns, a dual
feasible basis can be constructed by adding the constraint x_{m+1} + x_{m+2} + · · · + x_n ≤ M
to the primal problem, where M is a suitably large constant. Then, let the leaving variable
be the slack variable associated with this constraint, and let the entering variable be the one
corresponding to the maximum reduced cost. After the pivot, either the new basis is dual
feasible and thus the dual simplex can commence, or the primal problem is optimal, or the
primal is unbounded (see [11] for additional discussion as to why one of these outcomes must
happen). A numerical sketch of this construction follows the example below.

Example (Finding a BDFS: [11], p. 277). Consider the following tableau, which we
want to solve using the dual simplex:

Table 4:
z x1 x2 x3 x4 x5 RHS
z 1 0 1 5 -1 0 0
x1 0 1 2 -1 1 0 4
x5 0 0 3 4 -1 1 3

To get a dual basic feasible solution, we add the artificial constraint x2 + x3 + x4 ≤ M
to the problem. The slack of this constraint is x6 , and the resulting tableau is:

Table 5:
       z   x1   x2   x3   x4   x5   x6   RHS
  z    1   0    1    5    −1   0    0    0
  x6   0   0    1    1∗   1    0    1    M
  x1   0   1    2    −1   1    0    0    4
  x5   0   0    3    4    −1   1    0    3

In the next step, x6 leaves the basis, and x3 enters (the maximum reduced cost is 5). The
new tableau is dual feasible (but not primal feasible—notice the negative value of x5 ).

Table 6:
       z   x1   x2   x3   x4   x5   x6   RHS
  z    1   0    −4   0    −6   0    −5   −5M
  x3   0   0    1    1    1    0    1    M
  x1   0   1    3    0    2    0    1    M + 4
  x5   0   0    −1   0    −5   1    −4   −4M + 3

While intuitively easy, this method is superseded in practical implementations by dual
phase 1 algorithms. For an extensive review of dual phase 1 methods, see [18].

1.2.4 Selection Rules


From an implementation perspective, two important decisions that are quintessential to the
practical performance of the dual simplex algorithm are the proper choices for the leaving
variable (the “pricing” step) and the entering variable (the “minimum ratio
test” step). For a relatively long period of time, from the late 1950s until the early 1990s,
the dual simplex was relatively ignored in practice, due to the perception that it would not
outperform the primal simplex algorithm. The last important theoretical contribution from
the 1950s is Wagner’s adaptation of the dual simplex for bounded variables [19]. However, as
the understanding of the structure of linear programming problems grew, it became in-
creasingly obvious that there exist classes of programs for which the primal simplex method
is better, and other types of problems that tend to be better solved by the dual simplex
[20]. References [21] and [22] helped change the perception of the dual simplex algorithm by
devising clever pricing techniques (steepest edge) and better ratio tests (the bound flipping ratio
test) that boost the dual simplex performance over its primal counterpart. Separately, [23]
generalized the two-phase primal simplex algorithm to its dual counterpart. An excellent
survey of the progress made with respect to the practical implementation of the dual sim-
plex algorithm can be found in [24]. Further breakthroughs emerged with the advent of
parallelized implementations of the dual simplex [25].

2 Application of dual simplex: integer programming


One of the most important applications of the dual simplex is in large-scale optimization, par-
ticularly large-scale mixed integer programming. Traditionally, if the relaxation of a general
mixed integer programming problem does not satisfy the original integrality requirements,
then some mixture of variable branching and addition of valid inequalities occurs, until the
branch-and-bound/cut/price implementation converges to the optimal solution (assuming
it exists). Notice that branching on a fractional variable creates two sub-problems, each of
which has an additional constraint (a bound on the branching variable) that destroys the
primal feasibility of the original problem. Similarly, the addition of a valid inequality to the
problem breaks primal feasibility. However, in the dual space, the problem remains feasible,
since the addition of a row in the primal is equivalent to the addition of a column to the dual.
If the added column is non-basic, the dual basis can be used to continue the optimization via
the dual simplex, without the need to re-compute and re-factor a new basis (both of which are
time-intensive operations). This hot-starting feature makes the dual simplex very attractive
for large-scale optimization projects. We illustrate this property with the following knapsack
example:

Example (Valid Inequalities and Dual Simplex). Consider the following knapsack
problem:

max 2x1 + 3x2 + 4x3

subject to: x1 + 2x2 + 3x3 ≤ 4
            x1 , x2 , x3 binary

The initial tableau for the relaxation is

Table 7:
z x1 x2 x3 x4 x5 x6 x7 RHS
z 1 2 3 4 0 0 0 0 0
x4 0 1 2 3 1 0 0 0 4
x5 0 1 0 0 0 1 0 0 1
x6 0 0 1 0 0 0 1 0 1
x7 0 0 0 1 0 0 0 1 1

After several iterations, we obtain the final optimal tableau for the LP relaxation:

Table 8:
       z   x1   x2   x3   x4     x5     x6     x7   RHS
  z    1   0    0    0    −4/3   −2/3   −1/3   0    −19/3
  x2   0   0    1    0    0      0      1      0    1
  x1   0   1    0    0    0      1      0      0    1
  x7   0   0    0    0    −1/3   1/3    2/3    1    2/3
  x3   0   0    0    1    1/3    −1/3   −2/3   0    1/3

The solution is x1 = x2 = 1, x3 = 1/3. This solution does not satisfy the binary require-
ments. On the other hand, notice that x2 and x3 cannot be simultaneously equal to 1 in the
optimal solution, as that would violate the knapsack constraint. Therefore, the inequality
x2 + x3 ≤ 1 is valid for the original knapsack problem, and can be added to the previous
tableau, yielding

Table 9:
       z   x1   x2   x3   x4     x5     x6     x7   x8   RHS
  z    1   0    0    0    −4/3   −2/3   −1/3   0    0    −19/3
  x2   0   0    1    0    0      0      1      0    0    1
  x1   0   1    0    0    0      1      0      0    0    1
  x7   0   0    0    0    −1/3   1/3    2/3    1    0    2/3
  x3   0   0    0    1    1/3    −1/3   −2/3   0    0    1/3
  x8   0   0    1    1    0      0      0      0    1    1

Subtracting row 1 and row 4 from row 5 in order to restore the identity basis I5 yields a dual
feasible solution (notice that row 5 now has a negative RHS, rendering the solution primal
infeasible). The dual simplex leads to optimality after one pivot.

Table 10:
       z   x1   x2   x3   x4     x5     x6      x7   x8   RHS
  z    1   0    0    0    −4/3   −2/3   −1/3    0    0    −19/3
  x2   0   0    1    0    0      0      1       0    0    1
  x1   0   1    0    0    0      1      0       0    0    1
  x7   0   0    0    0    −1/3   1/3    2/3     1    0    2/3
  x3   0   0    0    1    1/3    −1/3   −2/3    0    0    1/3
  x8   0   0    0    0    −1/3   1/3    −1/3∗   0    1    −1/3
  z    1   0    0    0    −1     −1     0       0    −1   −6
  x2   0   0    1    0    −1     1      0       0    3    0
  x1   0   1    0    0    0      1      0       0    0    1
  x7   0   0    0    0    −1     1      0       1    2    0
  x3   0   0    0    1    1      −1     0       0    −2   1
  x6   0   0    0    0    1      −1     1       0    −3   1

The solution is x1 = x3 = 1, x2 = 0. Since this solution also satisfies the binary requirements,
it must be optimal for the original knapsack problem. The value of the objective function is z = 6.

3 Conclusions
Modern advances in linear programming theory establish the use of the dual simplex al-
gorithm as a powerful optimization tool. While the performance of the dual simplex was
originally considered to lag that of its most popular primal variant, the dual simplex is
widely used today in practice. One of the more popular applications of the algorithm in-
cludes large-scale mixed integer programming, where row additions break primal feasibility,
but typically, dual feasibility remains intact. The dual simplex is also used as a de-facto
linear programming solver, since research has identified [20] many classes of problems where
the application of the dual algorithm outperforms the performance of its primal counterpart.
All these interesting properties of the method, coupled with the fact that linear optimization
is a fundamental topic in operations research, ensure that advances in this area will continue
to be positioned at the forefront of the discipline.

References
[1] Dimitris Bertsimas and John Tsitsiklis. Introduction to Linear Optimization. Athena
Scientific, Belmont, MA, 1st edition, 1997.

[2] Katta G. Murty. Linear Programming. Wiley, New York, NY, 1st edition, 1983.

[3] George B. Dantzig. Programming in a linear structure, 1948.

[4] George B. Dantzig. Programming of interdependent activities II: Mathematical model.
Econometrica, 17(3/4):200–211, 1949.

[5] George B. Dantzig. Maximization of a linear function of variables subject to linear
inequalities. In T. C. Koopmans, editor, Activity Analysis of Production and Allocation,
pages 359–373. John Wiley and Sons, Inc., New York, NY, 1951.

[6] C. E. Lemke. The dual method of solving the linear programming problem. Naval
Research Logistics Quarterly, 1:36–47, 1954.

[7] V. Klee and G. J. Minty. How good is the simplex algorithm? In O. Shisha, editor,
Inequalities - III, pages 159–175. Academic Press, New York, NY, 1972.

[8] L. G. Khachian. A polynomial algorithm in linear programming. Soviet Mathematics
Doklady, 20:191–194, 1979.

[9] N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica,
4:373–395, 1984.

[10] George B. Dantzig. Linear Programming and Extensions. Princeton University Press,
Princeton, NJ, 1st edition, 1963.

[11] Mokhtar S. Bazaraa, John J. Jarvis, and Hanif D. Sherali. Linear Programming and
Network Flows. Wiley, New York, NY, 2nd edition, 1990.

[12] E. M. L. Beale. Cycling in the dual simplex algorithm. Naval Research Logistics
Quarterly, 2(4):269–275, 1955.

[13] Vašek Chvátal. Linear Programming. Freeman, 1st edition, 1983.

[14] Robert J. Vanderbei. Linear Programming: Foundations and Extensions. Kluwer, 2nd
edition, 2001.

[15] Robert G. Bland. New finite pivoting rules for the simplex method. Mathematics of
Operations Research, 2(2):103–107, 1977. doi: 10.1287/moor.2.2.103.

[16] Robin Lougée-Heimer, Francisco Barahona, Brenda Dietrich, J. P. Fasano, John Forrest,
Robert Harder, Laszlo Ladanyi, Tobias Pfender, Theodore Ralphs, Matthew Saltzman,
and Katya Scheinberg. The COIN-OR initiative: Open-source software accelerates
operations research progress. ORMS Today, 28(5):20–22, 2001.

[17] Robin Lougée-Heimer. The common optimization interface for operations research:
Promoting open-source software in the operations research community. IBM Journal of
Research and Development, 47(1):57–66, 2003.

[18] Achim Koberstein and Uwe Suhl. Progress in the dual simplex method for large-scale
LP problems: practical dual phase 1 algorithms. Computational Optimization and
Applications, 37(1):49–65, 2007.

[19] H. M. Wagner. The dual simplex algorithm for bounded variables. Naval Research
Logistics Quarterly, 5:257–261, 1958.

[20] Robert E. Bixby. Solving real-world linear programs: A decade and more of progress.
Operations Research, 50(2):3–15, 2002.

[21] J. J. Forrest and D. Goldfarb. Steepest edge simplex algorithms for linear programming.
Mathematical Programming, 57(3):341–374, 1992.

[22] R. Fourer. Notes on the dual simplex method. Technical report, Draft Report,
Northwestern University, 1994.

[23] I. Maros. A generalized dual phase-2 simplex algorithm. European Journal of
Operational Research, 149(1):1–16, 2003.

[24] Achim Koberstein. Progress in the dual simplex algorithm for solving large-scale LP
problems: techniques for a fast and stable implementation. Computational Optimization
and Applications, 41(2):185–204, 2008.

[25] R. E. Bixby and A. Martin. Parallelizing the dual simplex method. INFORMS Journal
on Computing, 12(1):45–56, 2000.
