
OAP Cheat Sheet: Duality, Sensitivity Analysis, and Simplex Algorithm

1 Duality
1.1 Primal and Dual Problems
Given a primal linear program (P) in canonical form:

    (P)  max Z = c^T x
         s.t. Ax ≤ b
              x ≥ 0

The corresponding dual problem (D) is:

    (D)  min w = y^T b
         s.t. y^T A ≥ c^T
              y ≥ 0

where y is the vector of dual variables.


Property 1. The dual of the dual problem is the primal problem.

1.2 Primal-Dual Relationships Table

Maximization Problem (Primal)      Minimization Problem (Dual)
Constraint i: ≤                    Variable y_i ≥ 0
Constraint i: =                    Variable y_i unrestricted
Constraint i: ≥                    Variable y_i ≤ 0
Variable x_j ≥ 0                   Constraint j: ≥
Variable x_j unrestricted          Constraint j: =
Variable x_j ≤ 0                   Constraint j: ≤


1.3 Duality Theorems


Theorem 1 (Weak Duality). Let x be a feasible solution for the primal (max) and y be a feasible solution
for the dual (min). Then c^T x ≤ y^T b.
Corollary 1. Any feasible dual solution provides an upper bound for the primal objective, and any
feasible primal solution provides a lower bound for the dual objective.
Corollary 2. If x and y are feasible solutions for the primal and dual respectively, and c^T x = y^T b, then
x and y are optimal solutions for their respective problems.
Corollary 3. If a problem has an unbounded objective value, its dual has no feasible solution.
Theorem 2 (Strong Duality). If the primal (or dual) problem has a finite optimal solution, then so does
the dual (or primal) problem, and their optimal values are equal (Z* = w*).
Corollary 4. Equality of the objective values (Z* = w*) is a necessary and sufficient condition for
optimality of feasible solutions.
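
As a quick numerical illustration of these theorems, the following Python sketch (assuming NumPy and
SciPy are available; the data c, A, b is an arbitrary small example, not from the text) solves a primal
and its dual and checks that the optimal values coincide:

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 5.0])
    A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
    b = np.array([4.0, 12.0, 18.0])

    # Primal: max c^T x s.t. Ax <= b, x >= 0 (linprog minimizes, so pass -c)
    primal = linprog(-c, A_ub=A, b_ub=b)
    # Dual: min y^T b s.t. y^T A >= c^T, y >= 0 (rewritten as -A^T y <= -c)
    dual = linprog(b, A_ub=-A.T, b_ub=-c)

    Z_star, w_star = -primal.fun, dual.fun
    print(Z_star, w_star)                  # both 36.0: Z* = w* (strong duality)
    assert Z_star <= w_star + 1e-8         # weak duality, holding with equality here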

1.4 Complementary Slackness
Theorem 3. Let x and y be feasible solutions for the primal and dual problems. They are optimal if
and only if:

    y^T (b − Ax) = 0
    (y^T A − c^T) x = 0

This means:
• If a primal constraint is not binding (A_i x < b_i), the corresponding dual variable is zero (y_i = 0).
• If a primal variable is positive (x_j > 0), the corresponding dual constraint is binding ((y^T A)_j = c_j).
• If a dual variable is positive (y_i > 0), the corresponding primal constraint is binding (A_i x = b_i).
• If a dual constraint is not binding ((y^T A)_j > c_j), the corresponding primal variable is zero (x_j = 0).
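
A minimal check of these conditions, reusing the illustrative LP from above (assumes SciPy; both
printed products should be zero vectors up to floating-point noise):

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 5.0])
    A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
    b = np.array([4.0, 12.0, 18.0])

    x = linprog(-c, A_ub=A, b_ub=b).x        # optimal primal solution
    y = linprog(b, A_ub=-A.T, b_ub=-c).x     # optimal dual solution

    print(y * (b - A @ x))    # y_i (b - Ax)_i = 0: slack constraint => zero dual
    print((y @ A - c) * x)    # (y^T A - c^T)_j x_j = 0: positive x_j => binding dual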

1.5 Finding Optimal Dual Variables from Primal Simplex Tableau


Let e_i be the slack/surplus variable for constraint i.
• If e_i is basic in the optimal primal tableau, the optimal value of the corresponding dual variable y_i
is zero.
• If e_i is non-basic in the optimal primal tableau, the optimal value of the dual variable y_i is the
reduced cost (marginal cost) of e_i in the objective-function row (pay attention to the sign convention
used). For a maximization problem with ≤ constraints converted to standard form, y_i = c_{e_i}, where
c_{e_i} is the coefficient of e_i in the final objective row.

2 Sensitivity Analysis
2.1 Change in Objective Function Coefficient (c_j)
Objective: Find the range for c_j (coefficient of variable x_j) such that the current optimal basis remains
optimal.
Method:
1. Let the change be ∆. The new coefficient is c_j + ∆.
2. If x_j is non-basic: The reduced cost must remain non-positive (for max problems). Calculate the
new reduced cost c̄'_j = c̄_j + ∆. Require c̄_j + ∆ ≤ 0.
3. If x_j is basic (say x_j = x_{B_k}): The reduced costs of all non-basic variables must remain non-positive.
Calculate the new reduced costs c̄'_N using the modified c_B. This usually involves finding the change
in c_B^T B^{-1} and applying it to the original reduced costs: c̄'_N = c_N^T − (c_B + ∆ e_k)^T B^{-1} N ≤ 0.
4. Solve the resulting inequalities for ∆ (see the sketch below).
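
A NumPy sketch of this procedure for a basic variable, assuming the optimal basis is already known
(the data reuses the earlier illustrative LP, whose optimal basis is {x1, x2, e1}; all names are for
illustration only):

    import numpy as np

    # Standard-form columns: x1, x2, e1, e2, e3 for
    # max 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18
    A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0, 1.0, 0.0],
                  [3.0, 2.0, 0.0, 0.0, 1.0]])
    c = np.array([3.0, 5.0, 0.0, 0.0, 0.0])
    basic, nonbasic = [0, 1, 2], [3, 4]           # optimal basis: x1, x2, e1

    B, N = A[:, basic], A[:, nonbasic]
    cB, cN = c[basic], c[nonbasic]
    T = np.linalg.inv(B) @ N                      # B^{-1} N
    rc = cN - cB @ T                              # reduced costs c̄_N (all <= 0)

    k = 0                                         # range the cost of x_{B_k} = x1
    t = T[k]                                      # k-th row of B^{-1} N
    # Keep c̄'_N = rc - delta * t <= 0 for every non-basic column:
    lo = max((rc[j] / t[j] for j in range(len(t)) if t[j] > 1e-12), default=-np.inf)
    hi = min((rc[j] / t[j] for j in range(len(t)) if t[j] < -1e-12), default=np.inf)
    print(f"basis stays optimal for delta in [{lo:.3f}, {hi:.3f}]")  # [-3, 4.5]
    # i.e. c1 may move within [3 - 3, 3 + 4.5] = [0, 7.5]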

2.2 Change in Right-Hand Side (b_i)

Objective: Find the range for b_i such that the current optimal basis remains feasible.
Method:
1. Let the change be ∆. The new RHS is b + ∆ e_i.
2. The new values of the basic variables are x'_B = B^{-1}(b + ∆ e_i) = B^{-1} b + ∆ (B^{-1} e_i) = x_B + ∆ (B^{-1} e_i).
3. Require x'_B ≥ 0. This gives inequalities involving ∆.
4. Solve these inequalities for ∆. (Note: B^{-1} e_i is the i-th column of B^{-1}, i.e. the column under the
slack variable e_i in the final simplex tableau; see the sketch below.)
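
The same illustrative LP again, this time ranging a right-hand side (assumes NumPy; the basis and
data are carried over from the sketch above):

    import numpy as np

    B = np.array([[1.0, 0.0, 1.0],               # optimal basis columns (x1, x2, e1)
                  [0.0, 2.0, 0.0],
                  [3.0, 2.0, 0.0]])
    b = np.array([4.0, 12.0, 18.0])
    Binv = np.linalg.inv(B)
    xB = Binv @ b                                 # current basic values, all >= 0

    i = 2                                         # perturb b_3 -> b_3 + delta
    d = Binv[:, i]                                # B^{-1} e_i (column under slack e_3)
    # Feasibility: xB[r] + delta * d[r] >= 0 for every basic row r
    lo = max((-xB[r] / d[r] for r in range(len(d)) if d[r] > 1e-12), default=-np.inf)
    hi = min((-xB[r] / d[r] for r in range(len(d)) if d[r] < -1e-12), default=np.inf)
    print(f"basis stays feasible for delta in [{lo:.3f}, {hi:.3f}]")  # [-6, 6]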

3 Simplex Algorithm
3.1 Standard Form
A linear program is in standard form if:

    max Z = c^T x
    s.t. Ax = b
         x ≥ 0

where b ≥ 0. Problems with ≤ constraints are converted using slack variables, and ≥ constraints using
surplus and artificial variables.

3.2 Basic Concepts


• Basis (B): An m × m invertible submatrix of A.
• Basic Variables (x_B): Variables corresponding to the columns of B.
• Non-Basic Variables (x_N): All other variables.
• Basic Solution: Set x_N = 0 and solve B x_B = b to get x_B = B^{-1} b.
• Basic Feasible Solution (BFS): A basic solution where x_B ≥ 0.
• Extreme Point: Corresponds to a BFS (see the enumeration sketch below).
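
These definitions can be made concrete by brute force: the sketch below (illustrative data, assuming
NumPy) enumerates every column subset of a tiny standard-form system and classifies the resulting
basic solutions.

    import itertools
    import numpy as np

    A = np.array([[1.0, 0.0, 1.0, 0.0],     # x1 + e1 = 4
                  [1.0, 2.0, 0.0, 1.0]])    # x1 + 2 x2 + e2 = 10
    b = np.array([4.0, 10.0])
    m, n = A.shape

    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                         # singular: not a basis
        xB = np.linalg.solve(B, b)           # basic solution (x_N = 0)
        tag = "BFS" if (xB >= -1e-12).all() else "infeasible basic solution"
        print(cols, xB.round(3), tag)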

3.3 Algorithm Steps (Maximization)


1. Initialization: Start with a BFS (e.g., using slack variables if b ≥ 0, or artificial variables and
the Phase I / Big M method otherwise).
2. Optimality Check: Calculate the reduced costs of the non-basic variables: c̄_N^T = c_N^T − c_B^T B^{-1} N.
If c̄_j ≤ 0 for all j ∈ N, the current BFS is optimal. STOP.
3. Entering Variable: Choose a non-basic variable x_k with the most positive reduced cost (c̄_k > 0)
to enter the basis.
4. Leaving Variable: Calculate the ratios θ_i = (B^{-1} b)_i / (B^{-1} A_k)_i for all i with (B^{-1} A_k)_i > 0.
Choose the row r achieving the minimum ratio θ_min. The basic variable x_{B_r} in row r leaves the
basis. If all (B^{-1} A_k)_i ≤ 0, the problem is unbounded. STOP.
5. Pivot: Perform row operations to make the pivot element (B^{-1} A_k)_r equal to 1 and all other
entries in the pivot column equal to 0. This updates the tableau (B^{-1}, B^{-1} b, c̄^T, Z). Go to
Step 2.
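
A compact tableau implementation of Steps 1-5 (a teaching sketch assuming NumPy: slack starting
basis with b ≥ 0, Dantzig entering rule, no Phase I and no anti-cycling safeguard):

    import numpy as np

    def simplex_max(c, A, b):
        """Solve max c^T x s.t. Ax <= b, x >= 0, assuming b >= 0."""
        m, n = A.shape
        # Tableau: constraint rows over [x | slacks | RHS]; last row = z-row.
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
        T[-1, :n] = c                       # z-row holds the reduced costs c̄
        basis = list(range(n, n + m))       # start from the all-slack basis

        while True:
            j = int(np.argmax(T[-1, :-1]))  # entering: most positive reduced cost
            if T[-1, j] <= 1e-9:            # all c̄ <= 0: optimal
                break
            col, rhs = T[:m, j], T[:m, -1]  # ratio test on positive entries only
            ratios = np.where(col > 1e-9, rhs / np.where(col > 1e-9, col, 1), np.inf)
            r = int(np.argmin(ratios))      # leaving: minimum-ratio row
            if ratios[r] == np.inf:
                raise ValueError("unbounded")
            T[r] /= T[r, j]                 # pivot: scale the pivot row,
            for i in range(m + 1):          # then eliminate column j elsewhere
                if i != r:
                    T[i] -= T[i, j] * T[r]
            basis[r] = j

        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], -T[-1, -1]            # z-row RHS accumulates -Z

    # max 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18
    x, z = simplex_max(np.array([3.0, 5.0]),
                       np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                       np.array([4.0, 12.0, 18.0]))
    print(x, z)   # -> [2. 6.] 36.0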

3.4 Special Cases


• Degeneracy: A BFS is degenerate if at least one basic variable is zero. This can cause cycling
(rare in practice). It occurs when more than n constraint hyperplanes pass through an extreme point.
• Alternative Optima: If an optimal tableau has a non-basic variable with a reduced cost of zero,
and the solution is not degenerate, there are multiple optimal solutions. Pivoting this variable into
the basis leads to another optimal BFS.
• Unbounded Solution: If, during the ratio test (Step 4), all coefficients in the pivot column
(B^{-1} A_k) are ≤ 0 for the entering variable x_k, the objective function can be increased indefinitely.
• No Feasible Solution: If Phase I (or Big M) ends with an artificial variable still in the basis at
a positive value, the original problem has no feasible solution.

3.5 Reduced Costs
The coefficients of the non-basic variables in the objective function row of the simplex tableau are the
reduced costs (or marginal costs). At optimality for a max problem, all reduced costs are ≤ 0.

4 Integer Linear Programming (ILP)


4.1 Introduction
An ILP seeks to optimize a linear objective function subject to linear constraints, with the additional
requirement that some or all variables must be integers.

    max Z = c^T x
    s.t. Ax ≤ b
         x ≥ 0, some or all x_j ∈ Z

The feasible region is a set of discrete points, not a continuous polyhedron. The optimal solution of the
LP relaxation (dropping integrality) is generally not the same as the ILP optimal solution.

4.2 Relaxation and Bounds


To solve ILPs (often stated as maximization problems):
• Lower Bound (z): The value of any feasible integer solution. Heuristics can find good feasible
solutions quickly.
• Upper Bound (z̄): Obtained by solving a relaxation of the problem, typically the Linear
Programming (LP) relaxation. The optimal value of the LP relaxation, Z_LP, satisfies Z_LP ≥ Z*.
Duality theory can also provide upper bounds.
• Optimality is proven when z̄ = z (see the sketch below).
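
The gap Z_LP ≥ Z* can be seen on a tiny illustrative ILP (assumes SciPy): the LP relaxation gives
the upper bound, while brute-force enumeration gives the exact integer optimum.

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    # max 5x1 + 4x2 s.t. 6x1 + 4x2 <= 24, x1 + 2x2 <= 6, x >= 0 integer
    c = np.array([5.0, 4.0])
    A = np.array([[6.0, 4.0], [1.0, 2.0]])
    b = np.array([24.0, 6.0])

    relax = linprog(-c, A_ub=A, b_ub=b)              # LP relaxation (max via min -c)
    print("LP relaxation:", relax.x, -relax.fun)     # (3, 1.5), upper bound 21

    # Exact integer optimum by brute force (the region fits in a 7x7 grid here)
    best = max((pt for pt in itertools.product(range(7), repeat=2)
                if (A @ np.array(pt) <= b).all()),
               key=lambda pt: c @ np.array(pt))
    print("ILP optimum:", best, c @ np.array(best))  # (4, 0), value 20 <= 21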

5 Branch and Bound (B&B) Algorithm


5.1 Core Idea: Divide and Conquer
Break the original problem S into smaller subproblems S_k such that S = ⋃_k S_k. The optimal value
is z* = max_k {z_k}, where z_k = max{c^T x | x ∈ S_k}. This is represented by an enumeration tree.

5.2 Implicit Enumeration and Pruning


Instead of exploring all subproblems (nodes in the tree), B&B uses bounds to prune branches:
1. Pruning by Optimality: The subproblem S_k is solved and its optimal solution x_k is integer. If
c^T x_k > z, update the best-known solution: z = c^T x_k and x* = x_k.
2. Pruning by Bound: The upper bound for subproblem S_k (obtained from its LP relaxation, z̄_k)
is less than or equal to the current best lower bound (z̄_k ≤ z). This branch cannot contain a better
integer solution.
3. Pruning by Infeasibility: The subproblem S_k (or its LP relaxation) has no feasible solution
(S_k = ∅).

5.3 Algorithm Steps (Maximization)


1. Initialization (Step 0): Initialize the list of active subproblems (nodes) with the original problem
P. Set the best lower bound z = −∞ and the best solution x* = null.
2. Node Selection (Step 1): If the list is empty, STOP; x* is optimal. Otherwise, select a
subproblem S^i from the list.
3. Bound Computation (Step 2): Solve the LP relaxation of S^i. Let the optimal value be z^i and
the solution x^i.
4. Pruning (Step 3):
   • If the LP is infeasible, prune by infeasibility. Go to Step 1.
   • If z^i ≤ z, prune by bound. Go to Step 1.
   • If x^i is integer: Update z = z^i and x* = x^i. Prune by optimality. Go to Step 1.
5. Branching (Step 4): If the node was not pruned, select an integer-constrained variable x_j with
a non-integer value x_j^i. Create two new subproblems by adding the constraints x_j ≤ ⌊x_j^i⌋ and
x_j ≥ ⌈x_j^i⌉. Add these new subproblems to the list. Go to Step 1. (A compact implementation is
sketched below.)
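
A depth-first sketch of Steps 0-4 in Python (assumes SciPy; LP relaxations via linprog, branching
implemented through variable bounds; a teaching sketch, not a production solver):

    import math
    import numpy as np
    from scipy.optimize import linprog

    def branch_and_bound(c, A, b):
        """max c^T x s.t. Ax <= b, x >= 0 integer."""
        best_z, best_x = -np.inf, None
        stack = [[(0, None)] * len(c)]          # nodes = per-variable (lb, ub) lists
        while stack:                            # Step 1: node selection (DFS)
            bounds = stack.pop()
            res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds)  # Step 2: LP bound
            if not res.success:                 # Step 3: prune by infeasibility
                continue
            z, x = -res.fun, res.x
            if z <= best_z + 1e-9:              # Step 3: prune by bound
                continue
            frac = [j for j, v in enumerate(x) if abs(v - round(v)) > 1e-6]
            if not frac:                        # Step 3: prune by optimality
                best_z, best_x = z, np.round(x)
                continue
            j = frac[0]                         # Step 4: branch on fractional x_j
            lo, hi = bounds[j]
            down, up = list(bounds), list(bounds)
            down[j] = (lo, math.floor(x[j]))    # x_j <= floor(x_j^i)
            up[j] = (math.ceil(x[j]), hi)       # x_j >= ceil(x_j^i)
            stack += [down, up]
        return best_x, best_z

    x, z = branch_and_bound(np.array([5.0, 4.0]),
                            np.array([[6.0, 4.0], [1.0, 2.0]]),
                            np.array([24.0, 6.0]))
    print(x, z)   # -> [4. 0.] 20.0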

5.4 Implementation Choices


• Node Selection Strategy:
  – Depth-First: Explores deeper branches first. Finds feasible integer solutions faster.
  – Best-Bound (Best-First): Selects the node with the highest upper bound z̄^i. Aims to minimize
    the total number of nodes explored.
• Branching Variable Selection:
  – Variable with the largest objective coefficient.
  – Variable whose fractional part is closest to 0.5 (maximum infeasibility).

6 Cutting Plane Methods


6.1 Motivation and Formulation
For an ILP S = P ∩ Z^n, where P = {x | Ax ≤ b, x ≥ 0}, the goal is to find the convex hull of the
feasible integer points, conv(S). Solving the LP max{c^T x | x ∈ conv(S)} yields the ILP optimal solution.
However, finding conv(S) explicitly is generally hard. Instead, we add valid inequalities (cuts) to the
original formulation P to tighten the LP relaxation towards conv(S).

6.2 Valid Inequalities and Facets


• An inequality π^T x ≤ π_0 is a valid inequality for S if it holds for all x ∈ S.
• A valid inequality defines a facet of conv(S) if it is essential to defining the shape of the convex
hull (i.e., it is as "strong" as possible). Adding facets progressively tightens the formulation.

6.3 Chvátal-Gomory (C-G) Cuts


A general method to generate valid inequalities for S = {x | Ax ≤ b, x ∈ N^n}.

1. Choose a non-negative vector u ∈ R^m_+.

2. Combine the original constraints: u^T A x ≤ u^T b. This is a valid inequality for P.

3. Since x ∈ N^n, we can strengthen this. The C-G cut is:

       Σ_{j=1}^{n} ⌊u^T a_j⌋ x_j ≤ ⌊u^T b⌋

   where a_j is the j-th column of A, and ⌊·⌋ denotes the floor function (rounding down). This
   inequality is valid for S because each x_j is non-negative and integer.

Theorem 4. Any valid inequality for S can be derived by applying the C-G procedure a finite number of
times.
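
Generating one C-G cut by hand, following the three steps above (assumes NumPy; the system and
the weight vector u are illustrative):

    import numpy as np

    A = np.array([[7.0, -2.0], [0.0, 1.0], [2.0, -2.0]])
    b = np.array([14.0, 3.0, 3.0])
    u = np.array([2/7, 37/63, 0.0])     # any u >= 0 yields a valid inequality

    print(u @ A, "x <=", u @ b)                        # valid for P, fractional data
    print(np.floor(u @ A), "x <=", np.floor(u @ b))    # C-G cut: 2 x1 + 0 x2 <= 5

Here the rounded inequality 2 x_1 ≤ 5 implies x_1 ≤ 2 for integer points, which the un-rounded
combination does not.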

6.4 Gomory’s Fractional Cut (Derived from Simplex Tableau)
A specific way to generate a C-G cut directly from an optimal LP-relaxation tableau in which a basic
variable x_{B_i} is fractional.

1. Consider the tableau row for x_{B_i}:  x_{B_i} + Σ_{j∈N} ā_ij x_j = b̄_i,  where N is the set of
   non-basic variables and b̄_i is fractional.

2. Rewrite using the floor function ⌊ā_ij⌋ and the fractional parts f_ij = ā_ij − ⌊ā_ij⌋ ≥ 0 and
   f_i = b̄_i − ⌊b̄_i⌋ > 0:

       x_{B_i} + Σ_{j∈N} (⌊ā_ij⌋ + f_ij) x_j = ⌊b̄_i⌋ + f_i

3. Rearrange:

       Σ_{j∈N} f_ij x_j − f_i = ⌊b̄_i⌋ − x_{B_i} − Σ_{j∈N} ⌊ā_ij⌋ x_j

4. For any feasible integer solution the RHS is an integer, so the LHS must also be an integer.

5. Since x_j ≥ 0 and f_ij ≥ 0, we have Σ_{j∈N} f_ij x_j ≥ 0, hence the LHS is at least −f_i.

6. Because 0 < f_i < 1, the LHS is strictly greater than −1; being an integer, it must be ≥ 0.

7. Gomory's Fractional Cut:

       Σ_{j∈N} f_ij x_j ≥ f_i

   This cut is violated by the current LP solution (where x_j = 0 for all j ∈ N, making the LHS 0
   while f_i > 0) but holds for every feasible integer solution.
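
Deriving the cut from one tableau row is a two-line computation (assumes NumPy; the row
coefficients are illustrative numbers, not taken from a specific tableau):

    import numpy as np

    # One optimal-tableau row with fractional RHS:
    #   x_{B_i} + 0.25 x_3 - 0.75 x_4 = 2.75
    a_bar = np.array([0.25, -0.75])      # coefficients of the non-basic x_3, x_4
    b_bar = 2.75                         # fractional value of the basic variable

    f = a_bar - np.floor(a_bar)          # f_ij in [0, 1); note frac(-0.75) = 0.25
    f0 = b_bar - np.floor(b_bar)         # f_i > 0
    print(f, "x >=", f0)                 # Gomory cut: 0.25 x_3 + 0.25 x_4 >= 0.75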

6.5 Cutting Plane Algorithm


1. Solve the LP relaxation of the ILP.
2. If the solution is integer, STOP; it is optimal for the ILP.
3. If the solution is fractional, find a tableau row corresponding to a fractional basic variable.
4. Generate a Gomory cut (or another valid inequality) from this row that cuts off the current frac-
tional LP solution.
5. Add the cut to the LP formulation.
6. Go to Step 1.

Note: pure cutting-plane methods can be slow to converge. Adding cuts a priori can strengthen the
formulation but may make the LP large.

7 Branch and Cut (B&C) Algorithm


Combines Branch and Bound with Cutting Planes.
• Solves LP relaxations at the nodes of the B&B tree.
• If the LP solution x^i at node i is fractional, B&C first tries to find and add valid inequalities (cuts)
that are violated by x^i but satisfied by all feasible integer solutions.
• Add cuts and re-solve the LP. Repeat until no more effective cuts are found or the solution becomes
integer.
• If the solution remains fractional after adding cuts, branch on a fractional variable as in
standard B&B.

• Pruning rules (optimality, bound, infeasibility) are the same as B&B.
B&C is often more effective than pure B&B or pure Cutting Planes, especially for hard combinatorial
problems like the Traveling Salesman Problem (TSP).

8 Column Generation (CG)


8.1 Motivation
Used for linear programs with a very large number of variables (columns), where explicitly listing all
variables is impractical. This often arises from formulations where variables represent complex
structures (e.g., paths, patterns). Example: multi-commodity flow using a path formulation instead of
an arc formulation.

8.2 Master Problem (MP) and Restricted Master Problem (RMP)


• Master Problem (MP): The original LP with all possible columns (variables).

      min c^T x
      s.t. Ax = b
           x ≥ 0

  (Minimization is assumed throughout this section.)


• Restricted Master Problem (RMP): The same LP restricted to a small subset of columns
  (variables) x_R:

      min c_R^T x_R
      s.t. A_R x_R = b
           x_R ≥ 0

8.3 Column Generation Algorithm (Iterative Process)


1. Initialize: Start with an initial RMP containing a subset of columns that admits a feasible solution.
2. Solve RMP: Solve the current RMP with the simplex method. Obtain the optimal primal
solution x_R* and the optimal dual variables π* associated with the constraints A_R x_R = b.
3. Pricing Subproblem (the Oracle): Find a column (variable) x_k of the original MP (not
currently in the RMP) with the most negative reduced cost with respect to the current dual
variables π*. The reduced cost of a column A_k with cost c_k is c̄_k = c_k − π*^T A_k. The pricing
subproblem is:

       min_{k∉R} c̄_k = min_{k∉R} {c_k − π*^T A_k}

Solving this efficiently depends on the structure of the original problem (e.g., a knapsack problem
in cutting stock, or a shortest-path problem in vehicle routing).
4. Check Optimality: If the minimum reduced cost found in Step 3 is non-negative (min c̄_k ≥ 0),
then no excluded variable can improve the current RMP solution. The current solution x_R*
(extended with zeros for the variables not in the RMP) is optimal for the original MP. STOP.
5. Add Column(s): If min c̄_k < 0, add the column A_k (corresponding to the variable x_k with the
most negative reduced cost) to the RMP. Optionally, add several columns with negative reduced
costs.
6. Iterate: Go back to Step 2 and solve the updated RMP (a worked sketch follows).
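
A sketch of the textbook cutting-stock application of this loop (Gilmore-Gomory style, assuming
SciPy; roll width, piece widths and demands are illustrative). The RMP chooses how often to use each
known cutting pattern; the duals are recovered here by solving the dual LP explicitly rather than by
reading simplex multipliers; the pricing oracle is an unbounded integer knapsack solved by dynamic
programming.

    import numpy as np
    from scipy.optimize import linprog

    W = 10                                    # roll width
    w = [3, 5, 7]                             # piece widths
    d = np.array([25.0, 20.0, 18.0])          # piece demands

    def knapsack(v, weights, cap):
        # Unbounded integer knapsack: max sum v[i]*a[i] s.t. sum weights[i]*a[i] <= cap
        best, take = [0.0] * (cap + 1), [-1] * (cap + 1)
        for c in range(1, cap + 1):
            best[c] = best[c - 1]
            for i, wi in enumerate(weights):
                if wi <= c and best[c - wi] + v[i] > best[c]:
                    best[c], take[c] = best[c - wi] + v[i], i
        a, c = np.zeros(len(weights)), cap
        while c > 0:                          # recover the chosen pattern
            if take[c] < 0:
                c -= 1
            else:
                a[take[c]] += 1.0
                c -= weights[take[c]]
        return best[cap], a

    # Initial RMP columns: one piece type per pattern, as many pieces as fit
    patterns = [np.eye(len(w))[i] * (W // w[i]) for i in range(len(w))]

    while True:
        A = np.column_stack(patterns)
        # RMP: min 1^T u  s.t.  A u >= d, u >= 0  (fractional rolls per pattern)
        rmp = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-d)
        # Duals pi via the explicit dual LP: max d^T y  s.t.  A^T y <= 1, y >= 0
        y = linprog(-d, A_ub=A.T, b_ub=np.ones(A.shape[1])).x
        # Pricing oracle: new pattern a maximizing y^T a, i.e. minimizing 1 - y^T a
        value, a = knapsack(y, w, W)
        if 1.0 - value >= -1e-9:              # no column with negative reduced cost
            break                             # current RMP solution optimal for MP
        patterns.append(a)                    # add the improving column and iterate

    print("LP rolls needed:", rmp.fun, "with", len(patterns), "patterns")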

8.4 Dantzig-Wolfe (DW) Decomposition
A specific reformulation technique that leads to a column generation approach. It applies when the
constraint matrix has a special block-angular structure:

    min c^T x
    s.t. A^0 x = b^0   (linking/complicating constraints)
         A^I x = b^I   (easy/block constraints)
         x ≥ 0

Assume the set X_I = {x ≥ 0 | A^I x = b^I} is a bounded polyhedron (a polytope). Any x ∈ X_I can be
written as a convex combination of its extreme points x^(i):

    x = Σ_i u_i x^(i),   Σ_i u_i = 1,   u_i ≥ 0

Substituting this into the original problem gives the DW Master Problem (the variables are the u_i):

    min  Σ_i (c^T x^(i)) u_i
    s.t. Σ_i (A^0 x^(i)) u_i = b^0   (dual vars: π)
         Σ_i u_i = 1                 (dual var: ν)
         u_i ≥ 0

This MP usually has far too many columns (one for each extreme point x^(i)), so CG is used to solve it.
• RMP: The DW Master Problem restricted to a subset of known extreme points x^(i).
• Subproblem (Pricing Problem): Given the duals (π, ν) from the RMP, find an extreme point x*
of X_I = {x ≥ 0 | A^I x = b^I} that minimizes the reduced cost. The reduced cost of the column
corresponding to x^(i) is (c^T x^(i)) − π^T (A^0 x^(i)) − ν. The subproblem is:

    min_{x∈X_I} {(c^T − π^T A^0) x} − ν

This amounts to optimizing a modified objective function over the 'easy' constraints A^I x = b^I, x ≥ 0.
If the optimal value is < 0, the corresponding x* generates a column with negative reduced cost to
add to the RMP.
For block-diagonal structures (multiple independent blocks linked by common constraints), DW
decomposition yields one independent subproblem per block.
