Linear and Non-Linear Programming Methods
Linear Programming
• Standard Form of LPP
• Slack Variable
• Surplus Variable
• Methods for Solving
o Graphical Method
o Simplex Method
o Two Phase Method
o Big M Method
o Revised Simplex Method
o Big M + Revised Simplex Method
∑(j=1 to n) aij xj = bi (i = 1, 2, …, m)
xj ≥ 0 (j = 1, 2, …, n)
Matrix Form
max z = c T x , subject to
Ax = b
x≥0
1) Graphical Method
• The set of points S satisfying all the constraints is called the feasible
region.
• If bounded, the feasible region is a closed convex set having finitely
many corner points.
• An optimal solution, if one exists, occurs at a corner (extreme) point of
the feasible region.
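Beyond two variables the graphical method is impractical, but the same LPP can be handed to a solver. A minimal sketch using SciPy's `linprog` (the LP itself is an illustrative textbook example, not one from these notes):

```python
from scipy.optimize import linprog

# Illustrative LP: max z = 3x1 + 5x2
# s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x1, x2 >= 0.
# linprog minimizes, so the objective is negated.
c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)  # optimum at the corner point (2, 6), z = 36
```

Note that the optimum lands on a corner point of the feasible region, as the theory above predicts.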
2) Simplex Method
• Select a set of m variables such that their coefficient matrix (B) is
identity.
xB       | y1       y2       …  yj       …  yn       | xB/yj
xB1 =    | y11      y12      …  y1j      …  y1n      | r1
xB2 =    | y21      y22      …  y2j      …  y2n      | r2
…        | …        …        …  …        …  …        | …
xBk =    | yk1      yk2      …  ykj      …  ykn      | rk
…        | …        …        …  …        …  …        | …
xBm =    | ym1      ym2      …  ymj      …  ymn      | rm
z(xB) =  | z1 − c1  z2 − c2  …  zj − cj  …  zn − cn  |
Procedure
1. Identify the entering variable xj:
   zj − cj = min over j { zj − cj : zj − cj < 0 }
2. Identify the leaving variable xBk by the minimum-ratio rule:
   rk = min over i { ri = xBi/yij : yij > 0 }
3. Replace xBk with xj .
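The two rules above can be sketched as a small tableau simplex. This is a sketch for max cᵀx, Ax ≤ b, x ≥ 0 with b ≥ 0 (so the slacks give the identity basis); the test LP is an illustrative example, and no anti-cycling safeguards are included:

```python
import numpy as np

def simplex(c, A, b):
    """Minimal tableau simplex for: max c @ x, A @ x <= b, x >= 0, b >= 0."""
    m, n = A.shape
    T = np.hstack([A.astype(float), np.eye(m), b.reshape(-1, 1).astype(float)])
    cost = np.concatenate([c, np.zeros(m)])
    basis = list(range(n, n + m))       # slack variables form the identity basis
    while True:
        zc = cost[basis] @ T[:, :-1] - cost      # z_j - c_j row
        j = int(np.argmin(zc))                   # entering: most negative z_j - c_j
        if zc[j] >= -1e-9:
            break                                # all z_j - c_j >= 0: optimal
        col = T[:, j]
        if np.all(col <= 1e-9):
            raise ValueError("unbounded LPP")
        ratios = np.full(m, np.inf)              # minimum-ratio rule picks the
        pos = col > 1e-9                         # leaving variable x_Bk
        ratios[pos] = T[pos, -1] / col[pos]
        k = int(np.argmin(ratios))
        T[k] /= T[k, j]                          # pivot on (k, j)
        for i in range(m):
            if i != k:
                T[i] -= T[i, j] * T[k]
        basis[k] = j                             # replace x_Bk with x_j
    x = np.zeros(n + m)
    x[basis] = T[:, -1]
    return x[:n], float(cost[:n] @ x[:n])

x, z = simplex(np.array([3.0, 5.0]),
               np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
               np.array([4.0, 12.0, 18.0]))
print(x, z)  # [2. 6.] 36.0
```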
Phase 1
max z = −xa1 − xa2 − ⋯ − xak , subject to
Ax = b
x≥0
• Proceed as per the Simplex Method and try to remove the artificial
variables (xa ) from the basis.
Phase 2
• Consider the original objective function.
• Drop the artificial variables and the columns corresponding to them.
• Recompute the zj − cj row for the new objective.
• Proceed as per the Simplex Method to obtain the optimal solution.
4) Big M Method
Let us assume that we added k artificial variables.
A(R) = [ 1  −cᵀ ; 0  A ],  b(R) = [ 0 ; b ],  x(R) = [ z ; x ],  aj(R) = [ −cj ; aj ]
B(R) = [ 1  −cBᵀ ; 0  B ]  ⇒  B(R)⁻¹ = [ 1  cBᵀB⁻¹ ; 0  B⁻¹ ]
2. Compute yj = B(R)⁻¹ aj(R).
5. Compute the new value of B(R)⁻¹:
E = [ e1 e2 … ek−1 ξ ek+1 … em ]
ei = column vector having the ith element as 1, rest 0s
ξ = [ −y1j/ykj  −y2j/ykj  …  −y(k−1)j/ykj  1/ykj  −y(k+1)j/ykj  …  −ymj/ykj ]ᵀ
B(R)⁻¹ → E B(R)⁻¹
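This product-form update can be sketched in NumPy. The basis, entering column, and leaving row below are made-up illustrative values; the check is that E·B⁻¹ equals the inverse of the basis obtained by replacing column k with aj:

```python
import numpy as np

# Illustrative data: current basis inverse, entering column a_j, leaving row k.
B_inv = np.eye(3)
a_j = np.array([2.0, 4.0, 1.0])
y_j = B_inv @ a_j              # entering column expressed in the current basis
k = 1                          # index of the leaving basic variable (0-based)

# E is the identity with column k replaced by xi (as defined above).
xi = -y_j / y_j[k]
xi[k] = 1.0 / y_j[k]
E = np.eye(3)
E[:, k] = xi

B_inv_new = E @ B_inv          # product-form update of the inverse
```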
Convex Function
Let S ⊆ ℝn be a convex set and f: S → ℝ. Then f is called a convex
function if for all x1, x2 ∈ S and for all 0 ≤ λ ≤ 1, we have
f(λx1 + (1 − λ)x2) ≤ λf(x1) + (1 − λ)f(x2)
OR
A function f is convex if for any two points P and Q on the curve, the
line segment joining P and Q is always on or above the curve between P
and Q but never below the curve.
Concave Function
Let S ⊆ ℝn be a convex set and f: S → ℝ. Then f is called a concave
function if for all x1, x2 ∈ S and for all 0 ≤ λ ≤ 1, we have
f(λx1 + (1 − λ)x2) ≥ λf(x1) + (1 − λ)f(x2)
OR
A function f is concave if for any two points P and Q on the curve, the
line segment joining P and Q is always on or below the curve between P
and Q but never above the curve.
H(x) = [ ∂²f/∂xi∂xj ] (n×n)
∇f(x*) + ∑(i=1 to m) λi* ∇gi(x*) = 0
λi* gi(x*) = 0 (i = 1, 2, …, m)
gi(x*) ≤ 0 (i = 1, 2, …, m)
λi* ≥ 0 (i = 1, 2, …, m)
lim(k→∞) ‖x^(k+1) − x̄‖ / ‖x^k − x̄‖^p = a
Unimodal Function
The function f: [a, b] → ℝ is said to be a unimodal (min) function if it
has a single relative minimum, i.e. ∃ α with a ≤ α ≤ b such that
• f is strictly decreasing on [a, α).
• f is strictly increasing on [α, b].
On solving we obtain:
xp,k = xU,k − 0.618Ik
xq,k = xL,k + 0.618Ik
On solving we obtain:
xp,k = xU,k − (Fn−k/Fn−k+1) Ik
xq,k = xL,k + (Fn−k/Fn−k+1) Ik
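The interval-reduction scheme above can be sketched as follows (Golden Section version; the unimodal test function is an arbitrary illustrative choice):

```python
import math

def golden_section(f, xL, xU, tol=1e-6):
    """Golden Section search for the minimum of a unimodal f on [xL, xU].
    Each iteration reuses one interior point, so only one new function
    evaluation is needed per step."""
    g = (math.sqrt(5) - 1) / 2      # 0.618...
    xp = xU - g * (xU - xL)
    xq = xL + g * (xU - xL)
    fp, fq = f(xp), f(xq)
    while xU - xL > tol:
        if fp < fq:                 # minimum lies in [xL, xq]
            xU, xq, fq = xq, xp, fp
            xp = xU - g * (xU - xL)
            fp = f(xp)
        else:                       # minimum lies in [xp, xU]
            xL, xp, fp = xp, xq, fq
            xq = xL + g * (xU - xL)
            fq = f(xq)
    return (xL + xU) / 2

print(golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ≈ 2.0
```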
RF = In/I1 = 1/Fn ≈ √5/c^(n+1) (for large n)
RGS = In/I1 = 1/c^(n−1)
⇒ RGS/RF = c²/√5 ≈ 1.17
Thus, for the same number of function evaluations, the final search
interval for the Fibonacci Search Method will be 17% smaller than the
one obtained by the Golden Section Rule.
x^k = current solution
ᾱk = step size
d^k = direction of movement
⟨d^(k+1), d^k⟩ = 0
2) Newton’s Method
ᾱk d^k = −(H(x^k))⁻¹ ∇f(x^k)
Mk = (H(x^k) + εk I)⁻¹
x̄ = ∑(k=0 to n−1) ( (d^k)ᵀb / (d^k)ᵀQd^k ) d^k
Formulas
g^k = ∇f(x^k)
d^0 = −g^0,  d^(k+1) = −g^(k+1) + βk d^k
ᾱk = −(g^k)ᵀd^k / (d^k)ᵀQd^k,  βk = (g^(k+1))ᵀQd^k / (d^k)ᵀQd^k
A^0 = p^0(p^0)ᵀ / ((p^0)ᵀq^0) = [ 1/6  0 ; 0  0 ]
B^0 = S^0 q^0 (q^0)ᵀ S^0 / ((q^0)ᵀ S^0 q^0) = [ 9/13  −6/13 ; −6/13  4/13 ]
S^1 = S^0 + A^0 − B^0 = [ 37/78  6/13 ; 6/13  9/13 ]
d^1 = −S^1 g^1 = [ −16/13 ; −24/13 ]
h(α1) = f(x^1 + α1 d^1) = (384/169) α1² − (64/13) α1 − 4/3 ⇒ ᾱ1 = 13/12
x^2 = x^1 + ᾱ1 d^1 = [ −2 ; −2 ]
Methods for Solving Convex Quadratic Programming Problem
max c T x + x T Dx , subject to
Ax ≤ b
x≥0
1) Wolfe’s Method
min −(c T x + x T Dx), subject to
Ax − b ≤ 0
−Ix ≤ 0
xB         | x1   x2   λ1   μ1   μ2   xa1  xa2  s1
xa1 = 1    | 2    −2   2    −1   0    1    0    0
xa2 = 1    | −2   4    1    0    −1   0    1    0
s1 = 1     | 2    1    0    0    0    0    0    1
z(xB) = −2 | 0    −2   −3   1    1    0    0    0
xB           | x1    x2   λ1    μ1   μ2    xa1  xa2   s1
xa1 = 3/2    | 1     0    5/2   −1   −1/2  1    1/2   0
x2 = 1/4     | −1/2  1    1/4   0    −1/4  0    1/4   0
s1 = 3/4     | 5/2   0    −1/4  0    1/4   0    −1/4  1
z(xB) = −3/2 | −1    0    −5/2  1    1/2   0    1/2   0
xB           | x1   x2   λ1     μ1   μ2    xa1  xa2    s1
xa1 = 6/5    | 0    0    13/5   −1   −3/5  1    3/5    −2/5
x2 = 2/5     | 0    1    1/5    0    −1/5  0    1/5    1/5
x1 = 3/10    | 1    0    −1/10  0    1/10  0    −1/10  2/5
z(xB) = −6/5 | 0    0    −13/5  1    3/5   0    2/5    2/5
xB          | x1   x2   λ1   μ1     μ2     xa1    xa2    s1
λ1 = 6/13   | 0    0    1    −5/13  −3/13  5/13   3/13   −2/13
x2 = 4/13   | 0    1    0    1/13   −2/13  −1/13  2/13   3/13
x1 = 9/26   | 1    0    0    4/65   1/13   1/26   −1/13  5/13
z(xB) = 0   | 0    0    0    0      0      1      1      0
Using ∇L = 0, we obtain:
∂L/∂xj = 0 (j = 1, 2, …, n)
∂L/∂λi = 0 (i = 1, 2, …, m)
∇f(x*) + ∑(i=1 to m) λi* ∇gi(x*) = 0
Let Z(x*) = { z ∈ ℝn : zᵀ∇g(x*) = 0, z ≠ 0 } and let HL(x*, λ*) be the
Hessian of the Lagrangian.
L(x, y, λ) = x³/3 − 3y²/2 + 2x + λ(x − y)
x² + 2 + λ = 0
−3y − λ = 0
x − y = 0
HL(x, y, λ) = [ ∂²L/∂x²  ∂²L/∂x∂y ; ∂²L/∂x∂y  ∂²L/∂y² ] = [ 2x  0 ; 0  −3 ]
⇒ HL(2, 2, −6) = [ 4  0 ; 0  −3 ],  HL(1, 1, −3) = [ 2  0 ; 0  −3 ]
z T ∇g(x ∗ ) = 0 ⇒ z1 − z2 = 0
⇒ z T HL (2,2, −6)z = z12 > 0 ⇒ (2,2) is a strict local min point.
⇒ z T HL (1,1, −3)z = −z12 < 0 ⇒ (1,1) is a strict local max point.
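The example can be checked numerically; note that ∂L/∂x = x² + 2 + λ for the Lagrangian above, which indeed vanishes at both candidate points:

```python
# L(x, y, λ) = x^3/3 - 3y^2/2 + 2x + λ(x - y)
def stationarity(x, y, lam):
    """Return (dL/dx, dL/dy, dL/dλ) at the given point."""
    return (x**2 + 2 + lam, -3*y - lam, x - y)

# Both stationary points from the example satisfy the conditions exactly.
assert stationarity(2, 2, -6) == (0, 0, 0)
assert stationarity(1, 1, -3) == (0, 0, 0)

# On Z(x*) = {z : z1 = z2}, z^T H_L z = (2x - 3) z1^2:
#   at (2, 2, -6): (4 - 3) z1^2 > 0  -> strict local min
#   at (1, 1, -3): (2 - 3) z1^2 < 0  -> strict local max
```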
f(x) ≈ fL(x) = f(x^k) + (x − x^k)ᵀ∇f(x^k)
⇒ fL(x) = f(x^k) + xᵀ∇f(x^k) − (x^k)ᵀ∇f(x^k)
Let x̅̅̅k be an optimal solution of this problem. The following cases arise:
• wk (x̅̅̅k ) = wk (x k ) ⇒ x k is a KKT point.
• wk(x̄^k) < wk(x^k) ⇒ x^(k+1) = x^k + ᾱk d^k, d^k = x̄^k − x^k, where
ᾱk is chosen such that h(ᾱk) = min over α ∈ (0, 1] of h(α), h(α) = f(x^k + α d^k).
Let x^0 = [ 1/2 ; 1/2 ].
Solving graphically, we obtain x̄^0 = [ 0 ; 1 ].
d^0 = x̄^0 − x^0 = [ −1/2 ; 1/2 ]
h(α0) = f(x^0 + α0 d^0) ⇒ ᾱ0 = 1
x^1 = x^0 + ᾱ0 d^0 = [ 0 ; 1 ]
min w1 (x) = x T ∇f(x1 ) = −2x1 − 2x2 , subject to
x1 + 2x2 = 2
x1 , x2 ≥ 0
Solving graphically, we obtain x̄^1 = [ 2 ; 0 ].
d^1 = x̄^1 − x^1 = [ 2 ; −1 ]
h(α1) = f(x^1 + α1 d^1) ⇒ ᾱ1 = 1/6
x^2 = x^1 + ᾱ1 d^1 = [ 1/3 ; 5/6 ]
Solving graphically, we obtain x̄^2 = [ 0 ; 1 ].
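The scheme can be sketched on a small illustrative problem (not the one above, whose objective function is not fully reproduced in these notes): min (x1−1)² + (x2−1)² over x1 + 2x2 ≤ 2, x ≥ 0. Since the region is a polytope, the linearized subproblem min xᵀ∇f(x^k) is solved by checking its vertices:

```python
import numpy as np

vertices = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])  # feasible corners

def grad(x):
    return 2.0 * (x - np.array([1.0, 1.0]))

x = np.zeros(2)
for _ in range(200):
    g = grad(x)
    xbar = vertices[np.argmin(vertices @ g)]     # solve min x^T grad f(x^k)
    d = xbar - x
    if g @ d > -1e-10:
        break                                    # x^k is (near) a KKT point
    # exact line search over alpha in (0, 1] for this quadratic f
    alpha = min(1.0, -(g @ d) / (2.0 * (d @ d)))
    x = x + alpha * d
print(x)  # slowly approaches the constrained optimum (0.8, 0.6)
```

The slow tail of the iteration is the method's known O(1/k) zig-zagging between vertices.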
dq/dx = 0 ⇒ 2x − 2α max(1 − x, 0) = 0
Case 1: x ≥ 1
⇒x=0
Not possible.
Case 2: x ≤ 1
⇒ x(α) = α/(1 + α) < 1 (α > 0)
which is consistent with x ≤ 1. Thus, the optimal solution is
x ∗ = lim x(α) = 1
α→∞
Numerical Implementation
• Choose a suitable penalty function; P(x) = ∑(i=1 to m) max(gi(x), 0)².
• Choose an increasing sequence of positive real numbers which tend
to infinity; α1 = 1, α2 = 10, α3 = 100, and so on.
• Choose an arbitrary x^0 (= x̄^0) ∈ ℝn.
• Solve the following UMPs
  min q(x, αk) = f(x) + αk P(x)
  with x̄^(k−1) as the starting point, to obtain x̄^k.
• Continue till αk P(x̄^k) < ε.
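A sketch of this loop on the example above (min x² subject to x ≥ 1, so g(x) = 1 − x and q(x, α) = x² + α·max(1 − x, 0)²); the inner unconstrained minimization uses a crude ternary search, which suffices here because q is convex in x:

```python
def q(x, a):
    return x * x + a * max(1.0 - x, 0.0) ** 2

def minimize_1d(f, lo, hi, iters=100):
    """Ternary search: a crude stand-in for an unconstrained minimizer."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

x = 0.0
for a in [1.0, 10.0, 100.0, 1000.0]:       # increasing penalty parameters
    x = minimize_1d(lambda t: q(t, a), -2.0, 2.0)
    # unconstrained minimizer is x(a) = a / (1 + a), tending to 1
print(x)  # close to the constrained optimum x* = 1
```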
4) The Barrier Function Method
min f(x), subject to
g i (x) ≤ 0 (i = 1,2, … , m)
g1 (x) = x − 1 ≤ 0
g 2 (x) = −1 − x ≤ 0
B(x) = −(1/g1(x) + 1/g2(x)) = −(1/(x − 1) − 1/(x + 1)) = −1/(x − 1) + 1/(x + 1)
c(x) = f(x) + rB(x) = x² − r/(x − 1) + r/(x + 1)
dc/dx = 0 ⇒ x(2r/((x − 1)²(x + 1)²) + 1) = 0
⇒ x(r) = 0
⇒ x* = lim(r→0) x(r) = 0
Numerical Implementation
• Choose a suitable barrier function; B(x) = −∑(i=1 to m) 1/gi(x).
• Choose a decreasing sequence of positive real numbers which tends to 0;
  r1 = 1, r2 = 0.1, r3 = 0.01, and so on.
• Choose an arbitrary x^0 (= x̄^0) ∈ int S.
• Solve the following UMPs
  min c(x, rk) = f(x) + rk B(x)
  with x̄^(k−1) as the starting point, to obtain x̄^k.
• Continue till rk B(x̄^k) < ε.
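The same loop for the barrier example above (min x² on −1 ≤ x ≤ 1 with B(x) = −1/(x − 1) + 1/(x + 1)); the warm start x̄^(k−1) is omitted for brevity, and the inner minimization is again a crude ternary search on a slightly shrunken interval so the iterates stay strictly interior:

```python
def c(x, r):
    return x * x - r / (x - 1.0) + r / (x + 1.0)

def minimize_1d(f, lo, hi, iters=200):
    """Ternary search on a unimodal scalar function (for illustration)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

x = 0.0
for r in [1.0, 0.1, 0.01, 0.001]:          # decreasing barrier parameters
    x = minimize_1d(lambda t: c(t, r), -0.999, 0.999)
print(x)  # x(r) -> x* = 0 as r -> 0
```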
Goal Programming
• It consists of formulating an optimization problem so that the objective
criteria come as close as possible to the specified aspiration levels, in
the order of priorities set by the decision maker.
• It aims at satisfaction of the goals rather than exact achievement of
the goals.
• The constraints are also treated as goals.
Aspiration Level: The numerical value specified by the decision maker
that reflects the desired or satisfactory level of the objective function
under consideration.
He wishes to keep the first, second, and third priority objective values
below 8, −2, 1, respectively. So, the goals in order of their importance
are given by
4x1 + 5x2 ≤ 20
3x1 + 2x2 ≤ 12
4x1 − 5x2 ≤ 8
x1 ≥ 2
2x1 − x2 ≤ 1
The undesirable deviation variables are d1+, d2+, d3+, d4−, d5+, respectively.
Now we try to minimize (d1+ + d2+, d3+, d4−, d5+) in this order only.
fi(x) + di− − di+ = vi (i = 1, 2, …, p)
gj(x) + dj− − dj+ = 0 (j = 1, 2, …, m)
di−, di+, dj−, dj+ ≥ 0 (i = 1, 2, …, p) (j = 1, 2, …, m)
If all the functions fi and g j are linear functions of the decision variable
x then the GPP is called Linear Goal Programming Problem.
Methods for Solving Linear Goal Programming Problem
1) Graphical Method
• Used if the decision variable x belongs to ℝ2 .
Example: lexi-min(d1+ + d2+, d3+, d4−, d5+), subject to
G1: 4x1 + 5x2 + d1− − d1+ = 20
G2: 3x1 + 2x2 + d2− − d2+ = 12
G3: 4x1 − 5x2 + d3− − d3+ = 8
G4: x1 + d4− − d4+ = 2
G5: 2x1 − x2 + d5− − d5+ = 1
xj, di−, di+ ≥ 0 (j = 1, 2) (i = 1, 2, …, 5)
Since the final priority goal G5 lies outside the current feasible region,
we move the objective line G5 parallel to itself until it meets the
feasible region.
Column Drop Rule: Any nonbasic variable that has a negative
opportunity cost zj − cj in the optimal table of an LPP can be assigned
zero value in the subsequent LPPs, and therefore the column
corresponding to this variable can be dropped from the subsequent
LPPs.
Implicitly, the rule states that if a nonbasic variable with negative
opportunity cost zj − cj is introduced into the basis at a later stage of
the algorithm, it will degrade the solution in the lexi-min order.
min d1+ + d2+, subject to
4x1 + 5x2 + d1− − d1+ = 20
3x1 + 2x2 + d2− − d2+ = 12
xi, di−, di+ ≥ 0 (i = 1, 2)
xB        | x1   x2   d1+  d2+
d1− = 20  | 4    5    −1   0
d2− = 12  | 3    2    0    −1
z(xB) = 0 | 0    0    −1   −1
Using the column drop rule, the columns corresponding to d1+ and d2+
are dropped from the subsequent iterations.
min d3+, subject to
4x1 + 5x2 + d1− − d1+ = 20
3x1 + 2x2 + d2− − d2+ = 12
4x1 − 5x2 + d3− − d3+ = 8
xj, di−, di+ ≥ 0 (j = 1, 2) (i = 1, 2, 3)
xB        | x1   x2   d3+
d1− = 20  | 4    5    0
d2− = 12  | 3    2    0
d3− = 8   | 4    −5   −1
z(xB) = 0 | 0    0    −1
min d4−, subject to
4x1 + 5x2 + d1− − d1+ = 20
3x1 + 2x2 + d2− − d2+ = 12
4x1 − 5x2 + d3− − d3+ = 8
x1 + d4− − d4+ = 2
xj, di−, di+ ≥ 0 (j = 1, 2) (i = 1, 2, 3, 4)
xB        | x2   d4−  d4+
d1− = 12  | 5    −4   4
d2− = 6   | 2    −3   3
d3− = 0   | −5   −4   4
x1 = 2    | 0    1    −1
z(xB) = 0 | 0    −1   0
min d5+, subject to
4x1 + 5x2 + d1− − d1+ = 20
3x1 + 2x2 + d2− − d2+ = 12
4x1 − 5x2 + d3− − d3+ = 8
x1 + d4− − d4+ = 2
2x1 − x2 + d5− − d5+ = 1
xj, di−, di+ ≥ 0 (j = 1, 2) (i = 1, 2, …, 5)
xB          | d1−   d5−  d4+
x2 = 12/5   | 1/5   0    4/5
d2− = 6/5   | −2/7  0    7/5
d3− = 12    | 1     0    8
d5+ = 3/5   | −1/5  −1   −14/5
x1 = 2      | 0     0    −1
z(xB) = 3/5 | −1/5  −1   −14/5
1) Simulated Annealing
• It is a randomized search technique based on the principles of
thermodynamics.
• SA exploits the analogy between the way in which a metal cools and
freezes into a minimum-energy crystalline structure and the search for
a minimum of a general objective function.
p((∆f)k) ≈ exp(−(∆f)k / bT)
Procedure
• The search is initiated at a random feasible point x^(0).
  Set xmin = x^(0), fmin = f(x^(0)).
  We start with an initial temperature T = T0, which is set to a high value.
• The change in a state from the present state to the candidate new
state is accepted if
o (∆f)k < 0.
o (∆f)k > 0, but p((∆f)k ) > r, where r ∈ (0,1) is a randomly
generated number.
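A minimal SA sketch for a one-dimensional multimodal function (the function, cooling schedule, and constants are all illustrative choices):

```python
import math
import random

random.seed(0)

def f(x):
    return x * x + 10.0 * math.sin(x)      # multimodal test function

x = 5.0                                    # initial feasible point x^(0)
x_min, f_min = x, f(x)
T = 10.0                                   # initial (high) temperature
while T > 1e-3:
    for _ in range(50):
        cand = x + random.uniform(-1.0, 1.0)
        df = f(cand) - f(x)                # (Δf)_k
        # accept a better point always, a worse one with probability exp(-Δf/T)
        if df < 0 or math.exp(-df / T) > random.random():
            x = cand
            if f(x) < f_min:
                x_min, f_min = x, f(x)
    T *= 0.9                               # cooling schedule
print(x_min, f_min)
```

Tracking the best-ever point (x_min, f_min) separately from the current state is what lets the chain accept uphill moves without forgetting the best solution seen.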
• The most popular and widely used way of encoding is binary coding.
In this coding, every chromosome is a string of bits (0 or 1).
x = x^L + (decoded value of s) × (x^U − x^L)/(2^l − 1)
pi = Fi / ∑(i=1 to n) Fi
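A sketch of the decoding formula and proportionate (roulette-wheel) selection; the population and fitness values are illustrative:

```python
import random

random.seed(1)

def decode(s, xL, xU):
    """Map a binary string s of length l to [xL, xU] via the formula above."""
    l = len(s)
    return xL + int(s, 2) * (xU - xL) / (2 ** l - 1)

def roulette(population, fitness):
    """Select one chromosome with probability p_i = F_i / sum(F_i)."""
    r = random.uniform(0.0, sum(fitness))
    acc = 0.0
    for chrom, F in zip(population, fitness):
        acc += F
        if acc >= r:
            return chrom
    return population[-1]

print(decode("0000", 0.0, 1.0), decode("1111", 0.0, 1.0))  # 0.0 1.0
```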
Procedure
• Choose a coding to represent problem parameters; a selection
operator; a crossover operator; a mutation operator; population size;
crossover probability; mutation probability; maximum allowable
generation number k max .