
Operations Research

Course Code: IPE 3103

Chapter (20+21)
Non Linear Programming

Instructor: Md. Rasel Sarkar


Dept. of IPE, RUET, Rajshahi, Bangladesh
Academic Year: 2020-21
Non linear programming (NLP)
 Nonlinear programming (NLP) is the process of
solving an optimization problem where some of the
constraints or the objective function are nonlinear.
Nonlinear programming problems are classified as follows:
 Unconstrained nonlinear programming
- Newton–Raphson method
- Direct search
- Gradient search
 Constrained nonlinear programming
- Graphical solution
- KKT conditions
Unconstrained optimization
An extreme point of a function f(X) defines either a maximum or
a minimum of the function.

The figure illustrates the maxima and minima of a single-variable function f(x) defined in the range a ≤ x ≤ b. The points x1, x2, x3, x4, x5, and x6 are all extrema.
 x1, x3, and x6 are maxima, and x2 and x4 are minima.
 The value f(x6) = max{f(x1), f(x3), f(x6)} is a global
maximum, and f(x1) and f(x3) are local maxima.
 Similarly, f(x4) is a local minimum and f(x2) is a
global minimum.
 The first derivative (slope) of f equals zero at all
extrema. If a point with zero slope is not an
extremum, then it must be an inflection or a saddle
point.
Necessary and sufficient conditions
Necessary condition. A necessary condition for X0 to be an extreme point of f(X) is that
∇f(X0) = 0
The points obtained by solving ∇f(X0) = 0 are called stationary points.
Sufficiency condition. The sufficiency condition for a single-variable function is as follows. Given that y0 is a stationary point, then
(i) y0 is a maximum if f ″(y0) < 0.
(ii) y0 is a minimum if f ″(y0) > 0.
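As a quick illustration of these conditions, the sketch below (a hypothetical function f(x) = x³ − 6x² + 9x, chosen only for demonstration) finds the stationary points from f′(x) = 0 and classifies each one by the sign of f″.

```python
# Find and classify stationary points of a single-variable function
# using the necessary condition f'(x) = 0 and the second-derivative test.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 6*x**2 + 9*x            # hypothetical example function

f1 = sp.diff(f, x)                 # first derivative (slope)
f2 = sp.diff(f, x, 2)              # second derivative

for x0 in sp.solve(f1, x):         # stationary points: f'(x0) = 0
    curvature = f2.subs(x, x0)
    kind = "maximum" if curvature < 0 else "minimum" if curvature > 0 else "inconclusive"
    print(f"x0 = {x0}: f''(x0) = {curvature} -> local {kind}")
# Output: x0 = 1 is a local maximum (f'' = -6); x0 = 3 is a local minimum (f'' = 6).
```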
Unconstrained optimization
(Single variable)
The Newton–Raphson method
The Newton–Raphson method is an iterative
algorithm for solving simultaneous nonlinear
equations.

For a single-variable function f(x), the method is applied to the stationary-point equation f′(x) = 0, and the relationship between xk and xk+1 reduces to

xk+1 = xk − f′(xk)/f″(xk)
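The recursion is straightforward to implement. The sketch below is a minimal illustration, not taken from the text; it uses a hypothetical function f(x) = x³ − 6x² + 9x with hand-coded derivatives and stops when the slope is essentially zero.

```python
# Newton-Raphson search for a stationary point of a single-variable function:
#   x_{k+1} = x_k - f'(x_k) / f''(x_k)

def newton_raphson(df, d2f, x0, tol=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x = x - df(x) / d2f(x)        # assumes f''(x) stays nonzero along the path
        if abs(df(x)) < tol:          # stop when the slope is (almost) zero
            break
    return x

# Hypothetical example: f(x) = x**3 - 6*x**2 + 9*x
df = lambda x: 3 * x**2 - 12 * x + 9  # f'(x)
d2f = lambda x: 6 * x - 12            # f''(x)

print(newton_raphson(df, d2f, x0=10.0))   # converges to the stationary point x = 3
```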
Example
Consider the function
Starting with x0 = 10, the following table provides the
successive iterations:
Problem
Solve Problem by the Newton–Raphson method.
What is a gradient?
The gradient of a multivariable differentiable function f(x, y, …), denoted ∇f, is the vector of all its partial derivatives.

Properties of the gradient:
1. The gradient points in the direction of greatest increase of the function.
- With a single variable, the direction is simply "forward" or "backward" along the x-axis.
- With two variables, the 2-component gradient can specify any direction in the plane.
- Likewise, with 3 variables, the gradient can specify any direction in 3D space in which to move to increase the function.
2. The gradient is zero at a local maximum or local minimum.
Both properties are illustrated in the numerical sketch below.
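The sketch below uses a hypothetical concave function f(x1, x2) = −(x1 − 1)² − (x2 − 2)², whose maximum is at (1, 2), and approximates its gradient by central finite differences to check both properties.

```python
# Approximate the gradient of a two-variable function by central differences
# and verify the two properties listed above.
import numpy as np

def f(x):
    # hypothetical concave function with its maximum at (1, 2)
    return -(x[0] - 1) ** 2 - (x[1] - 2) ** 2

def grad(fun, x, h=1e-6):
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)   # central difference
    return g

print(grad(f, np.array([0.0, 0.0])))  # ~[2, 4]: points from (0, 0) toward the maximum
print(grad(f, np.array([1.0, 2.0])))  # ~[0, 0]: the gradient vanishes at the maximum
```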
Unconstrained optimization
(Multivariable )
Gradient Method
Suppose that f(X) is to be maximized.
Define, ∇f(Xk) as the gradient of f at point Xk.
Let, X0 be the initial point from which the procedure
starts.
The interpretation of the gradient suggests that an
efficient search procedure should keep moving in
the direction of the gradient until it reaches an
optimal solution x*, where ∇f(x*) = 0.
Because continually recomputing the gradient and changing direction is not practical, a better approach is to keep moving in a fixed direction from the current trial solution until f(X) stops increasing. That stopping point becomes the next trial solution, and the gradient is then recalculated to determine the new direction in which to move.
With this approach, each iteration involves changing the
current trial solution Xk as follows:
Xk + 1 = Xk + rk∇ f(Xk)
Here, rk is determined such that the next point, Xk + 1, leads to the largest improvement in f. This is equivalent to determining the value r = rk that maximizes the function
h(r) = f[Xk + r∇ f(Xk)]
Because h(r) is a single-variable function, use a
search procedure for one-variable unconstrained
optimization (or calculus) to find the optimum.

 The iterations of this gradient search procedure continue until ∇f(X) = 0 within a small tolerance ε, that is, until
|∂f/∂xj| ≤ ε for every j = 1, 2, …, n.
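The procedure just described can be sketched in a few lines of code. The implementation below is only an illustration under stated assumptions: a maximization problem, the gradient supplied as a function, the step restricted to 0 ≤ r ≤ 1 for simplicity, and rk found by an exact line search on h(r) using scipy.optimize.minimize_scalar. As a usage check it is applied to the objective of the worked example that follows.

```python
# Gradient search for maximization: X_{k+1} = X_k + r_k * grad_f(X_k),
# where r_k maximizes h(r) = f(X_k + r * grad_f(X_k)).
import numpy as np
from scipy.optimize import minimize_scalar

def gradient_search(f, grad_f, x0, eps=0.01, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.all(np.abs(g) <= eps):       # stop: every partial derivative within tolerance
            break
        # line search: maximize h(r) = f(x + r*g), i.e. minimize -h(r), over 0 <= r <= 1
        r = minimize_scalar(lambda r: -f(x + r * g), bounds=(0, 1), method='bounded').x
        x = x + r * g
    return x

# Usage check with the example objective below:
f = lambda x: 4*x[0] + 6*x[1] - 2*x[0]**2 - 2*x[0]*x[1] - 2*x[1]**2
grad_f = lambda x: np.array([4 - 4*x[0] - 2*x[1], 6 - 2*x[0] - 4*x[1]])

print(gradient_search(f, grad_f, [1.0, 1.0]))   # close to the exact maximizer (1/3, 4/3)
```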
Example
Consider the following problem:
Maximize f(x1, x2) = 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2²
Starting from the initial trial solution (x1, x2) = (1, 1), interactively apply the gradient search procedure.

Solution:
Initial point X0 = (1, 1)
The gradient of f is
∇f(x1, x2) = (4 − 4x1 − 2x2, 6 − 2x1 − 4x2)
Iteration 1
Now, ∇f(X0) = ( –2, 0)

The next point X1 is obtained by considering

Xk + 1 = Xk + rk∇ f(Xk)

X1 = (1, 1) + r(–2, 0) = (1 – 2r, 1)


Thus, h(r) = f(1 − 2r, 1) = 4(1 − 2r) + 6 − 2(1 − 2r)² − 2(1 − 2r) − 2

= −2(1 − 2r)² + 2(1 − 2r) + 4


Using the classical necessary condition h′(r) = 0, the maximizing value is r1 = ¼.

Thus, the next solution point X1 = ( 1/2, 1)


Iteration 2

Iteration 3
Iteration 4

Iteration 5
Now,

The process can be terminated at this point because


∇f(X5) ≈ 0.
The approximate maximum point is given by X5 =
(0.3438, 1.3125)
Exercise
Consider the following unconstrained optimization
problem:

(a) Starting from the initial trial solution (x1, x2) = (1, 1), interactively apply the gradient search procedure with ε = 0.25 to obtain an approximate solution.
Concave and Convex function
A function that is always "curving downward," such as f(x) = −x², is called a concave function.
A function that is always "curving upward," such as f(x) = x², is called a convex function.
Constrained Optimization
(Inequality constraint) //Chapter 20
Karush–Kuhn–Tucker (KKT) conditions for inequality constraints:
The Karush–Kuhn–Tucker (KKT) conditions are necessary conditions that a solution of a constrained nonlinear programming problem must satisfy in order to be optimal, provided that certain regularity conditions hold.
The Lagrangian function
The Lagrangian function combines the objective function and the constraints into a single function; it is used to find the local maxima and minima of a function subject to equality constraints.

How to construct the Lagrangian function?


Consider the problem
Maximize z = f(X)
subject to
g(X) = 0
Now, multiply the constraint by the factor λ and subtract the result from the objective function to form the Lagrangian function:
L(X, λ) = f(X) − λg(X)
The function L is called the Lagrangian function, and the elements of the vector λ are called the Lagrange multipliers.

The vector λ measures the rate of variation of f with respect to g, that is,
λ = ∂f/∂g
With the inequality constraint g(X) ≤ 0, λ ≥ 0 in the maximization case and λ ≤ 0 in the minimization case. If the constraints are equalities, g(X) = 0, then λ becomes unrestricted in sign.

 The equations
∂L/∂X = ∇f(X) − λ∇g(X) = 0
∂L/∂λ = −g(X) = 0
give the necessary conditions for determining the stationary points of f(X) subject to g(X) = 0. Sufficiency conditions for the Lagrangian method exist, but they are generally computationally difficult.
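As a small illustration of the method on a hypothetical problem (not one from the text): maximize f(x, y) = xy subject to the equality constraint g(x, y) = x + y − 4 = 0. The sketch below forms L = f − λg and solves the stationarity equations symbolically.

```python
# Lagrangian method for an equality-constrained problem:
#   maximize f(x, y) = x*y  subject to  g(x, y) = x + y - 4 = 0
import sympy as sp

x, y, lam = sp.symbols('x y lam')

f = x * y                 # objective
g = x + y - 4             # equality constraint, g = 0
L = f - lam * g           # Lagrangian L(X, lambda) = f(X) - lambda * g(X)

# Necessary conditions: dL/dx = dL/dy = dL/dlam = 0
sols = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)], [x, y, lam], dict=True)
print(sols)               # [{x: 2, y: 2, lam: 2}] -> stationary point (2, 2) with f = 4
```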
Karush–Kuhn–Tucker (KKT) conditions
This section extends the Lagrangean method to problems
with inequality constraints.
Consider the problem,
Maximize z = f(X)
subject to
g(X) ≤ 0

 Let Si² be the nonnegative slack quantity added to the ith constraint gi(X) ≤ 0, and define S = (S1, S2, …, Sm) and S² = (S1², S2², …, Sm²), where m is the total number of inequality constraints.
The Lagrangian function is thus given by
L(X, S, λ) = f(X) − λ(g(X) + S²)

 Taking the partial derivatives of L with respect to X, S, and λ, we obtain
∂L/∂X = ∇f(X) − λ∇g(X) = 0
∂L/∂Si = −2λiSi = 0, i = 1, 2, …, m
∂L/∂λi = −(gi(X) + Si²) = 0, i = 1, 2, …, m

 From the second and third sets of equations, we obtain
λigi(X) = 0, i = 1, 2, …, m
Thus the KKT necessary conditions for the maximization problem are as follows:
(a) λ ≥ 0
(b) ∇f(X) − λ∇g(X) = 0
(c) λigi(X) = 0, i = 1, 2, …, m
(d) g(X) ≤ 0
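These conditions can also be checked numerically at a candidate point. The sketch below is a generic checker for a maximization problem (the problem it is applied to is hypothetical): gradients are approximated by finite differences, and conditions (a)–(d) are tested within a small tolerance.

```python
# Numerically check the KKT conditions (a)-(d) for
#   maximize f(X)  subject to  g_i(X) <= 0, i = 1..m
# at a candidate point x with candidate multipliers lam.
import numpy as np

def num_grad(fun, x, h=1e-6):
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

def kkt_check(f, gs, x, lam, tol=1e-5):
    x, lam = np.asarray(x, float), np.asarray(lam, float)
    stationarity = num_grad(f, x) - sum(l * num_grad(g, x) for l, g in zip(lam, gs))
    return {
        "(a) lam >= 0":        bool(np.all(lam >= -tol)),
        "(b) grad L = 0":      bool(np.all(np.abs(stationarity) <= tol)),
        "(c) lam_i*g_i = 0":   bool(np.all(np.abs([l * g(x) for l, g in zip(lam, gs)]) <= tol)),
        "(d) g_i <= 0":        bool(np.all([g(x) <= tol for g in gs])),
    }

# Hypothetical example: maximize f = -(x1-2)^2 - (x2-2)^2  s.t.  x1 + x2 - 2 <= 0.
# The optimum is x = (1, 1) with multiplier lam = 2.
f = lambda x: -(x[0] - 2) ** 2 - (x[1] - 2) ** 2
gs = [lambda x: x[0] + x[1] - 2]
print(kkt_check(f, gs, x=[1.0, 1.0], lam=[2.0]))   # every condition evaluates to True
```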
Sufficient Condition
 The necessary conditions are also sufficient for optimality if the objective function f of a maximization problem is a concave function and the solution space is a convex set.
 Similarly, if the objective function of a minimization problem is a convex function and the solution space is a convex set, the necessary conditions are also sufficient for optimality.
Example
Consider the following problem:
Minimize f(X) = x1² + x2² + x3²
subject to
g1(X) = 2x1 + x2 − 5 ≤ 0
g2(X) = x1 + x3 − 2 ≤ 0
g3(X) = 1 − x1 ≤ 0
g4(X) = 2 − x2 ≤ 0
g5(X) = −x3 ≤ 0
i. Write the KKT necessary conditions for the problem.
ii. Use the KKT conditions to derive an optimal solution.
Solution:
This is a minimization problem, hence λ ≤ 0.
We have the Lagrangian function for the problem:
L(X, S, λ) = f(X) − Σi λi[gi(X) + Si²], i = 1, 2, …, 5

The KKT conditions are thus given as:
(a) λi ≤ 0, i = 1, 2, …, 5
(b) ∇f(X) − Σi λi∇gi(X) = 0
(c) λigi(X) = 0, i = 1, 2, …, 5
(d) gi(X) ≤ 0, i = 1, 2, …, 5

The solution is x1 = 1, x2 = 2, x3 = 0, λ1 = λ2 = λ5 = 0, λ3 =
- 2, λ4 = - 4. Because both f(X) and the solution space
g(X) ≤ 0 are convex, L(X, S, λ) must be convex, and the
resulting stationary point yields a global constrained
minimum.
Example
Consider the following non linear programming problem:

i. Write the KKT necessary conditions for the problem.
ii. Use the KKT conditions to derive an optimal solution.
