Chapter (20+21)
Non Linear Programming
- Newton–Raphson Method
- Graphical solution
- Gradient Search
Unconstrained optimization
An extreme point of a function f(X) defines either a maximum or
a minimum of the function.
Properties of the gradient –
1. The gradient points in the direction of greatest increase of a function.
- For a function of a single variable, this is simply “forward” or “backward” along the x-axis.
- If we have two variables, the 2-component gradient can specify any direction in a plane.
- Likewise, with 3 variables, the gradient can specify any direction in 3D space in which to move to increase the function.
2. The gradient is zero at a local maximum or local minimum.
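As a quick numerical illustration of both properties (the function here is a hypothetical example, not one from the slides), the gradient of f(x1, x2) = −(x1² + x2²) points from any trial point back toward its maximizer at the origin and vanishes there:

```python
import numpy as np

def grad_f(x):
    # Gradient of the hypothetical example f(x1, x2) = -(x1**2 + x2**2)
    return -2 * np.asarray(x, dtype=float)

print(grad_f([1.0, 2.0]))   # [-2. -4.]: points from (1, 2) back toward the maximum at (0, 0)
print(grad_f([0.0, 0.0]))   # zero vector: the gradient vanishes at the local (and global) maximum
```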
Unconstrained optimization
(Multivariable)
Gradient Method
Suppose that f(X) is to be maximized.
Define ∇f(Xk) as the gradient of f at the point Xk.
Let X0 be the initial point from which the procedure starts.
The interpretation of the gradient suggests that an efficient search procedure should keep moving in the direction of the gradient until it reaches an optimal solution X*, where ∇f(X*) = 0.
However, continuously re-evaluating the gradient and adjusting the direction along the way is impractical, so a better approach is to keep moving in a fixed direction from the current trial solution until f(X) stops increasing. This stopping point becomes the next trial solution, and the gradient is then recalculated to determine the new direction in which to move.
With this approach, each iteration involves changing the current trial solution Xk as follows:
Xk+1 = Xk + rk ∇f(Xk)
Here, rk is determined such that the next point, Xk+1, leads to the largest improvement in f. This is equivalent to determining the step size r = rk that maximizes the function
h(r) = f[Xk + r ∇f(Xk)]
Because h(r) is a single-variable function, use a
search procedure for one-variable unconstrained
optimization (or calculus) to find the optimum.
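A short sketch may help make the procedure concrete. The following Python code is a minimal illustration, not part of the original material: the names f and grad_f are placeholders for a user-supplied objective and its gradient, and the one-dimensional maximization of h(r) is delegated to scipy.optimize.minimize_scalar.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gradient_search(f, grad_f, x0, eps=1e-6, max_iter=100):
    """Maximize f by repeating X_{k+1} = X_k + r_k * grad_f(X_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) <= eps:            # stop once the gradient is (nearly) zero
            break
        # h(r) = f(X_k + r * grad_f(X_k)); maximize h by minimizing -h over r
        step = minimize_scalar(lambda r: -f(x + r * g)).x
        x = x + step * g                        # move until f stops increasing along g
    return x
```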
Starting from the initial trial solution (x1, x2) = (1, 1), iteratively apply the gradient search procedure.
Solution:
Initial point X0 = (1, 1)
The gradient of f is,
∇f(X) = (∂f/∂x1, ∂f/∂x2)
Iteration 1
∇f(X0) = (−2, 0), so the update Xk+1 = Xk + rk ∇f(Xk) gives X1 = X0 + r0(−2, 0), with r0 chosen to maximize h(r) = f[X0 + r ∇f(X0)].
Iterations 2–5
Each subsequent iteration repeats the same steps: recompute the gradient at the new trial solution, find rk by the one-dimensional search, and update Xk+1 = Xk + rk ∇f(Xk), stopping once ∇f(Xk) is sufficiently close to zero.
Now, consider the following:
(a) Starting from the initial trial solution (x1, x2) = (1, 1), iteratively apply the gradient search procedure with ε = 0.25 to obtain an approximate solution.
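A short numerical run can make the iterations concrete. The objective function itself is not reproduced on these slides, so the code below assumes f(x1, x2) = 4x1 + 6x2 − 2x1² − 2x1x2 − 2x2², a common textbook example whose gradient at (1, 1) is (−2, 0), matching the value quoted in Iteration 1; treat the function as an assumption of this sketch, not as the slides' own data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed objective (chosen so that grad f(1, 1) = (-2, 0), as quoted above):
#   f(x1, x2) = 4*x1 + 6*x2 - 2*x1**2 - 2*x1*x2 - 2*x2**2
def f(x):
    x1, x2 = x
    return 4*x1 + 6*x2 - 2*x1**2 - 2*x1*x2 - 2*x2**2

def grad_f(x):
    x1, x2 = x
    return np.array([4 - 4*x1 - 2*x2, 6 - 2*x1 - 4*x2])

x = np.array([1.0, 1.0])                  # initial trial solution X0 = (1, 1)
eps = 0.25                                # stopping tolerance from part (a)
for _ in range(50):                       # iteration cap as a safeguard
    g = grad_f(x)
    if np.max(np.abs(g)) <= eps:          # stop when every partial derivative is within eps of zero
        break
    r = minimize_scalar(lambda t: -f(x + t * g)).x   # maximize h(r) = f(X_k + r * grad_f(X_k))
    x = x + r * g
print(x, grad_f(x))
```

With this assumed objective, the iterates move from (1, 1) toward the true maximizer (1/3, 4/3) and stop once every partial derivative is within ε = 0.25 of zero.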
Concave and Convex function
A function that is always “curving downward” is called a concave function.
If the function is always “curving upward,” it is called a convex function.
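As a simple illustration (this example is not from the slides), for a twice-differentiable function of one variable the sign of the second derivative distinguishes the two cases:

\[
f(x) = -x^2,\quad f''(x) = -2 < 0 \ \text{(concave)};\qquad
g(x) = x^2,\quad g''(x) = 2 > 0 \ \text{(convex)}.
\]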
Constrained Optimization
(Inequality constraint) //Chapter 20
Karush–Kuhn–Tucker (KKT) conditions for inequality constraints:
The Karush–Kuhn–Tucker (KKT) conditions are necessary conditions that a solution of a constrained nonlinear programming problem must satisfy in order to be optimal, provided that certain regularity conditions hold.
The Lagrangian function
The Lagrangian function combines the objective function and the constraints into a single expression that is used to find the local maxima and minima of a function subject to equality constraints.
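The notation L(X, S, λ) used below suggests the slack-variable form of the Lagrangian commonly used to derive the KKT conditions for inequality constraints. As a hedged reconstruction consistent with that notation (the slides' own equations are not shown), for minimizing f(X) subject to g_i(X) ≤ 0 this form is

\[
L(X, S, \lambda) = f(X) - \sum_i \lambda_i \left[ g_i(X) + S_i^2 \right],
\]

and setting the partial derivatives of L with respect to X, S_i, and \lambda_i to zero gives the necessary conditions

\[
\nabla f(X) - \sum_i \lambda_i \nabla g_i(X) = 0, \qquad \lambda_i S_i = 0, \qquad g_i(X) + S_i^2 = 0,
\]

which together imply complementary slackness, \lambda_i g_i(X) = 0. In this sign convention \lambda_i \ge 0 is required at a maximum and \lambda_i \le 0 at a minimum, which is why the negative multipliers in the solution below are consistent with a minimum.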
Solving the resulting equations gives x1 = 1, x2 = 2, x3 = 0, λ1 = λ2 = λ5 = 0, λ3 = −2, λ4 = −4. Because both f(X) and the solution space g(X) ≤ 0 are convex, L(X, S, λ) must be convex, and the resulting stationary point yields a global constrained minimum.
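Since the objective and constraints for the example above are not reproduced on these slides, here is a small, invented problem illustrating the same procedure: write out the slack-variable KKT conditions symbolically and solve them, here with SymPy. As in the example above, the multiplier comes out negative, consistent with the minimization sign convention.

```python
import sympy as sp

# Hypothetical problem (for illustration only, not the slides' example):
#   minimize  f(x1, x2) = x1**2 + x2**2
#   subject to g(x1, x2) = 2 - x1 - x2 <= 0
x1, x2, lam, s = sp.symbols('x1 x2 lam s', real=True)
f = x1**2 + x2**2
g = 2 - x1 - x2

# Slack-variable Lagrangian L(X, S, lambda) = f - lam*(g + s**2)
L = f - lam * (g + s**2)

# Stationarity in x1, x2, s plus feasibility g + s**2 = 0 give the KKT system
eqs = [sp.diff(L, v) for v in (x1, x2, s)] + [g + s**2]
solutions = sp.solve(eqs, (x1, x2, lam, s), dict=True)
print(solutions)   # real solution: x1 = x2 = 1, lam = -2, s = 0 (lam <= 0 at a minimum)
```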
Example
Consider the following non linear programming problem: