
OPTIMIZATION TECHNIQUES

By: Dhruv Bansal
21U02049
Definition:
Optimization is the act of achieving the best possible result under given circumstances.
The primary objective may not be to optimize absolutely, but to compromise effectively and thereby produce the best formulation under a given set of restrictions.
Why is optimization necessary?
[Diagram: reasons for optimization surrounding the word "optimization"]
 Reduce the cost
 Save the time
 Safety & reduce the error
 Innovation & efficiency
 Reproducibility
Historical development
 Isaac Newton (1642-1727): The development of differential calculus methods of optimization.
 Joseph-Louis Lagrange (1736-1813): Calculus of variations, minimization of functionals, method of optimization for constrained problems.
 Augustin-Louis Cauchy (1789-1857): Solution by direct substitution, steepest descent method for unconstrained optimization.
 George Bernard Dantzig (1914-2005): Linear programming and the simplex method (1947).
 Albert William Tucker (1905-1995): Necessary and sufficient conditions for the optimal solution of programming problems, nonlinear programming.
OPTIMIZATION PARAMETERS
Objective function
An objective function expresses the main aim of the model, which is either to be minimized or maximized.
 For example: in a manufacturing process, the aim may be to maximize the profit or minimize the cost.
 The two exceptions are:
• No objective function
• Multiple objective functions
Variables
A set of unknowns or variables controls the value of the objective function.
Variables can be broadly classified as:
• Independent variables
• Dependent variables
Constraints
The restrictions that must be satisfied to produce an acceptable design are collectively called design constraints.
Constraints can be broadly classified as:
• Behavioral or Functional
• Geometric or Side
Statement of an optimization problem
 An optimization problem can be stated as follows:
Find X = {x1, x2, . . . , xn}T which minimizes f(X)
subject to the constraints
gi(X) ≤ 0, i = 1, 2, . . . , m
lj(X) = 0, j = 1, 2, . . . , p
where X is an n-dimensional vector called the design vector, f(X) is called the objective function, and gi(X) and lj(X) are known as inequality and equality constraints, respectively.
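In practice, a problem in this standard form can be handed directly to a numerical solver. Below is a minimal sketch using SciPy's minimize with the SLSQP method; the objective f and the constraints g and l are illustrative choices, not examples from these slides. Note that SciPy expects inequality constraints in the form fun(X) ≥ 0, so gi(X) ≤ 0 is passed as −gi(X) ≥ 0.

```python
# Minimal sketch of the standard problem statement, solved numerically.
# The objective and constraints here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def f(X):
    # Objective function f(X) to be minimized
    return (X[0] - 1)**2 + (X[1] - 2)**2

# Inequality constraint g(X) <= 0, passed to SciPy as -g(X) >= 0
g = {"type": "ineq", "fun": lambda X: -(X[0] + X[1] - 2)}   # x1 + x2 <= 2
# Equality constraint l(X) = 0
l = {"type": "eq", "fun": lambda X: X[0] - X[1]}            # x1 = x2

result = minimize(f, x0=np.zeros(2), constraints=[g, l], method="SLSQP")
print(result.x)   # design vector X* minimizing f subject to g and l: (1, 1)
```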
Classification of optimization
Based on Constraints
◦ Constrained optimization (Lagrangian method)
◦ Unconstrained optimization (Least Squares)
Based on Nature of the design variables
◦ Static optimization
◦ Dynamic optimization
Based on Physical structure
◦ Optimal control
◦ Sub-optimal control
Classical Optimization
 The classical methods of optimization are useful in finding the optimum solution of continuous and differentiable functions.
Classical optimization techniques can handle three types of problems:
i. single-variable functions
ii. multivariable functions with no constraints
iii. multivariable functions with both equality and inequality constraints
Single variable optimization:
 A single-variable optimization problem is one in which the value x = x∗ is to be found in the interval [a, b] such that x∗ minimizes f(x).
f(x) at x = x∗ is said to have a
local minimum if f(x∗) ≤ f(x∗ + h) for all sufficiently small positive and negative values of h
local maximum if f(x∗) ≥ f(x∗ + h) for all sufficiently small positive and negative values of h
global minimum if f(x∗) ≤ f(x) for all x
global maximum if f(x∗) ≥ f(x) for all x
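As a concrete sketch, a bounded single-variable minimization can be carried out numerically with SciPy's minimize_scalar; the function f and the interval [0, 5] below are made-up examples, not from the slides.

```python
# Sketch: find x* in [a, b] that minimizes f(x), with illustrative f and [a, b].
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2)**2 + 1      # example objective, minimum at x = 2

# Bounded search on the interval [a, b] = [0, 5]
res = minimize_scalar(f, bounds=(0, 5), method="bounded")
print(res.x, res.fun)             # x* ~ 2.0, f(x*) ~ 1.0
```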
MULTIVARIABLE OPTIMIZATION WITH NO CONSTRAINTS
 It is the minimum or maximum of an unconstrained function of several variables.
Necessary Condition
If f(X) has an extreme point (max or min) at X = X∗ and if the first partial derivatives of f(X) exist at X∗, then
∂f/∂x1(X∗) = ∂f/∂x2(X∗) = · · · = ∂f/∂xn(X∗) = 0
Sufficient Condition
The Hessian matrix H, built from the second-order partial derivatives of f, is
(i) positive definite when X∗ is a relative minimum point
(ii) negative definite when X∗ is a relative maximum point.
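Both conditions can be verified directly for a simple quadratic. The sketch below uses an illustrative f with a known gradient and a constant Hessian (an assumption, not a slide example); positive eigenvalues of H confirm positive definiteness.

```python
# Sketch of the necessary and sufficient conditions on an illustrative
# quadratic: f(x1, x2) = x1^2 + 2*x2^2 - 2*x1, with minimum at X* = (1, 0).
import numpy as np

grad = lambda X: np.array([2*X[0] - 2, 4*X[1]])   # gradient of f
H = np.array([[2.0, 0.0],
              [0.0, 4.0]])                        # Hessian of f (constant here)

X_star = np.array([1.0, 0.0])
print(np.allclose(grad(X_star), 0))        # necessary condition: grad f(X*) = 0
print(np.all(np.linalg.eigvalsh(H) > 0))   # sufficient: H positive definite
```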
MULTIVARIABLE OPTIMIZATION WITH EQUALITY CONSTRAINTS
Minimize f = f(X)
subject to the constraints
gi(X) = 0, i = 1, 2, . . . , m
where X = {x1, x2, . . . , xn}T
 Here m ≤ n; otherwise (if m > n), the problem becomes overdefined and, in general, there will be no solution.
 There are several methods available for the solution of this problem:
1. Direct substitution
2. Constrained variation
3. Lagrange multipliers
Solution by Direct Substitution
 For a problem with n variables and m equality constraints, it is theoretically possible to solve the m equality constraints simultaneously and express any set of m variables in terms of the remaining n − m variables.
 Substituting these expressions into f yields a new objective function in the n − m remaining variables, which can then be optimized without constraints (see the sketch after the drawbacks below).
Drawbacks
 The constraint equations are nonlinear for most practical problems.
 It often becomes impossible to solve them and express any m variables in terms of the remaining n − m variables.
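For a small case where substitution does work, the idea looks like this. The problem below (minimize x² + y² subject to x + y = 1, so n = 2 and m = 1) is an illustrative choice: solving the constraint for y leaves a single unconstrained variable.

```python
# Sketch of direct substitution on an assumed example problem:
# minimize f(x, y) = x^2 + y^2 subject to x + y = 1.
from scipy.optimize import minimize_scalar

# From the constraint, y = 1 - x; the reduced objective in n - m = 1 variable:
f_reduced = lambda x: x**2 + (1 - x)**2

res = minimize_scalar(f_reduced)
x_opt = res.x
print(x_opt, 1 - x_opt)   # x* = y* = 0.5
```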
By the Method of Constrained Variation
The basic idea used in the method of constrained variation is to find a closed-form expression for the first-order differential of f (df) at all points at which the constraints gj(X) = 0, j = 1, 2, . . . , m, are satisfied.
Drawback
The method becomes prohibitive for problems with more than three constraints.
By the Method of Lagrange Multipliers
For instance, consider the optimization problem
maximize f(x1, x2)
subject to g(x1, x2) = c.
We introduce a new variable λ, called a Lagrange multiplier, and define the Lagrange function
L(x1, x2, λ) = f(x1, x2) + λ(g(x1, x2) − c)
By treating L as a function of the three variables x1, x2, and λ, the necessary conditions for its extremum are given by
∂L/∂x1(x1, x2, λ) = ∂f/∂x1(x1, x2) + λ ∂g/∂x1(x1, x2) = 0
∂L/∂x2(x1, x2, λ) = ∂f/∂x2(x1, x2) + λ ∂g/∂x2(x1, x2) = 0
∂L/∂λ(x1, x2, λ) = g(x1, x2) − c = 0
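These stationarity conditions form a system of equations that can be solved symbolically. A minimal sketch with SymPy follows; the problem (maximize x1·x2 subject to x1 + x2 = 4) is a made-up example, not from the slides.

```python
# Sketch: solve the Lagrange stationarity conditions symbolically
# for an assumed problem: maximize x1*x2 subject to x1 + x2 = 4.
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam")
f = x1 * x2                      # objective
g, c = x1 + x2, 4                # constraint g(x1, x2) = c

L = f + lam * (g - c)            # Lagrange function
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(eqs, (x1, x2, lam)))   # [(2, 2, -2)]: maximum at x1 = x2 = 2
```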
MULTIVARIABLE OPTIMIZATION WITH INEQUALITY CONSTRAINTS
 The inequality constraints can be transformed into equality constraints by adding nonnegative slack variables yj², as
gj(X) + yj² = 0, j = 1, 2, . . . , m
 where the values of the slack variables are yet unknown. The problem now becomes
Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, . . . , m
where Y = {y1, y2, . . . , ym}T is the vector of slack variables.
 This problem can be solved conveniently by the method of Lagrange multipliers.
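A small sketch of the whole transformation, assuming an illustrative problem (minimize x² subject to x ≥ 1, i.e. g(x) = 1 − x ≤ 0), solved symbolically with SymPy:

```python
# Sketch of the slack-variable transformation on an assumed problem:
# the inequality g(x) = 1 - x <= 0 becomes the equality g(x) + y^2 = 0,
# which is then handled by a Lagrange multiplier.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x**2
G = (1 - x) + y**2               # g(x) + y^2 = 0 with slack variable y

L = f + lam * G
eqs = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(eqs, (x, y, lam)))   # real solution (1, 0, 2): minimum at x* = 1
```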
CONVEX PROGRAMMING PROBLEM
The optimization problem with inequality constraints is called a convex programming problem if the objective function f(X) and the constraint functions gj(X) are convex.
A function of one variable is convex if its slope is nondecreasing, i.e. ∂²f/∂x² ≥ 0. It is strictly convex if its slope is continually increasing, i.e. ∂²f/∂x² > 0 throughout the function.
Concave function
A differentiable function f is concave on an interval if its derivative f′ is decreasing on that interval: a concave function has a decreasing slope.
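A quick symbolic check of these second-derivative tests, on two illustrative functions (the choices below are assumptions for demonstration):

```python
# Sketch: verify convexity/concavity via second derivatives with SymPy.
import sympy as sp

x = sp.symbols("x", real=True)

f_convex  = x**4          # d2f/dx2 = 12*x**2 >= 0 everywhere: convex
f_concave = sp.log(x)     # d2f/dx2 = -1/x**2 < 0 for x > 0: concave

print(sp.diff(f_convex, x, 2))    # 12*x**2
print(sp.diff(f_concave, x, 2))   # -1/x**2
```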
Thank You
