
GENG 603

Advanced Numerical Methods


Lecture 7
Optimization

Slides by M. R. Gustafson II and S. Chapra

© McGraw Hill LLC 1


Optimization



Objectives 1
Understanding why and where optimization occurs
in engineering and scientific problem solving.
Recognizing the difference between one-
dimensional and multi-dimensional optimization.
Distinguishing between global and local optima.
Knowing how to recast a maximization problem so
that it can be solved with a minimizing algorithm.
Being able to define the golden ratio and
understand why it makes one-dimensional
optimization efficient.
Objectives 2
Locating the optimum of a single-variable function
with the golden-section search.
Locating the optimum of a single-variable function
with parabolic interpolation.
Knowing how to apply the fminbnd function to
determine the minimum of a one-dimensional
function.
Being able to develop MATLAB contours and
surface plots to visualize two-dimensional functions.
Knowing how to apply the fminsearch function to
determine the minimum of a multidimensional
function.
Optimization
Optimization is the process of creating
something that is as effective as possible.
From a mathematical perspective,
optimization deals with finding the maxima
and/or minima of a function that depends on
one or more variables.



Multidimensional Optimization
One-dimensional problems involve functions that depend on a single variable, for example f(x).
Multidimensional problems involve functions that depend on two or more variables, for example f(x, y).



Global vs. Local
A global optimum represents the very best solution, while a local optimum is better than its immediate neighbors. Cases that include local optima are called multimodal or non-convex; cases with a single optimum are called convex. An optimum may also lie at an end point of the search interval.
Generally, it is desirable to find the global optimum.
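As the objectives note, a maximization problem can be recast so a minimizing algorithm can solve it: simply minimize -f(x). A small pure-Python sketch (the grid_minimize helper is a hypothetical stand-in for any minimizer, such as the golden-section search below):

```python
def f(x):
    # objective to MAXIMIZE: a peak of height 5 at x = 2
    return -(x - 2.0)**2 + 5.0

def grid_minimize(g, lo, hi, n=100001):
    # hypothetical brute-force minimizer standing in for any
    # minimization algorithm
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(xs, key=g)

# maximize f by minimizing its negative
x_best = grid_minimize(lambda x: -f(x), 0.0, 4.0)
```

The same trick is what lets MATLAB's minimizers (fminbnd, fminsearch) handle maximization problems.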



Golden-Section Search (1-D) 1

Search algorithm for finding a minimum on


an interval xl xu with a single minimum
(convex or unimodal interval)

Uses the golden ratio   1.6180. . . to


determine location of two interior points x1
and x2 ; by using the golden ratio, one of the
interior points can be re-used in the next
iteration.



Golden-Section Search (1-D) 2
d = (φ - 1)(xu - xl)
x1 = xl + d
x2 = xu - d
If f(x1) < f(x2), x2 becomes the new lower limit and x1 becomes the new x2 (as in the figure).
If f(x2) < f(x1), x1 becomes the new upper limit and x2 becomes the new x1.
In either case, only one new interior point is needed, so the function is evaluated only one more time per iteration.


Code for Golden-Section Search
function [x,fx,ea,iter]=goldmin(f,xl,xu,es,maxit,varargin)
% goldmin: minimization golden section search
% [x,fx,ea,iter]=goldmin(f,xl,xu,es,maxit,p1,p2,...):
% uses golden section search to find the minimum of f
% input:
% f = name of function
% xl, xu = lower and upper guesses
% es = desired relative error (default = 0.0001%)
% maxit = maximum allowable iterations (default = 50)
% p1,p2,... = additional parameters used by f
% output:
% x = location of minimum
% fx = minimum function value
% ea = approximate relative error (%)
% iter = number of iterations
if nargin<3,error('at least 3 input arguments required'),end
if nargin<4||isempty(es), es=0.0001;end
if nargin<5||isempty(maxit), maxit=50;end
phi=(1+sqrt(5))/2; d = (phi-1)*(xu - xl);
iter = 0; x1 = xl + d; x2 = xu - d;
f1 = f(x1,varargin{:}); f2 = f(x2,varargin{:});
while(1)
    xint = xu - xl;
    if f1 < f2
        xopt = x1; xl = x2; x2 = x1; f2 = f1;
        x1 = xl + (phi-1)*(xu-xl); f1 = f(x1,varargin{:});
    else
        xopt = x2; xu = x1; x1 = x2; f1 = f2;
        x2 = xu - (phi-1)*(xu-xl); f2 = f(x2,varargin{:});
    end
    iter = iter + 1;
    if xopt~=0, ea = (2-phi)*abs(xint/xopt)*100; end
    if ea <= es || iter >= maxit, break, end
end
x=xopt;fx=f(xopt,varargin{:});
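For readers working outside MATLAB, here is a line-for-line Python translation of the same routine (an illustrative sketch; es is the desired percent relative error, as above):

```python
import math

def goldmin(f, xl, xu, es=1e-4, maxit=50):
    # Golden-section search for a minimum on [xl, xu] (unimodal f);
    # mirrors the MATLAB goldmin above.
    phi = (1 + math.sqrt(5)) / 2
    d = (phi - 1) * (xu - xl)
    x1, x2 = xl + d, xu - d
    f1, f2 = f(x1), f(x2)
    for _ in range(maxit):
        xint = xu - xl
        if f1 < f2:
            xopt = x1
            xl, x2, f2 = x2, x1, f1          # reuse x1 as the new x2
            x1 = xl + (phi - 1) * (xu - xl)
            f1 = f(x1)
        else:
            xopt = x2
            xu, x1, f1 = x1, x2, f2          # reuse x2 as the new x1
            x2 = xu - (phi - 1) * (xu - xl)
            f2 = f(x2)
        if xopt != 0 and (2 - phi) * abs(xint / xopt) * 100 <= es:
            break
    return xopt, f(xopt)

# minimum of t^2 - 2t lies at t = 1
x, fx = goldmin(lambda t: t*t - 2*t, 0.0, 4.0)
```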



Parabolic Interpolation (1-D)
Another algorithm uses parabolic interpolation of three points to estimate the location of the optimum.
The location of the minimum (or maximum) of the parabola through three points (x1, x2, and x3) is:

x4 = x2 - (1/2) * [(x2-x1)^2 (f(x2)-f(x3)) - (x2-x3)^2 (f(x2)-f(x1))] /
                  [(x2-x1) (f(x2)-f(x3)) - (x2-x3) (f(x2)-f(x1))]

The new point x4 and the two points surrounding it (either x1 and x2, or x2 and x3) are used for the next iteration of the algorithm.
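For a quadratic function, a single step of this formula lands exactly on the vertex. A pure-Python sketch of one step (names are illustrative):

```python
def parabolic_step(f, x1, x2, x3):
    # vertex of the parabola through (x1,f1), (x2,f2), (x3,f3),
    # per the interpolation formula above
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# for a quadratic one step is exact: the vertex of (x-2)^2 is x = 2
x4 = parabolic_step(lambda x: (x - 2.0)**2, 0.0, 1.0, 4.0)
```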


fminbnd Function

MATLAB has a built-in function, fminbnd, which combines the golden-section search with parabolic interpolation.
• [xmin, fval] = fminbnd(function, x1, x2)
Options may be passed through a fourth argument using optimset, similar to fzero.



Multidimensional Visualization

Functions of two variables may be visualized using contour or surface/mesh plots.



fminsearch Function 1
MATLAB has a built-in function, fminsearch, that
can be used to determine the minimum of a
multidimensional function.
• [xmin, fval] = fminsearch(function, x0)
• xmin will be a row vector containing the location of the minimum, while x0 is an initial guess; x0 must contain as many entries as the function expects.
The function must be written in terms of a single
variable, where different dimensions are
represented by different indices of that variable.



fminsearch Function 2
To minimize
f(x, y) = 2 + x - y + 2x^2 + 2xy + y^2
rewrite it in terms of a single vector variable:
f(x1, x2) = 2 + x1 - x2 + 2x1^2 + 2x1x2 + x2^2

f = @(x) 2 + x(1) - x(2) + 2*x(1)^2 + 2*x(1)*x(2) + x(2)^2;
[x, fval] = fminsearch(f, [-0.5, 0.5])

Note that x0 has two entries because f expects two values.
MATLAB reports a minimum value of 0.7500 at the location [-1.0000 1.5000].
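The reported result can be verified by hand: setting both partial derivatives of f to zero gives the stationary point. A quick pure-Python check (not part of the slides):

```python
def f(x1, x2):
    # the two-variable objective from the slide
    return 2 + x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2

# stationarity conditions:
#   df/dx1 = 1 + 4*x1 + 2*x2 = 0
#   df/dx2 = -1 + 2*x1 + 2*x2 = 0
# subtracting the second from the first: 2 + 2*x1 = 0 -> x1 = -1, x2 = 1.5
x1, x2 = -1.0, 1.5
fmin = f(x1, x2)   # 0.75, matching MATLAB's 0.7500
```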
Nelder–Mead Method (N-D)



Nelder–Mead Code (2-D)



Nelder–Mead Code (N-D)
For the N-dimensional case (N > 2), the 2-D algorithm is repeated for each sub-plane.
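The code on these slides is shown as images. Below is a minimal pure-Python sketch of the standard simplex form of Nelder–Mead (the algorithm behind fminsearch), which handles any N directly via an (N+1)-vertex simplex rather than sub-planes; names and defaults are illustrative:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, maxit=500):
    # Minimal simplex Nelder-Mead minimizer (illustrative sketch).
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                          # initial simplex: x0 + step*e_i
        v = list(x0)
        v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]
    for _ in range(maxit):
        order = sorted(range(n + 1), key=lambda k: fvals[k])
        simplex = [simplex[k] for k in order]   # best ... worst
        fvals = [fvals[k] for k in order]
        if abs(fvals[-1] - fvals[0]) < tol:
            break
        c = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]  # centroid
        w = simplex[-1]
        xr = [2*c[j] - w[j] for j in range(n)]  # reflect worst through centroid
        fr = f(xr)
        if fr < fvals[0]:
            xe = [3*c[j] - 2*w[j] for j in range(n)]   # try expansion
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr            # accept reflection
        else:
            xc = [0.5*(c[j] + w[j]) for j in range(n)] # inside contraction
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                      # shrink toward best
                b = simplex[0]
                simplex = [b] + [[0.5*(b[j] + v[j]) for j in range(n)]
                                 for v in simplex[1:]]
                fvals = [f(v) for v in simplex]
    return simplex[0], fvals[0]

# same quadratic as the fminsearch example; minimum 0.75 at (-1, 1.5)
fq = lambda x: 2 + x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
xbest, fbest = nelder_mead(fq, [-0.5, 0.5])
```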



Steepest Descent Method (m-D)



Steepest Descent Code
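The slide's code is shown as an image. A minimal pure-Python sketch of the idea, using a fixed step size (an assumption; the slide's version may instead choose the step by line search):

```python
def steepest_descent(grad, x0, lr=0.1, tol=1e-8, maxit=10000):
    # Fixed-step gradient descent: move opposite the gradient
    # until the gradient is (numerically) zero.
    x = list(x0)
    for _ in range(maxit):
        g = grad(x)
        if sum(gi*gi for gi in g) < tol**2:   # stop when gradient ~ 0
            break
        x = [xi - lr*gi for xi, gi in zip(x, g)]
    return x

# gradient of f(x1,x2) = 2 + x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2
grad = lambda x: [1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]]
xmin = steepest_descent(grad, [-0.5, 0.5])
```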



Random Approaches
- More likely than purely local methods to reach the global optimum of multimodal functions.
- Uses randomized moves to escape local optima.
- Often inspired by nature.

Simulated Annealing
Mimics the slow cooling of metal, which settles into a crystal at a minimum energy state.

Genetic Algorithms
Mutation, comparison, survival of the fittest, and reproduction.
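The annealing idea can be sketched in a few lines of Python: downhill moves are always accepted, and uphill moves are accepted with a probability that shrinks as the "temperature" cools, letting the search escape local minima early on. All names and parameters here are illustrative, not a tuned implementation:

```python
import math, random

def sim_anneal(f, x0, lo, hi, t0=10.0, cool=0.999, steps=5000, seed=1):
    # Bare-bones simulated annealing for a 1-D function.
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        xn = min(hi, max(lo, x + rng.uniform(-0.5, 0.5)))   # random neighbor
        fn = f(xn)
        # always accept downhill; accept uphill with Boltzmann probability
        if fn < fx or rng.random() < math.exp(-(fn - fx) / t):
            x, fx = xn, fn
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cool                                           # slow "cooling"
    return best_x, best_f

# multimodal test: x^2 + 10*sin(x) has several local minima
best_x, best_f = sim_anneal(lambda x: x*x + 10*math.sin(x), 0.0, -10.0, 10.0)
```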



Constrained Optimization
- Problems with equality constraints can be converted to unconstrained problems by using Lagrange multipliers.
- Inequality constraints can be converted to equality constraints by introducing slack variables.
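As a worked illustration of the Lagrange-multiplier idea (a hypothetical example, not from the slides): minimize f(x, y) = x^2 + y^2 subject to x + y = 1. Setting the gradient of the Lagrangian to zero and checking in Python:

```python
# Lagrangian: L = x^2 + y^2 - lam*(x + y - 1)
# dL/dx = 2x - lam = 0,  dL/dy = 2y - lam = 0,  dL/dlam = -(x + y - 1) = 0
# the first two give x = y = lam/2; the constraint then gives lam = 1
lam = 1.0
x = y = lam / 2            # x = y = 0.5
fmin = x*x + y*y           # constrained minimum value 0.5
```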



Matlab Most Common
linprog()
fminsearch()
fminunc()
fmincon()

There is also the CVX package from Boyd's webpage (for convex problems).

