
CH 17102
Optimization of Chemical Processes
Instructor: Dr. Anand Mohan Verma
Department of Chemical Engineering
MNNIT Allahabad, India
Unit 2: Unconstrained Multivariable Optimization
The numerical optimization of general nonlinear multivariable objective functions requires efficient and robust
techniques.
Efficiency is important because these problems require an iterative solution procedure, and trial and error
becomes impractical for more than three or four variables.
Robustness (the ability to achieve a solution) is desirable because a general nonlinear function is unpredictable in
its behavior; there may be relative maxima or minima, saddle points, regions of convexity, concavity, and so on.
We discuss the solution of the unconstrained optimization problem: minimize f(x) with respect to x = [x_1  x_2  ...  x_n]^T, with no constraints on x.
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only
1.1 Random Search

The algorithm of a random search method is simply:


▪ Select a starting vector x^0 and evaluate f(x) at x^0.
▪ Randomly select another vector x^1 and evaluate f(x) at x^1.
▪ After one or more stages, the value of f(x^k) is compared with the best previous value
of f(x) from among the previous stages, and the decision is made to continue or
terminate the procedure.
• In effect, both a search direction and step length are chosen simultaneously.
• Variations of this form of random search involve randomly selecting a search
direction and then minimizing (possibly by random steps) in that search direction as a
series of cycles.
• Clearly, the optimal solution can be obtained with a probability of 1 only as k → ∞.
• Even though the method is inefficient insofar as function evaluations are concerned,
it may provide a good starting point for another method.
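A minimal random-search sketch in Python; the test function, bounds, and trial count are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def random_search(f, lower, upper, n_trials=5000, seed=0):
    """Pure random search: sample points uniformly in a box and keep the best one."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x_best = rng.uniform(lower, upper)        # starting vector x^0
    f_best = f(x_best)                        # evaluate f(x) at x^0
    for _ in range(n_trials):
        x_new = rng.uniform(lower, upper)     # randomly select another vector
        f_new = f(x_new)
        if f_new < f_best:                    # compare with the best previous value
            x_best, f_best = x_new, f_new
    return x_best, f_best

# Illustrative use on a simple quadratic with its minimum at (1, 2)
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
print(random_search(f, lower=[-5, -5], upper=[5, 5]))
```

The point found is only approximate; as noted above, the probability of reaching the exact optimum approaches 1 only as the number of trials goes to infinity.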
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only
1.2 Grid Search
• It consists of four steps:
1. Constructing a proper grid within the design space,
2. Estimating the values of the objective function at all these grid points,
3. Locating the grid point with the lowest (for minimization) or highest (for maximization) function value,
4. Moving to the point that improves the objective function the most, and repeating.
Grid size (from the "Grid search designs" figure): an N-dimensional, Z-level factorial grid requires about Z^N − 1 points plus the center point; e.g., for N = 30 variables at 3 levels, 3^30 − 1 ≈ 2 × 10^14 evaluations.
N.B.: Not good for too many decision variables.
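A minimal grid-search sketch in Python over a full factorial grid; the function, bounds, and number of levels are illustrative assumptions:

```python
import itertools
import numpy as np

def grid_search(f, lower, upper, levels=3):
    """Evaluate f on a full factorial grid and return the best grid point."""
    axes = [np.linspace(lo, hi, levels) for lo, hi in zip(lower, upper)]  # step 1: build the grid
    best_x, best_f = None, np.inf
    for point in itertools.product(*axes):                                # step 2: evaluate every grid point
        val = f(np.array(point))
        if val < best_f:                                                  # step 3: keep the lowest value
            best_x, best_f = np.array(point), val
    return best_x, best_f  # step 4 would re-center a finer grid around best_x and repeat

f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
print(grid_search(f, lower=[-4, -4], upper=[4, 4], levels=9))
```

Note that the number of evaluations grows as Z^N, which is exactly why the method is not good for many decision variables.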
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only
1.3 Univariate Search

Advantage: simple, easily performed.


Disadvantage: does not converge quickly because of the
oscillation tendency.
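Univariate search minimizes along one coordinate direction at a time while holding the other variables fixed. A minimal cyclic sketch in Python, using the quadratic that appears later in this unit as the test function (the cycle count is an assumption):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def univariate_search(f, x0, n_cycles=20):
    """Cyclic one-variable-at-a-time search along the coordinate directions."""
    x = np.asarray(x0, float)
    for _ in range(n_cycles):
        for i in range(len(x)):                    # search each coordinate in turn
            def f_1d(xi, i=i):
                x_trial = x.copy()
                x_trial[i] = xi                    # vary only variable i
                return f(x_trial)
            x[i] = minimize_scalar(f_1d).x         # exact 1-D minimization in x_i
    return x, f(x)

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
print(univariate_search(f, x0=[0.0, 0.0]))         # zig-zags slowly toward the minimum at (-1, 1.5)
```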
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only
1.4 Simplex Search
A simplex is a generalization of the notion of a triangle or tetrahedron to arbitrary dimensions, e.g.
0-D simplex → point ; 1-D simplex → line segment
2-D simplex → triangle ; 3-D simplex → tetrahedron, and so on.
Consider a function of two variables: min f(x)

When to stop?
When the simplex size is smaller than a prescribed tolerance.
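In practice the simplex (Nelder–Mead) search is available in SciPy; a short usage sketch on an assumed two-variable function, with the stopping tolerances expressing the "simplex small enough" criterion:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2   # illustrative function, not from the slides

result = minimize(
    f,
    x0=np.array([5.0, 5.0]),                      # initial point used to build the starting simplex
    method="Nelder-Mead",
    options={"xatol": 1e-6, "fatol": 1e-6},       # stop when the simplex and f-spread are small
)
print(result.x, result.fun)
```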
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only
1.5 Conjugate Search Directions
Experience has shown that conjugate directions are much more effective as search
directions than arbitrarily chosen search directions, such as in univariate search, or
even orthogonal search directions.
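For reference, the standard definition behind this statement (assuming a quadratic objective with constant Hessian H): two directions s^i and s^j are conjugate with respect to H when

(s^i)^T H s^j = 0,  for i ≠ j.

Orthogonality is the special case H = I, which is why conjugacy is the more useful notion for a general quadratic.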
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only: Conjugate Search Directions

H.W.

For the objective function

f(x) = 6x_1^2 + 2x_2^2 − 6x_1x_2 − x_1 − 2x_2,

find s^2 that is conjugate to direction s^1 if s^0 = [1  2]^T.
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only
1.6 Powell's Method
• A method for finding the minimum without calculating derivatives.
• It depends upon the characteristics of conjugate directions defined by
a quadratic function.
• One of the most extensively used pattern search methods.
• Search is made sequentially until the minimum is found.
• Previous base point is stored as Z.

Terms:
x^1 = arbitrary starting point
s^i = search direction
α = optimal step length
Z = vector containing the previous base point
n = number of cycles/search directions
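Powell's method is also available directly in SciPy; a short usage sketch on the quadratic from the earlier H.W. (the starting point is an assumption):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: 6*x[0]**2 + 2*x[1]**2 - 6*x[0]*x[1] - x[0] - 2*x[1]   # quadratic from the H.W. above

result = minimize(f, x0=np.array([1.0, 2.0]), method="Powell")      # derivative-free, conjugate-direction search
print(result.x, result.fun)
```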
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
Gradient of a function: The gradient of a function is the n-component vector
∇f(x) = [∂f/∂x_1  ∂f/∂x_2  ...  ∂f/∂x_n]^T.
The gradient has a very important property:
If we move along the gradient direction from any point in n-
dimensional space, the function value increases at the fastest rate. Hence the
gradient direction is called the direction of steepest ascent. Unfortunately, the
direction of steepest ascent is a local property and not a global one.
• Since the gradient vector represents the direction of steepest ascent, the
negative of the gradient vector denotes the direction of steepest descent.
• Any method that makes use of the gradient vector can be expected to give
the minimum point faster than one that does not make use of the
gradient vector.
• All the descent methods make use of the gradient vector, either directly
or indirectly, in finding the search directions.
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.1 Steepest Descent (also called Cauchy Method)
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.1 Steepest Descent
First, let us consider the perfectly scaled quadratic objective function f(x) = x_1^2 + x_2^2, whose contours are concentric circles as shown in the figure. Suppose we calculate the gradient at the point x = [2  2]^T:
∇f(x) = [2x_1  2x_2]^T = [4  4]^T

The direction of steepest descent is s = −[4  4]^T.


Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.1 Steepest Descent

Observe that s is a vector pointing toward the optimum at (0, 0). In fact, the gradient at any point passes through the origin (the optimum).
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.1 Steepest Descent
For the determination of α
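A standard choice, assuming an exact line search on a quadratic model of f with Hessian H, is

α^k = argmin_α f(x^k + α s^k) = − [∇f(x^k)^T s^k] / [(s^k)^T H s^k],

with s^k = −∇f(x^k) for steepest descent. In practice α^k can also be found by any one-dimensional (line) search method.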
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.1 Steepest Descent
Algorithm:

Note: Steepest descent can terminate at any type of stationary point, that is, at any point where the elements of the gradient of f(x) are zero. Thus you must ascertain if the presumed minimum is indeed a local minimum (i.e., a solution) or a saddle point. If it is a saddle point, it is necessary to employ a nongradient method to move away from the point, after which the minimization may continue as before.

Disadvantage: The basic difficulty with the steepest descent method is that it is too sensitive to the
scaling of f(x), so that convergence is very slow and what amounts to oscillation in the x space can
easily occur.
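A minimal steepest-descent (Cauchy) sketch in Python with a numerical line search; the test function is the quadratic from the assignment below, while the tolerance and iteration cap are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=500):
    """Cauchy's method: step along -grad f with an exact 1-D line search."""
    x = np.asarray(x0, float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:          # stationary point: still check it is a minimum, not a saddle
            break
        s = -g                               # direction of steepest descent
        alpha = minimize_scalar(lambda a: f(x + a * s)).x   # optimal step length
        x = x + alpha * s
    return x, f(x), k

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])
print(steepest_descent(f, grad, x0=[0.0, 0.0]))   # approaches the minimum at (-1, 1.5)
```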
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.1 Steepest Descent

Assignment #1:

Minimize f(x) = x_1 − x_2 + 2x_1^2 + 2x_1x_2 + x_2^2.

Starting from x^1 = [0  0]^T, apply the Cauchy (steepest descent) method and iterate until x reaches [−1  1.5]^T.

Due date: 4th Sept 2024 by 05:00 pm.


Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.2 Conjugate Gradient Methods

• The earliest conjugate gradient method was devised by Fletcher


and Reeves (1964).
• If f(x) is quadratic and is minimized exactly in each search
direction, it has the desirable features of converging in at most n
iterations because its search directions are conjugate.
• It combines current information about the gradient vector with
that of gradient vectors from previous iterations (a memory
feature) to obtain the new search direction.
• You compute the search direction by a linear combination of the
current gradient and the previous search direction.
• The method represents a major improvement over steepest
descent with only a marginal increase in computational effort.
• The main advantage of this method is that it requires only a
small amount of information to be stored at each stage of
calculation and thus can be applied to very large problems.
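The "linear combination of the current gradient and the previous search direction" takes the following Fletcher–Reeves form (a standard statement of the update):

s^(k+1) = −∇f(x^(k+1)) + β^k s^k,  with  β^k = [∇f(x^(k+1))^T ∇f(x^(k+1))] / [∇f(x^k)^T ∇f(x^k)]  and  s^0 = −∇f(x^0).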
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.2 Conjugate Gradient Methods: Algorithm

Note that if the ratio of the inner products of the


gradients from stage k + 1 relative to stage k is
very small, the conjugate gradient method
behaves much like the steepest descent method.
One difficulty is the linear dependence of search
directions, which can be resolved by
periodically restarting the conjugate gradient
method with a steepest descent search (step 1).

H.W.: Minimize the function

f(x) = x_1 − x_2 + 2x_1^2 + 2x_1x_2 + x_2^2,

starting from x^1 = [0  0]^T, using the conjugate gradient method. Go through the S. S. Rao book.
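A minimal Fletcher–Reeves sketch in Python for reference (exact line search via a 1-D minimizer; the restart interval and tolerance are assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fletcher_reeves(f, grad, x0, tol=1e-8, max_iter=100, restart=10):
    """Conjugate gradient (Fletcher-Reeves) with periodic steepest-descent restarts."""
    x = np.asarray(x0, float)
    g = grad(x)
    s = -g                                       # first step is a steepest-descent step
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = minimize_scalar(lambda a: f(x + a * s)).x
        x = x + alpha * s
        g_new = grad(x)
        if (k + 1) % restart == 0:
            s = -g_new                           # periodic restart avoids linearly dependent directions
        else:
            beta = (g_new @ g_new) / (g @ g)     # ratio of inner products of successive gradients
            s = -g_new + beta * s                # new conjugate search direction
        g = g_new
    return x, f(x)

f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])
print(fletcher_reeves(f, grad, x0=[0.0, 0.0]))   # a 2-variable quadratic: converges in at most two iterations
```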
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.3 Newton's Method
(The step length α is usually 1 in Newton's method.)
Compare it with the previous equation.
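For reference, the standard form of the Newton step being compared is

x^(k+1) = x^k − α^k [H(x^k)]^(−1) ∇f(x^k),  with α^k = 1 in the pure Newton step,

which replaces the scalar step along −∇f in steepest descent by a step scaled through the inverse Hessian.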


Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.3 Newton's Method:
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
Movement in the search direction:
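As a reminder, the movement in the search direction for all the gradient-based methods in this unit takes the general form

x^(k+1) = x^k + Δx^k = x^k + α^k s^k,

where s^k is the search direction and α^k the step length.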
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.4 Quasi-Newton Methods:
