Opt Class CH17102 - Unit 2
Optimization of Chemical
Processes
Instructor: Dr. Anand Mohan Verma
Department of Chemical Engineering
MNNIT Allahabad, India
Unit 2: Unconstrained Multivariable Optimization
The numerical optimization of general nonlinear multivariable objective functions requires efficient and robust
techniques.
Efficiency is important because these problems require an iterative solution procedure, and trial and error
becomes impractical for more than three or four variables.
Robustness (the ability to achieve a solution) is desirable because a general nonlinear function is unpredictable in
its behavior; there may be relative maxima or minima, saddle points, regions of convexity, concavity, and so on.
We discuss the solution of the unconstrained optimization problem:
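In its standard form, this is the problem of finding the n-vector x that minimizes a scalar objective f:

\[
\min_{\mathbf{x}\in\mathbb{R}^{n}} f(\mathbf{x}), \qquad \mathbf{x} = (x_1, x_2, \ldots, x_n)^{\mathsf T}
\]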
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only
1.1 Random Search
When to stop?
When the simplex size is smaller than a prescribed tolerance.
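As a concrete illustration of a method that uses function values only, here is a minimal random-search sketch; the box-sampling scheme, the fixed evaluation budget used as the stopping rule, and all names are illustrative assumptions rather than the exact algorithm from the slides.

```python
import numpy as np

def random_search(f, lower, upper, n_samples=10000, seed=0):
    """Sample points uniformly inside the box [lower, upper] and
    keep the best objective value seen so far."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(n_samples):                 # stop after a fixed evaluation budget
        x = lower + rng.random(lower.size) * (upper - lower)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: minimize a simple quadratic over the box [-5, 5] x [-5, 5].
x_best, f_best = random_search(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2,
                               lower=[-5, -5], upper=[5, 5])
```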
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only
1.5 Conjugate Search Directions
Experience has shown that conjugate directions are much more effective as search
directions than arbitrarily chosen search directions, such as in univariate search, or
even orthogonal search directions.
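For reference, two search directions s_i and s_j are conjugate with respect to a positive-definite matrix Q (for a quadratic objective, Q is its Hessian) when

\[
\mathbf{s}_i^{\mathsf T}\,\mathbf{Q}\,\mathbf{s}_j = 0, \qquad i \neq j .
\]

A set of n mutually conjugate directions minimizes a quadratic function in at most n exact line searches, which is why such directions outperform arbitrary or merely orthogonal ones.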
Unit 2: Unconstrained Multivariable Optimization
1. Methods using function values only: Conjugate Search Directions
H.W.
Terms:
X1 = arbitrary starting point
si = search direction
α = optimal step length
Z = vector containing the previous base point
n = number of cycles / search directions
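A minimal sketch of one way a conjugate-direction (Powell-type) search can use these terms: each cycle performs line searches along the current directions, and the overall displacement of the cycle replaces the oldest direction. The line-search helper (scipy's minimize_scalar), the direction-replacement rule, and all names are illustrative assumptions, not necessarily the exact homework algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conjugate_direction_search(f, x1, n_cycles=50, tol=1e-8):
    """Powell-type conjugate-direction search (derivative-free sketch)."""
    x = np.asarray(x1, dtype=float)
    n = x.size
    directions = list(np.eye(n))            # start from the coordinate directions
    for _ in range(n_cycles):
        z = x.copy()                        # Z: previous base point
        for s in directions:
            # alpha: optimal step length along search direction s (1-D line search)
            alpha = minimize_scalar(lambda a: f(x + a * s)).x
            x = x + alpha * s
        new_dir = x - z                     # overall displacement over the cycle
        if np.linalg.norm(new_dir) < tol:   # converged: base point barely moved
            break
        # Replace the oldest direction with the (normalized) new direction.
        directions.pop(0)
        directions.append(new_dir / np.linalg.norm(new_dir))
        # One extra line search along the new direction before the next cycle.
        alpha = minimize_scalar(lambda a: f(x + a * new_dir)).x
        x = x + alpha * new_dir
    return x

# Example: minimize a non-separable quadratic from an arbitrary starting point.
x_min = conjugate_direction_search(lambda x: x[0]**2 + 2*x[1]**2 + x[0]*x[1],
                                   [3.0, -2.0])
```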
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
Gradient of a function: The gradient of a function is an n-component vector
given by
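\[
\nabla f(\mathbf{x}) =
\begin{bmatrix}
\dfrac{\partial f}{\partial x_1} & \dfrac{\partial f}{\partial x_2} & \cdots & \dfrac{\partial f}{\partial x_n}
\end{bmatrix}^{\mathsf T}
\]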
The gradient has a very important property: if we move along the gradient direction from any point in n-dimensional space, the function value increases at the fastest rate. Hence the gradient direction is called the direction of steepest ascent. Unfortunately, the direction of steepest ascent is a local property and not a global one.
• Since the gradient vector represents the direction of steepest ascent, the
negative of the gradient vector denotes the direction of steepest descent.
• Any method that makes use of the gradient vector can be expected to find the minimum point faster than one that does not.
• All the descent methods make use of the gradient vector, either directly
or indirectly, in finding the search directions.
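Written compactly, at a point x the steepest-ascent direction is the gradient itself and the steepest-descent direction is its negative:

\[
\mathbf{s}_{\text{ascent}} = \nabla f(\mathbf{x}), \qquad \mathbf{s}_{\text{descent}} = -\nabla f(\mathbf{x}).
\]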
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.1 Steepest Descent (also called Cauchy Method)
First, let us consider the perfectly scaled quadratic objective function whose contours are concentric circles, as shown in the figure. Suppose we calculate the gradient at a point on one of these contours: because the contours are circles, the negative gradient at that point aims directly at the minimum, so steepest descent reaches it in a single exact line search.
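In general, the steepest-descent (Cauchy) iteration moves from the current point against the gradient, with α_k the step length chosen at iteration k:

\[
\mathbf{x}^{k+1} = \mathbf{x}^{k} - \alpha_k\,\nabla f(\mathbf{x}^{k}).
\]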
Disadvantage: The basic difficulty with the steepest descent method is that it is too sensitive to the
scaling of f(x), so that convergence is very slow and what amounts to oscillation in the x space can
easily occur.
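A minimal steepest-descent sketch that makes this slow, zig-zagging behavior easy to reproduce; the backtracking (Armijo) line search, the badly scaled test quadratic, and all names are illustrative assumptions rather than the exact procedure from the lecture.

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=500):
    """Steepest-descent (Cauchy) method with a backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:        # stop when the gradient is near zero
            break
        s = -g                             # direction of steepest descent
        alpha = 1.0
        # Backtracking (Armijo) line search for the step length alpha.
        while f(x + alpha * s) > f(x) - 1e-4 * alpha * (g @ g):
            alpha *= 0.5
        x = x + alpha * s
    return x, k

# Poorly scaled quadratic: the iterates zig-zag and converge slowly,
# which is exactly the scaling sensitivity noted above.
f = lambda x: x[0]**2 + 25.0 * x[1]**2
grad = lambda x: np.array([2.0 * x[0], 50.0 * x[1]])
x_star, n_iter = steepest_descent(f, grad, [1.0, 1.0])
```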
Unit 2: Unconstrained Multivariable Optimization
2. Methods that use first derivatives
2.1 Steepest Descent
Assignment #1: