
Unconstrained Optimization II

Asst. Prof. Dr.-Ing. Sudchai Boonto

Department of Control System and Instrumentation Engineering


King Mongkut's University of Technology Thonburi
Thailand
Objective

At the end of this chapter you should be able to:


• Describe, implement, and use line-search-based methods.
• Explain the pros and cons of the various search direction methods.



Unimodality

• Several of the algorithms assume unimodality of the objective function.


• A unimodal function f is one where there is a unique x∗ , such that f is
monotonically decreasing for x ≤ x∗ and monotonically increasing for x ≥ x∗ .
• It follows from this definition that the unique global minimum is at x∗ , and
there are no other local minima.
• Given a unimodal function, we can bracket an interval [a, c] containing the global
minimum if we can find three points a < b < c such that f(a) > f(b) < f(c). A quick
check of this condition is given below.
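For instance, a small Julia check of the bracketing condition (the sample points a = 1, b = 3, c = 5 are illustrative choices, not from the slides) for the function plotted below:

# Three-point bracketing condition f(a) > f(b) < f(c) for f(x) = x^2 + 54/x.
f(x) = x^2 + 54/x
a, b, c = 1.0, 3.0, 5.0
f(a) > f(b) && f(b) < f(c)   # true: f(1) = 55, f(3) = 27, f(5) ≈ 35.8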

[Figure: plot of the unimodal function f(x) = x² + 54/x for x ∈ (0, 11].]



Finding an Initial Bracket

• When optimizing a function, we often start by bracketing an interval
containing a local minimum.
• After that, we successively reduce the size of the bracketed interval to
converge on the local minimum.
• We choose a starting point (point 1) with coordinate x1 and a step size ∆ in the
positive direction. The step size is a hyperparameter of this algorithm;
here we use ∆ = 1 × 10−2.
• We then search in the downhill direction until we find a new point whose function
value exceeds that of the lowest point found so far. With each step, we expand the
step size by some factor γ, another hyperparameter of the algorithm, which is
often set to γ = 2.





Finding the Initial Bracket cont.

Bracketing Algorithm / Three-Point Pattern


1. Set x2 = x1 + ∆
2. Evaluate f1 and f2
3. If f2 ≤ f1 Goto Step 5
4. Else Interchange f1 and f2 and x1 and x2 , and Set ∆ = −∆
5. Set ∆ = γ∆, x3 = x2 + ∆, and Evaluate f3 at x3
6. If f3 > f2 Goto Step 8
7. Else Rename f2 as f1 , f3 as f2 , x2 as x1 , x3 as x2 , Goto Step 5
8. Points 1, 2, and 3 satisfy f1 ≥ f2 < f3 (three-point pattern). A Julia sketch of this procedure is given below.
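A minimal Julia sketch of this three-point bracketing (the function name bracket_minimum and the starting point are illustrative choices, not from the lecture code):

# Bracket a minimum of f starting from x1 with step Δ and expansion factor γ.
# Returns (x1, x2, x3) with f(x1) ≥ f(x2) < f(x3).
function bracket_minimum(f, x1; Δ=1e-2, γ=2.0)
    x2 = x1 + Δ
    f1, f2 = f(x1), f(x2)
    if f2 > f1                  # downhill is the other way: swap and flip the step
        x1, x2 = x2, x1
        f1, f2 = f2, f1
        Δ = -Δ
    end
    while true
        Δ *= γ                  # expand the step
        x3 = x2 + Δ
        f3 = f(x3)
        f3 > f2 && return x1 < x3 ? (x1, x2, x3) : (x3, x2, x1)
        x1, f1, x2, f2 = x2, f2, x3, f3    # shift the pattern and continue
    end
end

# Starting near the left end of (0, 5), e.g. x1 = 0.01 (an assumed starting point),
# this reproduces the bracket (1.28, 5.12) of the example on the next slide.
bracket_minimum(x -> x^2 + 54/x, 0.01)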



Finding the Initial Bracket cont.
Example

Consider the problem:

minimize  x² + 54/x   (with respect to x)

in the interval (0, 5).


[Figure: plot of f(x) = x² + 54/x with the bracketing interval marked.]

Using the algorithm above with ∆ = 1e−2 and γ = 2, we obtain the interval (1.28, 5.12). This interval is guaranteed to contain the minimum point.



Fibonacci Search

Suppose we have a unimodal f bracketed by the interval [a, b], and a limit on the number
of times we can query the objective function. Fibonacci search is guaranteed to
maximally shrink the bracketed interval.

[Figure: Fibonacci search interval reduction over two successive iterations — the new interval is the left portion if y1 < y2 and the right portion if y1 > y2.]

With three queries, we can shrink the interval by a factor of three. We first query f on
the one-third and two-third points on the interval, discard one-third of the interval,
and then sample just next to the better sample.



Fibonacci Search cont.

For n queries, the interval lengths are related to the Fibonacci sequence:
1, 1, 2, 3, 5, 8, . . .. The first two terms are one, and the following terms are always the
sum of the previous two:


Fn = 1 if n ≤ 2, and Fn = Fn−1 + Fn−2 otherwise.

Let Ij denote the length of the bracketing interval at stage j. The interval lengths are related through

In−j = Fj+1 · In,   j = 1, 2, . . . , n − 1

For j = n − 1 and j = n − 2,

I1 = Fn · In   and   I2 = Fn−1 · In

so that In/I1 = 1/Fn and I2 = (Fn−1/Fn) · I1.
Fibonacci Search cont.
Example

Consider the interval [0, 1], and number of trials n = 5.


• Indexing the sequence 1, 1, 2, 3, 5, 8 from F0 = 1 (so that F5 = 8), we have
I2 = (F4/F5) I1 = 5/8 and I3 = (F3/F4) I2 = (3/5)(5/8) I1 = 3/8.
• The four points are [0, 3/8, 5/8, 1]. The new interval will be either [0, 5/8] or [3/8, 1]. If
the left-hand side wins we have [0, 5/8].
• Setting four points we have [0, 2/8, 3/8, 5/8]; this gives [0, 2/8, 3/8], and setting four
points again gives [0, 1/8, 2/8, 3/8].
• We then have [0, 1/8, 1/8, 2/8]. The central points coincide, so we add a small
number ϵ = 1e−2 or less; the points become [0, 1/8, 1/8 + ϵ, 2/8]. The final interval
is either [0, 1/8 + ϵ] or [1/8, 2/8].
• Since n = 5, we have I5/I1 = 1/8 = 1/F5.



Fibonacci Search cont.

The Fibonacci sequence can be determined analytically using Binet’s formula:

Fn = (φ^n − (1 − φ)^n) / √5,

where φ = (1 + √5)/2 ≈ 1.61803 is the golden ratio.
The ratio between successive values in the Fibonacci sequence is:

Fn+1 / Fn = φ · (1 − s^(n+1)) / (1 − s^n),

where s = (1 − √5)/(1 + √5) ≈ −0.382.
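As a quick check (a small sketch, not part of the lecture code), Binet's formula can be compared with the recurrence in Julia:

# Binet's closed form versus the recurrence F(1) = F(2) = 1, F(n) = F(n-1) + F(n-2).
φ = (1 + √5) / 2
binet(n) = round(Int, (φ^n - (1 - φ)^n) / √5)

fib = ones(Int, 10)
for n in 3:10
    fib[n] = fib[n-1] + fib[n-2]
end
all(binet(n) == fib[n] for n in 1:10)   # true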



Fibonacci Search Algorithm
Fibonacci Search

1. Set the interval [a, b] and the number of interval reductions n.
2. If ϵ is given, find the smallest n such that 1/Fn < ϵ.
3. Set φ = 1.61803, s = (1 − √5)/(1 + √5), and ρ = 1/(φ(1 − s^(n+1))/(1 − s^n)).
4. Set d = ρb + (1 − ρ)a.
5. Set yd = f(d).
6. For i In 1 To n − 1
7.   If i == n − 1
8.     c = ϵa + (1 − ϵ)d
9.   Else
10.    c = ρa + (1 − ρ)b
11.  EndIf
12.  Set yc = f(c)
13.  If yc < yd
14.    b, d, yd = d, c, yc
15.  Else
16.    a, b = b, c
17.  EndIf
18.  Set ρ = 1/(φ(1 − s^(n−i+1))/(1 − s^(n−i)))
19. EndFor
20. Return a < b ? (a, b) : (b, a)
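A compact Julia sketch that follows the steps above (a sketch under the same notation; not necessarily identical to the lecture's code):

# Fibonacci search on a unimodal f bracketed by [a, b], using n function evaluations.
function fibonacci_search(f, a, b, n; ϵ=0.01)
    φ = (1 + √5) / 2
    s = (1 - √5) / (1 + √5)
    ρ = 1 / (φ * (1 - s^(n+1)) / (1 - s^n))
    d = ρ*b + (1 - ρ)*a
    yd = f(d)
    for i in 1:n-1
        # On the last iteration probe just next to d (step 8); otherwise use the Fibonacci point.
        c = i == n-1 ? ϵ*a + (1 - ϵ)*d : ρ*a + (1 - ρ)*b
        yc = f(c)
        if yc < yd
            b, d, yd = d, c, yc
        else
            a, b = b, c
        end
        ρ = 1 / (φ * (1 - s^(n-i+1)) / (1 - s^(n-i)))
    end
    return a < b ? (a, b) : (b, a)
end

# Example: shrink the bracket (1.28, 5.12) for f(x) = x^2 + 54/x with 10 evaluations.
fibonacci_search(x -> x^2 + 54/x, 1.28, 5.12, 10)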
Fibonacci Search cont.
Example

In the interval reduction problem, the initial interval is given to be 4.68 units and the
desired final interval is 0.01 units. Find the number of interval reductions using the
Fibonacci method.
Solution: We need to choose the smallest n such that

1/Fn < 0.01/4.68,   or   Fn > 468

The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, so we get
n = 14. The number of interval reductions is n − 1 = 13.
Note: Write a program to check the result by yourself.
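Following that note, a small Julia check (indexing the sequence from F0 = F1 = 1, consistent with the count above; the helper name is illustrative):

# Smallest n with F(n) > 468, counting from F(0) = F(1) = 1.
function reductions_needed(ratio)
    Fprev, Fcur, n = 1, 1, 1
    while Fcur <= ratio
        Fprev, Fcur = Fcur, Fprev + Fcur
        n += 1
    end
    return n
end

reductions_needed(468)   # 14, so n − 1 = 13 interval reductions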



Fibonacci Search cont.
Example
A projectile released from a height h at an angle θ with respect to the horizontal in a
gravitational field g travels a distance D when it hits the ground. D is given by

D = ( V sin θ / g + √( 2h/g + (V sin θ / g)² ) ) · V cos θ

If h = 0.5 m, V = 90 m/s, and g = 9.81 m/s², determine the angle θ in degrees for which
the distance D is a maximum, and calculate the maximum distance D in meters. Use the
range 0° to 80° for θ and compare your results for 7 and 19 Fibonacci interval
reductions. Note: we maximize D by minimizing f(θ) = −D(θ). (See: Bracket.jl)
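A hedged sketch of how this could be set up with the fibonacci_search sketch above (the actual Bracket.jl may differ; the radian conversion and the number of reductions shown are assumptions):

# Maximize D(θ) by minimizing −D(θ) over θ ∈ [0°, 80°] (θ handled in radians).
h, V, g = 0.5, 90.0, 9.81
D(θ) = (V*sin(θ)/g + sqrt(2h/g + (V*sin(θ)/g)^2)) * V*cos(θ)

a, b = fibonacci_search(θ -> -D(θ), 0.0, deg2rad(80), 19)
θopt = (a + b) / 2
rad2deg(θopt), D(θopt)    # θopt should be slightly below 45° because h > 0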



Golden Section Search
Example

If we take the limit of the Fibonacci Search for large n, we see that the ratio between
successive values of the Fibonacci sequence approaches the golden ratio
(https://en.wikipedia.org/wiki/Golden_ratio):

lim (n→∞) Fn / Fn−1 = φ



Golden Section Search Algorithm
Golden Section Search

1. Set the interval [a, b] and the number of interval reductions n.
2. If ϵ is given, find the smallest n such that 1/Fn < ϵ.
3. Set φ = 1.61803 and ρ = φ − 1.
4. Set d = ρb + (1 − ρ)a.
5. Set yd = f(d).
6. For i In 1 To n − 1
7.   c = ρa + (1 − ρ)b
8.   Set yc = f(c)
9.   If yc < yd
10.    b, d, yd = d, c, yc
11.  Else
12.    a, b = b, c
13.  EndIf
14. EndFor
15. Return a < b ? (a, b) : (b, a)
You can test the algorithm on the following functions with 5 interval reductions:

f(x) = −e^(−x²)
f(x) = (sin(x) + sin(x/2))/4
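A minimal Julia sketch of the golden-section steps above, tried on the first test function (the bracket [−2, 2] is an illustrative choice, not from the slides):

# Golden section search on a unimodal f bracketed by [a, b], with n interval reductions.
function golden_section_search(f, a, b, n)
    ρ = (1 + √5)/2 - 1          # ρ = φ − 1 ≈ 0.618
    d = ρ*b + (1 - ρ)*a
    yd = f(d)
    for i in 1:n-1
        c = ρ*a + (1 - ρ)*b
        yc = f(c)
        if yc < yd
            b, d, yd = d, c, yc
        else
            a, b = b, c
        end
    end
    return a < b ? (a, b) : (b, a)
end

golden_section_search(x -> -exp(-x^2), -2.0, 2.0, 5)   # bracket shrinks around x = 0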
Quadratic Fit Search Algorithm
• Quadratic fit search exploits our ability to analytically solve for the minimum of a
quadratic function. Many local minima look quadratic when we zoom in close
enough.
• Quadratic fit search iteratively fits a quadratic function to three bracketing
points, solves for the minimum, chooses a new set of bracketing points, and
repeats as shown in Figure below:

Given bracketing points a < b < c, we wish


to find the coefficients p1 , p2 , and p3 for
the quadratic function q that goes through
(a, ya ), (b, yb ), and (c, yc ):

q(x) = p1 + p2 x + p3 x2
ya = p1 + p2 a + p3 a2
yb = p1 + p2 b + p3 b2
yc = p1 + p2 c + p3 c2



Quadratic Fit Search Algorithm cont.
In matrix form, we have

⎡ya⎤   ⎡1 a a²⎤ ⎡p1⎤             ⎡p1⎤   ⎡1 a a²⎤⁻¹ ⎡ya⎤
⎢yb⎥ = ⎢1 b b²⎥ ⎢p2⎥ ,   hence   ⎢p2⎥ = ⎢1 b b²⎥   ⎢yb⎥
⎣yc⎦   ⎣1 c c²⎦ ⎣p3⎦             ⎣p3⎦   ⎣1 c c²⎦   ⎣yc⎦

The quadratic function is then

q(x) = ya (x − b)(x − c)/[(a − b)(a − c)] + yb (x − a)(x − c)/[(b − a)(b − c)] + yc (x − a)(x − b)/[(c − a)(c − b)]

We can solve for the unique minimum by finding where the derivative is zero:

x∗ = (1/2) · [ya(b² − c²) + yb(c² − a²) + yc(a² − b²)] / [ya(b − c) + yb(c − a) + yc(a − b)]

Quadratic fit search is typically faster than golden section search. It may need
safeguards for cases where the next point is very close to other points.



Quadratic Fit Search Algorithm
Quadratic Fit Search

1. Set n, the number of iterations.
2. Set ya, yb, yc = f(a), f(b), f(c)
3. For i In 1 To n − 3
4.   Set x = (1/2)(ya(b² − c²) + yb(c² − a²) + yc(a² − b²)) / (ya(b − c) + yb(c − a) + yc(a − b)) and yx = f(x)
5.   If x > b
6.     If yx > yb
7.       c, yc = x, yx
8.     Else
9.       a, ya, b, yb = b, yb, x, yx
10.    EndIf
11.  ElseIf x < b
12.    If yx > yb
13.      a, ya = x, yx
14.    Else
15.      c, yc, b, yb = b, yb, x, yx
16.    EndIf
17.  EndIf
18. EndFor
19. Return (a, b, c)
You can test the algorithm with the following function using 5 interval reductions:
Quadratic Fit Search Algorithm
[Figure: quadratic fit search, iterations 1-4 (f(x) versus x).]

See ch2\bracket.jl
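A small Julia sketch of the quadratic fit loop (a sketch only; the actual contents of ch2\bracket.jl may differ):

# Quadratic fit search: repeatedly fit a parabola through (a, ya), (b, yb), (c, yc)
# and move the bracket toward the parabola's minimizer.
function quadratic_fit_search(f, a, b, c, n)
    ya, yb, yc = f(a), f(b), f(c)
    for i in 1:n-3
        x = 0.5 * (ya*(b^2 - c^2) + yb*(c^2 - a^2) + yc*(a^2 - b^2)) /
                  (ya*(b - c)     + yb*(c - a)     + yc*(a - b))
        yx = f(x)
        if x > b
            if yx > yb
                c, yc = x, yx                   # shrink from the right
            else
                a, ya, b, yb = b, yb, x, yx     # x becomes the new middle point
            end
        elseif x < b
            if yx > yb
                a, ya = x, yx                   # shrink from the left
            else
                c, yc, b, yb = b, yb, x, yx
            end
        end
    end
    return (a, b, c)
end

quadratic_fit_search(x -> x^2 + 54/x, 1.28, 2.56, 5.12, 10)   # middle point → 3.0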
Shubert-Piyavskii Method
The Shubert-Piyavskii method is a global optimization method over a domain [a, b],
meaning it is guaranteed to converge on the global minimum of a function irrespective
of any local minima or whether the function is unimodal.
• The Shubert-Piyavskii method requires that the function be Lipschitz continuous,
meaning that it is continuous and there is an upper bound on the magnitude of
its derivative. A function f is Lipschitz continuous on [a, b] if there exists an l > 0
such that:

|f(x) − f(y)| ≤ l|x − y| for all x, y ∈ [a, b]

Such an l is at least as large as the largest unsigned instantaneous rate of change the
function attains on [a, b].
• Given a point (x0 , f(x0 )), we know that the lines f(x0 ) − l(x − x0 ) for x > x0 and
f(x0 ) + l(x − x0 ) for x < x0 form a lower bound of f.



Optimization of non-unimodal problem
• The techniques presented above, namely the Fibonacci, golden section, and
polynomial fit methods, require the function to be unimodal.
• However, many functions are multimodal, and their modality often cannot be
ascertained a priori.
• Techniques for finding the global minimum are few, and can be broadly
classified as based on deterministic or random search.
• We discuss some of them.



Shubert-Piyavskii Method
• The Shubert-Piyavskii method iteratively builds a tighter and tighter lower
bound on the function.
• Given a valid Lipschitz constant l the algorithm begins by sampling the
midpoint, x(1) = (a + b)/2.
• A sawtooth lower bound is constructed using lines of slope ±l from this point.

• Further iterations find the minimum point in the sawtooth, evaluate the function
at that x value, and then use the result to update the sawtooth.



Shubert-Piyavskii Method

• The algorithm is stopped when the difference in height between the minimum
sawtooth value and the function evaluation at that point is less than a given
tolerance ϵ. For the minimum peak (x(n) , y(n) ) and function evaluation f(x(n) ),
we thus terminate if f(x(n) ) − y(n) < ϵ.
• For every peak, an uncertainty region can be computed according to:

[ x(i) − (1/l)(ymin − y(i)),  x(i) + (1/l)(ymin − y(i)) ]
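A simplified Julia sketch of the sawtooth construction and update (the names, the struct, and the test function are illustrative assumptions; a valid Lipschitz constant l must be supplied):

# Shubert–Piyavskii: maintain a piecewise-linear lower bound (sawtooth) of f on [a, b].
struct SPPoint
    x::Float64
    y::Float64
end

# Intersection of the descending (−l) line from A with the ascending (+l) line from B (A.x < B.x).
function sp_intersection(A, B, l)
    t = ((A.y - B.y) - l*(A.x - B.x)) / (2l)
    return SPPoint(A.x + t, A.y - t*l)
end

function shubert_piyavskii(f, a, b, l; ϵ=1e-2)
    m = (a + b) / 2
    A, M, B = SPPoint(a, f(a)), SPPoint(m, f(m)), SPPoint(b, f(b))
    pts = [A, sp_intersection(A, M, l), M, sp_intersection(M, B, l), B]
    Δ = Inf
    while Δ > ϵ
        i = argmin([P.y for P in pts])       # lowest point of the sawtooth
        P = SPPoint(pts[i].x, f(pts[i].x))   # evaluate f there
        Δ = P.y - pts[i].y                   # gap between f and its lower bound
        # Replace the sawtooth minimum by the new sample and its two intersections.
        P_prev = sp_intersection(pts[i-1], P, l)
        P_next = sp_intersection(P, pts[i+1], l)
        deleteat!(pts, i)
        insert!(pts, i, P_next); insert!(pts, i, P); insert!(pts, i, P_prev)
    end
    ybest, j = findmin([P.y for P in pts[1:2:end]])   # best sampled point
    return pts[2j - 1].x, ybest
end

# Example on a multimodal test function (Lipschitz constant chosen conservatively).
shubert_piyavskii(x -> sin(x) + sin(10x/3), 2.7, 7.5, 4.5)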



Shubert-Piyavskii Method
• The main drawback of the Shubert-Piyavskii method is that it requires knowing
a valid Lipschitz constant. Large Lipschitz constants will result in poor lower
bounds.
• We can use upper bounds instead of lower bounds as well, by changing the
minimum point to the maximum point in each step.



Bisection Method
The bisection method can be used to find roots of a function, that is, points where the
function is zero. Such root-finding methods can be used for optimization by applying
them to the derivative of the objective, locating where f′ (x) = 0. We must ensure that
the resulting points are indeed local minima. In this method:
• The bisection method cuts the bracketed region in half with every iteration.
• The midpoint (a + b)/2 is evaluated, and the new bracket is formed from the
midpoint and whichever side that continues to bracket a zero.
• We terminate immediately if the midpoint evaluates to zero. Otherwise we can
terminate after a fixed number of iterations.
• The method is guaranteed to converge to within ϵ of x∗ within log2(|b − a| / ϵ)
iterations, where log2 denotes the base 2 logarithm.



Bisection Method

Bisection Method

1. If a > b Then a, b = b, a EndIf
2. Set ya, yb = f(a), f(b)
3. If ya == 0 Then b = a EndIf
4. If yb == 0 Then a = b EndIf
5. While b − a > ϵ
6.   x = (a + b)/2
7.   y = f(x)
8.   If y == 0
9.     a, b = x, x
10.  ElseIf sign(y) == sign(ya)
11.    a = x
12.  Else
13.    b = x
14.  EndIf
15. EndWhile
16. Return (a, b)
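A minimal Julia sketch of this procedure, applied to the derivative of the earlier example so that the root of f′ gives the minimizer (the derivative is supplied by hand here; this is an illustrative setup, not the lecture's code):

# Bisection on a sign-changing function g over [a, b]; returns the final bracket.
function bisection(g, a, b, ϵ)
    if a > b
        a, b = b, a                 # ensure a ≤ b
    end
    ya, yb = g(a), g(b)
    ya == 0 && (b = a)
    yb == 0 && (a = b)
    while b - a > ϵ
        x = (a + b) / 2
        y = g(x)
        if y == 0
            a, b = x, x             # exact root found
        elseif sign(y) == sign(ya)
            a = x                   # the sign change lies in the right half
        else
            b = x                   # the sign change lies in the left half
        end
    end
    return (a, b)
end

# f(x) = x^2 + 54/x has f′(x) = 2x − 54/x^2, which changes sign on (1.28, 5.12).
bisection(x -> 2x - 54/x^2, 1.28, 5.12, 1e-6)   # bracket around x = 3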



MATLAB function fminbnd
fminbnd finds the minimum of a single-variable function on a fixed interval. It is a
one-dimensional minimizer that finds a minimum for a problem specified by

minimize f(x) over x,   s.t.   x1 < x < x2

% Matlab Example
a = 9/7;
fun = @(x) sin(x - a);
x = fminbnd(fun, 1, 2*pi)
% x = 5.9981

% Matlab Example
% tolX has a default value of 1.0e-4
f = @(x) 2 - 2*x + exp(x);
opts = optimset('tolX', 1.0e-6);
[xopt, fopt, ifl, out] = fminbnd(f, 0, 2, opts)
% xopt = 0.6931
% fopt = 2.6137

• The algorithms used in this function are golden section search and quadratic
interpolation.
• Try optimset('Display','iter') and examine the iteration output.
Julia Optim.jl
using Optim

f = x -> sin(x - 9/7); x1 = 0; x2 = 2π

result1 = optimize(f, x1, x2, Brent(), show_trace=true)
xopt1, fopt1 = Optim.minimizer(result1), Optim.minimum(result1)

result2 = optimize(f, x1, x2, GoldenSection(), show_trace=true, abs_tol=1e-3)
xopt2, fopt2 = Optim.minimizer(result2), Optim.minimum(result2)



Reference

1. Joaquim R. R. A. Martins and Andrew Ning, "Engineering Design Optimization," Cambridge University Press, 2021.

2. Mykel J. Kochenderfer and Tim A. Wheeler, "Algorithms for Optimization," The MIT Press, 2019.

3. Ashok D. Belegundu and Tirupathi R. Chandrupatla, "Optimization Concepts and Applications in Engineering," Cambridge University Press, 2019.

4. Kalyanmoy Deb, "Optimization for Engineering Design: Algorithms and Examples," 2nd ed., PHI Learning Private Limited, 2012.
