Lecture 4: Unconstrained Optimization II
[Figure: plot of f(x) = x^2 + 54/x for 0 < x ≤ 11]
\[ \min_x \; f(x) = x^2 + \frac{54}{x} \]

Using the bracketing algorithm above, we obtain the bracketing interval (1.28, 5.12).

[Figure: f(x) = x^2 + 54/x with the bracketing interval (1.28, 5.12) marked]
Suppose we have a unimodal f bracketed by the interval [a, b]. Given a limit on the number of times we can query the objective function, Fibonacci search is guaranteed to maximally shrink the bracketed interval.
With three queries, we can shrink the interval by a factor of three. We first query f on
the one-third and two-third points on the interval, discard one-third of the interval,
and then sample just next to the better sample.
For n queries, the interval lengths are related to the Fibonacci sequence 1, 1, 2, 3, 5, 8, …. The first two terms are one, and each following term is the sum of the previous two:

\[ F_n = \begin{cases} 1 & \text{if } n \le 2 \\ F_{n-1} + F_{n-2} & \text{otherwise} \end{cases} \]
\[ I_{n-j} = F_{j+1}\, I_n, \qquad j = 1, 2, \ldots, n-1 \]

For j = n − 1 and j = n − 2:

\[ I_1 = F_n I_n, \qquad I_2 = F_{n-1} I_n \]

so that

\[ I_n = \frac{I_1}{F_n} \;\Longrightarrow\; \frac{I_n}{I_1} = \frac{1}{F_n}, \qquad I_2 = \frac{F_{n-1}}{F_n}\, I_1 \]
Fibonacci Search cont.
The Fibonacci sequence has the closed-form expression (Binet's formula)

\[ F_n = \frac{\varphi^n - (1-\varphi)^n}{\sqrt{5}}, \]

where φ = (1 + √5)/2 ≈ 1.61803 is the golden ratio.
The ratio between successive values in the Fibonacci sequence is

\[ \frac{F_{n+1}}{F_n} = \varphi\,\frac{1 - s^{n+1}}{1 - s^n}, \]

where s = (1 − √5)/(1 + √5) ≈ −0.382.
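A minimal Julia sketch of Fibonacci search assembled from the formulas above (the function and parameter names are our own; ϵ offsets the final query, which would otherwise coincide with an existing point):

# Julia: Fibonacci search on a unimodal f over [a, b] with n total queries
function fibonacci_search(f, a, b, n; ϵ=0.01)
    φ = (1 + √5) / 2
    s = (1 - √5) / (1 + √5)
    ρ = 1 / (φ * (1 - s^(n+1)) / (1 - s^n))   # interior-point ratio F_n/F_{n+1}
    d = ρ*b + (1 - ρ)*a
    yd = f(d)
    for i in 1:n-1
        # the last query would duplicate d, so offset it slightly by ϵ
        c = i == n-1 ? ϵ*a + (1 - ϵ)*d : ρ*a + (1 - ρ)*b
        yc = f(c)
        if yc < yd
            b, d, yd = d, c, yc
        else
            a, b = b, c
        end
        ρ = 1 / (φ * (1 - s^(n-i+1)) / (1 - s^(n-i)))  # ratio for the shrunk interval
    end
    return a < b ? (a, b) : (b, a)   # final bracketing interval
end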
Example: In the interval reduction problem, the initial interval is given to be 4.68 units and the desired final interval is 0.01 units. Find the number of interval reductions using the Fibonacci method.

Solution: We need to choose the smallest n such that

\[ \frac{1}{F_n} < \frac{0.01}{4.68}, \quad \text{or} \quad F_n > 468 \]

The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610; the first term exceeding 468 is F₁₅ = 610, so n = 15 and the number of interval reductions is n − 1 = 14.

Note: write a program to check the result yourself.
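The note above invites a check; a minimal Julia sketch (the helper name is our own):

# Julia: find the smallest n with F_n > 468, using F_1 = F_2 = 1
function min_queries(ratio)
    Fprev, Fcur, n = 1, 1, 2
    while Fcur <= ratio
        Fprev, Fcur = Fcur, Fprev + Fcur
        n += 1
    end
    return n
end
n = min_queries(4.68 / 0.01)   # returns 15, i.e. 14 interval reductions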
Example objective: the horizontal distance D traveled by a projectile launched with speed V at angle θ from initial height h:

\[ D = \left( \frac{V\sin\theta}{g} + \sqrt{\frac{2h}{g} + \left(\frac{V\sin\theta}{g}\right)^{2}} \right) V\cos\theta \]
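Since bracketing methods minimize, we can maximize D by minimizing −D. A minimal sketch reusing the fibonacci_search function above (the values of V, h, g are illustrative assumptions):

# Julia: maximize projectile distance over θ by minimizing -D
V, h, g = 10.0, 1.0, 9.81   # illustrative values, not from the slides
D(θ) = (V*sin(θ)/g + sqrt(2h/g + (V*sin(θ)/g)^2)) * V*cos(θ)
a, b = fibonacci_search(θ -> -D(θ), 0.0, π/2, 20)   # bracket the maximizer
θ_opt = (a + b) / 2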
If we take the limit of the Fibonacci Search for large n, we see that the ratio between
successive values of the Fibonacci sequence approaches the golden ratio
(https://en.wikipedia.org/wiki/Golden_ratio):
\[ \lim_{n \to \infty} \frac{F_n}{F_{n-1}} = \varphi \]
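In this limit the interior-point ratio becomes the constant φ − 1 ≈ 0.618 at every step, which yields golden section search; a minimal Julia sketch (assuming f is unimodal on [a, b] and n queries are allowed):

# Julia: golden section search, the large-n limit of Fibonacci search
function golden_section_search(f, a, b, n)
    φ = (1 + √5) / 2
    ρ = φ - 1                     # constant interior-point ratio
    d = ρ*b + (1 - ρ)*a
    yd = f(d)
    for i in 1:n-1
        c = ρ*a + (1 - ρ)*b
        yc = f(c)
        if yc < yd
            b, d, yd = d, c, yc
        else
            a, b = b, c
        end
    end
    return a < b ? (a, b) : (b, a)
end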
[Figure: f(x) = −e^{−x²} and f(x) = (sin(x) + sin(x/2))/4]
Quadratic Fit Search Algorithm
• Quadratic fit search leverages our ability to solve analytically for the minimum of a quadratic function. Many local minima look quadratic when we zoom in close enough.
• Quadratic fit search iteratively fits a quadratic function to three bracketing points, solves for the minimum, chooses a new set of bracketing points, and repeats, as shown in the figure below.
We fit a quadratic q(x) = p₁ + p₂x + p₃x² through the three bracketing points (a, yₐ), (b, y_b), (c, y_c):

\[ y_a = p_1 + p_2 a + p_3 a^2, \quad y_b = p_1 + p_2 b + p_3 b^2, \quad y_c = p_1 + p_2 c + p_3 c^2 \]

\[ \begin{bmatrix} y_a \\ y_b \\ y_c \end{bmatrix} = \begin{bmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}, \qquad \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix} = \begin{bmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \end{bmatrix}^{-1} \begin{bmatrix} y_a \\ y_b \\ y_c \end{bmatrix} \]
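In code, the coefficients follow from one linear solve; a minimal Julia sketch (assumes a, b, c and ya, yb, yc are already defined):

# Julia: fit q(x) = p1 + p2*x + p3*x^2 through (a,ya), (b,yb), (c,yc)
A = [1 a a^2; 1 b b^2; 1 c c^2]
p = A \ [ya, yb, yc]        # p = [p1, p2, p3]
x_star = -p[2] / (2p[3])    # vertex: where q'(x) = p2 + 2*p3*x = 0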
We can solve for the unique minimum by finding where the derivative is zero:
\[ x^{*} = \frac{1}{2}\,\frac{y_a(b^2 - c^2) + y_b(c^2 - a^2) + y_c(a^2 - b^2)}{y_a(b - c) + y_b(c - a) + y_c(a - b)} \]

Quadratic fit search is typically faster than golden section search. It may need safeguards for cases where the next point is very close to other points.

The slide's numbered pseudocode, made runnable as Julia (the steps missing from the extraction are reconstructed; the loop runs n − 3 iterations so that n counts total function evaluations):

# Julia: quadratic fit search on f with bracketing points a < b < c
function quadratic_fit_search(f, a, b, c, n)
    ya, yb, yc = f(a), f(b), f(c)
    for i in 1:n-3
        # minimum of the quadratic through (a,ya), (b,yb), (c,yc)
        x = 0.5 * (ya*(b^2 - c^2) + yb*(c^2 - a^2) + yc*(a^2 - b^2)) /
                  (ya*(b - c)     + yb*(c - a)     + yc*(a - b))
        yx = f(x)
        if x > b
            if yx > yb
                c, yc = x, yx                  # step 7
            else
                a, ya, b, yb = b, yb, x, yx    # step 9
            end
        elseif x < b
            if yx > yb
                a, ya = x, yx                  # reconstructed branch
            else
                c, yc, b, yb = b, yb, x, yx    # step 13
            end
        end
    end
    return (a, b, c)                           # step 17
end

You can test the algorithm with the following function with 5 interval reductions:
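The slide's test function is not recoverable from the extraction; as a stand-in, here is a hypothetical run reusing the unimodal objective from earlier in the lecture:

# hypothetical test run; f is a stand-in for the slide's unnamed function
f(x) = x^2 + 54/x                       # minimum at x = 3
a, b, c = quadratic_fit_search(f, 0.5, 4.0, 10.0, 8)   # n = 8 gives 5 refits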
Quadratic Fit Search Algorithm cont.
[Figure: quadratic fit search, iterations 1-4; f(x) versus x]
See ch2\bracket.jl
Shubert-Piyavskii Method
The Shubert-Piyavskii method is a global optimization method over a domain [a, b],
meaning it is guaranteed to converge on the global minimum of a function irrespective
of any local minima or whether the function is unimodal.
• The Shubert-Piyavskii method requires that the function be Lipschitz continuous, meaning that it is continuous and there is an upper bound on the magnitude of its derivative. A function f is Lipschitz continuous on [a, b] if there exists an ℓ > 0 such that:

\[ |f(x) - f(y)| \le \ell\,|x - y| \quad \text{for all } x, y \in [a, b] \]
• The method maintains a piecewise-linear lower bound on f, a sawtooth built from lines of slope ±ℓ through the evaluated points; further iterations find the minimum point in the sawtooth, evaluate the function at that x value, and then use the result to update the sawtooth.
• The algorithm is stopped when the difference in height between the minimum sawtooth value and the function evaluation at that point is less than a given tolerance ϵ. For the minimum peak (x^(n), y^(n)) and function evaluation f(x^(n)), we thus terminate if f(x^(n)) − y^(n) < ϵ.
• For every peak, an uncertainty region can be computed according to:

\[ \left[\, x^{(i)} - \frac{1}{\ell}\bigl(y_{\min} - y^{(i)}\bigr),\;\; x^{(i)} + \frac{1}{\ell}\bigl(y_{\min} - y^{(i)}\bigr) \,\right] \]

A minimal code sketch of the method follows below.
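A minimal Julia sketch of the core loop (names are our own; it assumes a valid Lipschitz constant ℓ and omits the uncertainty-region bookkeeping):

# Julia: Shubert-Piyavskii sketch; pts holds the sawtooth vertices in x-order
struct SPt
    x::Float64
    y::Float64
end

# intersection of the downward slope from A with the upward slope into B
function sp_intersection(A::SPt, B::SPt, ℓ)
    t = ((A.y - B.y) - ℓ*(A.x - B.x)) / (2ℓ)
    return SPt(A.x + t, A.y - t*ℓ)
end

function shubert_piyavskii(f, a, b, ℓ, ϵ)
    m = (a + b) / 2
    A, M, B = SPt(a, f(a)), SPt(m, f(m)), SPt(b, f(b))
    pts = [A, sp_intersection(A, M, ℓ), M, sp_intersection(M, B, ℓ), B]
    Δ = Inf
    while Δ > ϵ
        i = argmin([P.y for P in pts])      # minimum peak of the sawtooth
        P = SPt(pts[i].x, f(pts[i].x))      # evaluate f there
        Δ = P.y - pts[i].y                  # gap between f and the lower bound
        # replace the peak with the new point and its neighboring intersections
        P_prev = sp_intersection(pts[i-1], P, ℓ)
        P_next = sp_intersection(P, pts[i+1], ℓ)
        deleteat!(pts, i)
        insert!(pts, i, P_next)
        insert!(pts, i, P)
        insert!(pts, i, P_prev)
    end
    return pts[argmin([P.y for P in pts])]  # best point found
end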
Bisection Method
\[ \min_x \; f(x) \]
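Bisection is typically applied to the derivative: maintain a bracket [a, b] on which f′ changes sign, and halve it each iteration. A minimal Julia sketch (assuming f′ is available):

# Julia: bisection on f′ to locate a stationary point of f
function bisection(f′, a, b, ϵ)
    ya, yb = f′(a), f′(b)
    @assert ya * yb <= 0 "f′ must change sign on [a, b]"
    while b - a > ϵ
        m = (a + b) / 2
        ym = f′(m)
        if ym == 0
            return (m, m)
        elseif sign(ym) == sign(ya)
            a, ya = m, ym
        else
            b, yb = m, ym
        end
    end
    return (a, b)
end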
% Matlab Example 1
a = 9/7;
fun = @(x) sin(x - a);
x = fminbnd(fun, 1, 2*pi)
% x = 5.9981

% Matlab Example 2
% TolX has a default value of 1.0e-4
f = @(x) 2 - 2*x + exp(x);
opts = optimset('TolX', 1.0e-6);
[xopt, fopt, ifl, out] = fminbnd(f, 0, 2, opts)
% xopt = 0.6931
% fopt = 2.6137
• The algorithms used by fminbnd are golden section search and parabolic interpolation.
• Try optimset('Display','iter') and examine the per-iteration output.
Julia Optim.jl
using Optim
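# A minimal sketch of the analogous call in Optim.jl (our own example; the
# objective reuses the MATLAB one above; Brent's method is Optim's default
# for bounded univariate problems):
f(x) = 2 - 2x + exp(x)
res = optimize(f, 0.0, 2.0)    # or: optimize(f, 0.0, 2.0, GoldenSection())
Optim.minimizer(res)           # ≈ 0.6931
Optim.minimum(res)             # ≈ 2.6137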