Fibonacci Method

POLYNOMIAL APPROXIMATION METHODS

Another class of methods of unidimensional minimization locates a point x near x*, the value of
the independent variable corresponding to the minimum of f(x), by extrapolation and
interpolation using polynomial approximations as models of f(x). Both quadratic and cubic
approximations have been proposed, using function values only or using both function and
derivative values. For functions where f'(x) is continuous, these methods are much more
efficient than other methods and are now widely used to do line searches within multivariable
optimizers.

a) Quadratic Interpolation

We start with three points x1, x2, and x3 in increasing order, not necessarily equally spaced, but
such that the extreme points bracket the minimum. We know that a quadratic function

f(x) = a + bx + cx²

can be passed exactly through the three points, and that this function can be differentiated and
the derivative set equal to zero to yield the minimum of the approximating function:

x* = -b / (2c)          (1)

Suppose that f(x) is evaluated at x1, x2, and x3 to yield f(x1) ≡ f1, f(x2) ≡ f2, f(x3) ≡ f3. The
coefficients b and c can be evaluated from the solution of the three linear equations

f1 = a + bx1 + cx1²          (2)

f2 = a + bx2 + cx2²          (3)

f3 = a + bx3 + cx3²          (4)

via determinants or matrix algebra. Introducing b and c, expressed in terms of x1, x2, x3, f1, f2,
and f3, into Equation (1) gives:

x* = (1/2) · [(x2² - x3²)f1 + (x3² - x1²)f2 + (x1² - x2²)f3] / [(x2 - x3)f1 + (x3 - x1)f2 + (x1 - x2)f3]          (5)

To illustrate the first stage in the search procedure, examine the four points in Figure 1 for stage
1.
Figure 1: Two stages of quadratic interpolation.

We want to reduce the initial interval [x1, x3]. By examining the values of f(x) [with the
assumptions that f(x) is unimodal and has a minimum], we can discard the interval from x1 to x2
and use the region (x2, x3) as the new interval. The new interval contains three points, (x2, x*, x3),
that can be introduced into Equation (5) to estimate a new x*, and so on. In general, you evaluate
f(x*) and discard from the set {x1, x2, x3} the point that corresponds to the greatest value of f(x),
unless a bracket on the minimum of f(x) is lost by so doing, in which case you discard the x so as
to maintain the bracket. The specific tests and choices of xi to maintain the bracket are illustrated
in Figure 2.
Figure 2: How to maintain a bracket on the minimum in quadratic interpolation

Example 1

Starting with the following three points bracketing the minimum: [2, 4, 6]

Solution.

x1 = 2; x2 = 4; and x3 = 6

f(x1) = 3; f(x2) = 3 and f(x3) = 11

Applying Eq. (5): x* = 3, and f(x*) = f(3) = 2.

We eliminate x3 since f(x3) has the highest value.

The stage 2 bracket to consider is [2, 3, 4]

x1 = 2; x2 = 3; and x3 = 4
And f(x1) = 3; f(x2) = 2 and f(x3) = 3

Applying Eq. (5) again: x* = 3.

The difference between the optimum points of stage 1 and stage 2 is 0.

Hence the critical point is x* = 3

Hence the minimum of the function is f(x*) = f(3) = 2.

Example 2

Starting with the following three points bracketing the maximum: [-2, 1, 5]

Solution.

x1 = -2; x2 = 1; and x3 = 5

f(x1) = -19; f(x2) = 2 and f(x3) = -26

Applying Eq. (5): x* = 1.25, and f(1.25) = 2.125.

Eliminate x3 since f(x3) has the least value (this is a maximization, so the worst point is the one
with the lowest value). Hence the stage 2 three points are:

x1 = -2 ; x2 = 1; x3 = 1.25

f(x1) = -19; f(x2) = 2 and f(x3) = 2.125

Applying Eq. (5) again: x* = 1.25.

The difference between the optimum points of stage 1 and stage 2 is 0.


Hence the critical point is x* = 1.25

Hence the maximum value of the function is f(x*) = f(1.25) = 2.125.
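Equation (5) and both worked examples can be checked with a short Python sketch (an illustration only; the helper name is ours):

```python
def quadratic_interp_step(x1, x2, x3, f1, f2, f3):
    """Estimate the stationary point of the quadratic passed through
    (x1, f1), (x2, f2), (x3, f3) -- Eq. (5) of the text."""
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / den

# Example 1, stage 1: bracket [2, 4, 6] with f-values (3, 3, 11)
print(quadratic_interp_step(2, 4, 6, 3, 3, 11))      # -> 3.0

# Example 2, stage 1: bracket [-2, 1, 5] with f-values (-19, 2, -26)
print(quadratic_interp_step(-2, 1, 5, -19, 2, -26))  # -> 1.25
```

The same helper reproduces the stage 2 estimates as well, confirming that the difference between successive stage optima is 0 in both examples.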

b) Cubic Interpolation

Cubic interpolation to find the minimum of f(x) is based on approximating the objective function
by a third-degree polynomial within the interval of interest and then determining the associated
stationary point of the polynomial:

f(x) = a1x³ + a2x² + a3x + a4

Four pieces of information that bracket the minimum must be computed to estimate the minimum:
either four values of f(x), or the values of f(x) and of the derivative f'(x), each at two points.

In the former case four linear equations are obtained, with the four unknowns being the desired
coefficients. Let X be the matrix whose ith row is (xi³, xi², xi, 1), let F be the vector of the
corresponding function values, and let A = (a1, a2, a3, a4)ᵀ; then

F = XA

The extremum of f(x) is then obtained by setting the derivative of f(x) equal to zero and solving
for x̃:

f'(x̃) = 3a1x̃² + 2a2x̃ + a3 = 0

x̃ = [-a2 ± (a2² - 3a1a3)^(1/2)] / (3a1)

The sign to use before the square root is governed by the sign of the second derivative of f(x̃),
that is, by whether a minimum or a maximum is sought. The vector A can be computed from XA = F
or

A = X⁻¹F
After the optimum point x̃ is predicted, it is used as a new point in the next iteration, and the
point with the highest value of f(x) [lowest value of f(x) for maximization] is discarded.

If the first derivatives of f(x) are available, only two points are needed, and the cubic function
can be fitted to the two pairs of slope and function values. These four pieces of information
uniquely determine the four coefficients of the cubic, which can then be optimized to predict the
new, nearly optimal point. If (x1, f1, f'1) and (x2, f2, f'2) are available, then the optimum is:

x̃ = x2 - (x2 - x1) · (f'2 + w - z) / (f'2 - f'1 + 2w)

where

z = 3(f1 - f2)/(x2 - x1) + f'1 + f'2

w = (z² - f'1 f'2)^(1/2)

In a minimization problem, you require x1 < x2, f'1 < 0, and f'2 > 0 (x1 and x2 bracket the
minimum). For the new point x̃, calculate f'(x̃) to determine which of the previous two points
to replace.
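A sketch of one two-point cubic step with slopes (the standard two-point cubic update; the helper name and the test function are ours). The test uses f(x) = x², which a cubic fit reproduces exactly, so the predicted minimizer should be exactly 0:

```python
import math

def cubic_min_estimate(x1, f1, g1, x2, f2, g2):
    """One cubic-interpolation step from two points with function values
    (f1, f2) and slopes (g1, g2). Requires x1 < x2, g1 < 0, g2 > 0 so
    that [x1, x2] brackets a minimum."""
    z = 3.0 * (f1 - f2) / (x2 - x1) + g1 + g2
    w = math.sqrt(z * z - g1 * g2)   # real because g1 * g2 < 0
    return x2 - (x2 - x1) * (g2 + w - z) / (g2 - g1 + 2.0 * w)

# Check on f(x) = x**2 with bracket [-1, 2]; slopes are f'(x) = 2x
print(cubic_min_estimate(-1, 1, -2, 2, 4, 4))  # -> 0.0
```

After computing x̃, one would evaluate f'(x̃) and replace whichever endpoint keeps the slopes of opposite sign, as the text describes.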

GOLDEN SECTION SEARCH TECHNIQUE


In this search technique, the only assumption is that the objective function is unimodal, which
means that it has only one local minimiser. Typically a closed interval is given within which the
optimum lies. The method is based on evaluating the objective function at different points in the
interval, chosen in such a way that an approximation to the minimiser may be achieved in as few
evaluations as possible.
The search range is progressively narrowed until the minimiser is “boxed in” with sufficient
accuracy.

Given a normalized interval [0, 1], where should two points x1 and x2 be chosen such that:

1.) the size of the reduced interval is independent of the function values;


2.) only one new function evaluation is required per interval reduction.

0 x1 x2 1

Figure 1: Golden section interval reduction, initial interval is [0,1]


Therefore, by condition 1 above:

x2 - 0 = 1 - x1, or x1 = 1 - x2          (1)

and by condition 2 (the reduction ratio is constant from iteration to iteration):

x2 / 1 = x1 / x2          (2)

x1 = x2²          (3a)

x2² + x2 - 1 = 0          (3b)

Substituting (1) into (3a) gives (3b), whose positive root is x2 = (√5 - 1)/2. Writing r ≡ x2:

r = 0.61803

(1 - r) = 0.38197

In the figure below, the search is started within an interval [a0, b0]:

a0      x1      x2      b0

We have to evaluate the function f at two intermediate points. We choose the intermediate
points in such a way that the reduction in the range is symmetric.

The interval is subdivided in such a way that at each iteration we need to determine only one
new point xi and make only one new function evaluation.

The optimization algorithm is :

i. Using the interval limits ak and bk, determine x1 and x2


ii. Evaluate f(x1) and f(x2)
iii. Eliminate the part of the interval in which the optimum is not located
iv. Repeat step i through step iii until the desired accuracy is obtained

How to determine which part of the interval to eliminate


For the minimization case, consider:

1. If f(x1) > f(x2), eliminate the interval to the left of x1


2. If f(x1) < f(x2), eliminate the interval to the right of x2

NOTE

1. Only one new point needs to be evaluated during each iteration


2. The points for the current iteration are:

x1 = bk - r(bk - ak),   x2 = ak + r(bk - ak)

(after the first iteration, only one of these is actually new), where r, the golden ratio, is 0.618034.

Example

The function f(x) = e^x + 2 - cos(x) has a minimum within [-3, 1]. Use the golden section
elimination method to find the minimum to within 0.004.

k    ak        bk        x1        x2        f(x1)      f(x2)      Range


0 -3 1 -1.47214 -0.52786 2.1309345 1.7259787 4
1 -1.47214 1 -0.52787 0.05573 1.7259784 2.0588609 2.47214
2 -1.47214 0.05573 -0.88855 -0.52786 1.7807119 1.7259786 1.52787
3 -0.88855 0.05573 -0.52787 -0.30495 1.7259784 1.7832970 0.94428
4 -0.88855 -0.30495 -0.66563 -0.52787 1.7274222 1.7259785 0.5836
5 -0.66563 -0.30495 -0.52786 -0.44272 1.7259788 1.7386978 0.36068
6 -0.66563 -0.44272 -0.58049 -0.52786 1.7234301 1.7259786 0.22291
7 -0.66563 -0.52786 -0.61301 -0.58048 1.7237977 1.7234301 0.13777
8 -0.61301 -0.52786 -0.58049 -0.56038 1.7234301 1.7239387 0.08515
9 -0.61301 -0.56038 -0.59291 -0.58048 1.7233984 1.7234301 0.05263
10 -0.61301 -0.58048 -0.60058 -0.59291 1.7234855 1.7233984 0.03253
11 -0.60058 -0.58048 -0.59290 -0.58816 1.7233983 1.7233852 0.0201
12 -0.5929 -0.58048 -0.588156 -0.585224 1.72338521 1.72339271 0.01242
13 -0.5929 -0.58522 -0.5899665 -0.5881535 1.72338654 1.72338521 0.00768
14 -0.58997 -0.58522 -0.5881557 -0.5870343 1.72338521 1.72338667 0.00475
15 -0.58997 -0.58703 -0.588847 -0.588153 1.72338518 1.72338521 0.00294

Stopped because bk - ak < tolerance.

Hence the solution lies within [-0.58997, -0.58703]
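The algorithm above can be sketched in Python and checked against this example (an illustration only; function and variable names are ours):

```python
import math

def golden_section(f, a, b, tol):
    """Golden-section search for a unimodal minimum on [a, b].
    One new function evaluation per iteration; stops once the
    bracket is shorter than tol."""
    r = (math.sqrt(5.0) - 1.0) / 2.0           # 0.61803...
    x1, x2 = b - r * (b - a), a + r * (b - a)  # interior points
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:                  # minimum lies right of x1
            a, x1, f1 = x1, x2, f2   # reuse x2 as the new x1
            x2 = a + r * (b - a)
            f2 = f(x2)
        else:                        # minimum lies left of x2
            b, x2, f2 = x2, x1, f1   # reuse x1 as the new x2
            x1 = b - r * (b - a)
            f1 = f(x1)
    return a, b

# The worked example: f(x) = e^x + 2 - cos(x) on [-3, 1], tolerance 0.004
a, b = golden_section(lambda x: math.exp(x) + 2 - math.cos(x), -3.0, 1.0, 0.004)
print(a, b)   # final bracket near x* ≈ -0.5885
```

The reuse of one interior point per iteration is exactly what the constant ratio r = 0.618... buys: the retained point already sits in the correct position inside the reduced interval.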


(b) Find the minimum of the function in the interval [0, 2] with an
accuracy in xopt of 0.04, using the following methods:

(i) Fibonacci search approach


(ii) Golden section search technique

FIBONACCI SEARCH TECHNIQUE

In the golden section method a constant ratio is used at each iterative step, but in the Fibonacci
method the ratio for the reduction of the interval varies from iteration to iteration. Just as in the
golden section search technique, two function evaluations are made at the first iteration; thereafter
only one function evaluation is made at each subsequent iteration. The number of iterations is
predetermined, based on the specified tolerance.

Fibonacci Number

The Fibonacci search is based on the sequence of Fibonacci numbers which are defined by the
equations:

F0 = 1, F1 = 1

FN+1 = FN + FN-1 for N = 1, 2, ...

Thus the Fibonacci numbers are:

1 , 1, 2, 3, 5, 8, 13, 21, 34 respectively for F0, F1, F2, F3, F4, F5, F6, F7, F8, ...

Fibonacci Search

The ratio used at iteration k is

rk = 1 - F(N-k+1) / F(N-k+2),   k = 1, 2, ..., N

(where N is the number of iterations, predetermined).

rN, the ratio at the final iteration, is

rN = 1 - F1/F2 = 1/2

In the final iteration there is an anomaly, because rN = 1/2.

Recall that we need two intermediate points at each stage, one of which comes from a previous
iteration while the other is a new evaluation point. However, with rN = 1/2 the two intermediate
points coincide in the middle of the uncertainty interval, and thus we cannot further reduce the
uncertainty range.

To get around this problem, we perform the new evaluation for the last iteration using

rN = 1/2 - ε0

The new evaluation point is just to the left or right of the midpoint of the uncertainty interval.

As a result of the modification, the reduction in the uncertainty range at the last iteration is
either 1/2 or 1/2 + ε0 (ε0 is a small number, about 10% of rN, that is, about 0.05), depending on
which of the two points has the smaller objective function value. Therefore, in the worst case,
the overall reduction factor in the uncertainty range for the Fibonacci method is

(1 + 2ε0) / F(N+1)

The number of iterations N to be carried out is predetermined by requiring:

(1 + 2ε0) / F(N+1) ≤ (specified tolerance) / (b0 - a0)

a0      a1      b1      b0

Note that the intermediate positions are determined thus:

a1 = a0 + rk(b0 - a0),   b1 = a0 + (1 - rk)(b0 - a0)

where a0 and b0 here denote the current interval limits. Note also that the value of the ratio rk
varies with the iteration number.

Example

Consider the function f(x) = x⁴ - 14x³ + 60x² - 70x. Use the Fibonacci search method to find the
value of x that minimizes the function over the range [0, 2]. Locate this value to within a range
(tolerance) of 0.3.

Solution

We first determine the number of iterations required. We need to choose the number of
iterations N such that:

(1 + 2ε) / F(N+1) ≤ 0.3 / (2 - 0) = 0.15

Choosing ε = 0.1:

F(N+1) ≥ (1 + 0.2) / 0.15 = 8, and F5 = 8

Hence N = 4 will do.

With r1 = 1 - F4/F5 = 1 - 5/8 = 3/8: a1 = 0 + (3/8)(2) = 3/4 and b1 = 0 + (5/8)(2) = 5/4.

0      3/4      5/4      2

a0     a1       b1       b0

Since f(b1) > f(a1), we eliminate b0, so that the range is now reduced to [a0, b1] = [0, 5/4]

Note that as the upper boundary moved to b1, the old a1 is retained as the new b1; hence we
are left with determining ONLY the new a1

With r2 = 1 - F3/F4 = 1 - 3/5 = 2/5, the new point is a1 = 0 + (2/5)(5/4) = 1/2.

0      1/2      3/4      5/4

a0     a1       b1       b0

Since f(a1) > f(b1), we eliminate a0, so that the range is now reduced to [a1, b0] = [1/2, 5/4]

Note that as the lower boundary moved to a1, the old b1 is retained as the new a1; hence we are
left with determining ONLY the new b1

With r3 = 1 - F2/F3 = 1 - 2/3 = 1/3, the new point is b1 = 1/2 + (2/3)(3/4) = 1.

1/2     3/4     1       5/4

a0      a1      b1      b0

Since f(b1) > f(a1), we eliminate b0, so that the range is now reduced to [a0, b1] = [1/2, 1]

Note that as the upper boundary moved to b1, the old a1 is retained as the new b1; hence we
are left with determining ONLY the new a1

The value of ε chosen is 0.05, so r4 = 1/2 - 0.05 = 0.45 and the new point is
a1 = 1/2 + (0.45)(1/2) = 0.725.

1/2     0.725     3/4     1

a0      a1        b1      b0

Since f(a1) > f(b1), we eliminate a0, so that the range is now reduced to [a1, b0] = [0.725, 1]

The final boundary is [0.725, 1], giving a range of 0.275, which is less than 0.3.

Summary of iterations

k    a0      b0      a1       b1      f(a1)     f(b1)
1    0       2       3/4      5/4     -24.34    -18.65
2    0       5/4     1/2      3/4     -21.69    -24.34
3    1/2     5/4     3/4      1       -24.34    -23
4    1/2     1       0.725    3/4     -24.27    -24.34
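The whole procedure can be sketched in Python and checked against this example (an illustration only; names are ours, and for clarity this sketch re-evaluates both interior points at every iteration instead of reusing one value per iteration as the text describes):

```python
def fib(n):
    """Fibonacci numbers with F0 = F1 = 1, as defined above."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonacci_search(f, a0, b0, N, eps=0.05):
    """N predetermined iterations with ratio r_k = 1 - F(N-k+1)/F(N-k+2);
    the final ratio is replaced by 1/2 - eps to break the tie."""
    a, b = a0, b0
    for k in range(1, N + 1):
        r = 0.5 - eps if k == N else 1.0 - fib(N - k + 1) / fib(N - k + 2)
        x1 = a + r * (b - a)            # lower intermediate point
        x2 = a + (1.0 - r) * (b - a)    # upper intermediate point
        if f(x1) <= f(x2):
            b = x2                      # minimum cannot lie in (x2, b]
        else:
            a = x1                      # minimum cannot lie in [a, x1)
    return a, b

# The worked example: N = 4 iterations on [0, 2]
f = lambda x: x**4 - 14*x**3 + 60*x**2 - 70*x
print(fibonacci_search(f, 0.0, 2.0, 4))   # final bracket ≈ (0.725, 1.0)
```

Running this reproduces the sequence of brackets in the summary table: [0, 5/4], [1/2, 5/4], [1/2, 1], and finally [0.725, 1].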
