Elimination Methods

Introduction…

Non-linear programming problems are more complex than linear programming problems.

Numerical methods of optimization are used to solve non-linear functions.

The basic philosophy of numerical methods is to produce a sequence of improved approximations that converges to the optimum.
Classification of Optimization Methods…
Steps to perform NT…

1. Start with an initial trial point X1.
2. Find a suitable direction Si (i = 1 to start with) that points in the general direction of the optimum.
3. Find an appropriate step length λi* for movement along the direction Si.
4. Obtain the new approximation Xi+1 as

   Xi+1 = Xi + λi* Si ..........(1)

5. Test whether Xi+1 is optimum. If it is, stop the procedure. Otherwise, set i = i + 1 and repeat from step 2.
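To make the scheme concrete, the following is a minimal Python sketch of Eq. (1). It assumes steepest descent for the direction Si and a coarse grid search for the step length λi*; the helper name iterative_minimize and the quadratic test function are illustrative, not from the slides.

```python
import numpy as np

def iterative_minimize(f, grad, x, tol=1e-6, max_iter=100):
    """Generic iterative scheme Xi+1 = Xi + lambda_i* Si of Eq. (1).
    Sketch only: Si is taken as the steepest-descent direction and
    lambda_i* comes from a crude grid-based line search."""
    for _ in range(max_iter):
        S = -grad(x)                      # step 2: direction toward the optimum
        if np.linalg.norm(S) < tol:       # step 5: optimality test
            break
        # step 3: one-dimensional minimization along S (a coarse grid here,
        # standing in for the methods described in the following sections)
        grid = np.linspace(0.0, 1.0, 101)[1:]
        lam = min(grid, key=lambda t: f(x + t * S))
        x = x + lam * S                   # step 4: Eq. (1)
    return x

# Usage on a quadratic with minimum at (1, -2):
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])
print(iterative_minimize(f, grad, np.array([0.0, 0.0])))  # ~ [ 1. -2.]
```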
Steps to perform NT…
If f(X) is the objective function to be minimized, the problem of determining λi* reduces to finding the value λi = λi* that minimizes f(Xi+1) = f(Xi + λi Si) = f(λi) for fixed values of Xi and Si.
Since f becomes a function of the single variable λi only, the methods of finding λi* in Eq. (1) are called one-dimensional minimization methods.
One Dimensional Minimization Methods…
Unimodal Function…
A unimodal function is one that has only one peak (maximum)
or valley (minimum) in a given interval.
Thus a function of one variable is said to be unimodal if, given
that two values of the variable are on the same side of the
optimum, the one nearer the optimum gives the better
functional value (i.e., the smaller value in the case of a
minimization problem).
Unimodal Function…
This can be stated mathematically as follows:
A function f (x) is unimodal if
(i) x1 < x2 < x∗ implies that f (x2) < f (x1), and
(ii) x2 > x1 > x∗ implies that f (x1) < f (x2),
where x∗ is the minimum point.
Unimodal Function…
For example, consider the normalized interval [0, 1] and
two function evaluations within the interval as shown in
Fig.
There are three possible outcomes: f1 < f2, f1 > f2, or f1 = f2.
Unimodal Function…
Outcome f1 < f2:
The minimum x∗ cannot lie to the right of x2. Thus that part of the interval, [x2, 1], can be discarded, and a new smaller interval of uncertainty, [0, x2], remains (Fig. a).
Outcome f(x1) > f(x2):
The interval [0, x1] can be discarded to obtain a new smaller interval of uncertainty, [x1, 1] (Fig. b).
Outcome f(x1) = f(x2):
The intervals [0, x1] and [x2, 1] can both be discarded to obtain the new interval of uncertainty as [x1, x2] (Fig. c).
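The three outcomes translate directly into a small elimination routine. The sketch below is illustrative (the name eliminate is hypothetical) and assumes f is unimodal on [a, b] with a < x1 < x2 < b:

```python
def eliminate(f, a, b, x1, x2):
    """One elimination step for a unimodal f on [a, b], with
    a < x1 < x2 < b.  Returns the reduced interval of uncertainty."""
    f1, f2 = f(x1), f(x2)
    if f1 < f2:        # minimum cannot lie to the right of x2 (Fig. a)
        return (a, x2)
    elif f1 > f2:      # minimum cannot lie to the left of x1 (Fig. b)
        return (x1, b)
    else:              # f1 = f2: keep only the middle portion (Fig. c)
        return (x1, x2)

print(eliminate(lambda x: (x - 0.3) ** 2, 0.0, 1.0, 0.4, 0.6))  # (0.0, 0.6)
```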
Unrestricted Search Method…
This is the most elementary approach: assume a fixed step size and move from an initial guess point in a favorable direction (positive or negative).

The step size used must be small in relation to the final accuracy desired.

Although this method is very simple to implement, it is not efficient in many cases.
Unrestricted Search Method (with fixed step size)…
1. Start with an initial guess point, say, x1 (e.g., x1 = 2.000).
2. Find f1 = f(x1) (say, 5).
3. Assuming a step size s, find x2 = x1 + s (e.g., with s = 0.001, x2 = 2.001).
4. Find f2 = f(x2) (say, 4).
5. If f2 < f1, and if the problem is one of minimization, the assumption of unimodality indicates that the desired minimum cannot lie at x < x1. Hence the search can be continued further along points x3, x4, . . . using the unimodality assumption while testing each pair of experiments. This procedure is continued until a point, xi = x1 + (i − 1)s, shows an increase in the function value.
6. The search is terminated at xi, and either xi−1 or xi can be taken as the optimum point.
7. Originally, if f2 > f1, the search should be carried out in the reverse direction at points x−2, x−3, . . . , where x−j = x1 − (j − 1)s.
8. If f2 = f1, the desired minimum lies between x1 and x2, and the minimum point can be taken as either x1 or x2.
9. If it happens that both f2 and f−2 are greater than f1, the desired minimum lies in the double interval x−2 < x < x2 (see the sketch below).
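A minimal Python sketch of these steps, assuming the fixed-step variant on a unimodal function (the name unrestricted_search and the iteration cap are illustrative):

```python
def unrestricted_search(f, x1, s, max_steps=100000):
    """Unrestricted search with fixed step size s (steps 1-9 above)."""
    f1, f2 = f(x1), f(x1 + s)
    if f2 == f1:                       # step 8: minimum lies in [x1, x1 + s]
        return x1
    if f2 > f1:                        # step 7: search in the reverse direction
        s = -s
        f2 = f(x1 + s)
        if f2 > f1:                    # step 9: minimum inside (x1 - |s|, x1 + |s|)
            return x1
    x, fx = x1 + s, f2
    for _ in range(max_steps):         # step 5: march while f keeps decreasing
        x_new = x + s
        f_new = f(x_new)
        if f_new > fx:                 # step 6: increase found; stop at x
            return x
        x, fx = x_new, f_new
    return x

print(unrestricted_search(lambda x: x * (x - 1.5), 0.0, 0.001))  # ~ 0.75
```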
Find the minimum of f = x(x − 1.5) by starting from 0.0 with an initial step size of 0.05 (with accelerated step size).

Let x1 = 0, s = 0.05, f1 = 0.0.

If we try to start moving in the negative x direction, x−2 = x1 − 0.05 = −0.05 and f−2 = 0.0775. Since f−2 > f1, the assumption of unimodality indicates that the minimum cannot lie toward the left of x−2.

Thus we start moving in the positive x direction and obtain the results reproduced by the sketch below.

From these results, the optimum point can be seen to be xopt ≈ x6 = 0.8. In this case, the points x6 and x7 do not really bracket the minimum point but provide information about it. If a better approximation to the minimum is desired, the procedure can be restarted from x5 with a smaller step size.
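The results table can be regenerated with the short sketch below. It assumes the acceleration rule xi = x1 + 2^(i−2) s for i ≥ 2 (the trial point's distance from x1 doubles each iteration), which is consistent with the value x6 = 0.8 quoted above; the helper name is illustrative.

```python
def accelerated_search(f, x1, s, max_iter=60):
    """Unrestricted search with an accelerated step size.  Assumed rule:
    the (i+1)-th trial point is x1 + 2**(i-1) * s, so the distance
    from x1 doubles every iteration."""
    history = [(x1, f(x1))]
    for i in range(1, max_iter):
        x = x1 + 2 ** (i - 1) * s
        fx = f(x)
        history.append((x, fx))
        if fx > history[-2][1]:        # function value rose: stop marching
            break                      # the last two points flank the rise
    return history

for x, fx in accelerated_search(lambda x: x * (x - 1.5), 0.0, 0.05):
    print(f"x = {x:5.2f}   f = {fx:8.4f}")
# prints x6 = 0.80 as the best point, matching x_opt ~ 0.8 above,
# with x7 = 1.60 showing the increase that terminates the search
```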
Exhaustive Search Method…
The exhaustive search method can be used to solve problems where the interval in which the optimum lies is finite.
Let xs and xf denote, respectively, the starting and final points of the interval of uncertainty.
The exhaustive search method evaluates the objective function at a predetermined number of equally spaced points in the interval (xs, xf).
It then reduces the interval of uncertainty using the assumption of unimodality.
Exhaustive Search Method…
The objective function is defined on the interval [xs, xf] of length L0 = xf − xs.
It is evaluated at eight equally spaced points x1 to x8.
The figure shows that the minimum value lies at x6, between points x5 and x7.
So the interval [x5, x7] is taken as the final interval of uncertainty.
Exhaustive Search Method…
If the function is evaluated at n equally spaced points in the original interval of uncertainty of length L0 = xf − xs, and if the optimum value of the function (among the n function values) lies at point xj, then the final interval of uncertainty is given by

Ln = xj+1 − xj−1 = (2/(n + 1)) L0

The final interval of uncertainty obtainable for different numbers of trials in the exhaustive search method follows from this ratio, Ln/L0 = 2/(n + 1).

Since the function is evaluated at all n points simultaneously, this method can be called a simultaneous search method.
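A sketch of the simultaneous search, assuming n interior points plus the two endpoints (names illustrative). With tied minima, as happens with f(0.7) = f(0.8) in the example that follows, it returns the bracket around whichever tied point the grid scan finds first; either bracket contains the true minimum.

```python
def exhaustive_search(f, xs, xf, n):
    """Exhaustive (simultaneous) search: evaluate f at n equally spaced
    interior points of (xs, xf); by unimodality the minimum lies in the
    two sub-intervals around the best point, of total length 2 L0/(n + 1)."""
    h = (xf - xs) / (n + 1)
    x = [xs + i * h for i in range(n + 2)]            # grid incl. endpoints
    j = min(range(1, n + 1), key=lambda i: f(x[i]))   # best interior point
    return x[j - 1], x[j + 1]                         # final bracket

a, b = exhaustive_search(lambda x: x * (x - 1.5), 0.0, 1.0, 9)
print(a, b)   # a bracket of length 0.2 containing the true minimum x* = 0.75
```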
Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.0) to within 10% of the exact value.

If the middle point of the final interval of uncertainty is taken as the approximate optimum point, the maximum deviation can be 1/(n + 1) times the initial interval of uncertainty. Thus, to find the optimum within 10% of the exact value, we should have

1/(n + 1) ≤ 1/10, i.e., n ≥ 9

By taking n = 9, the function values at x1 = 0.1, x2 = 0.2, . . . , x9 = 0.9 can be calculated as

f = −0.14, −0.26, −0.36, −0.44, −0.50, −0.54, −0.56, −0.56, −0.54

The smallest values occur at the tied points x7 = 0.7 and x8 = 0.8, so the final interval of uncertainty is [x7, x8]. The middle point of this interval, 0.75, gives the optimum approximation.
Dichotomous Search Method…
The exhaustive search method is a simultaneous search method in which all the experiments are conducted before any judgement is made regarding the location of the optimum point.
The dichotomous search method, as well as the Fibonacci and golden section methods discussed in subsequent sections, is a sequential search method in which the result of any experiment influences the location of the subsequent experiment.
In the dichotomous search, two experiments are placed as close as possible to the center of the interval of uncertainty.
Based on the relative values of the objective function at the two points, almost half of the interval of uncertainty is eliminated.
Dichotomous Search Method…
Let the positions of the two experiments be given by

x1 = L0/2 − δ/2
x2 = L0/2 + δ/2

where δ is a small positive number chosen such that the two experiments give significantly different results.
Dichotomous Search Method…
Then the new interval of uncertainty is given by (L0/2 + δ/2).
The next pair of experiments is conducted at the center of the remaining interval of uncertainty.
This results in the reduction of the interval of uncertainty by nearly a factor of two with each pair of experiments.
Dichotomous Search
Example: Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.0) to within 10% of the exact value.

Solution: The ratio of final to initial intervals of uncertainty is given by

Ln/L0 = 1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2))

where δ is a small quantity, say 0.001, and n is the number of experiments. If the middle point of the final interval is taken as the optimum point, the requirement can be stated as

(1/2)(Ln/L0) ≤ 1/10

i.e.

1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2)) ≤ 1/5
Dichotomous Search
Solution: Since δ = 0.001 and L0 = 1.0, we have

1/2^(n/2) + (1/1000)(1 − 1/2^(n/2)) ≤ 1/5

i.e.

(999/1000)(1/2^(n/2)) ≤ 199/1000, or 2^(n/2) ≥ 999/199 ≈ 5.0

Since n has to be even, this inequality gives the minimum admissible value of n as 6. The search is made as follows: the first two experiments are made at

x1 = L0/2 − δ/2 = 0.5 − 0.0005 = 0.4995
x2 = L0/2 + δ/2 = 0.5 + 0.0005 = 0.5005
Dichotomous Search
with the function values given by

f1 = f(x1) = 0.4995(−1.0005) = −0.49975
f2 = f(x2) = 0.5005(−0.9995) = −0.50025

Since f2 < f1, the new interval of uncertainty will be (0.4995, 1.0). The second pair of experiments is conducted at

x3 = 0.4995 + (1.0 − 0.4995)/2 − 0.0005 = 0.74925
x4 = 0.4995 + (1.0 − 0.4995)/2 + 0.0005 = 0.75025

which gives the function values

f3 = f(x3) = 0.74925(−0.75075) = −0.5624994375
f4 = f(x4) = 0.75025(−0.74975) = −0.5624999375
Dichotomous Search
Since f3 > f4, we delete (0.4995, x3) and obtain the new interval of uncertainty as

(x3, 1.0) = (0.74925, 1.0)

The final pair of experiments will be conducted at

x5 = 0.74925 + (1.0 − 0.74925)/2 − 0.0005 = 0.874125
x6 = 0.74925 + (1.0 − 0.74925)/2 + 0.0005 = 0.875125

which gives the function values

f5 = f(x5) = 0.874125(−0.625875) = −0.5470929844
f6 = f(x6) = 0.875125(−0.624875) = −0.5468437342
Dichotomous Search
Since f5 < f6, the new interval of uncertainty is given by (x3, x6) = (0.74925, 0.875125). The middle point of this interval can be taken as the optimum, and hence

xopt ≈ 0.8121875
fopt ≈ −0.5586327148
Dichotomous Search Method…
1. Choose an interval (a, b), a small number δ, and a small number ε (0 < ε < b − a).
2. Calculate the number of iterations required to achieve the desired accuracy: n ≈ ln((b − a)/ε) / ln 2.
3. Set k = 1.
4. Calculate xm = (a + b)/2, x1 = xm − δ/2, and x2 = xm + δ/2; evaluate f(x1) and f(x2).
5. If f(x1) < f(x2), set b = x2; else if f(x1) > f(x2), set a = x1. Set k = k + 1.
6. If b − a < ε or k > n, terminate; else go to step 4 (see the sketch below).
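A direct transcription of these steps into Python might look as follows; it is a sketch that treats the tie f(x1) = f(x2) like the f(x1) > f(x2) case, and the names are illustrative.

```python
import math

def dichotomous_search(f, a, b, delta=1e-3, eps=1e-2):
    """Dichotomous search (steps 1-6 above)."""
    n = math.ceil(math.log((b - a) / eps) / math.log(2))  # step 2
    for _ in range(n):
        xm = (a + b) / 2                     # step 4: centre of the interval
        x1, x2 = xm - delta / 2, xm + delta / 2
        if f(x1) < f(x2):                    # step 5: keep the left part
            b = x2
        else:                                # keep the right part
            a = x1
        if b - a < eps:                      # step 6: converged
            break
    return (a + b) / 2

print(dichotomous_search(lambda x: x * (x - 1.5), 0.0, 1.0))  # ~ 0.75
```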
Fibonacci method…
The Fibonacci method can be used to find the minimum of a function of one variable even if the function is not continuous. This method, like many other elimination methods, has the following limitations:
1. The initial interval of uncertainty, in which the optimum lies, has to be known.
2. The function being optimized has to be unimodal in the initial interval of uncertainty.
3. The exact optimum cannot be located by this method; only an interval known as the final interval of uncertainty will be known.
4. The final interval of uncertainty can be made as small as desired by using more computations.
Fibonacci method…
This method makes use of the sequence of Fibonacci
numbers, {Fn}, for placing the experiments.
These numbers are defined as F0 = F1 = 1
Fn = Fn−1 + Fn−2, n = 2, 3, 4, . . .
which yield the sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, . . .
Fibonacci method…
Leonardo of Pisa (Fibonacci), Italian mathematician.
The sequence models rabbit breeding: a new offspring matures in a certain period, and each mature rabbit delivers an offspring in every such period while remaining mature at the end of the period.

Fi = Fi−1 + Fi−2
Starting interval = 1-4.
Select two points, 2 and 3, symmetrically spaced, so that I3 = 2-3.
Two possibilities:
f2 < f3: 1-3 is the new interval
f2 > f3: 2-4 is the new interval
A new point is introduced such that length 1-3 = length 2-4.
Fibonacci method…
The interval lengths satisfy

I1 = I2 + I3
I2 = I3 + I4
. . .
Ij = Ij+1 + Ij+2

Working backward from the last interval In, with In−1 = 2In:

In−2 = In−1 + In = 3In
In−3 = In−2 + In−1 = 3In + 2In = 5In
In−4 = In−3 + In−2 = 5In + 3In = 8In

In general, In−j = Fj+1 In, j = 1, 2, . . . , n − 1.
Fibonacci method…
Setting j = n − 1 and j = n − 2 in In−j = Fj+1 In gives

I1 = Fn In
I2 = Fn−1 In

Hence In = I1/Fn = I2/Fn−1, so that

I2 = (Fn−1/Fn) I1

From the required reduction I1/In = Fn, the number of iterations n can be evaluated.
Fibonacci method
Procedure:
Let L0 be the initial interval of uncertainty defined by a ≤ x ≤ b, and let n be the total number of experiments to be conducted. Define

L2* = (Fn−2/Fn) L0

and place the first two experiments at points x1 and x2, which are located at a distance of L2* from each end of L0.
Fibonacci method
Procedure:
This gives

x1 = a + L2* = a + (Fn−2/Fn) L0
x2 = b − L2* = a + (Fn−1/Fn) L0

Discard part of the interval by using the unimodality assumption. There then remains a smaller interval of uncertainty L2, given by

L2 = L0 − L2* = L0 (1 − Fn−2/Fn) = (Fn−1/Fn) L0
Fibonacci method
Procedure:
The experiment that remains in L2 will be at a distance of

L2* = (Fn−2/Fn) L0 = (Fn−2/Fn−1) L2

from one end and

L2 − L2* = (Fn−3/Fn) L0 = (Fn−3/Fn−1) L2

from the other end. Now place the third experiment in the interval L2 so that the current two experiments are located at a distance of

L3* = (Fn−3/Fn) L0 = (Fn−3/Fn−1) L2

from each end of L2.
Fibonacci method
Procedure:
This process of discarding a certain interval and placing a new experiment in the remaining interval can be continued, so that the location of the jth experiment and the interval of uncertainty at the end of j experiments are, respectively, given by

Lj* = (Fn−j / Fn−(j−2)) Lj−1
Lj = (Fn−(j−1) / Fn) L0
Fibonacci method
Procedure:
The ratio of the interval of uncertainty remaining after conducting j of the n predetermined experiments to the initial interval of uncertainty becomes

Lj/L0 = Fn−(j−1)/Fn

and for j = n we obtain

Ln/L0 = F1/Fn = 1/Fn
Fibonacci method
The ratio Ln/L0 permits us to determine n, the number of experiments required to achieve any desired accuracy in locating the optimum point. The reduction ratio in the interval of uncertainty obtainable for different numbers of experiments follows directly from Ln/L0 = 1/Fn.
Fibonacci method
Position of the final experiment:
In this method, the last experiment has to be placed with some care. The equation

Lj* = (Fn−j / Fn−(j−2)) Lj−1

gives, for j = n,

Ln*/Ln−1 = F0/F2 = 1/2 for all n

Thus, after conducting n − 1 experiments and discarding the appropriate interval in each step, the remaining interval will contain one experiment precisely at its middle point.
Fibonacci method…
Advantages:
For a large number of trials, this method gives the maximum possible reduction of the interval of uncertainty.
Only one new function evaluation is required in each iteration.
Disadvantages:
The Fibonacci numbers have to be calculated and stored.
The fraction of the region eliminated changes from iteration to iteration.
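A sketch of the Fibonacci procedure, with the assumption that the small tie-breaking shift δ for the final experiment is omitted: the last placement then coincides with the middle point of the final interval, whose length is 2L0/Fn rather than L0/Fn. The name fibonacci_search is illustrative.

```python
def fibonacci_search(f, a, b, n):
    """Fibonacci search with n experiments (procedure above)."""
    F = [1, 1]                                  # F0 = F1 = 1
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    # first two experiments, a distance L2* = (Fn-2/Fn) L0 from each end
    x1 = a + F[n - 2] / F[n] * (b - a)
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(1, n - 1):                   # n - 2 eliminations
        if f1 > f2:                             # minimum lies in (x1, b)
            a, x1, f1 = x1, x2, f2
            x2 = a + F[n - k - 1] / F[n - k] * (b - a)
            f2 = f(x2)
        else:                                   # minimum lies in (a, x2)
            b, x2, f2 = x2, x1, f1
            x1 = a + F[n - k - 2] / F[n - k] * (b - a)
            f1 = f(x1)
    return (a + b) / 2   # the remaining experiment sits at this middle point

print(fibonacci_search(lambda x: x * (x - 1.5), 0.0, 1.0, 10))  # ~ 0.75
```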
Golden Section Method
Golden section search is a technique for finding the maximum or minimum of a strictly unimodal function by successively narrowing the range of values.

It was developed by the American statistician Jack Carl Kiefer in 1956.

The golden section method is the same as the Fibonacci method except that in the Fibonacci method the total number of experiments to be conducted has to be specified before beginning the calculation, whereas this is not required in the golden section method.
Golden Section Method
In the Fibonacci method, the location of the first two experiments is determined by the total number of experiments, n.

In the golden section method, we start with the assumption that we are going to conduct a large number of experiments.

Of course, the total number of experiments can be decided during the computation.
Golden Section Method
The intervals of uncertainty remaining at the end of different numbers of experiments can be computed as follows:

L2 = lim(N→∞) (FN−1/FN) L0

L3 = lim(N→∞) (FN−2/FN) L0 = lim(N→∞) (FN−2/FN−1)(FN−1/FN) L0 ≈ lim(N→∞) (FN−1/FN)² L0

This result can be generalized to obtain

Lk = lim(N→∞) (FN−1/FN)^(k−1) L0
Golden Section Method
The equation

FN/FN−1 = 1 + FN−2/FN−1

can be expressed, writing γ for the limiting ratio FN/FN−1, as

γ = 1 + 1/γ

that is,

γ² − γ − 1 = 0

which can be solved by the quadratic formula.
Golden Section Method
This gives the root γ = 1.618, and hence the equation

Lk = lim(N→∞) (FN−1/FN)^(k−1) L0

yields

Lk = (1/γ)^(k−1) L0 = (0.618)^(k−1) L0

In the equation L3 = lim(N→∞) (FN−1/FN)² L0, the ratios FN−2/FN−1 and FN−1/FN have been taken to be the same for large values of N. The validity of this assumption can be seen from the following table:

N:         2     3      4     5      6       7      8       9       10      ∞
FN−1/FN:   0.5   0.667  0.6   0.625  0.6154  0.619  0.6177  0.6181  0.6180  0.618
Golden Section Method
The ratio γ has a historical background. Ancient Greek architects believed that a building whose sides d and b satisfy the relation

(d + b)/d = d/b = γ

would have the most pleasing properties. It is also found in Euclid's geometry that the division of a line segment into two unequal parts, such that the ratio of the whole to the larger part equals the ratio of the larger part to the smaller, is known as the golden section, or golden mean; hence the term golden section method.
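A sketch of golden section search, using the constant ratio 0.618 in place of the Fibonacci ratios; the tolerance-based stopping rule and the names are assumptions, since the method lets n be decided during the computation.

```python
GOLDEN = 0.6180339887   # limiting ratio FN-1/FN = 1/gamma

def golden_section_search(f, a, b, tol=1e-4):
    """Golden section search: Fibonacci-like elimination with the
    constant ratio 0.618, so n need not be fixed in advance."""
    x1 = b - GOLDEN * (b - a)        # interior points at the 0.382
    x2 = a + GOLDEN * (b - a)        # and 0.618 fractions of (a, b)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:               # Lk = (0.618)**(k-1) * L0
        if f1 > f2:                  # minimum lies in (x1, b)
            a, x1, f1 = x1, x2, f2
            x2 = a + GOLDEN * (b - a)
            f2 = f(x2)
        else:                        # minimum lies in (a, x2)
            b, x2, f2 = x2, x1, f1
            x1 = b - GOLDEN * (b - a)
            f1 = f(x1)
    return (a + b) / 2

print(golden_section_search(lambda x: x * (x - 1.5), 0.0, 1.0))  # ~ 0.75
```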
Interval halving method
In the interval halving method, exactly one half of the current interval of uncertainty is deleted in every stage. It requires three experiments in the first stage and two experiments in each subsequent stage.

The procedure can be described by the following steps:

1. Divide the initial interval of uncertainty L0 = [a, b] into four equal parts and label the middle point x0 and the quarter-interval points x1 and x2.
2. Evaluate the function f(x) at the three interior points to obtain f1 = f(x1), f0 = f(x0), and f2 = f(x2).
Interval halving method
(cont'd)
3. (a) If f1 < f0 < f2 as shown in the figure, delete the interval (x0, b), label x1 and x0 as the new x0 and b, respectively, and go to step 4.
(b) If f2 < f0 < f1 as shown in the figure, delete the interval (a, x0), label x2 and x0 as the new x0 and a, respectively, and go to step 4.
(c) If f0 < f1 and f0 < f2 as shown in the figure, delete both the intervals (a, x1) and (x2, b), label x1 and x2 as the new a and b, respectively, and go to step 4.
Interval halving method
(cont'd)
4. Test whether the new interval of uncertainty, L = b − a, satisfies the convergence criterion L ≤ ε, where ε is a small quantity. If the convergence criterion is satisfied, stop the procedure. Otherwise, set the new L0 = L and go to step 1.

Remarks
1. In this method, the function value at the middle point of the interval of uncertainty, f0, will be available in all the stages except the first stage.
Interval halving method
(cont'd)
Remarks
2. The interval of uncertainty remaining at the end of n experiments (n ≥ 3 and odd) is given by

Ln = (1/2)^((n−1)/2) L0
Example
Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.0) to within 10% of the exact value.

Solution: If the middle point of the final interval of uncertainty is taken as the optimum point, the specified accuracy can be achieved if

(1/2) Ln ≤ L0/10, or (1/2)^((n−1)/2) L0 ≤ L0/5    (E1)

Since L0 = 1, Eq. (E1) gives

1/2^((n−1)/2) ≤ 1/5, or 2^((n−1)/2) ≥ 5    (E2)
Example
Solution: Since n has to be odd, inequality (E2) gives the minimum permissible value of n as 7. With n = 7, the search is conducted as follows. The first three experiments are placed at the quarter points of the interval L0 = [a = 0, b = 1]:

x1 = 0.25, f1 = 0.25(−1.25) = −0.3125
x0 = 0.50, f0 = 0.50(−1.00) = −0.5000
x2 = 0.75, f2 = 0.75(−0.75) = −0.5625

Since f2 < f0 < f1, we delete the interval (a, x0) = (0.0, 0.5) and label x2 and x0 as the new x0 and a, so that a = 0.5, x0 = 0.75, and b = 1.0. By dividing the new interval of uncertainty, L3 = (0.5, 1.0), into four equal parts, we obtain:

x1 = 0.625, f1 = 0.625(−0.875) = −0.546875
x0 = 0.750, f0 = 0.750(−0.750) = −0.562500
x2 = 0.875, f2 = 0.875(−0.625) = −0.546875
Example
Solution: Since f1 > f0 and f2 > f0, we delete both the intervals (a, x1) and (x2, b), and label x1, x0, and x2 as the new a, x0, and b, respectively. Thus, the new interval of uncertainty is L5 = (0.625, 0.875). Next, this interval is divided into four equal parts to obtain:

x1 = 0.6875, f1 = 0.6875(−0.8125) = −0.558594
x0 = 0.7500, f0 = 0.7500(−0.7500) = −0.562500
x2 = 0.8125, f2 = 0.8125(−0.6875) = −0.558594

Again we note that f1 > f0 and f2 > f0, and hence we delete both the intervals (a, x1) and (x2, b) to obtain the new interval of uncertainty as L7 = (0.6875, 0.8125). Taking the middle point of this interval (L7) as the optimum, we obtain:

xopt ≈ 0.75 and fopt ≈ −0.5625

This solution happens to be the exact solution in this case.
Comparison of elimination methods
The efficiency of an elimination method can be measured in terms of the ratio of the final and the initial intervals of uncertainty, Ln/L0.
The values of this ratio achieved by the various methods for a specified number of experiments (n = 5 and n = 10) can be compared; see the sketch below.
It can be seen that the Fibonacci method is the most efficient, followed by the golden section method, in reducing the interval of uncertainty.
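The comparison can be regenerated from the reduction formulas derived in the preceding sections; the sketch below tabulates Ln/L0 for n = 5 and n = 10 (the δ term is dropped for the dichotomous method, and the dichotomous and interval-halving formulas strictly need even and odd n, respectively, so the off-parity entries are only indicative).

```python
def ratios(n):
    """Ln/L0 after n experiments for each elimination method (a sketch
    built from the formulas derived in the sections above)."""
    F = [1, 1]                                        # F0 = F1 = 1
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    return {
        "exhaustive":       2 / (n + 1),
        "dichotomous":      (1 / 2) ** (n / 2),
        "interval halving": (1 / 2) ** ((n - 1) / 2),
        "golden section":   0.6180339887 ** (n - 1),
        "Fibonacci":        1 / F[n],
    }

for n in (5, 10):
    print(n, {k: round(v, 5) for k, v in ratios(n).items()})
# n = 5 : exhaustive 0.33333, dichotomous 0.17678, halving 0.25,
#         golden 0.1459, Fibonacci 0.125
# n = 10: exhaustive 0.18182, dichotomous 0.03125, halving 0.04419,
#         golden 0.01316, Fibonacci 0.01124
```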