Chapter (2)

Open Methods

These methods are based on formulas that require only one or two starting values of x, which may lie on the same side of the root. As a result, they sometimes diverge, moving away from the true root as the computations progress. However, when the open methods converge, they usually do so much more quickly than the bracketing methods, such as the bisection and false position methods.
Fixed-point iteration method
The method known as fixed-point iteration [we also call it the x = g(x) method] can be a useful way to find a root of f(x) = 0. To use the method, we rearrange f(x) = 0 into an equivalent form x = g(x), which usually can be done in several ways. Observe that if f(p) = 0, where p is a root of f(x), it follows that p = g(p).
The meaning of 𝒑 = 𝒈(𝒑)
Example 1:
The function g(x) = x² − 2, for −2 ≤ x ≤ 3, has fixed points at x = −1 and x = 2, since
when x = −1: g(−1) = (−1)² − 2 = −1,
and when x = 2: g(2) = 2² − 2 = 2.
Definition: Whenever we have p = g(p), p is said to be a fixed point of the function g.
Under suitable conditions, the iterative form
x_{i+1} = g(x_i), i = 0, 1, 2, …,
where x0 is a starting value, generates the sequence
x1 = g(x0)
x2 = g(x1)
⋮
x_{i+1} = g(x_i),
which converges to the fixed point p, a root of f(x).
Note: This transformation can be done either by algebraic manipulation
(e.g., x³ − 2x + 3 = 0 ⇒ x = (x³ + 3)/2)
or by simply adding x to both sides of the equation (e.g., sin x = 0 ⇒ x = sin x + x):
sin x = 0
x + sin x = 0 + x
x = x + sin(x)
and we have x = g(x), so g(x) = x + sin(x).
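To make the idea concrete, here is a minimal Python sketch (an illustration added here, not part of the original notes) that iterates this rearrangement g(x) = x + sin(x); the starting value x0 = 3 is an assumed guess near the root π of sin x = 0:

```python
import math

# Fixed-point iteration for g(x) = x + sin(x), a rearrangement of sin(x) = 0.
# The starting guess x0 = 3 is an assumed value chosen near the root pi.
x = 3.0
for i in range(5):
    x = x + math.sin(x)   # x_{i+1} = g(x_i)
    print(i + 1, x)       # the values approach pi = 3.14159...
```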
Graphs (figures omitted).
Example 2: Let f(x) = x² − 2x − 3 = 0, which has roots at x = 3 and x = −1. Find some g functions.
Solution: Suppose we rearrange to give this equivalent form:
x² = 2x + 3
x = √(2x + 3)
x = g1(x) ⟹ g1(x) = √(2x + 3)
Then x_{i+1} = g1(x_i) = √(2x_i + 3) for i = 0, 1, 2, …
Let x0 = 4 and iterate with fixed point:
x1 = g1(x0) = √(2(4) + 3) = √11 = 3.3166
x2 = g1(x1) = √(2(3.3166) + 3) = 3.1037
x3 = g1(x2) = √(2(3.1037) + 3) = 3.0344
x4 = 3.0114
x5 = 3.0038
and it appears that the values are converging to the root at x = 3.
Another rearrangement of f(x) = x² − 2x − 3 = 0 is:
x² − 2x = 3 ⟹ x(x − 2) = 3
x = 3/(x − 2), so we have x = g2(x) with
g2(x) = 3/(x − 2), so x_{i+1} = g2(x_i) = 3/(x_i − 2) for i = 0, 1, 2, …
Let us start the iteration again with x0 = 4; then
x1 = g2(x0) = 3/(4 − 2) = 1.5
x2 = g2(x1) = 3/(1.5 − 2) = −6
x3 = g2(x2) = −0.375
x4 = −1.263
x5 = −0.919
x6 = −1.028
x7 = −0.991
x8 = −1.003
and it seems that we now converge to the other root, at x = −1.
Consider the third rearrangement of x² − 2x − 3 = 0:
2x = x² − 3
x = (x² − 3)/2 ⟹ x = g3(x)
g3(x) = (x² − 3)/2, then
x_{i+1} = g3(x_i) = (x_i² − 3)/2, i = 0, 1, 2, …
Starting again with x0 = 4, we get:
x1 = g3(x0) = (16 − 3)/2 = 6.5
x2 = g3(x1) = (6.5² − 3)/2 = 19.625
x3 = g3(x2) = 191.07
which is divergent.
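The three rearrangements can be compared numerically. The following is a minimal Python sketch (an illustration, not part of the original notes); the names g1, g2, g3 simply label the three g functions above, and x0 = 4 is the same starting value used in the example:

```python
import math

# Three rearrangements of x^2 - 2x - 3 = 0 into the form x = g(x)
g1 = lambda x: math.sqrt(2 * x + 3)   # converges to the root x = 3
g2 = lambda x: 3 / (x - 2)            # converges to the root x = -1
g3 = lambda x: (x ** 2 - 3) / 2       # diverges from x0 = 4

for name, g in [("g1", g1), ("g2", g2), ("g3", g3)]:
    x = 4.0                           # same starting value as in the example
    for _ in range(5):
        x = g(x)                      # x_{i+1} = g(x_i)
    print(name, "after 5 iterations:", x)
```

Running it shows g1 settling near 3 and g2 heading toward −1, while g3 grows without bound, matching the hand computations above.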
Theorem:
a) If 𝑔 ∈ 𝐶[𝑎, 𝑏] and 𝑔(𝑥) ∈ [𝑎, 𝑏] for all 𝑥 ∈ [𝑎, 𝑏], then g has a fixed point in [𝑎, 𝑏].
b) If, in addition, 𝑔′ (𝑥) exists on (𝑎, 𝑏) and a positive constant 𝑘 < 1 exists with
|𝑔′ (𝑥)| ≤ 𝑘, for all 𝑥 ∈ (𝑎, 𝑏),
then the fixed point in [𝑎, 𝑏] is unique.
In Example 2 we have:
g1(x) = √(2x + 3), then
g1′(x) = 1/√(2x + 3) ⟹ |g1′(x0)| = 1/√(2(4) + 3) = 1/√11 = 0.3015 < 1: convergent.
g2(x) = 3/(x − 2), then
g2′(x) = −3/(x − 2)² ⟹ |g2′(x0)| = 3/(4 − 2)² = 3/4 = 0.75 < 1: convergent.
And g3(x) = (x² − 3)/2, then
g3′(x) = x ⟹ |g3′(x0)| = |x0| = 4 > 1: divergent.
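As a quick numerical check of the theorem's condition |g′(x0)| < 1, here is a small Python sketch with the derivatives typed in by hand (again an added illustration, under the assumption x0 = 4):

```python
import math

x0 = 4.0
# |g'(x0)| for each rearrangement; a value below 1 indicates convergence near x0
dg1 = abs(1 / math.sqrt(2 * x0 + 3))   # 1/sqrt(11) ~ 0.3015 < 1
dg2 = abs(-3 / (x0 - 2) ** 2)          # 3/4 = 0.75 < 1
dg3 = abs(x0)                          # 4 > 1, so g3 diverges
print(dg1, dg2, dg3)
```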
Example 3:
f(x) = x − e^(1/x), with initial value x0 = 1.5.
x = e^(1/x) ⟹ g(x) = e^(1/x)
g′(x) = −e^(1/x)/x² ⟹ g′(x0) = −e^(1/1.5)/(1.5)² = −0.8657, |−0.8657| = 0.8657 < 1,
so it is convergent.
Algorithm for fixed-point iteration:
To determine a root of f(x) = 0, given an initial approximation x0, rearrange the equation f(x) = 0 into an equivalent form x = g(x); a code sketch follows the steps below.
1. Input the function g(x), the initial approximation x0, and the tolerance ϵ.
2. Set i = 1.
3. Compute x = g(x0).
4. If |x − x0| ≤ ϵ, then x is the root and stop; else set i = i + 1.
5. Set x0 = x and return to step 3.
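A minimal Python sketch of these steps is shown below; the function name fixed_point and the max_iter safeguard are assumptions added for illustration, not part of the notes:

```python
def fixed_point(g, x0, eps, max_iter=100):
    """Iterate x = g(x) until |x_new - x_old| <= eps (steps 1-5 above)."""
    for i in range(1, max_iter + 1):   # step 2: i = 1, 2, ...
        x = g(x0)                      # step 3: x = g(x0)
        if abs(x - x0) <= eps:         # step 4: stopping test
            return x, i                # x is accepted as the root
        x0 = x                         # step 5: set x0 = x and repeat
    raise RuntimeError("fixed-point iteration did not converge")
```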
Example 4: Find a root of f(x) = x − (1/2)(sin x + cos x) = 0 using fixed-point iteration, starting at x0 = 0, to an accuracy of 1 × 10^(−10).
Solution:
x − (1/2)(sin x + cos x) = 0
x = (1/2)(sin x + cos x)
g(x) = (1/2)(sin x + cos x)
and g′(x) = (1/2)(cos x − sin x),
so |g′(x0)| = (1/2)|cos x0 − sin x0| = (1/2)|cos 0 − sin 0| = 1/2 < 1: convergent.
x_{i+1} = g(x_i) = (1/2)(sin x_i + cos x_i), i = 0, 1, 2, …
With the initial guess x0 = 0, we get:
x1 = g(x0) = (1/2)(sin 0 + cos 0) = 1/2, |x1 − x0| = |1/2 − 0| = 1/2 ≮ ϵ
x2 = g(x1) = (1/2)(sin 0.5 + cos 0.5) = 0.6785040502, |x2 − x1| ≮ ϵ
x3 = g(x2) = 0.7030708011, |x3 − x2| ≮ ϵ
x4 = 0.7047118221, |x4 − x3| ≮ ϵ
x5 = 0.7048062961, |x5 − x4| ≮ ϵ
⋮
x9 = 0.7048120019
x10 = 0.7048120020
|x10 − x9| < ϵ, so x10 is the root of f(x), i.e., x = g(x) at x10.
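Assuming the fixed_point sketch given earlier, Example 4 can be reproduced roughly as follows; the printed value should agree with x ≈ 0.7048120020:

```python
import math

g = lambda x: 0.5 * (math.sin(x) + math.cos(x))
root, iterations = fixed_point(g, x0=0.0, eps=1e-10)
print(root, iterations)   # about 0.7048120020, reached in roughly 10 iterations
```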
Newton-Raphson Method
Also called Newton's method, this is a simple and fast method for finding a root of f(x).
This method is based on the Taylor series expansion:
f(x) = f(x0) + (x − x0) f′(x0) + ((x − x0)²/2!) f″(x0) + ((x − x0)³/3!) f‴(x0) + … + ((x − x0)^n/n!) f^(n)(x0). ……(1)

Let f ∈ C²[a, b] and let x0 ∈ [a, b] be an approximation to the root p of f(x) such that
f′(p) ≠ 0 and |x0 − p| is small.
Let p = x0 + h. ……(2)
Put (2) in (1):
f(p) = f(x0) + h f′(x0) + (h²/2!) f″(x0) + … + (h^n/n!) f^(n)(x0).
Since p is the root, f(p) = 0, and since h is small, h² and higher powers are much smaller, so we neglect them:
f(p) ≈ f(x0) + h f′(x0)
0 = f(x0) + h f′(x0)
h = −f(x0)/f′(x0);  put this in (2):
p ≈ x0 − f(x0)/f′(x0).  Set x1 = p, so
x1 = x0 − f(x0)/f′(x0);  we start with x0 and generate x1.
Continuing this process, we can generate x2, x3, …, xn.
The Newton-Raphson rule will be:
x_{i+1} = x_i − f(x_i)/f′(x_i)
This process will stop when |x_{i+1} − x_i| ≤ ϵ.
Derivation of the Newton-Raphson Method
To find the Newton’s formula, we begin with an initial guess 𝑥0 , sufficiently close to the root
𝑝. The next approximation 𝑥1 is given by the point at which the tangent line to 𝑦 = 𝑓(𝑥) at
(𝑥0 , 𝑓(𝑥0 )) crosses the 𝑥 −axis. It is clear that the value 𝑥1 is much closer to 𝑝 than the
original guess 𝑥0 .
The slope of the tangent line at (x0, f(x0)) is
m = (y − y0)/(x − x0)
f′(x0) = (y − f(x0))/(x − x0)
f′(x0)(x − x0) = y − f(x0)
If x1 denotes the point where this line intersects the x-axis, then y = 0 at x = x1, that is
f′(x0)(x1 − x0) = −f(x0)
x1 − x0 = −f(x0)/f′(x0)
x1 = x0 − f(x0)/f′(x0)
By repeating the process using the tangent line at (x1, f(x1)), we obtain for x2
x2 = x1 − f(x1)/f′(x1)
By writing the preceding equation in more general terms, one gets the equation:
x_{i+1} = x_i − f(x_i)/f′(x_i), i = 0, 1, 2, …
The steps of the algorithm of the Newton-Raphson method (a code sketch follows the steps):
1. Input the functions f(x), f′(x), the value of the initial approximation x0, and the tolerance ϵ.
2. Set i = 1.
3. Compute x = x0 − f(x0)/f′(x0).
4. If |x − x0| ≤ ϵ, then x is the root and stop; else set i = i + 1.
5. Set x0 = x and return to step 3.
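A minimal Python sketch of these steps (the function name newton_raphson and the max_iter safeguard are assumptions added for illustration):

```python
def newton_raphson(f, df, x0, eps, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i); stop when |x_{i+1} - x_i| <= eps."""
    for i in range(1, max_iter + 1):   # step 2: i = 1, 2, ...
        x = x0 - f(x0) / df(x0)        # step 3: Newton update
        if abs(x - x0) <= eps:         # step 4: stopping test
            return x, i
        x0 = x                         # step 5: set x0 = x and repeat
    raise RuntimeError("Newton-Raphson did not converge")
```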
Example: Find a root of e^(−x) cos x = 0 using the Newton-Raphson method, starting at x0 = 0, to an accuracy of 0.001.
Solution:
f(x) = e^(−x) cos x
f′(x) = −e^(−x) sin x − e^(−x) cos x = −e^(−x)(sin x + cos x), x0 = 0 and ϵ = 0.001.
So Newton's formula is: x_{i+1} = x_i − f(x_i)/f′(x_i), i = 0, 1, 2, …, n.
The 1st iteration:
x1 = x0 − f(x0)/f′(x0) = 0 − (e^0 cos 0)/(−e^0 (sin 0 + cos 0)) = 1
|x1 − x0| ≮ ϵ,
then the 2nd iteration:
x2 = x1 − f(x1)/f′(x1) = 1 − (e^(−1) cos 1)/(−e^(−1)(sin 1 + cos 1)) = 1.3910
|x2 − x1| = 0.3910 ≮ ϵ,
so the 3rd iteration will be:
x3 = x2 − f(x2)/f′(x2) = 1.3910 − (e^(−1.3910) cos 1.3910)/(−e^(−1.3910)(sin 1.3910 + cos 1.3910)) = 1.5448
|x3 − x2| ≮ ϵ, so
the 4th iteration is:
x4 = x3 − f(x3)/f′(x3) = 1.5701
|x4 − x3| ≮ ϵ, then the 5th iteration is:
x5 = x4 − f(x4)/f′(x4) = 1.5708
|x5 − x4| < ϵ, so x5 is the root of f(x).
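Assuming the newton_raphson sketch above, this example can be checked numerically; the iteration should settle at x ≈ 1.5708, i.e. π/2, where cos x = 0:

```python
import math

f  = lambda x: math.exp(-x) * math.cos(x)
df = lambda x: -math.exp(-x) * (math.sin(x) + math.cos(x))
root, iterations = newton_raphson(f, df, x0=0.0, eps=0.001)
print(root, iterations)   # about 1.5708 (= pi/2) after 5 iterations
```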
Secant Method
One limitation of Newton’s method is that it requires computing the derivative of 𝑓 at each
iteration. So one alternative is to approximate the derivative using the secant line of the curve,
a line passing through (or interpolating) two points on the function. As these two points
iteratively become closer to the root and to each other, the secant line will become an
approximation of the tangent near the root and this secant method will approximate Newton’s
method. If x_i and x_{i-1} are two good approximations to the solution of the equation f(x) = 0, then
f′(x_i) ≅ (f(x_i) − f(x_{i-1}))/(x_i − x_{i-1});  put this in Newton's formula:
x_{i+1} = x_i − f(x_i) / [(f(x_i) − f(x_{i-1}))/(x_i − x_{i-1})]
x_{i+1} = x_i − f(x_i)(x_i − x_{i-1})/(f(x_i) − f(x_{i-1})), i = 1, 2, 3, …
So the secant method is another open method to solve f(x) = 0. We start with two initial points x0 and x1, locate the points (x0, f(x0)) and (x1, f(x1)) on the curve, and draw the secant line connecting them. The x-intercept of this secant line is x2. Next, use x1 and x2 to define a secant line and let the x-intercept of this line be x3. Continue the process until the sequence converges to the root.
x_{i+1} = x_i − f(x_i)(x_i − x_{i-1})/(f(x_i) − f(x_{i-1})),
where i = 1, 2, 3, … and x0, x1 are the initial points.
The steps of the algorithm of the secant method (a code sketch follows the steps):
1. Input the function f, the values of the initial approximations x0, x1, and the tolerance ϵ.
2. Set i = 1 and compute f(x0), f(x1).
3. Compute x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0)) and f(x2).
4. If |x2 − x1| ≤ ϵ, then x2 is the root and stop; else set i = i + 1.
5. Set x0 = x1, x1 = x2, f(x0) = f(x1), f(x1) = f(x2), and return to step 3.
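A minimal Python sketch of these steps (the function name secant and the max_iter safeguard are assumptions added for illustration):

```python
def secant(f, x0, x1, eps, max_iter=50):
    """Secant method: x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    f0, f1 = f(x0), f(x1)                      # step 2: initial function values
    for i in range(1, max_iter + 1):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # step 3: secant update
        if abs(x2 - x1) <= eps:                # step 4: stopping test
            return x2, i
        x0, x1 = x1, x2                        # step 5: shift the two points
        f0, f1 = f1, f(x2)
    raise RuntimeError("secant method did not converge")
```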
Example: Find a root of f(x) = x e^x − 1 using the secant method, accurate to 0.002, with initial approximations x0 = 0 and x1 = 1.
Solution:
f(x) = x e^x − 1, x0 = 0, x1 = 1 and ϵ = 0.002.
f(x0) = f(0) = −1, f(x1) = f(1) = 1.718.
The secant formula is x_{i+1} = x_i − f(x_i)(x_i − x_{i-1})/(f(x_i) − f(x_{i-1})), i = 1, 2, 3, …
For i = 1:
x2 = x1 − f(x1)(x1 − x0)/(f(x1) − f(x0)) = 1 − 1.718(1 − 0)/(1.718 + 1) = 0.368
|x2 − x1| = |0.368 − 1| = 0.632 ≮ ϵ
i = i + 1 = 2:
x3 = x2 − f(x2)(x2 − x1)/(f(x2) − f(x1)) = 0.368 − (−0.468)(0.368 − 1)/(−0.468 − 1.718) = 0.503
|x3 − x2| ≮ ϵ
i = 3:
x4 = 0.5786, |x4 − x3| ≮ ϵ
i = 4:
x5 = 0.5665, |x5 − x4| ≮ ϵ
i = 5:
x6 = 0.5671, |x6 − x5| < ϵ
So when i = 5, |x_{i+1} − x_i| = |x6 − x5| < ϵ, and
x6 is the root of f(x).
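Assuming the secant sketch above, this example can be reproduced as follows; the root of x e^x − 1 = 0 is approximately 0.5671:

```python
import math

f = lambda x: x * math.exp(x) - 1
root, iterations = secant(f, x0=0.0, x1=1.0, eps=0.002)
print(root, iterations)   # about 0.5671 after 5 iterations
```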
