Nonlinear Equation Root-Finding Methods
Computational Methods CNCS-MU

CHAPTER 2: Nonlinear equations

2.1 Introduction
In this chapter we will learn methods for approximating solutions of nonlinear algebraic
equations. We will limit our attention to the case of finding roots of a single equation of one
variable. Thus, given a function f(x), we will be interested in finding point(s) x for which
f(x) = 0. A classical example that we are all familiar with is the case in which f(x) is a quadratic,
for which there is a formula that finds all of its roots. Formulas also exist for finding the roots
of polynomials of degree 3 and 4, but these are rather complex. In more general cases, when
f(x) is a polynomial of degree 5 or higher, formulas for the roots no longer exist. Of course, there
is no reason to limit ourselves to polynomials, and in most cases, when f(x) is an arbitrary
function, there are no analytic tools for calculating the desired roots. Instead, we must use
approximation methods.
In fact, even in cases in which exact formulas are available (such as with polynomials of degree
3 or 4), an exact formula might be too complex to be used in practice, while approximation
methods may quickly provide an accurate solution. An equation f(x) = 0 may or may not have
solutions. We are not going to focus on methods for deciding whether an equation has a
solution; rather, we will look for approximation methods assuming that solutions actually
exist. We will also assume that we are looking only for real roots. There are extensions of some
of the methods we will describe to the case of complex roots, but we will not deal with that
case in this chapter. We will also not deal with general methods for finding all the solutions of a
given equation; rather, we will focus on approximating one of the solutions. The methods that
we will describe all belong to the category of iterative methods. Such methods typically
start with an initial guess of the root (or of the neighborhood of the root) and gradually
attempt to approach the root. In some cases, the sequence of iterations converges to a limit, in
which case we then ask whether the limit point is actually a solution of the equation. If it is,
another question of interest is how fast the method converges to the solution.

Tuemay T (lecturer In Mathematics) [Link]@[Link] Page 1



We consider methods for determining the roots of nonlinear equations of the form

f(x) = 0 ---------(*)

where f(x) may be given explicitly as a polynomial of degree n, in which case (*) is called an
algebraic equation, or it may be given as a transcendental equation.
There are two types of methods for finding roots of equation (*):
1 Direct methods: methods which give exact values of the roots in a finite number of steps.
These methods determine all the roots at the same time and are called analytical methods.
Their limitation is that they fail to solve most real-world problems.
2 Indirect methods/iterative methods
These are based on the idea of successive approximations, starting with one approximation to the
root. These methods determine only one root at a time. They can solve almost all
problems, but they need computers to produce accurate solutions.
Theorem 1 (Intermediate Value Theorem)
If f is a continuous function for all x in the closed interval [a,b] and w is between f(a) and f(b),
then there is a number c in (a,b) such that f(c)=w.
Graphically,

Figure 2.1: Intermediate Value Theorem


Corollary: If a function f(x) is continuous on the interval [a,b] and f(a) and f(b) are opposite
in sign, that is, f(a)*f(b) < 0, then there exists at least one real number c in (a,b) such that
f(c) = 0.
This means that if, in the graph above, the horizontal line passing through w is taken as the x-
axis, then f(a) is negative and f(b) is positive, so if f is continuous on (a,b) we can
conclude that the graph of f must cross the horizontal line, which in our case is the x-
axis. In the succeeding sections we will make use of this theorem to derive the different iterative
methods.
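The corollary translates directly into a simple sign-change test. The snippet below is a minimal sketch; the sample function f(x) = x^3 - x - 1 is chosen here only for illustration and is not part of the corollary.

```python
def brackets_root(f, a, b):
    # By the corollary: if f is continuous and f(a)*f(b) < 0,
    # then [a, b] contains at least one real root of f.
    return f(a) * f(b) < 0

# Sample function (illustrative): f(x) = x^3 - x - 1, whose real root lies in (1, 2).
f = lambda x: x**3 - x - 1

print(brackets_root(f, 1.0, 2.0))  # True: f(1) = -1 and f(2) = 23 - 1 = 5 differ in sign
print(brackets_root(f, 2.0, 3.0))  # False: f is positive at both endpoints
```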


2.2 Bisection method


This method is based on repeated application of the intermediate value theorem above.
Let f be a function defined on an interval [a,b] such that f(a) is negative and f(b) is positive;

then there is a root in (a,b), and we let the first approximation to the root be m1 = (a+b)/2.

Now if f(m1) is zero, then we conclude m1 is the exact root; otherwise, the root lies either
between a and m1 or between m1 and b, depending on whether f(m1) is positive or negative, respectively.

We take the new interval [a1, b1], whose length is (b-a)/2. As before, this new interval is bisected at

m2 = (a1+b1)/2. The next interval [a2, b2] will be of length (b-a)/4. We repeat the

process until the latest interval which contains the root is as small as desired.

Graphically,

Figure 2.2: Bisection method


Pseudocode Algorithm
Step-1: Take an interval [a,b], a function f(x), and an error tolerance tol.

Step-2: If f(a) < 0 and f(b) > 0, then set m = (a+b)/2.

Otherwise, go to Step 1.
Step-3: If f(m) = 0, display "m is the exact root";
else if f(m) < 0, set a = m;
else, set b = m.
Step-4: If |b - a| < tol, display "m is the approximate root".
Go to Step 5.
Otherwise, go to Step 2.
Step-5: Stop.
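The steps above can be sketched in Python as follows. This is a minimal illustration; the sample function and tolerance in the usage line are assumptions, not part of the algorithm.

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping the half on which f changes sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must be opposite in sign")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        if f(m) == 0.0 or (b - a) / 2.0 < tol:
            return m                     # exact root found, or interval small enough
        if f(a) * f(m) < 0:
            b = m                        # root lies between a and m
        else:
            a = m                        # root lies between m and b
    return (a + b) / 2.0

# Sample use (illustrative): f(x) = x^3 - x - 1 has a root near 1.32472 in [1, 2].
root = bisection(lambda x: x**3 - x - 1, 1.0, 2.0)
```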


There are several advantages to the Bisection method. It is guaranteed to converge: the method
always reaches an answer, provided a root was bracketed in the interval (a, b) to start with.
Its disadvantage is that it generally converges more slowly than most other methods; for
functions f(x) that have a continuous derivative, other methods are usually faster.
Example 2.1
Use the Bisection method to find a root of the equation f(x) = 0 on the interval [2,3],
accurate to three decimal places.
Solution
Here, f(2) < 0
and f(3) > 0.
Hence, by the intermediate value theorem above, a root lies between 2 and 3.
Let the first approximation be m1 = (2+3)/2 = 2.5, with tol = (3-2)/2 = 0.5.
f(2.5) = -3.25 < 0, so set a = m1 = 2.5.

Then m2 = (2.5+3)/2 = 2.75, tol = (3-2.5)/2 = 0.25.

f(2.75) = 0.84688 > 0, so set b = m2 = 2.75.

Then m3 = (2.5+2.75)/2 = 2.625, tol = (2.75-2.5)/2 = 0.125.

And so on. The successive approximations are tabulated below.


n    a        b        m        tol      f(m)
1    2        3        2.5      0.5      -3.25
2    2.5      3        2.75     0.25     0.84688
3    2.5      2.75     2.625    0.125    -1.36211
4    2.625    2.75     2.6875   0.0625   -0.28911
5    2.6875   2.75     2.71875  0.03125  0.27092
6    2.6875   2.71875  2.70313  0.01563  -0.01108
7    2.70313  2.71875  2.71094  0.00781  0.12942
8    2.71094  2.71875  2.71484  0.00391  0.20005
9    2.71094  2.71484  2.71289  0.00195  0.16470
10   2.71094  2.71289  2.71191  0.00098  0.14706
11   2.71094  2.71191  2.71143  0.00049  0.13824

Hence, the root of f(x) = 0 is 2.71143, accurate to three decimal places.


Exercise: Use the bisection method to find a root of the equation f(x) = 0 on
[0,2] with the given error tolerance. Also find the number of function evaluations needed to
guarantee this accuracy.

2.3 Interpolation and secant methods

2.3.1 Interpolation/regula falsi method


In this method we choose two points a and b such that f(a) and f(b) are opposite in sign; hence a
root of f(x) must lie between a and b.
Now, the equation of the chord joining the two points (a, f(a)) and (b, f(b)) is given by

y - f(a) = [(f(b) - f(a))/(b - a)] (x - a) --------------------------------------------------(2.1)

This method consists of replacing the part of the curve between the points (a, f(a)) and (b, f(b))
by the chord, and taking the intersection of the chord with the x-axis as an approximation to the root.
The point of intersection in this case is obtained by putting y = 0 in (2.1) above; thus we obtain

x1 = a - f(a)(b - a)/(f(b) - f(a)) -----------------------------------------------------------(2.2)

which is the first approximation to the root of f(x) = 0.


If now f(x1) and f(a) are opposite in sign, then the root lies between a and x1, and we replace b
by x1; otherwise, the root lies between x1 and b, and we replace a by x1 in the next
approximation. The procedure is repeated till we get a root of the desired accuracy.
Geometrically,
Geometrically,


Figure 2.3: Method of false position


Algorithm for the method of false position
1. Define the first interval (a,b) such that a solution exists between them, i.e. f(a)*f(b) < 0.
2. Compute the first estimate x1 of the numerical solution using equation (2.2) above.
3. Find out whether the actual solution is between a and x1 or between x1 and b. This is
accomplished by checking the sign of the product f(a)*f(x1):
- If f(a)*f(x1) < 0, the solution is between a and x1.
- If f(a)*f(x1) > 0, the solution is between x1 and b.
4. Select the subinterval that contains the solution (a to x1, or x1 to b) as the new interval (a,b).
- If |a - b| < tol, x1 is the approximate solution; go to 5.

- Otherwise, go to step 2 and repeat the process.


5. Stop.
This method is very similar to the Bisection method, except that it uses a different strategy to arrive
at its new root estimate. Rather than bisecting the interval (a, b), it locates the root by joining
f(a) and f(b) with a straight line. The intersection of this line with the x-axis represents an
improved estimate of the root.
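The algorithm above can be sketched in Python as follows. This is a minimal sketch; the sample function, tolerance, and iteration cap are illustrative assumptions.

```python
def false_position(f, a, b, tol=1e-9, max_iter=200):
    """Regula falsi: take the x-intercept of the chord through (a, f(a)), (b, f(b))
    as the new estimate (eq. 2.2), then keep the subinterval with the sign change."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must be opposite in sign")
    x = a
    for _ in range(max_iter):
        x_old = x
        x = a - f(a) * (b - a) / (f(b) - f(a))   # eq. (2.2): chord meets the x-axis
        if f(a) * f(x) < 0:
            b = x                                 # solution is between a and x
        else:
            a = x                                 # solution is between x and b
        if abs(x - x_old) < tol:
            return x
    return x

# Sample use (illustrative): f(x) = x^3 - x - 1, root near 1.32472 in [1, 2].
root = false_position(lambda x: x**3 - x - 1, 1.0, 2.0)
```

Unlike bisection, one endpoint typically stays fixed, so the stopping test here compares successive estimates rather than the interval width.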
Example 2.3
Using the False Position method, find a root of the function f(x) = 0 to an accuracy of 5
digits. The root is known to lie between 0.5 and 1.0.
Solution
Here, a = 0.5 and b = 1.0,
and f(0.5) and f(1.0) are opposite in sign.
Hence, there is a root of f(x) on (0.5, 1.0).

x1 = a - f(a)(b - a)/(f(b) - f(a)) = 0.88067

We continue the iteration till the accuracy criterion is satisfied.

2.3.2 Secant method


In this method we approximate the graph of the function y = f(x) in the neighborhood of the root by
the secant line passing through the points (x_{k-1}, f(x_{k-1})) and (x_k, f(x_k)), and take the point
of intersection of this line with the x-axis.
The iterative method we thus obtain is

x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})), k = 1, 2, 3, ...

where x_{k-1} and x_k are two consecutive iterates.
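The iteration formula above can be sketched in Python as follows. The sample function and starting points in the usage line are illustrative assumptions.

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant method: each new iterate is the x-intercept of the line
    through (x_{k-1}, f(x_{k-1})) and (x_k, f(x_k))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break                        # secant line is horizontal: cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Sample use (illustrative): f(x) = x^3 - x - 1, starting from x0 = 1, x1 = 2.
root = secant(lambda x: x**3 - x - 1, 1.0, 2.0)
```

Note that, unlike bisection and false position, the two starting points need not bracket the root, but the method is then not guaranteed to converge.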


Example 2.4
Perform three iterations of the secant method to obtain an approximate root of the equation
f(x) = 0 on [0,2], and determine the error committed in determining this root.
Solution
Here, f(0) and f(2) are opposite in sign;

hence, there is a root of f(x) on (0,2).
Taking x0 = 0 and x1 = 2, the secant formula for k = 1 gives

x2 = x1 - f(x1)(x1 - x0)/(f(x1) - f(x0)).

For k = 2, we have
x3 = x2 - f(x2)(x2 - x1)/(f(x2) - f(x1)).

In a similar way, for k = 3 we obtain the value of x4.

2.4 Iteration method/successive approximation method


Suppose we are given an equation f(x) = 0 whose roots are to be determined. The equation can
be rewritten in the form x = φ(x).
Let x0 be an initial approximation to the desired root α. Then the first approximation x1 is
given by x1 = φ(x0), and the second approximation by x2 = φ(x1).


The successive approximations are then given by x2 = φ(x1), x3 = φ(x2), ..., x_{n+1} = φ(x_n).


The sequence of approximations x0, x1, x2, ... does not always converge to a root of f(x) = 0; it
can be shown that it converges if |φ′(x)| < 1 when x is sufficiently close to the exact value α of the
root, in which case x_n → α as n → ∞. The following theorem presents the convergence criterion
for the iterative sequence of the Successive Approximation method.
Theorem 2: Let α be a root of f(x) = 0, written equivalently as x = φ(x), where φ(x) is a continuously
differentiable function in an interval I containing the root x = α. If |φ′(x)| < 1 on I, then the sequence
of approximations x0, x1, x2, ... will converge to the root α, provided the initial approximation x0 is
in I.
Example 2.5
Find a real root of x^3 - x - 1 = 0, correct to three decimal places, using the Successive
Approximation method.
Solution
Here f(x) = x^3 - x - 1 ………….(*)
Also f(1) = -1 < 0
and f(2) = 8 - 2 - 1 = 5 > 0.
Therefore, a root of eq. (*) lies between 1 and 2. Since |f(1)| < |f(2)|, the root is nearer to 1, so
we take the initial approximation x0 = 1.
Now, eq. (*) can be rewritten as
x = (1 + x)^(1/3) = φ(x)
Here, φ(x) = (1 + x)^(1/3) is continuously differentiable on [1,2].

Moreover,
φ′(x) = (1/3)(1 + x)^(-2/3),

and we see that |φ′(x)| < 1 on [1,2]; therefore the method converges.


The successive approximations of the root are given by
x1 = φ(x0) = (1 + 1)^(1/3) = 1.25992
x2 = φ(x1) = (1 + 1.25992)^(1/3) = 1.31229
x3 = φ(x2) = (1 + 1.31229)^(1/3) = 1.32235
x4 = φ(x3) = (1 + 1.32235)^(1/3) = 1.32427
x5 = φ(x4) = (1 + 1.32427)^(1/3) = 1.32463
Hence, the real root of f(x) = 0 is 1.324, correct to three decimal places.
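A short fixed-point loop reproduces this kind of computation. The rewriting x = (1 + x)^(1/3) and the stopping tolerance below are assumptions of this sketch.

```python
def fixed_point(phi, x0, tol=1e-6, max_iter=100):
    """Successive approximation: iterate x_{n+1} = phi(x_n) until the change is small."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# phi(x) = (1 + x)^(1/3) rewrites x^3 - x - 1 = 0 as x = phi(x); |phi'(x)| < 1 on [1, 2],
# so by Theorem 2 the iteration converges from x0 = 1.
root = fixed_point(lambda x: (1.0 + x) ** (1.0 / 3.0), 1.0)
```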


2.5 Conditions for convergence


Definition : an iterative method is said to be order p or has the rate of convergence p, if p is the
largest positive real number for which | | | | , Where is the error in the k-
th iteration. The constant is called the asymptotic error [Link] depends on various order
derivatives of ( ) evaluated at and is independent of k.
The relation +0( ) is called the error equation.
By substituting for all I in any iterative method and simplifying we obtain the error
equation for that method. The value of p thus obtained is called the order of the method.
Convergence of the bisection method
Note that, in general, the error does not necessarily reduce monotonically. However, as
the search interval is halved in each iteration, the worst possible error is also halved:
|e_{n+1}| ≤ (1/2)|e_n|
≤ (1/4)|e_{n-1}|
≤ ... ≤ (1/2)^n |e_1|,
i.e., |e_{n+1}| ≤ (1/2)^n |e_1|.

We see that the convergence is linear (of order p = 1) and the rate of convergence is 1/2.
Compared to other methods, the bisection method converges rather slowly, but one of the
advantages of the bisection method is that no derivative of the given function is needed. This
means the given function f(x) does not even need to be differentiable.
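The halving bound above also predicts, in advance, how many bisection steps are needed: the bracketing interval has width (b - a)/2^n after n steps, so reaching a tolerance tol requires n ≥ log2((b - a)/tol). A small sketch (the sample intervals are illustrative):

```python
import math

def bisection_steps(a, b, tol):
    # Smallest n with (b - a) / 2**n <= tol: each bisection halves the
    # bracketing interval, so ceil(log2((b - a) / tol)) steps suffice.
    return math.ceil(math.log2((b - a) / tol))

# Halving [0, 1] down to a width of 10**-3 takes 10 steps, since 2**10 = 1024.
n = bisection_steps(0.0, 1.0, 1e-3)
```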
Similarly, it can be shown that
- The order of convergence of the secant method is (1 + √5)/2 ≈ 1.618, which is better than the
linear convergence of the bisection method, but worse than the quadratic convergence of the
Newton-Raphson method.
- The Iteration method has first order convergence.

2.6 Newton’s method


Newton’s method is a relatively simple, practical, and widely used root-finding method. It is easy
to see that while in some cases the method rapidly converges to a root of the function, in
other cases it may fail to converge at all. This is one reason why it is so important not only
to understand the construction of the method, but also to understand its limitations.
As always, we assume that f(x) has at least one (real) root, and denote it by r.


We start with an initial guess for the location of the root, say x0. We then let y(x) be the tangent
line to f(x) at x0,
i.e. y(x) = f(x0) + f′(x0)(x - x0).
The intersection of y(x) with the x-axis serves as the next estimate of the root. We denote this
point by x1 and write
0 = f(x0) + f′(x0)(x1 - x0), which means that
x1 = x0 - f(x0)/f′(x0).

In general, the Newton method (also known as the Newton-Raphson method) for finding a root is
given by iterating this formula repeatedly,
i.e., x_{k+1} = x_k - f(x_k)/f′(x_k), k = 0, 1, 2, ...

Two sample iterations of the method are shown in Figure 2.5. Starting from a point x0, we find
the next approximation of the root x1, from which we find x2, and so on. In this case, we do
converge to the root of f(x).
Geometrically,

Figure 2.5: Newton’s method


The above figure shows two iterations of Newton’s root-finding method. r is the root of f(x) = 0,
which we approach by starting from x0, computing x1, then x2, etc.
Algorithm for the Newton-Raphson method
1. Select a point x1 as an initial guess of the solution.
2. For i = 1, 2, ..., until the error is smaller than a specified value, compute x_{i+1} using the
equation
x_{i+1} = x_i - f(x_i)/f′(x_i).

The iterations are stopped when the estimated relative error |(x_{i+1} - x_i)/x_i| < ε, where ε

is a specified error.
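The two steps above can be sketched in Python as follows. The sample function, its derivative, and the starting point in the usage line are illustrative assumptions.

```python
def newton(f, fprime, x0, eps=1e-8, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i), stopping when the
    estimated relative error |(x_{i+1} - x_i)/x_i| drops below eps."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs((x_new - x) / x) < eps:
            return x_new
        x = x_new
    return x

# Sample use (illustrative): f(x) = x^3 - x - 1 with f'(x) = 3x^2 - 1, starting at x0 = 2.
root = newton(lambda x: x**3 - x - 1, lambda x: 3 * x**2 - 1, 2.0)
```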


Remark:
- The Newton-Raphson method, when successful, works well and converges fast.
Convergence problems occur when the value of f′(x) is close to zero in the vicinity of the
solution, where f(x) = 0.
- The Newton-Raphson method has quadratic convergence.
Example 2.8
Use the Newton-Raphson method to find the real root near 2 of the given equation,
accurate to five decimal places.
Solution
Here we have f(x) as given, with derivative f′(x).
Taking x0 = 2, we evaluate f(x0) and f′(x0).
Therefore,
x1 = x0 - f(x0)/f′(x0)

x2 = x1 - f(x1)/f′(x1)

x3 = x2 - f(x2)/f′(x2)

x4 = x3 - f(x3)/f′(x3)

The iterations are continued until two successive approximations agree to five decimal places;
the last iterate is then the required root of the equation.
