
Introduction

Locating Roots
Method for Finding the Roots of Equations
Direct Method
Indirect or Iterative Methods
False Position method
Secant Method
Newton-Raphson Method

1 — Solution of Nonlinear Equations

1.1 Introduction
One of the most common problems encountered in applied mathematics is: given a function
f (x), find the values of x for which

f (x) = 0

A solution of this equation is known as a root of the equation, or a zero of the function
f (x).
Definition 1.1.1 An expression of the form

f (x) = a_n x^n + a_{n−1} x^{n−1} + . . . + a_2 x^2 + a_1 x + a_0

where the a_i ∈ R (a_n ≠ 0) and n ∈ Z+, is called a polynomial in x of degree n, and the equation
f (x) = 0 is called an algebraic equation of degree n. If f (x) contains some other functions,
such as logarithmic functions, exponential functions, trigonometric functions, etc., then
f (x) = 0 is called a transcendental equation.

 Example 1.1
1. x^3 − 3x + 6 = 0 ⇒ algebraic equation.
2. 2x^4 + e^x sin 2x = 0 ⇒ transcendental equation.


In each kind, if the coefficients are pure numbers, then they are called numerical equations.

1.2 Locating Roots


Definition 1.2.1 A number r , real or complex, for which f (r) = 0 is called a root of that
equation or a zero of f .
Geometrically, a root of an equation f (x) = 0 is the value of x at which the graph of the
equation y = f (x) intersects the x-axis.
2 Solution of Nonlinear Equations

Theorem 1.2.1 Intermediate Value Theorem (IVT)

If f ∈ C[a, b] and k is any number between f (a) and f (b), then there exists a number c in (a, b)
for which f (c) = k.

Theorem 1.2.2 Locating root


If f (x) is continuous on [a, b] and f (a) and f (b) are of opposite signs, then there exists at least
one number x0 in (a, b) such that f (x0) = 0.

Figure 1.1: Solution of f (x) = 0 between x = a and x = b

1.3 Method for Finding the Roots of Equations


Methods for finding roots of an equation can be classified into two categories:
1. Direct Methods
2. Iterative methods

1.3.1 Direct Method


• Give the exact values of the roots in a finite number of steps.
• Assumed to be free of round-off errors.
• Determine all the roots at the same time.
• Also called closed-form solutions.
 Example 1.2 The roots of f (x) = a_2 x^2 + a_1 x + a_0 = 0 are obtained by

x_1 = (−a_1 + √(a_1^2 − 4 a_2 a_0)) / (2 a_2)   and   x_2 = (−a_1 − √(a_1^2 − 4 a_2 a_0)) / (2 a_2)


1.3.2 Indirect or Iterative Methods


• These methods are based on the concept of successive approximation.
• The general procedure is to start with one or more initial approximations to the root and
obtain a sequence of iterates, x_k, which in the limit converges to the actual or true root.
• The indirect or iterative methods are further divided into two categories:
– bracketing methods: require limits between which the root lies.

Computational Methods Numerical Methods


1.3 Method for Finding the Roots of Equations 3

 Example 1.3 Bisection and False position methods




– open methods: require an initial estimate of the solution.


 Example 1.4 Newton-Raphson and Fixed-point iteration. 

Order of Convergence of the Iterative Method

Convergence of an iterative method is judged by the rate at which the error between
successive approximations to the root decreases.
Definition 1.3.1 An iterative method is said to be kth-order convergent if k is the largest
positive real number such that

|e_{i+1}| / |e_i|^k ≤ A

where A ∈ R is the asymptotic error constant and e_i, e_{i+1} are the errors at successive
approximations.

Note: Roughly speaking, in a kth-order convergent method the number of correct significant
digits grows by a factor of about k at each iteration.

Bisection Or Interval Halving Method


Suppose that f (x) is continuous on an interval [a, b], and

f (a) f (b) < 0 (1.1)

Then f (x) changes sign on [a, b], and f (x) = 0 has at least one root on the interval by IVT.

Figure 1.2: A sketch indicating the interval where the root is located.

Definition 1.3.2 The simplest numerical procedure for finding a root is to repeatedly halve
the interval [a, b], keeping the half for which f (x) changes sign. This procedure is called
the bisection method, and is guaranteed to converge to a root, denoted here by α

Suppose that we are given an interval [a, b] satisfying (1.1) and an error tolerance ε > 0. The
bisection method consists of the following steps:
1. Define x_1 = (a + b)/2.
2. If b − x_1 ≤ ε, then accept x_1 as the root and stop.

Computational Methods Numerical Methods


4 Solution of Nonlinear Equations

3. If f (a) · f (x_1) < 0, then set b = x_1; otherwise, set a = x_1. Return to step 1.
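
As a sanity check, here is a minimal Python sketch of these three steps (the function name and interface are assumptions for illustration, not part of the notes):

```python
def bisect(f, a, b, eps=1e-3):
    # Bisection: repeatedly halve [a, b], keeping the half where f changes sign.
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while True:
        x1 = (a + b) / 2          # step 1: midpoint
        if b - x1 <= eps:         # step 2: accept x1 and stop
            return x1
        if f(a) * f(x1) < 0:      # step 3: root in [a, x1]
            b = x1
        else:                     # otherwise root in [x1, b]
            a = x1

# Example 1.5 below: largest root of x^6 - x - 1 = 0 on [1, 1.5]
print(bisect(lambda x: x**6 - x - 1, 1.0, 1.5, eps=0.001))  # ~1.1338
```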


 Example 1.5 Find the largest root of f (x) ≡ x^6 − x − 1 = 0 accurate to within ε = 0.001. 

Solution Let the interval be [1, 1.5] =⇒ f (1) = −1 and f (1.5) = 8.8906. Therefore, on this interval,

Figure 1.3: A sketch indicating the interval where the root is located.

condition (1.1) is satisfied. Thus, the first approximation of the root is:

x_1 = (a + b)/2 = (1 + 1.5)/2 = 1.25
f (x_1) = 1.25^6 − 1.25 − 1 = 1.5647
f (1) · f (1.25) < 0 =⇒ the new interval is [1, 1.25]
ε_1 = |x_1 − a| = |1.25 − 1| = 0.25

The second approximation to the root is:


x_2 = (a + x_1)/2 = (1 + 1.25)/2 = 1.125
f (x_2) = 1.125^6 − 1.125 − 1 = −0.0977
f (1.25) · f (1.125) < 0 =⇒ the new interval is [1.125, 1.25]
ε_2 = |x_2 − x_1| = |1.125 − 1.25| = 0.125

Continuing in this way, the algorithm yields the iterates in Table 1.1.
Since ε_9 = 0.00098 < ε = 0.001, we stop at the 9th iteration.


Table 1.1: Bisection Method for the root of x^6 − x − 1 = 0

i a b xi εi = |xi − xi−1 | f (xi )


1 1.0000 1.5000 1.2500 0.2500 1.5647
2 1.0000 1.2500 1.1250 0.1250 -0.0977
3 1.1250 1.2500 1.1875 0.0625 0.6167
4 1.1250 1.1875 1.1562 0.0312 0.2333
5 1.1250 1.1562 1.1406 0.0156 0.0616
6 1.1250 1.1406 1.1328 0.0078 -0.0196
7 1.1328 1.1406 1.1367 0.0039 0.0206
8 1.1328 1.1367 1.1348 0.0020 0.0004
9 1.1328 1.1348 1.1338 0.00098 -0.0096

Convergence Analysis of the Bisection Method

Since x_1 = (a + b)/2, the next iterate is x_2 = (a + x_1)/2 or x_2 = (x_1 + b)/2. In either case
x_2 is the midpoint of an interval of length (b − a)/2, so

|x_2 − x_1| = (b − a)/2^2

Likewise x_3 is the midpoint of an interval of length (b − a)/2^2, so

|x_3 − x_2| = (b − a)/2^3

Continuing in the same fashion, we have

|x_{i+1} − x_i| = (b − a)/2^{i+1} =⇒ lim_{i→∞} |x_{i+1} − x_i| = 0

where b − a denotes the length of the original interval with which we started. Since the root
α always lies inside the current bracket, we also know that

|α − x_i| ≤ (b − a)/2^i

Thus the bisection method converges to α, the solution.


To see how many iterations will be necessary, suppose we want

|α − x_i| ≤ ε

This will be satisfied if

(b − a)/2^i ≤ ε

Taking logarithms on both sides, we can solve this to give

i ≥ log((b − a)/ε) / log 2


For the previous example (Example 1.5), this results in

i ≥ log(0.5/0.001) / log 2 = 8.965784285

i.e., we need i = 9 iterates, exactly the number computed.

ADVANTAGES
There are several advantages to the bisection method:
1. It is guaranteed to converge.
2. The error bound is guaranteed to decrease by one-half with each iteration.

DISADVANTAGE
The principal disadvantage of the bisection method is that it generally converges more slowly than
most other methods.

1.4 False Position method


The method is also called the linear interpolation method, chord method, or regula-falsi method.
At the start of each iteration, we require an interval in which the root lies. Let
the root of the equation f (x) = 0 lie in the interval (x_l, x_u), that is, f (x_l) f (x_u) < 0. Rather than
bisecting this interval, the method locates the root by joining (x_l, f (x_l)) and (x_u, f (x_u)) with a straight line.
The intersection of this line with the x-axis represents an improved estimate of the root. Using

Figure 1.4: False-Position Method

similar triangles, shown in Figure (1.4), the intersection of the straight line with the x-axis can be
estimated from

(f (x_u) − 0) / (x_u − x_r) = (0 − f (x_l)) / (x_r − x_l)

which can be solved for

x_r = (x_l f (x_u) − x_u f (x_l)) / (f (x_u) − f (x_l))

Computational Methods Numerical Methods


1.4 False Position method 7

Procedure for the False Position Method to Find the Root of the Equation f (x) = 0
Step 1: Choose two initial guess values (approximations) x_0 and x_1 (where x_1 > x_0) such that
f (x_0) · f (x_1) < 0.
Step 2: Find the next approximation x_2 using the formula

x_2 = (x_0 f (x_1) − x_1 f (x_0)) / (f (x_1) − f (x_0))

Step 3: If f (x_2) f (x_1) < 0, then go to the next step. If not, rename x_0 as x_1 and then go to the next
step.
Step 4: Evaluate successive approximations using the formula

x_{n+1} = (x_{n−1} f (x_n) − x_n f (x_{n−1})) / (f (x_n) − f (x_{n−1})), where n = 2, 3, . . .

But before applying the formula for x_{n+1}, ensure that f (x_{n−1}) · f (x_n) < 0; if not,
rename x_{n−2} as x_{n−1} and proceed.
Step 5: Stop the evaluation when |x_n − x_{n−1}| < ε, where ε is the prescribed accuracy (tolerance error).
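
Before the worked example, a minimal Python sketch of Steps 1 to 5 may help; the function name and the stopping rule used here are illustrative assumptions, not part of the notes:

```python
def false_position(f, x0, x1, eps=1e-5, max_iter=100):
    # Regula falsi: keep a sign-changing bracket and intersect the chord
    # through its endpoints with the x-axis.
    if f(x0) * f(x1) >= 0:
        raise ValueError("the root must be bracketed: f(x0) * f(x1) < 0")
    xr_old = x0
    for _ in range(max_iter):
        xr = (x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0))
        if abs(xr - xr_old) < eps:        # Step 5: prescribed accuracy reached
            return xr
        if f(x0) * f(xr) < 0:             # root in (x0, xr): move the upper end
            x1 = xr
        else:                             # root in (xr, x1): move the lower end
            x0 = xr
        xr_old = xr
    return xr

# Example 1.6 below: root of e^x - 3x^2 between 0.5 and 1.0
import math
print(false_position(lambda x: math.exp(x) - 3 * x**2, 0.5, 1.0))  # ~0.91001
```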
 Example 1.6 Using the False Position method, find a root of the function f (x) = e^x − 3x^2
correct to three decimal places. The root is known to lie between 0.5 and 1.0. 

Solution: Let x_0 = 0.5, x_1 = 1. Then we have f (0.5) = 0.8987 and f (1) = −0.2817. Thus,

x_2 = (x_0 f (x_1) − x_1 f (x_0)) / (f (x_1) − f (x_0)) = 0.88067

f (x_2) = 0.0858. Since f (x_1) f (x_2) < 0, the root lies in the interval (0.88067, 1), and the next
approximation is

x_3 = (x_2 f (1) − 1 · f (x_2)) / (f (1) − f (x_2)) = 0.90852

f (x_3) = 0.00441. Since f (1) f (0.90852) < 0, the root lies in the interval (0.90852, 1), and the next
approximation is

x_4 = (x_3 f (1) − 1 · f (x_3)) / (f (1) − f (x_3)) = 0.90993

f (x_4) = 0.00022. Since f (1) f (0.90993) < 0, the root lies in the interval (0.90993, 1), and the next
approximation is

x_5 = (x_4 f (1) − 1 · f (x_4)) / (f (1) − f (x_4)) = 0.91000

f (x_5) = 0.00001. Since f (1) f (0.91000) < 0, the root lies in the interval (0.91000, 1), and the next
approximation is

x_6 = (x_5 f (1) − 1 · f (x_5)) / (f (1) − f (x_5)) = 0.91001

f (x_6) = 0.00000.
Therefore, the root is 0.910, correct to three decimal places.


Exercise 1.1 1. Find a real root of cos x − 3x + 5 = 0, correct to four decimal places,
using the False Position method.
2. Locate the intervals which contain the positive real roots of the equation x^3 − 3x + 1 = 0.
Obtain these roots correct to three decimal places, using the method of false position.


1.5 Secant Method


The Secant method is similar to the Regula-Falsi method, except that we drop
the condition that f (x) must have opposite signs at the two points used to generate the next
approximation. Instead, we always retain the last two points to generate the next. Thus, if x_{n−1}
and x_n are two approximations to the root, then the next approximation x_{n+1} is given by

x_{n+1} = (x_{n−1} f (x_n) − x_n f (x_{n−1})) / (f (x_n) − f (x_{n−1})) = x_n − f (x_n) (x_n − x_{n−1}) / (f (x_n) − f (x_{n−1}))
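
A minimal Python sketch of this iteration (an assumed implementation, for illustration only):

```python
def secant(f, x0, x1, eps=1e-6, max_iter=50):
    # Secant method: no bracketing; always use the last two iterates.
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < eps:
            return x2
        x0, x1 = x1, x2
    return x2

# Example 1.7 below: sin x + 3 cos x - 2 = 0 with x0 = 1, x1 = 1.5
import math
print(secant(lambda x: math.sin(x) + 3 * math.cos(x) - 2, 1.0, 1.5))  # ~1.2078
```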
 Example 1.7 Determine a root of the equation sin x + 3 cos x − 2 = 0 using the secant method.

Solution: Let x_0 = 1, x_1 = 1.5. The formulas give

x_2 = (x_0 f (x_1) − x_1 f (x_0)) / (f (x_1) − f (x_0)) = 1.1846
x_3 = (x_1 f (x_2) − x_2 f (x_1)) / (f (x_2) − f (x_1)) = 1.2056
x_4 = 1.2078
x_5 = 1.2078

Thus the required root is x = 1.2078.

Exercise 1.2 1. Find a root of the equation x^3 − 8x − 5 = 0 using the secant method.
2. Calculate an approximate value for 4^{3/4} using one step of the secant method with x_0 = 3
and x_1 = 2.
3. What is the appropriate formula for finding square roots using the secant method?
4. If x_{n+1} = x_n + (2 − e^{x_n})(x_n − x_{n−1}) / (e^{x_n} − e^{x_{n−1}}) with x_0 = 0 and x_1 = 1, what is lim_{n→∞} x_n?


1.6 Newton-Raphson Method


The Newton-Raphson method is the best-known method for finding roots of a function f (x). The
method is simple and fast. Assume that f (x) is continuous and differentiable and that the equation is
known to have a solution near a given point.
If the initial guess at the root is x_i, a tangent can be extended from the point (x_i, f (x_i)). The point
where this tangent crosses the x-axis usually represents an improved estimate of the root.
As in Fig. 1.5, the first derivative at x_i equals the slope of this tangent:

f ′(x_i) = (f (x_i) − 0) / (x_i − x_{i+1})

which can be rearranged to yield

x_{i+1} = x_i − f (x_i) / f ′(x_i)


Figure 1.5: Newton Method

which is called the Newton-Raphson formula.
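
A minimal Python sketch of the formula (the explicit-derivative interface is an assumption for illustration):

```python
def newton(f, df, x0, eps=1e-6, max_iter=50):
    # Newton-Raphson: follow the tangent at x to its x-axis crossing.
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Example 1.8 below: x^3 - x - 4 = 0 with x0 = 2
print(newton(lambda x: x**3 - x - 4, lambda x: 3 * x**2 - 1, 2.0))  # ~1.796322
```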


 Example 1.8 Find the root of the equation x3 − x − 4 = 0 correct to three places of decimal by
using NR-method. 

Solution: Let x_0 = 2. Then

x_1 = x_0 − f (x_0)/f ′(x_0) = 1.81818, and f (x_1) = 0.192336
x_2 = x_1 − f (x_1)/f ′(x_1) = 1.796613, and f (x_2) = 0.002527
x_3 = x_2 − f (x_2)/f ′(x_2) = 1.796322, and f (x_3) = 0.000000457

We have

|x_3 − x_2| = |1.796322 − 1.796613| = 0.000291 < 0.5 × 10^{−3}

Thus, the required root is x = 1.796322, correct to three decimal places.
 Example 1.9 Compute 17^{1/3} correct to four decimal places using the NR method, assuming the
initial approximation x_0 = 2. 

Solution: Let x = 17^{1/3} ⇒ x^3 − 17 = 0. f (x) = x^3 − 17 ⇒ f ′(x) = 3x^2. Then

x_1 = x_0 − f (x_0)/f ′(x_0) = 2.75, and f (x_1) = 3.7969
x_2 = x_1 − f (x_1)/f ′(x_1) = 2.5826, and f (x_2) = 0.2264
x_3 = x_2 − f (x_2)/f ′(x_2) = 2.5713, and f (x_3) = 0.00099
x_4 = x_3 − f (x_3)/f ′(x_3) = 2.57128, and f (x_4) = 0.000000019

We have

|x_4 − x_3| = |2.57128 − 2.5713| = 0.00002 < 0.5 × 10^{−4}

Thus, the required root is x = 2.57128, correct to four decimal places.


Exercise 1.3 Use the NR method to determine the root of each function with accuracy 10^{−4}.
1. f (x) = x^6 − x − 1
2. f (x) = cos x − 3x + 5
3. x log x = 1.2
4. f (x) = cos x − x e^x


Exercise 1.4 1. Derive Newton's method for finding the qth root of a positive number
N, i.e. N^{1/q}, where N > 0, q > 0.
2. Calculate an approximate value for 4^{3/4} using the NR method with x_0 = 3.




Introduction
Solution Methods
Direct Methods:
Iterative Methods

2 — System of Linear Equations

2.1 Introduction
This chapter is about solving systems of linear equations, which have many applications in engineering
and science. Consider the n × n linear system of equations shown below.
a11 x1 + a12 x2 + a13 x3 + . . . + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + . . . + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + . . . + a3n xn = b3 (2.1)
.. .. .. .. .. ..
. . . . . .
an1 x1 + an2 x2 + an3 x3 + . . . + ann xn = bn
This equation can be written in a matrix form as

AX = B

Where A is an n × n coefficient matrix and X and B are column vectors of size n; the matrix
form is thus compatible with the rules of matrix multiplication. If B = 0 then the
above system is called a homogeneous system of equations; otherwise it is non-homogeneous.
A homogeneous system can be solved using the eigenvalue method, leading to an eigenvalue
problem.
Definition 2.1.1 If a system of equations is satisfied simultaneously by at least one set of
values, then it is consistent.

There are three row operations that are useful when solving systems of linear algebraic equations.
These operations do not affect the solution of the system, so they can be used freely
during the solution process whenever necessary. These are:
1. Scaling: Any row of the above equation can be multiplied by a constant.
2. Pivoting: The order of rows can be interchanged as required.
3. Elimination: Any row of a system of linear equations can be replaced by any weighted
linear combination of that row with any other row.

2.2 Solution Methods


There are two major classes of solution methods, whether for a single equation or a full system. These are:

1. Direct Methods and


2. Indirect ( Iterative) methods

2.2.1 Direct Methods:


These are analytical methods by which we find an exact solution (if possible). These methods
include: Cramer's rule, Gaussian elimination with and without pivoting, and LU-factorization.
Cramer’s Rule
The system given by (2.1) can be solved using Cramer's Rule. Let us consider a 3 × 3 system.

a11 x1 + a12 x2 + a13 x3 = b1


a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

The coefficient matrix A of this system is

A = [ a11 a12 a13
      a21 a22 a23
      a31 a32 a33 ]   with   X = (x1, x2, x3)^T   and   B = (b1, b2, b3)^T   =⇒ AX = B

Let |A| be the determinant of A, with |A| ≠ 0. To find x_i, i = 1, 2, 3, replace the ith
column of the matrix A by the right-hand-side vector B, evaluate the determinant, and divide it
by |A|, i.e.

x_1 = |A_1| / |A|,   x_2 = |A_2| / |A|   and   x_3 = |A_3| / |A|

where A_i denotes the matrix A with its ith column replaced by B.
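
A small NumPy sketch of this column-replacement recipe (assumed code, not part of the notes):

```python
import numpy as np

def cramer(A, B):
    # Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with its
    # i-th column replaced by the right-hand-side vector B.
    A, B = np.asarray(A, float), np.asarray(B, float)
    detA = np.linalg.det(A)
    x = np.empty(len(B))
    for i in range(len(B)):
        Ai = A.copy()
        Ai[:, i] = B
        x[i] = np.linalg.det(Ai) / detA
    return x

# Exercise 2.1 below:
print(cramer([[1, 1, -1], [1, 2, -2], [-2, 1, 1]], [1, 0, 1]))  # [2. 2. 3.]
```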

Exercise 2.1 Solve the system

x1 + x2 − x3 = 1
x1 + 2x2 − 2x3 = 0
−2x1 + x2 + x3 = 1

Using Cramer’s rule. 

The coefficient matrix A is:

A = [  1  1 −1
       1  2 −2
      −2  1  1 ]   =⇒ |A| = 2

Therefore,

x_1 = (1/|A|) det [ 1 1 −1; 0 2 −2; 1 1 1 ] = 4/2 = 2,
x_2 = (1/|A|) det [ 1 1 −1; 1 0 −2; −2 1 1 ] = 4/2 = 2,
x_3 = (1/|A|) det [ 1 1 1; 1 2 0; −2 1 1 ] = 6/2 = 3

∴ X = (2, 2, 3)^T is the solution vector.

R [Limitation of Cramer's Rule] Cramer's rule may be used for a system of size n × n for n ≤ 5;
for n > 5 this rule is impractical. For n = 10, for instance, the number of multiplications
required is about 70,000,000, which is infeasible to compute by hand. Hence numerical
methods for solving systems of linear equations are more appropriate and efficient.

Gaussian Elimination
The aim of this method is to solve a system of n equations in n unknowns by reducing it to an
equivalent triangular set, which is then solved by back or forward substitution.
This method hence consists of two steps:
1. Triangularization and
2. Back or forward substitution.
Triangularization

[ a11 a12 a13 . . . a1n | b1 ]                         [ a11 a12 a13 . . . a1n | b1 ]
[ a21 a22 a23 . . . a2n | b2 ]   triangularization     [     a22 a23 . . . a2n | b2 ]
[ a31 a32 a33 . . . a3n | b3 ]   by row operations     [         a33 . . . a3n | b3 ]
[  :   :   :         :  |  : ]  ----------------->     [                  :     |  : ]
[ an1 an2 an3 . . . ann | bn ]                         [                 ann    | bn ]

or, alternatively, to a lower triangular form:

[ a11 a12 a13 . . . a1n | b1 ]                         [ a11                    | b1 ]
[ a21 a22 a23 . . . a2n | b2 ]   triangularization     [ a21 a22                | b2 ]
[ a31 a32 a33 . . . a3n | b3 ]   by row operations     [ a31 a32 a33            | b3 ]
[  :   :   :         :  |  : ]  ----------------->     [  :   :   :          :  |  : ]
[ an1 an2 an3 . . . ann | bn ]                         [ an1 an2 an3 . . . ann  | bn ]


Let

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2          (2.2)
a31 x1 + a32 x2 + a33 x3 = b3

be a system of equations in three unknowns.
To reduce this system to an upper triangular form, multiply the first equation by a21/a11 and
subtract it from the second equation to eliminate x1 from the second equation. Similarly, multiply
the first equation by a31/a11 and subtract it from the third equation to eliminate x1 from the third
equation. As a result the above system becomes

a11 x1 + a12 x2 + a13 x3 = b1
         a′22 x2 + a′23 x3 = b′2
         a′32 x2 + a′33 x3 = b′3

Again, from this last system eliminate x2 by multiplying the second equation by a′32/a′22 and
subtracting it from the third equation. Thus, the above system of equations becomes

a11 x1 + a12 x2 + a13 x3 = b1
         a′22 x2 + a′23 x3 = b′2
                  a″33 x3 = b″3

which is an upper triangular system.


Back Substitution
After the triangular set of equations has been found, the last equation in this equivalent set yields
the value of x3 directly:

x3 = b″3 / a″33

The values of x2 and x1 are then obtained from the second and first equations respectively. In
the same way, the Gauss-elimination method can be generalized to find the solution of a system of n
equations in n unknowns; a minimal sketch follows.
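
A minimal NumPy sketch of triangularization plus back substitution (assumed code; there is no pivoting here, so it fails on a zero pivot, which the pivoting variant later addresses):

```python
import numpy as np

def gauss_solve(A, b):
    A = np.asarray(A, float).copy()
    b = np.asarray(b, float).copy()
    n = len(b)
    for k in range(n - 1):               # triangularization
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]        # multiplier a_ik / a_kk
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):       # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Exercise 2.2 below:
print(gauss_solve([[2, 3, -1], [4, 4, -3], [2, -3, 2]], [5, 3, 2]))  # [1. 2. 3.]
```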
Exercise 2.2 Solve

2x1 + 3x2 − x3 = 5 
4x1 + 4x2 − 3x3 = 3 using Gauss-Elimination method.
2x1 − 3x2 + 2x3 = 2

To eliminate x1 from the second equation, multiply the first equation by a21/a11 = 4/2 = 2 and subtract
it from the second equation. Similarly, eliminate x1 from the third equation by multiplying the
first equation by a31/a11 = 2/2 = 1 and subtracting. Then eliminate x2 from the third equation by
multiplying the new second equation by a32/a22 = −6/2 = −3 and subtracting it from the third
equation, to obtain

2x1 + 3x2 − x3 = 5 
2x2 + x3 = 7
6x3 = 18

By back substitution, the solution is

(x1, x2, x3) = (1, 2, 3)

Exercise 2.3 Solve the following system of equations by Gauss-Elimination Method.



2x1 + x2 + x3 = 10 
3x1 + 2x2 + 3x3 = 18
x1 + 4x2 + 9x3 = 16

Gauss-Elimination With Pivoting Method


Definition 2.2.1 — Partial Pivoting Procedure. If a zero (or near-zero) element is found in a
diagonal position a_ii, which is a pivot element, interchange the corresponding row with
another row having the maximum absolute value in that column. This process can be
explained in the following steps.

step 1: The largest coefficient of x1 (which may be positive or negative) is selected from all the equations,
and the first equation is interchanged with the equation containing this largest value. This
largest value is called the pivot element and the row containing this element is called the pivot
row. This row will be used to eliminate the other coefficients of x1.


step 2: The numerically largest coefficient of x2 is selected from the remaining equations; in this step we
do not consider the first equation. Then interchange the second equation with the row containing
this largest value. The row containing this value is used to eliminate the other coefficients
of x2 below it. This procedure is continued until the upper
triangular system is obtained.

R The pivot element is selected to be the largest in absolute value to maximize the precision
of the solution.
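
The same elimination with partial pivoting, as an assumed sketch: before eliminating column k, swap in the row whose entry in that column is largest in absolute value.

```python
import numpy as np

def gauss_pivot(A, b):
    A = np.asarray(A, float).copy()
    b = np.asarray(b, float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))   # pivot row for column k
        if p != k:                                  # interchange rows k and p
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):                  # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Exercise 2.4 below:
A = [[4, 4, 1, 4], [2, 5, 7, 4], [10, 5, -5, 0], [-2, -2, 1, -3]]
print(gauss_pivot(A, [12, 1, 25, -10]))  # [ 0.5  2. -2.  1.]
```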

Exercise 2.4 Solve the system of equation



4x1 + 4x2 + x3 + 4x4 = 12 

2x1 + 5x2 + 7x3 + 4x4 = 1

10x1 + 5x2 − 5x3 = 25 

−2x1 − 2x2 + x3 − 3x4 = −10

Using Gaussian Elimination with partial pivoting. 

[  4  4  1  4 |  12 ]            [ 10  5 −5  0 |  25 ]
[  2  5  7  4 |   1 ]  R1 ↔ R3   [  2  5  7  4 |   1 ]
[ 10  5 −5  0 |  25 ]  ------->  [  4  4  1  4 |  12 ]
[ −2 −2  1 −3 | −10 ]            [ −2 −2  1 −3 | −10 ]

R2 → R2 − (1/5)R1, R3 → R3 − (2/5)R1, R4 → R4 + (1/5)R1:

[ 10  5 −5  0 | 25 ]
[  0  4  8  4 | −4 ]
[  0  2  3  4 |  2 ]
[  0 −1  0 −3 | −5 ]

R3 → R3 − (1/2)R2, R4 → R4 + (1/4)R2:

[ 10  5 −5  0 | 25 ]
[  0  4  8  4 | −4 ]
[  0  0 −1  2 |  4 ]
[  0  0  2 −2 | −6 ]

R3 ↔ R4, then R4 → R4 + (1/2)R3:

[ 10  5 −5  0 | 25 ]
[  0  4  8  4 | −4 ]
[  0  0  2 −2 | −6 ]
[  0  0  0  1 |  1 ]

Therefore, after back substitution, the solution of the system is

(x1, x2, x3, x4) = (1/2, 2, −2, 1)

ILL-CONDITIONED SYSTEMS OF EQUATIONS

Definition 2.2.2 The system of equations AX = B is said to be ill-conditioned or unstable
if it is highly sensitive to small changes in A or B, i.e. a small change in A or B causes
a big difference in the solution of the system.

On the other hand, if small changes in A and/or B cause only a small change in the solution of the
system, then it is said to be stable (well conditioned). Thus in an ill-conditioned system even
round-off errors affect the solution badly, and it is quite difficult to recognize an ill-conditioned
system.


Exercise 2.5 Consider the system

[  100  −200 ] [x1]   [  100 ]
[ −200   401 ] [x2] = [ −100 ]

By any of the methods, the solution is

X = (201, 100)^T

Suppose the elements of the coefficient matrix A are altered (perturbed) slightly to yield the
following, generally similar, system:

[  101  −200 ] [x1]   [  100 ]
[ −200   400 ] [x2] = [ −100 ]

Note that a11 has increased by 1% and a22 has decreased by less than 0.25%. With these
rather modest changes in A, the exact solution of the perturbed system is

X = (50.00, 24.75)^T

A change of about 1% in the coefficients thus generates a change on the order of 75% in the solution.
The system is clearly ill-conditioned.
Exercise 2.6 Show that the system of equations

[  7  −10 ] [x1]   [ 1   ]
[ −5    7 ] [x2] = [ 0.7 ]

together with the slightly changed system

[  7  −10 ] [x1]   [ 1.01 ]
[ −5    7 ] [x2] = [ 0.69 ]

is ill-conditioned. 

LU-Factorization (Decomposition)
To solve the general linear system AX = B by this method, factorize the coefficient matrix A into a
product of two triangular matrices. This reduces the problem to solving two triangular linear systems.
The method is called triangular decomposition, and includes the variants of Crout, Doolittle
and Cholesky.
Doolittle Version
Consider the system of equations AX = B. We factor the matrix A into two other matrices:
a lower triangular matrix L and an upper triangular matrix U such that A = LU. The method
for solving the given system is described as follows.
Starting with the system AX = B, introduce a new variable Y such that Y = UX, so that
AX = (LU)X = L(UX) = B ⇒ LY = B. Since L is a lower triangular matrix, this system is
almost trivial to solve for the unknown Y; once we have found the vector Y, we solve the system
UX = Y.
Now, the only thing remaining is to find the factors L = (ℓ_ij) and U = (u_ij), the triangular
matrices. This is accomplished by the usual Gauss-elimination method: when we eliminate the (i, j)
entry of A (or its equivalent) by replacing the ith row with R_i − αR_j (i.e. R_i → R_i − αR_j), the
multiplier α becomes the (i, j) entry of L (ℓ_ij = α).
This is sketched below and then illustrated by the following example.
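
A minimal sketch of the Doolittle factorization and the two triangular solves (assumed code):

```python
import numpy as np

def lu_doolittle(A):
    # L stores the Gauss multipliers (unit diagonal); U is the eliminated matrix.
    A = np.asarray(A, float)
    n = A.shape[0]
    L, U = np.eye(n), A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # alpha = l_ik
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.empty(n)                          # forward substitution: L y = b
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.empty(n)                          # backward substitution: U x = y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Exercise 2.7 below:
L, U = lu_doolittle([[1, 2, 3], [3, 4, 1], [1, 0, 1]])
print(lu_solve(L, U, [0, 6, 1]))  # [11/6, 1/3, -5/6]
```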


Exercise 2.7 Solve the following system using LU-Factorization (Doolittile)

    
1 2 3 x1 0
 3 4 1   x2  =  6 
1 0 1 x3 1


The main point here is decomposing the coefficient matrix A into lower and upper triangular
matrices L and U such that A = LU, by applying Gauss elimination:

A = [ 1 2 3; 3 4 1; 1 0 1 ]
R2 → R2 − 3R1 ⇒ ℓ21 = 3, and R3 → R3 − R1 ⇒ ℓ31 = 1:   [ 1 2 3; 0 −2 −8; 0 −2 −2 ]
R3 → R3 − R2 ⇒ ℓ32 = 1:   [ 1 2 3; 0 −2 −8; 0 0 6 ] = U

Therefore,

U = [ 1 2 3; 0 −2 −8; 0 0 6 ]   and   L = [ 1 0 0; ℓ21 1 0; ℓ31 ℓ32 1 ] = [ 1 0 0; 3 1 0; 1 1 1 ]

Since AX = (LU)X = L(UX) = B, let Y = UX = (y1, y2, y3)^T ⇒ LY = B. Thus

[ 1 0 0; 3 1 0; 1 1 1 ] (y1, y2, y3)^T = (0, 6, 1)^T

Using forward substitution,

(y1, y2, y3) = (0, 6, −5)

But Y = UX ⇒ [ 1 2 3; 0 −2 −8; 0 0 6 ] (x1, x2, x3)^T = (0, 6, −5)^T

Using backward substitution,

(x1, x2, x3) = (11/6, 1/3, −5/6)

Exercise 2.8 Cholesky Method is a reading assignment. 


Crout's Method
Let us consider three equations in three unknowns:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2     ⇒ AX = B
a31 x1 + a32 x2 + a33 x3 = b3

Let A = LU where

L = [ ℓ11 0 0; ℓ21 ℓ22 0; ℓ31 ℓ32 ℓ33 ]   and   U = [ 1 u12 u13; 0 1 u23; 0 0 1 ]

so that

LU = [ ℓ11 0 0; ℓ21 ℓ22 0; ℓ31 ℓ32 ℓ33 ] [ 1 u12 u13; 0 1 u23; 0 0 1 ] = [ a11 a12 a13; a21 a22 a23; a31 a32 a33 ] = A

provided that all the leading principal minors of A are non-zero.


Now, AX = B ⇒ LUX = B. Let UX = V, so that L(UX) = LV = B. Thus LV = B becomes

[ ℓ11 0 0; ℓ21 ℓ22 0; ℓ31 ℓ32 ℓ33 ] (v1, v2, v3)^T = (b1, b2, b3)^T, solved for v1, v2, v3 by forward substitution.

Since UX = V, we have

[ 1 u12 u13; 0 1 u23; 0 0 1 ] (x1, x2, x3)^T = (v1, v2, v3)^T, solved for x1, x2, x3 by backward substitution.

Exercise 2.9 Using Crouts’s method solve the system


    
1 1 1 x1 1
 4 3 −1   x2  =  6 
3 5 3 x3 4


Let LU = A:

[ ℓ11 0 0; ℓ21 ℓ22 0; ℓ31 ℓ32 ℓ33 ] [ 1 u12 u13; 0 1 u23; 0 0 1 ] = [ 1 1 1; 4 3 −1; 3 5 3 ]

⇒ [ ℓ11    ℓ11 u12          ℓ11 u13
     ℓ21    ℓ21 u12 + ℓ22    ℓ21 u13 + ℓ22 u23
     ℓ31    ℓ31 u12 + ℓ32    ℓ31 u13 + ℓ32 u23 + ℓ33 ] = [ 1 1 1; 4 3 −1; 3 5 3 ]

Matching entries gives

ℓ11 = 1, ℓ21 = 4, ℓ31 = 3, ℓ22 = −1, ℓ32 = 2, ℓ33 = −10, u12 = 1, u13 = 1, u23 = 5

Thus,

L = [ 1 0 0; 4 −1 0; 3 2 −10 ]   and   U = [ 1 1 1; 0 1 5; 0 0 1 ]


Suppose V = UX ⇒ LV = B; then

[ 1 0 0; 4 −1 0; 3 2 −10 ] (v1, v2, v3)^T = (1, 6, 4)^T ⇒ (v1, v2, v3) = (1, −2, −0.5) by forward substitution.

Now, since UX = V, we have

[ 1 1 1; 0 1 5; 0 0 1 ] (x1, x2, x3)^T = (1, −2, −0.5)^T ⇒ (x1, x2, x3) = (1, 0.5, −0.5) by backward substitution.

Exercise 2.10 Solve the system



2x1 + x2 + x4 = 1 
3x1 + 0.5x2 + x3 + x4 = 2

4x1 + 2x2 + 2x3 + x4 = −1 

x2 + x3 + 2x4 = 0

by the Crout’s method. 

2.2.2 Iterative Methods


The basic ingredients of these methods are:
• Proper choice of initial values: for the iterative method to converge quickly, proper initial
values must be chosen. The physical problem often helps the analyst to choose proper initial
values.
• Termination of the iterative process: in practice the true solution X will not be available,
so the decision to terminate the process cannot be based on the error norm, the
difference between the iterate and the true value.
Suppose we want to solve AX = B. A relative difference norm is defined as

‖x^(k) − x^(k−1)‖ / ‖x^(k)‖

Thus the iteration may be terminated when

‖x^(k) − x^(k−1)‖ / ‖x^(k)‖ ≤ ε (tolerance)
Jacobi's Method
For the iterative scheme to converge to the true solution, the equations must satisfy the diagonal
dominance criterion

|a_ii| > Σ_{j=1, j≠i}^{n} |a_ij|

Consider the system of equations

[ a11 a12 a13 · · · a1n
  a21 a22 a23 · · · a2n
  a31 a32 a33 · · · a3n   (x1, x2, x3, . . . , xn)^T = (b1, b2, b3, . . . , bn)^T
   :   :   :        :
  an1 an2 an3 · · · ann ]


Dividing each equation by its leading diagonal term and solving for the diagonal unknown, we get

x_1 = b_1/a_11 − Σ_{j=2}^{n} (a_1j/a_11) x_j
x_2 = b_2/a_22 − Σ_{j=1, j≠2}^{n} (a_2j/a_22) x_j
x_3 = b_3/a_33 − Σ_{j=1, j≠3}^{n} (a_3j/a_33) x_j
 ···
x_i = b_i/a_ii − Σ_{j=1, j≠i}^{n} (a_ij/a_ii) x_j

Procedure
Guess initial values x_i^(0) of all unknowns and solve each equation in turn for x_i^(1), using the x_j^(0):

x_i^(1) = b_i/a_ii − Σ_{j=1, j≠i}^{n} (a_ij/a_ii) x_j^(0)

In general, the values x_i^(r) at the rth iteration are found from the values x_i^(r−1) of the (r−1)th iteration.
This iteration is continued until the error

|x_i^(r) − x_i^(r−1)| < ε_1

or the relative error

|x_i^(r) − x_i^(r−1)| / |x_i^(r)| < ε_2

is small enough. If one needs p decimal places of accuracy, take

ε_1 = 0.5 × 10^(−p),   ε_2 = 0.5 × 10^(−(p+1))

A minimal code sketch of this procedure follows.
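
A minimal NumPy sketch of the Jacobi iteration (assumed code, for illustration):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-4, max_iter=100):
    A, b = np.asarray(A, float), np.asarray(b, float)
    D = np.diag(A)                  # diagonal entries a_ii
    R = A - np.diagflat(D)          # off-diagonal part
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # every component uses the OLD iterate
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Exercise 2.11 below, with X(0) = (2, 3, 0):
A = [[5, -1, 1], [2, 4, 0], [1, 1, 5]]
print(jacobi(A, [10, 12, -1], [2, 3, 0]))  # ~(2.556, 1.722, -1.056)
```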

Exercise 2.11 Solve the system



5x1 − x2 + x3 = 10 
2x1 + 4x2 = 12
x1 + x2 + 5x3 = −1

using Jacobi’s Method. 

Solution:
Let

A = [ 5 −1 1; 2 4 0; 1 1 5 ]

Then

|a11| = 5 > |a12| + |a13| = |−1| + |1| = 2
|a22| = 4 > |a21| + |a23| = |2| + |0| = 2
|a33| = 5 > |a31| + |a32| = |1| + |1| = 2

so A is diagonally dominant and the system is ready for Jacobi's method. Rearranging the
equations we get

x_1^(k) = (1/5)(10 + x_2^(k−1) − x_3^(k−1))
x_2^(k) = (1/4)(12 − 2 x_1^(k−1))
x_3^(k) = (1/5)(−1 − x_1^(k−1) − x_2^(k−1))

Let the initial approximation be X^(0) = (2, 3, 0)^T. Then

X^(1) = ((1/5)(10 + 3 − 0), (1/4)(12 − 2(2)), (1/5)(−1 − 2 − 3)) = (2.6, 2.0, −1.2)
X^(2) = ((1/5)(10 + 2.0 + 1.2), (1/4)(12 − 2(2.6)), (1/5)(−1 − 2.6 − 2.0)) = (2.64, 1.70, −1.12)
X^(3) = ((1/5)(10 + 1.70 + 1.12), (1/4)(12 − 2(2.64)), (1/5)(−1 − 2.64 − 1.70)) = (2.564, 1.680, −1.068)
X^(4) = ((1/5)(10 + 1.680 + 1.068), (1/4)(12 − 2(2.564)), (1/5)(−1 − 2.564 − 1.680)) = (2.5496, 1.7180, −1.0488)
X^(5) = ((1/5)(10 + 1.7180 + 1.0488), (1/4)(12 − 2(2.5496)), (1/5)(−1 − 2.5496 − 1.7180)) = (2.5534, 1.7252, −1.0535)

|X^(5) − X^(4)| = (0.004, 0.007, 0.005)

Thus, correct to one decimal place, the solution of the given system is X^(5).

Exercise 2.12 Solve the system



3x1 + 4x2 + 15x3 = 54.8 
x1 + 12x2 + 3x3 = 39.66
10x1 + x2 − 2x3 = 7.74

Using Jacobi’s Method. 


Gauss-Seidel Method
This method is a slight improvement of Jacobi's method. Unlike Jacobi's method, updated values of the x_i are
used as soon as they are available, instead of the values from the previous iteration. First get x_1^(1) using the formula

x_1^(1) = b_1/a_11 − Σ_{j=2}^{n} (a_1j/a_11) x_j^(0)

and thereafter get the rest of the approximations using the general iterative formula

x_i^(r) = b_i/a_ii − Σ_{j=1}^{i−1} (a_ij/a_ii) x_j^(r) − Σ_{j=i+1}^{n} (a_ij/a_ii) x_j^(r−1)

R
1. The rate of convergence of the Gauss-Seidel method is roughly twice that of Jacobi's
method.
2. For this method to converge, the coefficient matrix of the system should be diagonally
dominant.
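
A minimal NumPy sketch; the only change from the Jacobi sketch above is that components are updated in place (assumed code):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-4, max_iter=100):
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]   # x[:i] already holds NEW values
        if np.max(np.abs(x - x_old)) < tol:
            return x
    return x

# Exercise 2.13 below, after reordering for diagonal dominance:
A = [[4, 1, 2], [1, 3, 1], [1, 2, 5]]
print(gauss_seidel(A, [16, 10, 12], [0, 0, 0]))  # -> about (3, 2, 1)
```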

Exercise 2.13 Solve


    
1 3 1 x1 10
 1 2 5   x2  =  12 
4 1 2 x3 16

using Gauss-seidel method. 

Solution: Rearranging the equations so that the system is diagonally dominant:

[ 4 1 2; 1 3 1; 1 2 5 ] (x1, x2, x3)^T = (16, 10, 12)^T

Let X^(0) = (0, 0, 0)^T. The general iteration formulas are

x_1^(r) = 0.25 (16 − x_2^(r−1) − 2 x_3^(r−1))
x_2^(r) = (1/3)(10 − x_1^(r) − x_3^(r−1))
x_3^(r) = 0.2 (12 − x_1^(r) − 2 x_2^(r))

Thus,

X^(1) = (0.25(16 − 0 − 0), (1/3)(10 − 4.0 − 0), 0.2(12 − 4.0 − 2(2.0))) = (4.0000, 2.0000, 0.8000)
X^(2) = (0.25(16 − 2.0 − 1.6), (1/3)(10 − 3.1 − 0.8), 0.2(12 − 3.1 − 2(2.0333))) = (3.1000, 2.0333, 0.9667)
X^(3) = (0.25(16 − 2.0333 − 1.9333), (1/3)(10 − 3.0083 − 0.9667), 0.2(12 − 3.0083 − 2(2.0083))) = (3.0083, 2.0083, 0.9950)
X^(4) = (0.25(16 − 2.0083 − 1.9900), (1/3)(10 − 3.0004 − 0.9950), 0.2(12 − 3.0004 − 2(2.0015))) = (3.0004, 2.0015, 0.9993)

Clearly, as r → ∞, X → (3, 2, 1)^T.



Eigenvalues and Eigenvectors

3 — Solution for Eigenvalue Problems

3.1 Eigenvalues and Eigenvectors


Let A = (a_ij) be a square matrix of order n. We seek a non-zero column vector X and a constant λ such that

AX = λX ⟺ (A − λ I_n) X = 0

which is written as:

(a11 − λ) x1 + a12 x2 + a13 x3 + . . . + a1n xn = 0
a21 x1 + (a22 − λ) x2 + a23 x3 + . . . + a2n xn = 0
a31 x1 + a32 x2 + (a33 − λ) x3 + . . . + a3n xn = 0          (3.1)
  ···
an1 x1 + an2 x2 + an3 x3 + . . . + (ann − λ) xn = 0

where λ is called an eigenvalue of A and X is the corresponding eigenvector of A.

Here, the homogeneous system has a non-trivial solution if and only if the characteristic matrix A − λI is singular
(not invertible), i.e.

| a11 − λ   a12       a13      . . .  a1n     |
| a21       a22 − λ   a23      . . .  a2n     |
| a31       a32       a33 − λ  . . .  a3n     | = 0          (3.2)
|   :         :         :                :    |
| an1       an2       an3      . . .  ann − λ |

R
1. Eigenvalues and eigenvectors are defined only for square matrices.
2. The zero vector cannot be an eigenvector even though
A·0 = λ ·0
An eigenvalue, however, can be zero.

The eigenvalues are obtained by expanding the determinant and solving for λ; the polynomial
obtained has degree n. To each eigenvalue of a matrix A there corresponds at least one eigenvector, which can be found
by solving the appropriate set of homogeneous equations: substituting the eigenvalue λ_i into (3.1), we obtain
(A − λ_i I)X = 0, and if X = X_i satisfies (A − λ_i I)X = 0 then the eigenvector corresponding to λ_i is X_i.
Exercise 3.1 Find the eigenvalues and the corresponding eigenvectors of

A = [ 1 2; 4 3 ]

Sol: The characteristic equation of A is

det(A − λI) = | 1 − λ   2 ;  4   3 − λ | = 0 =⇒ (1 − λ)(3 − λ) − 2(4) = 0 =⇒ λ^2 − 4λ − 5 = 0 =⇒ λ_1 = −1, λ_2 = 5
1. To find the eigenvector corresponding to λ_1 = −1, let X_1 = (x_1, x_2)^T be the corresponding eigenvector. Then

AX_1 = λ_1 X_1 =⇒ [ 1 2; 4 3 ] (x_1, x_2)^T = −(x_1, x_2)^T =⇒ 2x_1 + 2x_2 = 0 and 4x_1 + 4x_2 = 0 =⇒ x_1 = −x_2

Hence, the corresponding eigenvector is X_1 = (−x_2, x_2)^T = x_2 (−1, 1)^T, taking x_2 arbitrary.
Choosing x_2 = 1, X_1 = (−1, 1)^T.

2. To find the eigenvector corresponding to λ_2 = 5, let X_2 = (x_1, x_2)^T be the corresponding eigenvector. Then

AX_2 = λ_2 X_2 =⇒ [ 1 2; 4 3 ] (x_1, x_2)^T = 5 (x_1, x_2)^T =⇒ −4x_1 + 2x_2 = 0 and 4x_1 − 2x_2 = 0 =⇒ x_1 = (1/2) x_2

Hence, the corresponding eigenvector is X_2 = ((1/2)x_2, x_2)^T = x_2 (1/2, 1)^T, taking x_2 arbitrary.
Choosing x_2 = 2, X_2 = (1, 2)^T.
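
The hand computation in Exercise 3.1 can be checked numerically; a quick NumPy sketch (assumed code; NumPy normalizes eigenvectors and fixes no particular ordering, so results agree only up to scaling and order):

```python
import numpy as np

vals, vecs = np.linalg.eig(np.array([[1.0, 2.0], [4.0, 3.0]]))
print(vals)   # eigenvalues -1 and 5, in some order
print(vecs)   # columns proportional to (-1, 1) and (1, 2)
```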

Power Method
The power method is used to find the largest eigenvalue (in magnitude) and the corresponding eigenvector of a square
matrix A. This method is an iterative method.
Let A be the matrix whose eigenvalue and eigenvector are to be determined. The method starts from AX = λX.
Let X^(0) be a non-zero initial vector; we evaluate AX^(0) and write

AX^(0) = λ^(1) X^(1)

where λ^(1) is the component of AX^(0) that is largest in absolute value. Then λ^(1) is the first approximation to the
eigenvalue and X^(1) is the first approximation to the corresponding eigenvector. Similarly, we evaluate
AX^(1) and write the result as

AX^(1) = λ^(2) X^(2)

which gives the second approximation. This process is repeated until |X^(r) − X^(r−1)| is negligible; then λ^(r) is the
largest eigenvalue of A and X^(r) is the corresponding eigenvector.
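
A minimal Python sketch of this iteration, normalizing by the largest-magnitude component (assumed code):

```python
import numpy as np

def power_method(A, x0, tol=0.01, max_iter=100):
    A = np.asarray(A, float)
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        y = A @ x
        lam = y[int(np.argmax(np.abs(y)))]   # dominant component = lambda estimate
        x_new = y / lam                       # rescale so the largest entry is 1
        if np.max(np.abs(x_new - x)) < tol:
            return lam, x_new
        x = x_new
    return lam, x

# Example 3.1 below:
lam, v = power_method([[1, 3, -1], [3, 2, 4], [-1, 4, 10]], [0, 0, 1])
print(lam, v)   # ~11.56 and ~(0.02, 0.42, 1.00)
```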
 Example 3.1 Find the largest eigenvalue and the corresponding eigenvector of the matrix

A = [ 1 3 −1; 3 2 4; −1 4 10 ]

starting with X^(0) = (0, 0, 1)^T as the initial eigenvector; take the tolerance limit as 0.01. 

Sol: First approximation:

AX^(0) = [ 1 3 −1; 3 2 4; −1 4 10 ] (0, 0, 1)^T = (−1, 4, 10)^T = 10 (−0.1, 0.4, 1.0)^T =⇒ λ^(1) = 10 and X^(1) = (−0.1, 0.4, 1.0)^T

Second approximation:

AX^(1) = (0.1, 4.5, 11.7)^T = 11.7 (0.009, 0.385, 1.000)^T =⇒ λ^(2) = 11.7 and X^(2) = (0.009, 0.385, 1.000)^T

Third approximation:

AX^(2) = (0.164, 4.797, 11.531)^T = 11.531 (0.014, 0.416, 1.000)^T =⇒ λ^(3) = 11.531 and X^(3) = (0.014, 0.416, 1.000)^T

Repeating the above process, we get the largest eigenvalue λ^(5) = 11.560 and the corresponding eigenvector
X^(5) = (0.024, 0.421, 1.000)^T.



Introduction
Numerical Differentiation
Newton's Forward Interpolation Formula
Newton's Backward Difference
Numerical Integration
Newton-Cotes Formula
Trapezoidal Rule for Integration
Simpson's 1/3-Rule for Integration
Simpson's 3/8-Rule for Integration

4 — Numerical Differentiation and Integration

4.1 Introduction
Several methods are available to find the derivative of a function f (x) and to evaluate the definite integral
∫_a^b f (x) dx, for a, b ∈ R, in closed form. However, when f (x) is a complicated function, or when it is given
only in tabular form, we use numerical methods. For instance,

∫_a^b e^x sin x / (1 + x^2) dx

cannot be evaluated by other methods.

4.2 Numerical Differentiation


Numerical differentiation is required when a function is given in a tabular form as

x x0 x1 x2 ... xn
f (x) y0 y1 y2 ... yn

If the data are given at equidistant values of x (equally spaced data), the function f should be represented by
an interpolation formula employing differences, such as Newton's forward or backward interpolation formulas.
Otherwise we must represent f by Lagrange's or Newton's divided difference formulas.

4.2.1 Newton’s Forward Interpolation Formula


Consider Newton's forward difference formula; putting u = (x − x_0)/h we get

f (x) = y = y_0 + uΔy_0 + [u(u−1)/2!] Δ²y_0 + [u(u−1)(u−2)/3!] Δ³y_0 + . . . + [u(u−1)···(u−(n−1))/n!] Δⁿy_0

Then

f ′(x) = dy/dx = (dy/du)(du/dx) = (1/h)(dy/du)
       = (1/h) [ Δy_0 + ((2u−1)/2!) Δ²y_0 + ((3u²−6u+2)/3!) Δ³y_0 + . . . ]          (4.1)

For a tabular value of x the formula takes a simple form with a pattern: setting x = x_0, we obtain u = 0 (since
u = (x − x_0)/h), and hence

dy/dx |_{x=x_0} = (1/h) [ Δy_0 − (1/2) Δ²y_0 + (1/3) Δ³y_0 − (1/4) Δ⁴y_0 + . . . ]

Formulas for computing higher order derivatives may be obtained by successive differentiation.
For the 2nd order derivative, differentiating Equation 4.1 again, we obtain

d²y/dx² = (1/h²) [ Δ²y_0 + ((6u−6)/3!) Δ³y_0 + ((12u²−36u+22)/4!) Δ⁴y_0 + . . . ]          (4.2)

R We can also derive different formula from other interpolation formulas.
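
A small sketch that builds the forward difference table and sums the first terms of Equation (4.1); the helper names are assumptions, and the truncation after the Δ⁴ term matches the five-point data of Example 4.1 below:

```python
import numpy as np

def forward_differences(y):
    # Returns [y, Dy, D^2 y, ...]; element [j][0] is Delta^j y_0.
    cols = [np.asarray(y, float)]
    while len(cols[-1]) > 1:
        cols.append(np.diff(cols[-1]))
    return cols

def dydx(x, x0, h, y):
    # First four terms of Equation (4.1); needs at least 5 data points.
    u = (x - x0) / h
    d = forward_differences(y)
    return (d[1][0]
            + (2*u - 1) / 2 * d[2][0]
            + (3*u**2 - 6*u + 2) / 6 * d[3][0]
            + (4*u**3 - 18*u**2 + 22*u - 6) / 24 * d[4][0]) / h

# Example 4.1(a) below:
y = [19.96, 36.65, 58.81, 77.21, 94.61]
print(dydx(51, 50, 10, y))   # ~1.0316
```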

4.2.2 Newton’s Backward Difference


Recall that Newton's backward difference formula is given by

y = y_n + p∇y_n + [p(p+1)/2!] ∇²y_n + [p(p+1)(p+2)/3!] ∇³y_n + . . . + [p(p+1)···(p+n−1)/n!] ∇ⁿy_n          (4.3)

where p = (x − x_n)/h. Using this formula,

dy/dx = (1/h) [ ∇y_n + ((2p+1)/2!) ∇²y_n + ((3p²+6p+2)/3!) ∇³y_n + . . . ]

=⇒ dy/dx |_{x=x_n} = (1/h) [ ∇y_n + (1/2) ∇²y_n + (1/3) ∇³y_n + (1/4) ∇⁴y_n + . . . ]

The second order derivative is

d²y/dx² = (1/h²) [ ∇²y_n + ((6p+6)/3!) ∇³y_n + ((12p²+36p+22)/4!) ∇⁴y_n + . . . ]          (4.4)

=⇒ d²y/dx² |_{x=x_n} = (1/h²) [ ∇²y_n + ∇³y_n + (11/12) ∇⁴y_n + (5/6) ∇⁵y_n + . . . ]
 Example 4.1 Find dy/dx and d²y/dx² from the following data at

x      50     60     70     80     90
f (x)  19.96  36.65  58.81  77.21  94.61

a) x = 51
b) x = 65
c) x = 88

SOLUTION:
Here h = 10. We use Newton's forward formula taking x_0 = 50 =⇒ u = (x − x_0)/h = (51 − 50)/10 = 0.1.
We first construct the forward difference table:

x    u = (x − 50)/10   y       Δy      Δ²y     Δ³y     Δ⁴y
50   0                 19.96
                               16.69
60   1                 36.65           5.47
                               22.16           −9.23
70   2                 58.81          −3.76            11.99
                               18.40            2.76
80   3                 77.21          −1.00
                               17.40
90   4                 94.61

dy/dx |_{x=51} = (1/h) [ Δy_0 + ((2u−1)/2!) Δ²y_0 + ((3u²−6u+2)/3!) Δ³y_0 + ((4u³−18u²+22u−6)/4!) Δ⁴y_0 ]
 = (1/10) [ 16.69 + ((2(0.1)−1)/2)(5.47) + ((3(0.1)²−6(0.1)+2)/6)(−9.23) + ((4(0.1)³−18(0.1)²+22(0.1)−6)/24)(11.99) ]
 = 1.0316

d²y/dx² |_{x=51} = (1/h²) [ Δ²y_0 + ((6u−6)/3!) Δ³y_0 + ((12u²−36u+22)/4!) Δ⁴y_0 ]
 = (1/10²) [ 5.47 + ((6(0.1)−6)/6)(−9.23) + ((12(0.1)²−36(0.1)+22)/24)(11.99) ]
 = 0.2303

Exercise 4.1 (b) and (c) are left as an exercise. 

4.3 Numerical Integration


The evaluation of integrals, a process known as integration or quadrature, is required in many problems in
engineering and science:

I = ∫_a^b f (x) dx          (4.5)

The function f (x) to be integrated may be a known function or a set of discrete data. Many known functions,
however, do not have an explicit antiderivative, and an approximate numerical procedure is required to evaluate
Equation 4.5.
Numerical integration (quadrature) formulas can be developed by fitting an approximating function (e.g. a
polynomial) to discrete data and integrating the approximating function p_n(x):

I = ∫_a^b f (x) dx ≈ ∫_a^b p_n(x) dx          (4.6)

4.3.1 Newton-Cotes Formula


When the function to be integrated is known at equally spaced points, the Newton forward difference polynomial can
be fitted to the discrete data. The resulting formula is called a Newton-Cotes formula. Thus, in Equation 4.6, p_n(x) is
Newton's forward difference interpolating polynomial, given by

p_n(x) = y_0 + uΔy_0 + [u(u−1)/2!] Δ²y_0 + . . . + [u(u−1)···(u−(n−1))/n!] Δⁿy_0          (4.7)

where

u = (x − x_0)/h =⇒ dx = h du

Hence, the second integral in Equation 4.6 must be transformed into an explicit function of u, so that
Equation 4.7 can be used directly. Thus, taking a = x_0 and b = x_n,

I = ∫_a^b f (x) dx ≈ ∫_a^b p_n(x) dx = h ∫_{u(a)}^{u(b)} p_n(u) du          (4.8)
  = ∫_{x_0}^{x_n} p_n(x) dx = h ∫_0^n p_n(u) du          (4.9)
  = nh [ y_0 + (n/2) Δy_0 + (n(2n−3)/12) Δ²y_0 + (n(n−2)²/24) Δ³y_0 + . . . ]          (4.10)

Equation 4.10 is the general Newton-Cotes quadrature formula.

Each choice of the degree n of the interpolating polynomial yields a different Newton-Cotes formula.
For n = 1, we have

∫_{x_0}^{x_1} f (x) dx ≈ h ∫_0^1 (y_0 + uΔy_0) du = h [ y_0 u + (u²/2) Δy_0 ]_0^1 = (h/2)(y_0 + y_1)

For n = 2, we have

∫_{x_0}^{x_2} f (x) dx ≈ h ∫_0^2 (y_0 + uΔy_0 + [u(u−1)/2] Δ²y_0) du
 = h [ y_0 u + (u²/2) Δy_0 + ((2u³−3u²)/12) Δ²y_0 ]_0^2 = (h/3)(y_0 + 4y_1 + y_2)

For n = 3, we have

∫_{x_0}^{x_3} f (x) dx ≈ (3h/8)(y_0 + 3y_1 + 3y_2 + y_3)

4.3.2 Trapezoidal Rule for Integration

Given discrete data (x_i, y_i), take f (x) on each interval [x_i, x_{i+1}] to be the first degree Newton forward
difference interpolating polynomial. For i = 0, 1, 2, . . . , n − 1, with u = (x − x_i)/h on each subinterval, we have

∫_{x_0}^{x_n} f (x) dx ≈ ∫_{x_0}^{x_1} f (x) dx + ∫_{x_1}^{x_2} f (x) dx + . . . + ∫_{x_{n−1}}^{x_n} f (x) dx
 = h ∫_0^1 (y_0 + uΔy_0) du + h ∫_0^1 (y_1 + uΔy_1) du + . . . + h ∫_0^1 (y_{n−1} + uΔy_{n−1}) du
 = h (y_0 + Δy_0/2) + h (y_1 + Δy_1/2) + . . . + h (y_{n−1} + Δy_{n−1}/2)
 = (h/2)(y_0 + y_1) + (h/2)(y_1 + y_2) + . . . + (h/2)(y_{n−1} + y_n)
 = (h/2) [ y_0 + 2(y_1 + y_2 + . . . + y_{n−1}) + y_n ]

which is known as the Trapezoidal Rule.
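
A short NumPy sketch of the composite rule (assumed code, for illustration):

```python
import numpy as np

def trapezoid(y, h):
    # h/2 * [y0 + 2(y1 + ... + y_{n-1}) + yn] for equally spaced samples
    y = np.asarray(y, float)
    return h / 2 * (y[0] + 2 * y[1:-1].sum() + y[-1])

# Example 4.2(a) below: integral of x^2 cos x on [1, 2] with h = 1/6
x = np.linspace(1, 2, 7)
print(trapezoid(x**2 * np.cos(x), 1/6))   # ~-0.0980 (exact: ~-0.0851)
```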


4.3.3 Simpson's 1/3-Rule for Integration

Given discrete data (x_i, y_i), take f (x) on each double interval [x_i, x_{i+2}] to be the second degree Newton
forward difference interpolating polynomial, for i = 0, 2, 4, . . . , n − 2 (n even). With u measured from the left
end of each double interval, we have

∫_{x_0}^{x_n} f (x) dx ≈ ∫_{x_0}^{x_2} f (x) dx + ∫_{x_2}^{x_4} f (x) dx + . . . + ∫_{x_{n−2}}^{x_n} f (x) dx
 = h ∫_0^2 (y_0 + uΔy_0 + [u(u−1)/2] Δ²y_0) du + h ∫_0^2 (y_2 + uΔy_2 + [u(u−1)/2] Δ²y_2) du + . . .
 = (h/3)(y_0 + 4y_1 + y_2) + (h/3)(y_2 + 4y_3 + y_4) + . . . + (h/3)(y_{n−2} + 4y_{n−1} + y_n)
 = (h/3) [ y_0 + 4(y_1 + y_3 + . . . + y_{n−1}) + 2(y_2 + y_4 + . . . + y_{n−2}) + y_n ]

which is known as Simpson's 1/3-Rule.

R To use Simpson's 1/3-rule the number of data points must be odd.
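
A minimal sketch of the composite 1/3 rule, with the odd-point check from the remark above (assumed code):

```python
import numpy as np

def simpson13(y, h):
    y = np.asarray(y, float)
    if len(y) % 2 == 0:
        raise ValueError("Simpson's 1/3 rule needs an odd number of points")
    # h/3 * [y0 + 4*(odd-indexed) + 2*(even-indexed interior) + yn]
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

# Example 4.2(b) below:
x = np.linspace(1, 2, 7)
print(simpson13(x**2 * np.cos(x), 1/6))   # ~-0.0851
```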

4.3.4 Simpson's 3/8-Rule for Integration

Given discrete data (x_i, y_i), take f (x) on each triple interval [x_i, x_{i+3}] to be the third degree Newton
forward difference interpolating polynomial, for i = 0, 3, 6, . . . , n − 3 (n a multiple of 3). Then

∫_{x_0}^{x_n} f (x) dx ≈ ∫_{x_0}^{x_3} f (x) dx + ∫_{x_3}^{x_6} f (x) dx + ∫_{x_6}^{x_9} f (x) dx + . . . + ∫_{x_{n−3}}^{x_n} f (x) dx
 = (3h/8) [ y_0 + 3(y_1 + y_2 + y_4 + y_5 + . . . + y_{n−2} + y_{n−1}) + 2(y_3 + y_6 + y_9 + . . . + y_{n−3}) + y_n ]

which is known as Simpson's 3/8-Rule.

R To use Simpson's 3/8-rule the number of data points must be 3k + 1 for k ∈ Z+.

 Example 4.2 1. Evaluate ∫_1^2 x² cos x dx with h = (2 − 1)/6 = 1/6 on [a, b] = [1, 2].

SOLUTION
Analytically, ∫_1^2 x² cos x dx = −0.0851. The tabulated values are

x      1       7/6     4/3     3/2     5/3      11/6     2
f (x)  0.5403  0.5352  0.4182  0.1592  −0.2659  −0.8723  −1.6646

(a) Trapezoidal Rule

∫_1^2 x² cos x dx = (h/2) [ y_0 + 2(y_1 + y_2 + y_3 + y_4 + y_5) + y_6 ]
 = (1/12) [ 0.5403 + 2(0.5352 + 0.4182 + 0.1592 − 0.2659 − 0.8723) − 1.6646 ]
 = −0.09796

(b) Simpson's 1/3-Rule

∫_1^2 x² cos x dx = (h/3) [ y_0 + 4(y_1 + y_3 + y_5) + 2(y_2 + y_4) + y_6 ]
 = (1/18) [ 0.5403 + 4(0.5352 + 0.1592 − 0.8723) + 2(0.4182 − 0.2659) − 1.6646 ]
 = −0.085072

(c) Simpson's 3/8-Rule

∫_1^2 x² cos x dx = (3h/8) [ y_0 + 3(y_1 + y_2 + y_4 + y_5) + 2y_3 + y_6 ]
 = (1/16) [ 0.5403 + 3(0.5352 + 0.4182 − 0.2659 − 0.8723) + 2(0.1592) − 1.6646 ]
 = −0.08502
2. Using Simpson's 1/3-Rule, evaluate the integral I = ∫_{−1}^{3} f (x) dx from the data given by

x      −1  −0.5  0    0.5  1    1.5  2    2.5  3
f (x)  7   5     3.5  4    5.5  6    6.5  5    4.5

(Ans. ≈ 20.42)

3. Evaluate the integral I = ∫_0^3 e^{2x}/(1 + x²) dx using Simpson's three-eighth rule with h = 0.25
(13 function values, i.e. 12 subintervals).

