Locating Roots
Method for Finding the Roots of Equations
Direct Method
Indirect or Iterative Methods
False Position method
Secant Method
Newton-Raphson Method
1.1 Introduction
One of the most common problems encountered in applied mathematics is: given a function
f (x), find the values of x for which
f (x) = 0
A solution of this equation is known as a root of the equation, or a zero of the function
f (x).
Definition 1.1.1 Consider an expression of the form
Example 1.1
1. x3 − 3x + 6 = 0 ⇒ algebraic equation.
2. 2x4 + ex sin 2x = 0 ⇒ transcendental equation.
In each kind, if the coefficients are pure numbers, then they are called numerical equations.
Note: Physically, kth-order convergence indicates the factor by which the number of significant
digits in the approximations increases at each iteration.
Then f (x) changes sign on [a, b], and f (x) = 0 has at least one root on the interval by IVT.
Figure 1.2: A sketch indicating the interval where the root is located.
Definition 1.3.2 The simplest numerical procedure for finding a root is to repeatedly halve
the interval [a, b], keeping the half on which f (x) changes sign. This procedure is called
the bisection method, and it is guaranteed to converge to a root, denoted here by α.
Suppose that we are given an interval [a, b] satisfying (1.1) and an error tolerance ε > 0. The
bisection method consists of the following steps:
1. Define x1 = (a + b)/2.
2. If b − x1 ≤ ε, then accept x1 as the root and stop.
Solution Let the interval be [1, 1.5] =⇒ f (1) = −1 and f (1.5) = 8.8906. Therefore, on this interval,
Figure 1.3: A sketch indicating the interval where the root is located.
condition (1.1) is satisfied. Thus, the 1st approximation of the root is:
x1 = (a + b)/2 = (1 + 1.5)/2 = 1.25
f (x1) = 1.25^6 − 1.25 − 1 = 1.5647
f (1) · f (1.25) < 0 =⇒ the new interval is [1, 1.25]
ε1 = |a − x1| = |1 − 1.25| = 0.25
|x_{i+1} − x_i| = (b − a)/2^i =⇒ lim_{i→∞} |x_{i+1} − x_i| = 0
where b − a denotes the length of the original interval with which we started. Since the root
α ∈ [xi+1 , xi ] or [xi , xi+1 ], we know that
|α − xi+1 | ≤ |xi+1 − xi |
|α − xi | ≤ ε
ADVANTAGE
There are several advantages to the bisection method:
1. It is guaranteed to converge.
2. The error bound is guaranteed to decrease by one-half with each iteration.
DISADVANTAGE
The principal disadvantage of the bisection method is that it generally converges more slowly
than most other methods.
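The bisection steps above can be sketched in Python (a minimal sketch; the tolerance is a free choice, and the test function f(x) = x^6 − x − 1 with bracket [1, 1.5] is the one used in the worked example):

```python
def bisect(f, a, b, eps=1e-6):
    """Bisection: repeatedly halve [a, b], keeping the half where f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while True:
        x1 = (a + b) / 2            # step 1: midpoint
        if b - x1 <= eps:           # step 2: accept x1 when the bracket is small enough
            return x1
        if f(a) * f(x1) < 0:        # root is in the left half
            b = x1
        else:                       # root is in the right half
            a = x1

# The worked example: f(x) = x^6 - x - 1 on [1, 1.5]
root = bisect(lambda x: x**6 - x - 1, 1.0, 1.5)
print(round(root, 4))   # → 1.1347
```

Since the error bound halves at every pass, roughly log2((b − a)/ε) iterations are needed.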
similar triangles, shown in Figure (1.4), the intersection of the straight line with the x axis can be
estimated as
(f (xu) − 0)/(xu − xr) = (0 − f (xl))/(xr − xl)
which can be solved for
xr = (xl f (xu) − xu f (xl))/(f (xu) − f (xl))
Procedure for the False Position Method to Find the Root of the Equation f (x) = 0
Step 1: Choose two initial guess values (approximations) x0 and x1 (where x1 > x0 ) such that
f (x0 ). f (x1 ) < 0.
Step 2: Find the next approximation x2 using the formula
x2 = (x0 f (x1) − x1 f (x0))/(f (x1) − f (x0))
Step 3: If f (x2 ) f (x1 ) < 0, then go to the next step. If not, rename x0 as x1 and then go to the next
step.
Step 4: Evaluate successive approximations using the formula
xn+1 = (xn−1 f (xn) − xn f (xn−1))/(f (xn) − f (xn−1)), where n = 2, 3, . . .
But before applying the formula for xn+1, check that f (xn−1) · f (xn) < 0; if not,
rename xn−2 as xn−1 and proceed.
Step 5: Stop the evaluation when |xn − xn−1 | < ε, where ε is the prescribed accuracy (tolerance
error).
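The five steps can be collected into a small routine (a sketch; the stopping rule follows Step 5, and the bracket update mirrors Steps 3 and 4):

```python
def false_position(f, x0, x1, eps=1e-5, max_iter=100):
    """False Position: keep a sign-changing bracket and intersect the chord
    through (x0, f(x0)) and (x1, f(x1)) with the x-axis."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("need f(x0) * f(x1) < 0")
    xr = x0
    for _ in range(max_iter):
        xr_old = xr
        xr = (x0 * f1 - x1 * f0) / (f1 - f0)   # chord / x-axis intersection
        fr = f(xr)
        if abs(xr - xr_old) < eps:             # step 5: prescribed tolerance reached
            break
        if f0 * fr < 0:                        # root in [x0, xr]: move the upper end
            x1, f1 = xr, fr
        else:                                  # root in [xr, x1]: move the lower end
            x0, f0 = xr, fr
    return xr

# Example 1.6: f(x) = e^x - 3x^2, root between 0.5 and 1.0
import math
root = false_position(lambda x: math.exp(x) - 3 * x**2, 0.5, 1.0)
print(round(root, 4))   # → 0.91
```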
Example 1.6 Using the False Position method, find a root of the function f (x) = e^x − 3x^2,
correct to three decimal places. The root is known to lie between 0.5 and 1.0.
Solution: Let x0 = 0.5, x1 = 1. Then we have f (0.5) = 0.8987, f (1) = −0.2817. Thus,
x2 = (x0 f (x1) − x1 f (x0))/(f (x1) − f (x0)) = 0.88067
f (x2) = 0.0858. Since f (x1) f (x2) < 0, the root lies in the interval (0.88067, 1); the next
approximation is
x3 = (x2 f (x1) − x1 f (x2))/(f (x1) − f (x2)) = 0.90852
f (x3) = 0.00441. Since f (1) f (0.90852) < 0, the root lies in the interval (0.90852, 1); the next
approximation is
x4 = (x3 f (x1) − x1 f (x3))/(f (x1) − f (x3)) = 0.90993
f (x4) = 0.00022. Since f (1) f (0.90993) < 0, the root lies in the interval (0.90993, 1); the next
approximation is
x5 = (x4 f (x1) − x1 f (x4))/(f (x1) − f (x4)) = 0.91000
f (x5) = 0.00001. Since f (1) f (0.91000) < 0, the root lies in the interval (0.91000, 1); the next
approximation is
x6 = (x5 f (x1) − x1 f (x5))/(f (x1) − f (x5)) = 0.91001
f (x6) = 0.00000
Therefore, the root is 0.9100, correct to four decimal places.
Exercise 1.1 1. Find a real root of cos x − 3x + 5 = 0, correct to four decimal places,
using the method of False Position.
2. Locate the intervals which contain the positive real roots of the equation x3 − 3x + 1 = 0.
Obtain these roots correct to three decimal places, using the method of false position.
Exercise 1.2 1. Find a root of the equation x3 − 8x − 5 = 0 using the secant method.
2. Calculate an approximate value of 4^(3/4) using one step of the secant method with x0 = 3
and x1 = 2.
3. What is the appropriate formula for finding square roots using the secant method?
4. If xn+1 = xn + (2 − e^xn)(xn − xn−1)/(e^xn − e^xn−1)
with x0 = 0 and x1 = 1, what is lim_{n→∞} xn?
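Exercise 1.2(4) already displays the general secant update; as a minimal sketch, the iteration there is exactly the secant method applied to f(x) = e^x − 2, so the limit is ln 2:

```python
import math

def secant(f, x0, x1, eps=1e-8, max_iter=50):
    """Secant iteration: x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, x1 = x1, x2
    return x1

# Exercise 1.2(4): the given iteration is the secant method for f(x) = e^x - 2,
# so x_n converges to ln 2.
limit = secant(lambda x: math.exp(x) - 2, 0.0, 1.0)
print(round(limit, 6))   # → 0.693147
```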
Exercise 1.3 Use the NR method to determine the root of the function with accuracy 10−4 .
1. f (x) = x6 − x − 1
2. f (x) = cos x − 3x + 5
3. x log x = 1.2
4. f (x) = cos x − xex
Exercise 1.4 1. Derive Newton’s method for finding the qth root of a positive number
N, N 1/q , where N > 0, q > 0.
2. Calculate an approximate value of 4^(3/4) using the NR method with x0 = 3.
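For these exercises, a minimal Newton-Raphson sketch (the iteration x_{n+1} = x_n − f(x_n)/f′(x_n); the starting guesses below are illustrative):

```python
def newton_raphson(f, df, x0, eps=1e-4, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n), stopped at accuracy eps."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Exercise 1.3(1): f(x) = x^6 - x - 1, f'(x) = 6x^5 - 1
root = newton_raphson(lambda x: x**6 - x - 1, lambda x: 6 * x**5 - 1, 1.5)
print(round(root, 4))   # → 1.1347

# Exercise 1.4: the qth root of N solves f(x) = x^q - N, which gives the iteration
# x_{n+1} = ((q - 1) x_n + N / x_n^(q-1)) / q; here q = 3, N = 8 as an illustration.
cube_root_of_8 = newton_raphson(lambda x: x**3 - 8, lambda x: 3 * x**2, 3.0)
print(round(cube_root_of_8, 4))   # → 2.0
```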
2.1 Introduction
This chapter is about solving systems of linear equations, which have many applications in
engineering and science. Consider an n × n linear system of equations as shown below.
a11 x1 + a12 x2 + a13 x3 + . . . + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + . . . + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + . . . + a3n xn = b3 (2.1)
.. .. .. .. .. ..
. . . . . .
an1 x1 + an2 x2 + an3 x3 + . . . + ann xn = bn
This equation can be written in a matrix form as
AX = B
Where A is an n × n coefficient matrix and X and B are column vectors of size n; matrix
multiplication makes this compact form equivalent to the system of linear algebraic equations.
If B = 0 then the above system is called a homogeneous system of equations; otherwise it is
non-homogeneous. A homogeneous system of equations can be solved using the eigenvalue
method, leading to an eigenvalue problem.
Definition 2.1.1 If a system of equations is satisfied simultaneously by at least one set of
values, then it is consistent.
There are three row operations that are useful when solving systems of linear algebraic equations.
These operations do not affect the solution of the system, so they can be used without
hesitation during the solution process whenever necessary. These are:
1. Scaling: Any row of the above equation can be multiplied by a constant.
2. Pivoting: The order of rows can be interchanged as required.
3. Elimination: Any row of a system of linear equations can be replaced by any weighted
linear combination of that row with any other row.
Let |A| be the determinant of A such that |A| ≠ 0. Then to find xi , i = 1, 2, 3, replace the ith
column of the matrix A by the right-hand-side vector B, evaluate the determinant, and divide it
by |A|. i.e.
x1 + x2 − x3 = 1
x1 + 2x2 − 2x3 = 0
−2x1 + x2 + x3 = 1
Therefore, with
|A| = det[1 1 −1; 1 2 −2; −2 1 1] = 2,
x1 = det[1 1 −1; 0 2 −2; 1 1 1] / |A| = 4/2 = 2,
x2 = det[1 1 −1; 1 0 −2; −2 1 1] / |A| = 4/2 = 2, and
x3 = det[1 1 1; 1 2 0; −2 1 1] / |A| = 6/2 = 3.
R [Limitation of Cramer’s Rule] Cramer’s rule may be used for a system of size n × n for n ≤ 5;
for n > 5 this rule is impractical. For n = 10, for instance, the number of multiplications
required is about 70,000,000, which is infeasible to compute. Hence the use of numerical
methods to solve systems of linear equations is appropriate and efficient.
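For the 3 × 3 example above, Cramer's rule can be sketched directly (a minimal sketch using cofactor expansion for the determinants):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    """Cramer's rule: replace column i of A by b and divide the determinants."""
    d = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]      # copy A, then overwrite column i with b
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(det3(Ai) / d)
    return xs

# The worked example in the text
A = [[1, 1, -1], [1, 2, -2], [-2, 1, 1]]
b = [1, 0, 1]
print(cramer3(A, b))   # → [2.0, 2.0, 3.0]
```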
Gaussian Elimination
The aim of this method is to solve a system of n equations in n unknowns by reducing it to an
equivalent triangular set, in preparation for a back substitution or forward substitution.
This method hence consists of two steps:
1. Triangularization and
2. Back or forward substitution.
Triangularization
[ a11 a12 a13 ... a1n | b1 ]                                [ a11 a12 a13 ... a1n | b1 ]
[ a21 a22 a23 ... a2n | b2 ]      Triangularization         [  0  a22 a23 ... a2n | b2 ]
[ a31 a32 a33 ... a3n | b3 ]  --------------------------->  [  0   0  a33 ... a3n | b3 ]
[  .   .   .  ...  .  |  . ]  by applying row operations    [  .   .   .  ...  .  |  . ]
[ an1 an2 an3 ... ann | bn ]                                [  0   0   0  ... ann | bn ]
OR
[ a11 a12 a13 ... a1n | b1 ]                                [ a11  0   0  ...  0  | b1 ]
[ a21 a22 a23 ... a2n | b2 ]      Triangularization         [ a21 a22  0  ...  0  | b2 ]
[ a31 a32 a33 ... a3n | b3 ]  --------------------------->  [ a31 a32 a33 ...  0  | b3 ]
[  .   .   .  ...  .  |  . ]  by applying row operations    [  .   .   .  ...  .  |  . ]
[ an1 an2 an3 ... ann | bn ]                                [ an1 an2 an3 ... ann | bn ]
Let
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2        (2.2)
a31 x1 + a32 x2 + a33 x3 = b3
be a system of equations in three unknowns.
To reduce this system to an upper triangular form, multiply the first equation by a21/a11 and
subtract it from the second equation to eliminate x1 from the second equation. Similarly, multiply
the first equation by a31/a11 and subtract it from the third equation to eliminate x1 from the third
equation. As a result the above system becomes,
a11 x1 + a12 x2 + a13 x3 = b1
a′22 x2 + a′23 x3 = b′2
a′32 x2 + a′33 x3 = b′3
Again, from this last equation eliminate x2 by multiplying the second equation by a′32/a′22 and
subtracting it from the third equation. Thus, the above system of equations becomes,
a11 x1 + a12 x2 + a13 x3 = b1
a′22 x2 + a′23 x3 = b′2
a″33 x3 = b″3
Back Substitution
After the triangular set of equations has been found, the last equation in this equivalent set yields
the value of x3 directly as
x3 = b″3 / a″33.
The values of x2 and x1 can then be obtained from the second and first equations respectively. In
the same way, the Gauss elimination method can be generalized to find the solution of a system of
n equations in n unknowns.
Exercise 2.2 Solve
2x1 + 3x2 − x3 = 5
4x1 + 4x2 − 3x3 = 3 using Gauss-Elimination method.
2x1 − 3x2 + 2x3 = 2
To eliminate x1 from the second equation, multiply the first equation by a21/a11 = 4/2 = 2 and
subtract it from the second equation. Similarly, eliminate x1 from the third equation by multiplying
the first equation by a31/a11 = 2/2 = 1 and subtracting; then eliminate x2 from the third equation
by multiplying the second equation by a32/a22 = −6/2 = −3 and subtracting from the third
equation to obtain
2x1 + 3x2 − x3 = 5
2x2 + x3 = 7
6x3 = 18
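The two-step process (triangularization, then back substitution) can be sketched as follows; the example reuses Exercise 2.2, and no pivoting is done, as in the hand computation:

```python
def gauss_eliminate(A, b):
    """Naive Gaussian elimination (no pivoting) followed by back substitution."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    # Triangularization: eliminate column k below the diagonal
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution, starting from the last equation
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Exercise 2.2
A = [[2, 3, -1], [4, 4, -3], [2, -3, 2]]
b = [5, 3, 2]
x = gauss_eliminate(A, b)
print(x)   # → [1.0, 2.0, 3.0]
```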
step 1: The numerically largest coefficient of x1 (it may be positive or negative) is selected from
all the equations; the first equation is then interchanged with the equation containing this
largest value. This largest value is called the pivot element and the row containing it is
called the pivot row. This row is used to eliminate the other coefficients of x1 .
step 2: The numerically largest coefficient of x2 is selected from the remaining equations. In this
step we do not consider the first equation. Then interchange the second equation with the
row containing this largest value. The row containing this value is used to eliminate the
other coefficients of x2 , except in the previously selected rows. This procedure is continued
until the upper triangular system is obtained.
R The pivot element is selected to be the largest in absolute value to maximize the precision
of the solution.
[ 4  4  1  4 | 12 ]             [ 10  5 −5  0 | 25 ]  R2 −→ R2 − (1/5)R1   [ 10  5 −5  0 | 25 ]
[ 2  5  7  4 |  1 ]  R1 ←→ R3  [  2  5  7  4 |  1 ]  R3 −→ R3 − (2/5)R1   [  0  4  8  4 | −4 ]
[10  5 −5  0 | 25 ]  ────────→ [  4  4  1  4 | 12 ]  R4 −→ R4 + (1/5)R1   [  0  2  3  4 |  2 ]
[−2 −2  1 −3 |−10 ]             [ −2 −2  1 −3 |−10 ]  ──────────────────→  [  0 −1  0 −3 | −5 ]

R3 −→ R3 − (1/2)R2   [ 10  5 −5  0 | 25 ]             [ 10  5 −5  0 | 25 ]
R4 −→ R4 + (1/4)R2   [  0  4  8  4 | −4 ]  R3 ←→ R4  [  0  4  8  4 | −4 ]  R4 −→ R4 + (1/2)R3
──────────────────→  [  0  0 −1  2 |  4 ]  ────────→ [  0  0  2 −2 | −6 ]  ──────────────────→
                     [  0  0  2 −2 | −6 ]             [  0  0 −1  2 |  4 ]

[ 10  5 −5  0 | 25 ]
[  0  4  8  4 | −4 ]
[  0  0  2 −2 | −6 ]
[  0  0  0  1 |  1 ]

Therefore, after back substitution, the solution of the system is
x1 = 1/2, x2 = 2, x3 = −2, x4 = 1.
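The pivoting procedure can be sketched by adding a row swap before each elimination step (a minimal sketch, reusing the 4 × 4 example above):

```python
def gauss_partial_pivot(A, b):
    """Gaussian elimination with partial pivoting: before eliminating column k,
    swap in the row whose entry in column k is largest in absolute value."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))   # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# The 4x4 system from the worked example above
A = [[4, 4, 1, 4], [2, 5, 7, 4], [10, 5, -5, 0], [-2, -2, 1, -3]]
b = [12, 1, 25, -10]
x = gauss_partial_pivot(A, b)
print(x)   # → [0.5, 2.0, -2.0, 1.0]
```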
On the other hand if small changes in A and/or B causes a small change in the solution of the
system, then it is said to be stable (well conditioned). Thus in an ill conditioned system, even the
round off errors affect the solution badly. And it is quite difficult to recognize an ill conditioned
system.
Suppose the elements of the coefficient matrix A are altered or perturbed slightly to yield the
following generally similar system:
[  101 −200 ] [x1]   [ 100]
[ −200  400 ] [x2] = [−100]
Note that a11 has increased by 1% and a22 has decreased by less than 0.25%. With these
rather modest changes in A, the exact solution of the perturbed system is
X = (50.00, 24.75)^T
which means a change of about 1% in the equations generates a change on the order of 70% in the
solution. Thus, the system is clearly ill-conditioned.
Exercise 2.6 Show that the system of equations
[  7 −10 ] [x1]   [ 1 ]
[ −5   7 ] [x2] = [0.7]
is ill-conditioned, by making the slight change
[  7 −10 ] [x1]   [1.01]
[ −5   7 ] [x2] = [0.69]
LU-Factorization (Decomposition)
To solve the general linear system AX = B by this method, factorize the coefficient matrix A into a
product of two triangular matrices. This reduces the problem to solving two triangular linear systems.
This method is called triangular decomposition; it includes the variants of Crout, Doolittle
and Cholesky.
Doolittle Version
Consider the system of equations AX = B. We will factor the matrix A into two other matrices:
a lower triangular matrix L and an upper triangular matrix U such that A = LU. The method
for solving the given system is described as follows.
Starting with the system AX = B, introduce a new variable Y such that Y = UX, so that
AX = (LU)X = L(UX) = B ⇒ LY = B. Since L is a lower triangular matrix, this system is
almost trivial to solve for the unknown Y ; once we have found the vector Y , we solve the system
UX = Y .
Now the only things left to find are the factors L = (ℓij) and U = (uij), the triangular
matrices. This is accomplished by the usual Gauss elimination method: in the Gauss elimination
process, to eliminate the ith-row, jth-column entry of A (or its equivalent) we replace the ith
row by Ri − αRj (i.e. Ri → Ri − αRj ); α then becomes the ith-row, jth-column entry of L (ℓij = α).
This will be illustrated by the following example.
[1 2 3] [x1]   [0]
[3 4 1] [x2] = [6]
[1 0 1] [x3]   [1]
The main point here is decomposing the coefficient matrix A into lower and upper triangular
matrices L and U such that A = LU by applying the Gauss-Elimination.
Therefore,
        [1  2  3]            [ 1    0   0]   [1 0 0]
∴ U =  [0 −2 −8]   and  L = [ℓ21   1   0] = [3 1 0]
        [0  0  6]            [ℓ31  ℓ32  1]   [1 1 1]
Since AX = (LU)X = L(UX) = B, let Y = UX = (y1, y2, y3)^T ⇒ LY = B. Thus
[1 0 0] [y1]   [0]
[3 1 0] [y2] = [6]
[1 1 1] [y3]   [1]
and forward substitution gives Y = (0, 6, −5)^T. But Y = UX ⇒
[1  2  3] [x1]   [ 0]
[0 −2 −8] [x2] = [ 6]
[0  0  6] [x3]   [−5]
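The Doolittle factorization described above (the multipliers from Gauss elimination become the entries of L) can be sketched as:

```python
def doolittle_lu(A):
    """Doolittle factorization A = LU with a unit diagonal on L, via Gauss
    elimination: each multiplier used to eliminate entry (i, k) becomes l_ik."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def solve_lu(L, U, b):
    """Solve LY = B by forward substitution, then UX = Y by back substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# The worked example: A = [[1,2,3],[3,4,1],[1,0,1]], B = [0, 6, 1]
L, U = doolittle_lu([[1, 2, 3], [3, 4, 1], [1, 0, 1]])
print(L)   # → [[1.0, 0.0, 0.0], [3.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
x = solve_lu(L, U, [0, 6, 1])
print([round(v, 4) for v in x])   # → [1.8333, 0.3333, -0.8333]
```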
Crout’s Method
Let us consider three equations in three unknowns.
a11 x1 + a12 x2 + a13 x3 = b1
Let a21 x1 + a22 x2 + a23 x3 = b2 ⇒ AX = B
a31 x1 + a32 x2 + a33 x3 = b3
Let A = LU where
     [ℓ11   0    0 ]           [1 u12 u13]
L =  [ℓ21  ℓ22   0 ]   and U = [0  1  u23]
     [ℓ31  ℓ32  ℓ33]           [0  0   1 ]

          [ℓ11   0    0 ] [1 u12 u13]   [a11 a12 a13]
∴ LU =   [ℓ21  ℓ22   0 ] [0  1  u23] = [a21 a22 a23] = A
          [ℓ31  ℓ32  ℓ33] [0  0   1 ]   [a31 a32 a33]

Since UX = V we have
[1 u12 u13] [x1]   [v1]
[0  1  u23] [x2] = [v2]
[0  0   1 ] [x3]   [v3]
and solve for x1 , x2 , x3 by backward substitution.
Let
          [ℓ11   0    0 ] [1 u12 u13]   [1 1  1]
LU = A ⇒ [ℓ21  ℓ22   0 ] [0  1  u23] = [4 3 −1]
          [ℓ31  ℓ32  ℓ33] [0  0   1 ]   [3 5  3]

    [ℓ11       ℓ11 u12          ℓ11 u13            ]   [1 1  1]
⇒  [ℓ21    ℓ21 u12 + ℓ22    ℓ21 u13 + ℓ22 u23      ] = [4 3 −1]
    [ℓ31    ℓ31 u12 + ℓ32    ℓ31 u13 + ℓ32 u23 + ℓ33]   [3 5  3]

⇒ ℓ11 = 1, ℓ21 = 4, ℓ31 = 3, ℓ22 = −1, ℓ33 = −10,
   ℓ32 = 2, u12 = 1, u13 = 1, u23 = 5

Thus,
    [1  0    0 ]           [1 1 1]
L = [4 −1    0 ]   and U = [0 1 5]
    [3  2  −10 ]           [0 0 1]

Suppose V = UX ⇒ LV = B. Then
[1  0    0 ] [v1]   [1]        [v1]   [ 1  ]
[4 −1    0 ] [v2] = [6]   ⇒   [v2] = [−2  ]   by forward substitution.
[3  2  −10 ] [v3]   [4]        [v3]   [−0.5]

Now, since UX = V, we have
[1 1 1] [x1]   [ 1  ]        [x1]   [ 1  ]
[0 1 5] [x2] = [−2  ]   ⇒   [x2] = [ 0.5]   by backward substitution.
[0 0 1] [x3]   [−0.5]        [x3]   [−0.5]
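Crout's version, with the unit diagonal on U instead of L, can be sketched by solving for one column of L and one row of U at a time:

```python
def crout_lu(A):
    """Crout factorization A = LU with a unit diagonal on U (the l_ii sit in L)."""
    n = len(A)
    A = [[float(v) for v in row] for row in A]
    L = [[0.0] * n for _ in range(n)]
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):        # fill column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):    # fill row j of U, right of the diagonal
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

# The worked example: A = [[1, 1, 1], [4, 3, -1], [3, 5, 3]]
L, U = crout_lu([[1, 1, 1], [4, 3, -1], [3, 5, 3]])
print(L)   # → [[1.0, 0.0, 0.0], [4.0, -1.0, 0.0], [3.0, 2.0, -10.0]]
print(U)   # → [[1.0, 1.0, 1.0], [0.0, 1.0, 5.0], [0.0, 0.0, 1.0]]
```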
Procedure
Guess initial values xi^(0) for all the unknowns and solve each equation in turn for xi^(1), using
the xj^(0), as

xi^(1) = bi/aii − Σ_{j=1, j≠i}^{n} (aij xj^(0)) / aii
In general, the value of xi^(r) at the rth iteration is found using the values xj^(r−1) of the
(r − 1)th iteration. This iteration is continued until the error satisfies
|xi^(r) − xi^(r−1)| < ε1
where, for results correct to p decimal places,
ε1 = 0.5 × 10^−p
ε2 = 0.5 × 10^−(p+1)
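The Jacobi procedure can be sketched as follows; note that the right-hand side b below is a hypothetical example, since only the coefficient matrix appears in this excerpt:

```python
def jacobi(A, b, x0, tol=0.5e-2, max_iter=100):
    """Jacobi iteration: every component of x^(r) uses only x^(r-1)."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# The diagonally dominant coefficient matrix from the text's solution.
# b is an assumed right-hand side chosen so the exact solution is (2, 2, -1).
A = [[5, -1, 1], [2, 4, 0], [1, 1, 5]]
b = [7, 12, -1]
x = jacobi(A, b, [0.0, 0.0, 0.0])
print([round(v, 1) for v in x])   # → [2.0, 2.0, -1.0]
```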
Solution:
Let
    [5 −1 1]
A = [2  4 0]
    [1  1 5]
which is diagonally dominant. Thus, the system is ready for Jacobi’s method. Rearranging the
equations we get
Thus, correct to one decimal place, the solution of the given system is X^(5).
Gauss-Seidel Method
This method is a slight improvement of the Jacobi method. Unlike the Jacobi method, updated
values of the xi ’s are used instead of the values from the previous iteration. First get x1^(1)
using the formula

x1^(1) = b1/a11 − Σ_{j=2}^{n} (a1j xj^(0)) / a11

then obtain the rest of the approximations using the general iterative formula

xi^(r) = bi/aii − Σ_{j=1}^{i−1} (aij xj^(r)) / aii − Σ_{j=i+1}^{n} (aij xj^(r−1)) / aii
R
1. The rate of convergence of the Gauss-Seidel method is roughly twice that of the Jacobi
method.
2. For this Method to converge, the coefficient matrix of the system should be diagonally
dominant.
x3^(1) = 0.2(12 − x1^(1) − 2x2^(1)) = 0.2(12 − 4 − 2(2)) = 0.8

X^(2):
x1^(2) = 0.25(16 − x2^(1) − 2x3^(1)) = 0.25(16 − 2.0 − 2(0.8)) = 3.1000
x2^(2) = (1/3)(10 − x1^(2) − x3^(1)) = (1/3)(10 − 3.1 − 0.8) = 2.0333
x3^(2) = 0.2(12 − x1^(2) − 2x2^(2)) = 0.2(12 − 3.1 − 2(2.0333)) = 0.9667

X^(3):
x1^(3) = 0.25(16 − x2^(2) − 2x3^(2)) = 0.25(16 − 2.0333 − 2(0.9667)) = 3.0083
x2^(3) = (1/3)(10 − x1^(3) − x3^(2)) = (1/3)(10 − 3.0083 − 0.9667) = 2.0083
x3^(3) = 0.2(12 − x1^(3) − 2x2^(3)) = 0.2(12 − 3.0083 − 2(2.0083)) = 0.9950

X^(4):
x1^(4) = 0.25(16 − x2^(3) − 2x3^(3)) = 0.25(16 − 2.0083 − 2(0.9950)) = 3.0004
x2^(4) = (1/3)(10 − x1^(4) − x3^(3)) = (1/3)(10 − 3.0004 − 0.9950) = 2.0015
x3^(4) = 0.2(12 − x1^(4) − 2x2^(4)) = 0.2(12 − 3.0004 − 2(2.0015)) = 0.9993

Clearly, as n → ∞, X → (3, 2, 1)^T.
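The sweep above can be sketched in code; the system (4x1 + x2 + 2x3 = 16, x1 + 3x2 + x3 = 10, x1 + 2x2 + 5x3 = 12) is reconstructed from the update formulas in this excerpt, so treat it as an assumption:

```python
def gauss_seidel(A, b, x0, tol=1e-4, max_iter=100):
    """Gauss-Seidel: like Jacobi, but each updated component is used immediately."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]     # uses already-updated x[j] for j < i
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            return x
    return x

# System reconstructed (as an assumption) from the update formulas above
A = [[4, 1, 2], [1, 3, 1], [1, 2, 5]]
b = [16, 10, 12]
x = gauss_seidel(A, b, [4.0, 2.0, 0.8])
print([round(v, 3) for v in x])   # → [3.0, 2.0, 1.0]
```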
AX = λ X ⇐⇒ (A − λ In )X = 0
Here, the homogeneous equation will have a non-trivial solution if and only if the characteristic matrix is singular
(not invertible). i.e.
| a11 − λ     a12       a13     ...     a1n    |
|   a21     a22 − λ     a23     ...     a2n    |
|   a31       a32     a33 − λ   ...     a3n    | = 0        (3.2)
|    .         .         .       .       .     |
|   an1       an2       an3     ...   ann − λ  |
R
1. Eigenvalues and eigenvectors are defined only for square matrices.
2. The zero vector cannot be an eigenvector even though
A·0 = λ ·0
An eigenvalue, however, can be zero.
The eigenvalues can be obtained by expanding the determinant and solving for λ . The degree of the polynomial
obtained is n. To each eigenvalue of a matrix A there corresponds at least one eigenvector, which can be found
by solving the appropriate set of homogeneous equations. Substituting the eigenvalue λi into eq. (3.1), we obtain
(A − λi I)X = 0. If X = Xi satisfies the equation (A − λi I)X = 0, then the eigenvector corresponding to λi is Xi .
Solution for Eigenvalue Problems
Exercise 3.1 Find the eigenvalues and the corresponding eigenvectors of A = [1 2; 4 3].
Power Method
Power method is used to find the largest eigenvalue (in magnitude) and the corresponding eigenvector of a square
matrix A. This method is an iterative method.
Let A be the matrix whose eigenvalue and eigenvector are to be determined. The method requires writing AX = λ X.
Let X (0) be a non-zero initial vector; then we evaluate AX (0) and write it as
AX (0) = λ (1) X (1)
where λ (1) is the largest component in absolute value of the vector AX (0) . Then λ (1) is the first approximation
to the eigenvalue and X (1) is the first approximation to the corresponding eigenvector. Similarly, we evaluate
AX (1) and write the result as
AX (1) = λ (2) X (2)
which gives the second approximation. This process is repeated until |X (r) − X (r−1) | is negligible. Then λ (r) is the
largest eigenvalue of A and X (r) is the corresponding eigenvector.
Example 3.1 Find the largest eigenvalue and the corresponding eigenvector of the matrix
    [ 1 3 −1]
A = [ 3 2  4]
    [−1 4 10]
Starting with X (0) = (0, 0, 1)T as initial eigenvector take the tolerance limit as 0.01.
Second approximation:
          [ 1 3 −1] [−0.1]   [ 0.1]         [0.009]
AX (1) = [ 3 2  4] [ 0.4] = [ 4.5] = 11.7  [0.385]   =⇒ λ (2) = 11.7 and X (2) = (0.009, 0.385, 1.000)^T
          [−1 4 10] [ 1.0]   [11.7]         [1.000]

Third approximation:
          [ 1 3 −1] [0.009]   [ 0.164]           [0.014]
AX (2) = [ 3 2  4] [0.385] = [ 4.797] = 11.531  [0.416]   =⇒ λ (3) = 11.531 and X (3) = (0.014, 0.416, 1.000)^T
          [−1 4 10] [1.000]   [11.531]           [1.000]

Repeating the above process, we get the largest eigenvalue λ (5) = 11.560 and the corresponding
eigenvector X (5) = (0.024, 0.421, 1.000)^T.
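The method can be sketched as below; because the stopping test and rounding differ slightly from the hand computation, the final λ comes out near 11.6-11.7 rather than exactly the hand value of λ(5):

```python
def power_method(A, x0, tol=0.01, max_iter=100):
    """Power method: repeatedly form AX and normalize by the component that is
    largest in absolute value; that component is the eigenvalue estimate."""
    x = x0[:]
    lam = 0.0
    for _ in range(max_iter):
        y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
        lam = max(y, key=abs)              # largest component in absolute value
        x_new = [v / lam for v in y]
        if max(abs(x_new[i] - x[i]) for i in range(len(x))) < tol:
            return lam, x_new
        x = x_new
    return lam, x

# Example 3.1's matrix, starting from X(0) = (0, 0, 1)^T
A = [[1, 3, -1], [3, 2, 4], [-1, 4, 10]]
lam, vec = power_method(A, [0.0, 0.0, 1.0])
print(round(lam, 2))
```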
4.1 Introduction
Several methods are available to find the derivative of a function f (x) and to evaluate the definite integral
∫_a^b f (x) dx, for a, b ∈ R, in closed form. However, when f (x) is a complicated function, or when it is given
in tabular form, we use numerical methods. For instance, ∫_a^b (e^x sin x)/(1 + x^2) dx cannot be evaluated by
other methods.
x x0 x1 x2 ... xn
f (x) y0 y1 y2 ... yn
If this data is given at equidistant values of x (i.e. for equally spaced data), the function f should be represented
by an interpolation formula employing differences, such as Newton’s forward or backward interpolation formulas.
Otherwise we must represent it by Lagrange’s or Newton’s divided difference formulas.
For tabular values of x the formula takes a simple form with a pattern. By setting x = x0 , we obtain u = 0 since
u = (x − x0)/h, and hence
dy/dx |_{x=x0} = (1/h) [ ∆y0 − (1/2)∆²y0 + (1/3)∆³y0 − (1/4)∆⁴y0 + . . . ]
Formulas for computing higher order derivatives may be obtained by successive differentiation;
the 2nd-order derivative follows by differentiating Equation 4.1 again. Similarly, with the backward differences,

dy/dx = (1/h) [ ∇yn + ((2p + 1)/2!)∇²yn + ((3p² + 6p + 2)/3!)∇³yn + . . . ]

=⇒ dy/dx |_{x=xn} = (1/h) [ ∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + . . . ]

=⇒ d²y/dx² |_{x=xn} = (1/h²) [ ∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + . . . ]
Example 4.1 Find dy/dx and d²y/dx² from the following data at
x 50 60 70 80 90
f (x) 19.96 36.65 58.81 77.21 94.61
a) x = 51
b) x = 65
c) x = 88
SOLUTION:
Here h = 10. We use Newton’s forward formula taking x0 = 50 =⇒ u = (x − x0)/h = (51 − 50)/10 = 0.1.
We need to construct the forward difference table:
x    u = (x − 50)/10    y        ∆y      ∆²y     ∆³y     ∆⁴y
50         0           19.96
                                16.69
60         1           36.65             5.47
                                22.16           −9.23
70         2           58.81            −3.76            11.99
                                18.40            2.76
80         3           77.21            −1.00
                                17.40
90         4           94.61
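Building the table and applying the u = 0 derivative formula can be sketched as follows (the derivative shown is at x = x0 = 50; the u ≠ 0 cases of the example need the full formula):

```python
def forward_difference_table(ys):
    """Build the forward-difference columns y, Δy, Δ²y, ... from tabulated values."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# Example 4.1's data (h = 10, x0 = 50)
ys = [19.96, 36.65, 58.81, 77.21, 94.61]
table = forward_difference_table(ys)
for col in table:
    print([round(v, 2) for v in col])

# dy/dx at x = x0 using the leading differences (u = 0):
# y'(x0) ≈ (1/h) [Δy0 - Δ²y0/2 + Δ³y0/3 - Δ⁴y0/4]
h = 10
dy0, d2y0, d3y0, d4y0 = table[1][0], table[2][0], table[3][0], table[4][0]
dydx = (dy0 - d2y0 / 2 + d3y0 / 3 - d4y0 / 4) / h
```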
I = ∫_a^b f (x) dx        (4.5)
The function f (x), which is to be integrated, may be a known function or a set of discrete data. Many known functions,
however, do not have an explicit integral, and an approximate numerical procedure is required to evaluate Equation
4.5.
Numerical integration (quadrature) formulas can be developed by fitting approximating functions (e.g. polynomials)
to discrete data and integrating the approximating function pn (x).
I = ∫_a^b f (x) dx ≈ ∫_a^b pn (x) dx        (4.6)
u = (x − x0)/h =⇒ dx = h du
Hence, we observe that in Equation 4.6 the second integral must be transformed into an explicit function of u, so that
Equation 4.7 can be used directly. Thus,

I = ∫_a^b f (x) dx = ∫_a^b pn (x) dx = h ∫_{u(a)}^{u(b)} pn (u) du        (4.8)
  = ∫_{x0}^{xn} pn (x) dx = h ∫_0^n pn (u) du        (4.9)
  = nh [ y0 + (n/2)∆y0 + (n(2n − 3)/12)∆²y0 + (n(n − 2)²/24)∆³y0 + . . . ]        (4.10)
Given discrete data (xi , yi ), over each interval [xi , xi+1 ] take f (x) to be a first-degree Newton forward difference
interpolating polynomial. For i = 0, 1, 2, . . . , n − 1 we have

∫_{x0}^{xn} f (x) dx ≈ ∫_{x0}^{x1} f (x) dx + ∫_{x1}^{x2} f (x) dx + . . . + ∫_{xn−1}^{xn} f (x) dx
= h ∫_0^1 (y0 + u∆y0) du + h ∫_0^1 (y1 + u∆y1) du + . . . + h ∫_0^1 (yn−1 + u∆yn−1) du
= h [uy0 + (u²/2)∆y0]_0^1 + h [uy1 + (u²/2)∆y1]_0^1 + . . . + h [uyn−1 + (u²/2)∆yn−1]_0^1
= (h/2)(y0 + y1) + (h/2)(y1 + y2) + . . . + (h/2)(yn−1 + yn)
= (h/2) [ y0 + 2(y1 + y2 + . . . + yn−1) + yn ]
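The composite trapezoidal rule just derived can be sketched as:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: I ≈ (h/2)[y0 + 2(y1 + ... + y_{n-1}) + yn]."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return (h / 2) * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

# Example 4.2's integral: ∫_1^2 x² cos x dx with n = 6 (h = 1/6);
# the hand computation with 4-decimal table values gives -0.09796.
approx = trapezoid(lambda x: x * x * math.cos(x), 1.0, 2.0, 6)
print(round(approx, 5))
```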
Example 4.2 1. Evaluate ∫_1^2 x² cos x dx with h = (2 − 1)/6 = 1/6 on [a, b] = [1, 2].
SOLUTION
Analytically, ∫_1^2 x² cos x dx = −0.0851.

x       1        7/6      4/3      3/2      5/3      11/6      2
f (x)   0.5403   0.5352   0.4182   0.1592   −0.2659  −0.8723   −1.6646

(a) Trapezoidal Rule
∫_1^2 x² cos x dx ≈ (h/2) [y0 + 2(y1 + y2 + y3 + y4 + y5) + y6]
= (1/12) [0.5403 + 2(0.5352 + 0.4182 + 0.1592 − 0.2659 − 0.8723) − 1.6646]
= −0.09796
(b) Simpson’s 1/3 Rule
∫_1^2 x² cos x dx ≈ (h/3) [y0 + 4(y1 + y3 + y5) + 2(y2 + y4) + y6]
= (1/18) [0.5403 + 4(0.5352 + 0.1592 − 0.8723) + 2(0.4182 − 0.2659) − 1.6646]
= −0.085072
(c) Simpson’s 3/8 Rule
∫_1^2 x² cos x dx ≈ (3h/8) [y0 + 3(y1 + y2 + y4 + y5) + 2y3 + y6]
= (1/16) [0.5403 + 3(0.5352 + 0.4182 − 0.2659 − 0.8723) + 2(0.1592) − 1.6646]
= −0.08502
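The Simpson's 1/3 and 3/8 rules used in the example can be sketched as follows (full-precision function values are used, so the results differ from the 4-decimal hand computation in the last digits):

```python
import math

def simpson_13(f, a, b, n):
    """Simpson's 1/3 rule (n even): (h/3)[y0 + 4·(odd nodes) + 2·(even interior) + yn]."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return (h / 3) * (ys[0] + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-1:2]) + ys[-1])

def simpson_38(f, a, b, n):
    """Simpson's 3/8 rule (n divisible by 3):
    (3h/8)[y0 + 3·(nodes not multiples of 3) + 2·(interior multiples of 3) + yn]."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    s3 = sum(ys[i] for i in range(1, n) if i % 3 != 0)
    s2 = sum(ys[i] for i in range(3, n, 3))
    return (3 * h / 8) * (ys[0] + 3 * s3 + 2 * s2 + ys[-1])

# Example 4.2: ∫_1^2 x² cos x dx with n = 6 (h = 1/6); exact value ≈ -0.0851
f = lambda x: x * x * math.cos(x)
print(round(simpson_13(f, 1.0, 2.0, 6), 5))
print(round(simpson_38(f, 1.0, 2.0, 6), 5))
```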
2. Using Simpson’s 1/3 rule, evaluate the integral I = ∫_{−1}^{3} f (x) dx from the data given by
(Ans. 40.83)
3. Evaluate the integral I = ∫_0^3 e^{2x}/(1 + x²) dx using Simpson’s three-eighths rule with function values at 12 points
(i.e. h = 0.25).