Numerical Method

The document discusses numerical methods for finding approximate solutions to problems. It covers topics like significant digits, truncation error, rounding, Descartes' rule of signs for determining real zeros of polynomials, properties of polynomial roots, and solution of equations using iterative methods like bisection.


byjusexamprep.com

ENGINEERING MATHEMATICS

CHAPTER 8: NUMERICAL METHODS

1. NUMERICAL METHOD AND ANALYTIC METHOD

We use numerical methods to find approximate solutions of problems by numerical calculation, with the aid of a calculator or computer. For better accuracy we have to minimize the error.
Error = Exact value – Approximate value
Absolute error = |Error|
Relative error = Absolute error / |Exact value|
Percentage error = 100 × Relative error
1.1. Significant Digits
The significant digits of a number are its digits beginning with the leftmost non-zero digit; leading zeros merely fix the position of the decimal point and are not significant.
1.2. Truncation Error
The term truncation error is used to denote the error which results from approximating a smooth function by truncating its Taylor series representation to a finite number of terms.
Chopping (truncating) to n significant digits gives
fl(x) = (−1)^s × (0.d_1 d_2 …… d_n)_10 × 10^e
1.3. Round off
Rounding to the nearest n significant digits gives
fl(x) = (−1)^s × (0.d_1 d_2 …… d_n)_10 × 10^e,        if 0 ≤ d_{n+1} < 10/2
fl(x) = (−1)^s × (0.d_1 d_2 …… (d_n + 1))_10 × 10^e,  if 10/2 ≤ d_{n+1} < 10
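As a quick illustration, chopping and rounding to n significant decimal digits can be mimicked in Python (a sketch; the helper names `chop` and `round_fl` are ours, not standard library functions):

```python
import math

def chop(x, n):
    """Truncate (chop) x to n significant decimal digits."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x))) + 1      # exponent so that |x| = 0.d1d2... * 10^e
    m = abs(x) / 10**e                          # mantissa in [0.1, 1)
    m = math.floor(m * 10**n) / 10**n           # keep n digits, drop the rest
    return math.copysign(m * 10**e, x)

def round_fl(x, n):
    """Round x to the nearest n significant decimal digits."""
    if x == 0:
        return 0.0
    e = math.floor(math.log10(abs(x))) + 1
    m = abs(x) / 10**e
    m = math.floor(m * 10**n + 0.5) / 10**n     # d_{n+1} >= 5 rounds the last digit up
    return math.copysign(m * 10**e, x)
```

For x = 2/3 = 0.6666…, chopping to three digits gives 0.666 while rounding gives 0.667, matching the two cases of the rule above.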

2. DESCARTES' RULE OF SIGN

Descartes' rule of signs is used to bound the number of real zeros of a polynomial function. It tells us that the number of positive real zeros of a polynomial function f(x) is either equal to the number of sign changes in its coefficients or less than it by an even number. Likewise, the number of negative real zeros of f(x) is either equal to the number of sign changes in the coefficients of f(−x) or less than it by an even number.
Example: Determine the number of positive and negative real zeros for the given function:
f(x) = x^5 + 2x^4 − 4x^2 + 3x − 7

2
byjusexamprep.com
Sol.
Our function is arranged in descending powers of the variable, if it were not, we would have to
do that as a first step. Second, we count the number of changes in sign for the coefficients of
f(x).
Here are the coefficients of f(x): 1, +2, −4, +3, −7.
The signs go from positive (1) to positive (2) to negative (−4) to positive (3) to negative (−7).
Between the first two coefficients there is no change of sign, but between the second and third we have our first change, between the third and fourth our second change, and between the 4th and 5th coefficients a third change. Descartes' rule of signs tells us that we have either exactly 3 positive real zeros or fewer by an even number. Hence the number of positive zeros must be either 3 or 1.
In order to find the number of negative zeros we find f(-x) and count the number of changes
in sign for the coefficients:
f(−x) = −x^5 + 2x^4 − 4x^2 − 3x − 7
Here we can see that we have two changes of sign, hence we have either two negative zeros or fewer by an even number.
In total, we have 3 or 1 positive zeros and 2 or 0 negative zeros.
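The sign-change count used above is easy to automate (a small sketch; the helper `sign_changes` is a name we introduce):

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, skipping zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# f(x) = x^5 + 2x^4 - 4x^2 + 3x - 7, coefficients in descending powers (0 for the missing x^3)
f = [1, 2, 0, -4, 3, -7]
# f(-x): negate the coefficients of the odd powers (x^5, x^3, x)
f_neg = [c if (len(f) - 1 - i) % 2 == 0 else -c for i, c in enumerate(f)]

pos = sign_changes(f)       # possible positive real zeros: pos, pos - 2, ...
neg = sign_changes(f_neg)   # possible negative real zeros: neg, neg - 2, ...
```

Running this gives 3 sign changes for f(x) and 2 for f(−x), in agreement with the count done by hand.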

3. ROOTS OR ZEROS OF POLYNOMIAL

Suppose p(x) is a polynomial of degree n in x, and suppose x_1, x_2, ……, x_n are the roots of p(x). Then,
p(x) = Σ_{i=0}^{n} a_i x^i ;  where a_n ≠ 0
p(x) = a_0 + a_1 x + a_2 x^2 + ………. + a_n x^n ;  a_n ≠ 0
p(x) = a_n (x − x_1)(x − x_2) …….. (x − x_n)
so that a_0 + a_1 x + a_2 x^2 + ………. + a_n x^n ≡ a_n (x − x_1)(x − x_2) …….. (x − x_n)
3.1. Properties of the roots:
• Sum of the roots taken one at a time: Σ_{i=1}^{n} x_i = (−1)^1 (a_{n−1}/a_n)
• Sum of the roots taken two at a time: Σ_{1≤i<j≤n} x_i x_j = (−1)^2 (a_{n−2}/a_n)
• Sum of the roots taken three at a time: Σ_{1≤i<j<k≤n} x_i x_j x_k = (−1)^3 (a_{n−3}/a_n)

Similarly,

• Product of the roots (all taken at a time): x_1 · x_2 · x_3 ······ x_n = (−1)^n (a_0/a_n)
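These relations (Vieta's formulas) can be spot-checked on a polynomial with known roots (a sketch; the test cubic 2(x − 1)(x − 2)(x − 3) is our own choice):

```python
import math
from itertools import combinations

# p(x) = 2(x - 1)(x - 2)(x - 3) = 2x^3 - 12x^2 + 22x - 12, so the roots are 1, 2, 3
roots = [1.0, 2.0, 3.0]
a = [-12.0, 22.0, -12.0, 2.0]        # coefficients a_0, a_1, a_2, a_3
n = len(a) - 1

def elementary_symmetric(k):
    """Sum of products of the roots taken k at a time."""
    return sum(math.prod(c) for c in combinations(roots, k))

# Vieta: the sum over k-subsets of the roots equals (-1)^k * a_{n-k} / a_n
for k in range(1, n + 1):
    assert abs(elementary_symmetric(k) - (-1)**k * a[n - k] / a[n]) < 1e-9
```

Here the sum of the roots is 6 = −(−12)/2, the pairwise sum is 11 = 22/2, and the product is 6 = −(−12)/2.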

4. SOLUTION OF EQUATIONS BY ITERATION

Intermediate Value Theorem: If a function f(x) is continuous in closed interval [a,b] and
satisfies f(a)f(b) < 0 ; then there exists at least one real root of the equation f(x) = 0 in open
interval (a , b).
Algebraic Equations are equations containing algebraic terms (different powers of x). For
example: 𝑥 3 −4𝑥 2 + 3𝑥 − 8 = 0
Transcendental equations are equations containing non-algebraic terms like trigonometric,
exponential, logarithmic terms. For example: 𝑠𝑖𝑛𝑥 − 𝑒 𝑥 + 2𝑥 4 = 0
One of the most frequently occurring problems in scientific work is to find the roots of equations
of the form 𝑓(𝑥) = 0 ---------------(1)
In what follows, we always assume that f(x) is a continuously differentiable real-valued function
of a real variable x. We further assume that the equation (1) has only isolated roots, that is,
for each root of (1), there is a neighbourhood which does not contain any other roots of the
equation.
The key idea in approximating the isolated real roots of (1) consists of two steps:
I. Initial guess: Establishing the smallest possible intervals [a, b] containing one and only
one root of the equation (1). Take a point c inside [a, b] as an approximation to the root of
(1).
II. Iteration step: Improving the value of the root. If the initial guess x0 is not of the desired accuracy, devise a method to improve it.
This process of improving the value of the root is called the iterative process and such methods
are called iterative methods.
4.1. Bisection Method
This method is based on the theorem on continuity. Let f(x) = 0 has a root in [a, b], the
function f(x) being continuous in [a, b]. Then, f (a) and f (b) are of opposite signs, i.e.,
f (a). f (b) < 0.
Let x1 = (a + b)/2, the middle point of [a, b]. If f(x1) = 0, then x1 is the root of f(x) = 0.

Otherwise, either f(a). f(x1) < 0, implying that the root lies in the interval [a, x1] or f(x1).
f (b) < 0, implying that the root lies in the interval [x 1, b]. Thus, the interval is reduced
from [a, b] to either [a, x1] or [x1,b]. We rename it [a1, b1].
Let x2 = (a1 + b1)/2, the middle point of [a1, b1]. If f(x2) = 0, then x2 is the root of f(x) = 0.

Otherwise, either f(a1). f(x2) < 0 implying that the root ∈ [a1, x2] or f(x2). f (b1) < 0 ⇒
the root ∈ [x2, b1] and so on. We rename it [a2, b2]. We continue in this manner and the
process is repeated until the root is obtained to the desired accuracy.
Step 1. Find a and b such that f(a)f(b) < 0.

Step 2. Iteration step: c = (a + b)/2

Check if 𝑓(𝑐) = 0; Stop. c is root.


Else check if 𝑓(𝑐)𝑓(𝑎) < 0; then replace b by c
Else check if 𝑓(𝑐)𝑓(𝑏) < 0; then replace a by c
Step 3. Repeat the iteration step till we get root of the equation f(x) = 0 up to desired
accuracy
Note: 1. The method always converges to a root.
2. The order of convergence is 1 (linear).
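The steps above can be sketched in Python (a minimal illustration; the function name, tolerance, and iteration cap are choices of this sketch):

```python
import math

def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: f must be continuous on [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:
            b = c          # root lies in [a, c]
        else:
            a = c          # root lies in [c, b]
    return (a + b) / 2

# Example from the text: 2x - log10(x) = 7 has a root in (3, 4)
root = bisect(lambda x: 2*x - math.log10(x) - 7, 3, 4)
```

The interval is halved at every step, which is exactly why the convergence is linear with ratio 1/2.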
Example: Find by bisection method, a real positive root of 2x–log10(x) = 7.
Sol.
We first get an approximate location.
Here f(x) = 2x – log10(x) – 7. Now f(1) = –5, f(2) = –3.3, f(3) = –1.477,
f(4) = 0.3979 ⇒ f(3).f(4) <0, so that a root α lies in (3,4).
Computation of α (3 < α <4)
n    an(–ve)    bn(+ve)    xn+1 = (an + bn)/2    f(xn+1)
0    3          4          3.5                   –0.5441
1    3.5        4          3.75                  –0.0740
2    3.75       4          3.875                 0.1617
3    3.75       3.875      3.8125                0.0438
4    3.75       3.8125     3.7813                –0.0151
5    3.7813     3.8125     3.7969                0.0143
6    3.7813     3.7969     3.7891                –0.0004
7    3.7891     3.7969     3.7930                0.0070
8    3.7891     3.7930     3.7910                0.0033

In the 8th step, an, bn and xn+1 agree to three significant figures. Therefore α = 3.79, correct to three significant figures.
4.2. Regula Falsi Method
In this method, we first find a sufficiently small interval [a 0, b0], such that f(a0).f(b0) <
0, by tabulation or graphical method, and which contains only one root α (say) of f(x) =
0, i. e. f’(x) maintains same sign in [a 0,b0].
This method is based on the assumption that the graph of y = f(x) in the small interval
[a0, b0] can be represented by the chord joining (a0, f(a0)) and (b0, f(b0,)). Therefore, at
the point x = x1 = a0 + h0, at which the chord meets the x-axis, we obtain two intervals
[a0, x1] and [x1, b0], one of which must contain the root α, depending upon the condition
f(a0)f(x1) < 0 or f(x1,)f(b0) < 0.
Let f(x1,)f(b0) < 0, then α lies in the interval [x1, b0] which we rename as [a1, b1] Again,
we consider that the graph of y = f(x) in [a 1,b1] as the chord joining (a1,f(a1)) and

(b1, f(b1)). Thus, the point of intersection of the chord with the x-axis, say x2 = a1 + h1, gives us an approximate value of the root α of the equation f(x) = 0.
Now we are going to establish an iteration formula which may generate a sequence of
successive approximations of an exact root α of the equation f(x) = 0. Geometrically, we
interpret it as follows:

In the above figure, we assume that one root α of f(x) = 0 lies in the small interval [an,
bn] and f(an) < 0 and f (bn) > 0. Let PRQ be the graph of y = f(x) in [a n,bn] intersecting
the x-axis at R.
Thus, x = OR (= α) gives the exact value of the root α. If we consider the curve PRQ as
the chord PQ, in the small interval [an, bn], which intersects the x-axis at C, then OC =
xn+1 = an + hn approximates the root α of the equation f (x) = 0.
Now from similar triangles AQC and CBP, we get
AC/AQ = CB/BP, or AC = (AQ/BP)·CB = (|f(an)|/|f(bn)|)·(AB − AC)
or AC·[1 + |f(an)|/|f(bn)|] = (|f(an)|/|f(bn)|)·AB = (|f(an)|/|f(bn)|)·(bn − an)
∴ AC = hn = |f(an)| / (|f(an)| + |f(bn)|) · (bn − an)
Thus, xn+1 = an + hn = an + |f(an)| / (|f(an)| + |f(bn)|) · (bn − an)
The above formula is known as the iteration formula for Regula-Falsi method.
Step 1. Find a and b such that f(a)f(b) < 0.
Step 2. Iteration step:
c = (b f(a) − a f(b)) / (f(a) − f(b))
Check if f(c) = 0; stop, c is the root.
Else, if f(c)·f(a) < 0, replace b by c;
else, if f(c)·f(b) < 0, replace a by c.

Step 3. Repeat the iteration step till we get root of the equation f(x) = 0 up to desired
accuracy
Note: 1. The method always converges to a root.
2. The rate of convergence is linear.
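The algorithm can be sketched in Python (a minimal illustration; names, tolerance, and iteration cap are our choices):

```python
import math

def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """False position: keep a bracket [a, b] with f(a)*f(b) < 0 and
    replace one endpoint by the x-intercept of the chord."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (b * fa - a * fb) / (fa - fb)   # chord's intersection with the x-axis
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Example from the text: x*ln(x) = 1 has a root between 1 and 2
root = regula_falsi(lambda x: x * math.log(x) - 1, 1, 2)
```

Unlike bisection, the bracket shrinks from one side only, but the chord intercept usually lands much closer to the root than the midpoint does.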
Example: Compute a root of x ln(x) = 1 by Regula-Falsi Method, correct to three decimal
places.
Sol.
Let f(x) = x ln(x) – 1. Here f(1) = –1 and f(2) = 0.39; therefore, f(x) = 0 has a root between 1 and 2. Now we compute the successive approximations of the root as follows:

n    an(–)       bn(+)    f(an)        f(bn)    hn          xn+1        f(xn+1)
0    1.0         2.0      –1.0         0.39     0.72        1.72        –0.067 < 0
1    1.72        2.0      –0.067       0.39     0.0411      1.7611      –0.00333 < 0
2    1.7611      2.0      –0.00333     0.39     0.002022    1.763122    –0.000158 < 0
3    1.763122    2.0      –0.000158    0.39     0.000096    1.763218    –0.0000075 < 0

Here, hn = |f(an)|·(bn − an) / (|f(an)| + |f(bn)|);  xn+1 = an + hn.
Thus, 1.763 is root of f(x) = 0, correct up to three decimal places.
4.3. Newton-Raphson Method
When the derivative of f(x) is of a simple form, the real (non-repeated) root of the equation f(x) = 0 can be computed rapidly by a process known as the Newton-Raphson method. Usually the problem is to find a recurrence relation which enables us to find a sequence {xn} converging to the desired root α.
a sequence {xn} converging to the desired root α.
Let x0 be an approximation of the root α of f(x) = 0. Thus, α = x0 + h, where h is the (small) correction to be applied to x0 to give the exact value of the root.
Therefore,
f(α) = f(x0 + h) = 0
By Taylor series expansion we get,
f(x0) + h f′(x0) + (h²/2!) f″(x0) + … = 0
Since h is small (neglecting higher powers of h, i.e., h², h³, etc.), we get
f(x0) + h f′(x0) ≈ 0 ⇒ h = – f(x0)/f′(x0)
Substituting this value of h in α = xo + h, we get a better approximation to the root α of
f(x) = 0 as
x1 = x0 – f(x0)/f′(x0)

Therefore, the successive approximations are
x2 = x1 – f(x1)/f′(x1)
x3 = x2 – f(x2)/f′(x2)
……………………
xn+1 = xn – f(xn)/f′(xn)
This formula is known as the iteration formula for Newton Raphson method.
Note: 1. The method fails if f’(x) is zero or is very small in the neighbourhood of the
root.
2. The sufficient condition for convergence of Newton-Raphson method is |f(x) f’’(x) | <
[f’(x)]2
3. The Newton Raphson method is said to have a quadratic rate of convergence.
4. The method does not always converge to a root.
5. For a linear equation f(x) = 0, the iteration reaches the exact root in a single step, so the method is not needed there.
Step 1. Form f(x) so that f(x) = 0 is the equation to be solved; choose a starting point x0 for the iteration.
Step 2. Iteration step:
xn+1 = xn − f(xn)/f′(xn)
Check if 𝑓(𝑥𝑛+1 ) =0; Stop. 𝑥𝑛+1 is root
Step 3. Repeat the iteration step till we find the root to the desired accuracy.
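The iteration can be sketched in Python (a minimal illustration; the classic test problem x² − 2 = 0 for √2 is our own choice):

```python
def newton_raphson(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; the method fails here")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# sqrt(2) as the positive root of f(x) = x^2 - 2
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.0)
```

The quadratic convergence shows up as the number of correct digits roughly doubling with each iteration.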
Example: Find a real root of xx + x – 4 = 0, by Newton-Raphson method, correct to six
decimal places.
Sol.
Let f(x) = xx + x – 4 and f’(x) = xx (1 + ln x) + 1
Now, f(1) = –2, f(1.5) = – 0.66, f(1.6) = – 0.27, f(1.7) = 0.16.
Therefore, f(x) = 0 has a root between 1.6 and 1.7. Also, f’(1.6) = 4.12.
Taking x0 = 1.6, the successive iterations are computed in the following table:
n    xn          f(xn)        f′(xn)       hn = –f(xn)/f′(xn)    xn+1 = xn + hn
0    1.6         –0.27        4.12         0.066                 1.666
1    1.666       0.0065       4.5352       –0.00143              1.66457
2    1.66457     0.0000318    4.525536     –0.000007             1.664563
3    1.664563    0.0000002    4.5254887    –0.00000004           1.664563

Thus, 1.664563 is a root of f(x) = 0, correct up to six decimal places.


Example: Solve x − 2 sin x − 3 = 0 by the Newton-Raphson method, correct to five significant digits.
Sol.
Let f(x) = x − 2 sin x − 3, so f′(x) = 1 − 2 cos x.
f(0) = −3, f(1) = −2 − 2 sin 1 < 0, f(2) = −1 − 2 sin 2 < 0, f(3) = −2 sin 3 < 0, f(4) = 1 − 2 sin 4 > 0.
As f(3)·f(4) < 0, by the Intermediate Value Theorem a real root of the equation f(x) = 0 lies between 3 and 4.
Let x0 = 4 be the initial guess. Then
x1 = x0 − f(x0)/f′(x0) = 4 − 2.51360/2.30729 = 2.91058
x2 = x1 − f(x1)/f′(x1) = 2.91058 + 0.54733/2.94687 = 3.09631
x3 = x2 − f(x2)/f′(x2) = 3.09631 − 0.00579/2.99795 = 3.09438
x4 = x3 − f(x3)/f′(x3) = 3.09438
which is the root of the equation correct to five significant digits: 3.0944.
4.4. Secant Method
In order to implement the Newton-Raphson method, f’(x) needs to be found analytically
and evaluated numerically. In some cases, the analytical (or its numerical) evaluation
may not be feasible or desirable.
The Secant method instead approximates the derivative on the fly using two-point differencing.

As shown in the figure,


f′(xi) ≈ (f(xi−1) − f(xi)) / (xi−1 − xi)
Then the Newton-Raphson iteration can be modified as
xi+1 = xi − f(xi)/f′(xi)
     = xi − f(xi)·(xi−1 − xi) / (f(xi−1) − f(xi))
     = xi − f(xi)·(xi − xi−1) / (f(xi) − f(xi−1))
Which is the secant iteration.
The performance of the Secant iteration is typically inferior to that of the Newton-Raphson
method.
Step 1. Choose any two starting points a and b (no sign condition is needed).
Step 2. Iteration step:
c = (b f(a) − a f(b)) / (f(a) − f(b))

Check if 𝑓(𝑐) = 0; Stop. c is root.
Else, replace a by b and b by c.
Step 3. Repeat the iteration step till we get root of the equation f(x) = 0 up to desired
accuracy.
Note: 1. The method does not always converge to a root.
2. The rate of convergence is 1.62 (superlinear).
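The secant iteration can be sketched in Python (a minimal illustration; tolerance and iteration cap are our choices):

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant iteration: x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                      # flat secant; cannot proceed
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# First example from the text: e^x - 3x = 0, starting from -1.1 and -1
root = secant(lambda x: math.exp(x) - 3*x, -1.1, -1.0)
```

Note that, unlike bisection and false position, no bracketing of the root is required, which is also why convergence is not guaranteed.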
Example: Solve f(x) = e^x – 3x = 0 by the secant method.
Sol.
x0 = –1.1, x1 = –1
x2 = x1 – f(x1)(x1 – x0)/(f(x1) – f(x0)) = 0.2709,  εa = |(x2 – x1)/x2| × 100% = 469.09%
x3 = x2 – f(x2)(x2 – x1)/(f(x2) – f(x1)) = 0.4917,  εa = |(x3 – x2)/x3| × 100% = 44.90%
x4 = x3 – f(x3)(x3 – x2)/(f(x3) – f(x2)) = 0.5961,  εa = |(x4 – x3)/x4| × 100% = 17.51%
x5 = 0.6170, εa = 3.4%
x6 = 0.6190, εa = 0.32%
x7 = 0.6191, εa = 5.93 × 10^–5 %

Example: Solve cos x = x e^x by the secant method, correct to two decimal places.
Sol.
Cos x = x ex …. (iii)
Let f(x) = cos x – x ex
f(0) = 1, f(1) = cos 1 – e = –2.178
As f(0)·f(1) < 0, by the Intermediate Value Theorem a real root of the equation f(x) = 0 lies between 0 and 1.
Let x0 = 0 and x1 = 1 be two initial guesses to the equation (iii).
Then
x2 = x1 − (x1 − x0) f(x1)/(f(x1) − f(x0)) = 1 − (1 − 0) f(1)/(f(1) − f(0)) = 1 − (−2.178)/(−3.178) = 0.31465
f(x2) = f(0.31465) = cos(0.31465) − 0.31465 e^0.31465 = 0.51987
x3 = x2 − (x2 − x1) f(x2)/(f(x2) − f(x1)) = 0.31465 − (0.31465 − 1) f(0.31465)/(f(0.31465) − f(1)) = 0.44672
x4 = x3 − (x3 − x2) f(x3)/(f(x3) − f(x2)) = 0.53171
x5 = x4 − (x4 − x3) f(x4)/(f(x4) − f(x3)) = 0.51690
x6 = x5 − (x5 − x4) f(x5)/(f(x5) − f(x4)) = 0.51774
Since x5 and x6 agree to two decimal places, the root of the equation is 0.52, correct to two decimal places.

5. NUMERICAL INTEGRATION

Consider the integral I = ∫_a^b f(x) dx = ∫_{x0}^{xn} y dx

• where y = f(x); xn = x0 + nh


• where the integrand f(x) is a given function and a and b are the known end points of the interval [a, b]. Either f(x) is given, or a table of values of f(x) is given.
• Let us divide the interval [a, b] into n = (xn − x0)/h = (b − a)/h equal subintervals, so that the length of each subinterval is h = (b − a)/n = (x1 − x0) = (x2 − x1) = …… = (xn − xn−1).
• The end points of the subintervals are the (n + 1) points (x0, y0), (x1, y1), (x2, y2), ……, (xn, yn), with x0 = a and xn = b.
5.1. Trapezoidal Rule of integration
Let us approximate the integrand f by a line segment in each subinterval. The coordinates of the end points of the subintervals are
(x0, y0), (x1, y1), (x2, y2), ............., (xn, yn).
Then from x = a to x = b, the area under the curve y = f(x) is approximately equal to the sum of the areas of the n trapezoids on the n subintervals,
so the integral
I = ∫_a^b f(x) dx = (h/2)[y0 + y1] + (h/2)[y1 + y2] + (h/2)[y2 + y3] + …… + (h/2)[yn−1 + yn]
  = (h/2)[y0 + y1 + y1 + y2 + y2 + y3 + …… + yn−1 + yn]
  = (h/2)[y0 + yn + 2(y1 + y2 + y3 + …… + yn−1)]
which is called the trapezoidal rule.
So the integral ∫_a^b f(x) dx = ∫_{x0}^{xn} y dx = (h/2)[(y0 + yn) + 2(y1 + y2 + …… + yn−1)]

Note:
• The trapezoidal rule is known as the 2-point formula.
• It is exact for polynomials up to degree 1.
• The error in the trapezoidal rule is −((b − a)/12) h² f″(θ), where a < θ < b.
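The composite rule can be sketched in Python (a minimal illustration; the test integrand x² on [0, 1] is our own choice):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    y = [f(a + i*h) for i in range(n + 1)]
    return (h/2) * (y[0] + y[-1] + 2*sum(y[1:-1]))

# integral of x^2 on [0, 1] is exactly 1/3; the rule over-estimates for convex f
approx = trapezoidal(lambda x: x*x, 0.0, 1.0, 100)
```

With f″ = 2 > 0 the error term above is negative, so the computed value sits slightly above 1/3, consistent with the error formula.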

5.2. Simpson's Rule of Numerical Integration (Simpson's 1/3rd Rule)


𝑏
Consider the integral 𝐼 = ∫𝑎 𝑓(𝑥)𝑑𝑥

where the integrand f(x) is a given function and a and b are the known end points of the interval [a, b]. Either f(x) is given, or a table of values of f(x) is given.
Let us approximate the integrand f by a parabola (a second-degree polynomial) through three consecutive points. The coordinates of the end points of the subintervals are (x0, y0), (x1, y1), (x2, y2), ……, (xn, yn).
We take two strips at a time instead of one strip as in the trapezoidal rule. For this reason, the number of subintervals n = 2m in Simpson's 1/3rd rule must be even.
The length of each subinterval is h = (b − a)/(2m).
With exactly an even number of subintervals, the formula is
I = ∫_a^b f(x) dx = (h/3)[y0 + y2m + 4(y1 + y3 + …… + y2m−1) + 2(y2 + y4 + …… + y2m−2)]

Generally,
∫_a^b f(x) dx = ∫_{x0}^{xn} y dx = (h/3)[(y0 + yn) + 4(y1 + y3 + ……) + 2(y2 + y4 + ……)]

Note:
• It is exact for polynomials up to degree 3.
• The error in Simpson's 1/3rd rule is −((b − a)/180) h⁴ f⁗(θ), where a < θ < b.
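A sketch of the composite 1/3rd rule (a minimal illustration; the cubic test integrand is our own choice, picked because the rule is exact for it):

```python
def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    y = [f(a + i*h) for i in range(n + 1)]
    odd  = sum(y[1:-1:2])   # y1, y3, ...
    even = sum(y[2:-1:2])   # y2, y4, ...
    return (h/3) * (y[0] + y[-1] + 4*odd + 2*even)

# Simpson's 1/3 rule is exact for cubics: integral of x^3 on [0, 2] is 4
approx = simpson_13(lambda x: x**3, 0.0, 2.0, 4)
```

Because the error term involves the fourth derivative, any polynomial of degree 3 or less is integrated exactly (up to rounding).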

5.3. Simpson's Rule of Numerical Integration (Simpson's 3/8th Rule)


𝑏
Consider the integral 𝐼 = ∫𝑎 𝑓(𝑥)𝑑𝑥

where the integrand f(x) is a given function and a, b are the known end points of the interval [a, b]. Either f(x) is given, or a table of values of f(x) is given.
We take three strips at a time, approximating f by a cubic through four consecutive points, instead of one strip as in the trapezoidal rule. For this reason, the number of subintervals n = 3m in Simpson's 3/8th rule must be a multiple of 3.
The length of each subinterval is h = (b − a)/(3m).
With the number of subintervals a multiple of 3, the formula is
I = ∫_a^b f(x) dx = (3h/8)[y0 + y3m + 3(y1 + y2 + y4 + y5 + …… + y3m−1) + 2(y3 + y6 + …… + y3m−3)]
Generally, the formula is
∫_a^b f(x) dx = ∫_{x0}^{xn} y dx = (3h/8)[(y0 + yn) + 3(y1 + y2 + y4 + y5 + ……) + 2(y3 + y6 + ……)]

Note:
• It is exact for polynomials up to degree 3.
• The error in Simpson's 3/8th rule is −((b − a)/80) h⁴ f⁗(θ), where a < θ < b.
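The 3/8th rule follows the same pattern with weights 3, 3, 2 repeating (a sketch; the cubic test case is again our own choice):

```python
def simpson_38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3."""
    if n % 3:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    y = [f(a + i*h) for i in range(n + 1)]
    total = y[0] + y[-1]
    for i in range(1, n):
        # interior points at multiples of 3 get weight 2, the rest weight 3
        total += (2 if i % 3 == 0 else 3) * y[i]
    return (3*h/8) * total

# also exact for cubics: integral of x^3 on [0, 2] is 4
approx = simpson_38(lambda x: x**3, 0.0, 2.0, 3)
```

Both Simpson rules have degree of exactness 3; the 3/8th rule simply groups the points differently, which is useful when n happens to be a multiple of 3 rather than even.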

6. SOLUTION OF SYSTEM OF LINEAR EQUATIONS BY ITERATIVE METHODS

6.1. Jacobi Method


The n × n linear system can also be solved using iterative procedures. The most
fundamental iterative method is the Jacobi iterative method, which we will explain in the
case of 3 × 3 system of linear equations.
Consider the 3 × 3 system
a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2
a31x1 + a32x2 + a33x3 = b3
When the diagonal elements of this system are non-zero, we can rewrite the above equations as
x1 = (1/a11)(b1 − a12 x2 − a13 x3)
x2 = (1/a22)(b2 − a21 x1 − a23 x3)
x3 = (1/a33)(b3 − a31 x1 − a32 x2)
Let x^(0) = (x1^(0), x2^(0), x3^(0)) be an initial guess to the true solution x; then define the iteration sequence
x1^(m+1) = (1/a11)(b1 − a12 x2^(m) − a13 x3^(m))
x2^(m+1) = (1/a22)(b2 − a21 x1^(m) − a23 x3^(m))
x3^(m+1) = (1/a33)(b3 − a31 x1^(m) − a32 x2^(m))
for m = 0, 1, 2, …. This is called the Jacobi iteration method.
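The scheme can be sketched for a general n × n system (a minimal illustration; the 3 × 3 test system and its solution (1, 2, −1) are our own, chosen diagonally dominant so the iteration converges):

```python
def jacobi(A, b, x0, iterations=25):
    """Jacobi iteration for Ax = b (diagonal entries must be non-zero).
    Every component of the new iterate uses only the previous iterate."""
    n = len(A)
    x = list(x0)
    for _ in range(iterations):
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

# a diagonally dominant 3x3 system with exact solution (1, 2, -1)
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 1.0],
     [1.0, 1.0, 6.0]]
b = [5.0, 10.0, -3.0]
x = jacobi(A, b, [0.0, 0.0, 0.0])
```

Since the whole new iterate is built from the old one, the three component updates are independent and could even be computed in parallel.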
6.2. Gauss -Seidel Method
A modified version of the Jacobi method is the Gauss-Seidel method, in which each newly computed component is used immediately:
x1^(m+1) = (1/a11)(b1 − a12 x2^(m) − a13 x3^(m))
x2^(m+1) = (1/a22)(b2 − a21 x1^(m+1) − a23 x3^(m))
x3^(m+1) = (1/a33)(b3 − a31 x1^(m+1) − a32 x2^(m+1))
Example: Solve the following linear equations using the Gauss-Seidel iteration method, starting from (1, 1, 1).
x1 + x2 + 2x3 = 10
2x1 + 3x2 + x3 = 12
5x1 + x2 + x3 = 15

Sol.
Rewrite the given equations so that each equation is solved for the variable with the largest coefficient:
5x1 + x2 + x3 = 15 (1)
2x1 + 3x2 + x3 = 12 (2)
x1 + x2 + 2x3 = 10 (3)
From equation (1) we get x1 in terms of the other variables x2 and x3 as
5x1 = 15 − x2 − x3
x1 = (15 − x2 − x3)/5 = 3 − 0.2x2 − 0.2x3 (4)
From equation (2) we get x2 in terms of the other variables x1 and x3 as
2x1 + 3x2 + x3 = 12
x2 = 4 − (2x1 + x3)/3 (5)
From equation (3) we get x3 in terms of the other variables x1 and x2 as
x1 + x2 + 2x3 = 10
x3 = 5 − 0.5x1 − 0.5x2 (6)
Step-1
Putting x2 = 1, x3 = 1 in equation (4) we get
x1 = 3 – 0.2 x2 – 0.2 x3 = 3 – 0.2 – 0.2 = 2.6
Putting x1 = 2.6, x3 = 1 in equation (5) we get
x2 = 4 - (2x1 + x3 )/3 = 4 – (5.2+1)/3 = 1.93333
Putting x2 = 1.93333, x1 = 2.6 in equation (6) we get
x3 = 5 - 0.5 x1 - 0.5 x2 = 5 - 0.5 (2.6) - 0.5 (1.93333) = 2.73333
Step-2
Putting x2 = 1.93333, x3 = 2.73333 in equation (4) we get
x1 = 3 – 0.2 x2 – 0.2 x3 = 3 – 0.2(1.93333) – 0.2 (2.73333 )= 2.066666
Putting x1 = 2.06666, x3 = 2.73333 in equation (5) we get
x2 = 4 - (2x1 + x3)/3 = 4 – (4.13333 + 2.73333 )/3 = 1.71111
Putting x2 = 1.71111, x1 = 2.066666 in equation (6) we get
x3 = 5 - 0.5 x1 - 0.5 x2 = 5 - 0.5 (2.066666) - 0.5 (1.71111) = 3.11111
Step-3
Putting x2 = 1.71111, x3 = 3.11111 in equation (4) we get
x1 = 3 – 0.2 x2 – 0.2 x3 = 3 – 0.2(1.71111) – 0.2(3.11111) = 2.035555
Putting x1 = 2.035555, x3 = 3.11111 in equation (5) we get
x2 = 4 - (2x1 + x3)/3 = 4 – (4.07111 + 3.11111)/3 = 1.605925
Putting x2 = 1.605925, x1 = 2.035555 in equation (6) we get
x3 = 5 - 0.5 x1 - 0.5 x2 = 5 - 0.5 (2.035555) - 0.5 (1.605925) = 3.17926

Step-4
Putting x2 = 1.605925, x3 = 3.17926 in equation (4) we get
x1 = 3 – 0.2 x2 – 0.2 x3 = 3 – 0.2(1.605925) – 0.2(3.17926) = 2.042962
Putting x1 = 2.042962, x3 = 3.17926 in equation (5) we get
x2 = 4 - (2x1 + x3)/3 = 4 – (4.08592 + 3.17926)/3 = 1.57827
Putting x2 = 1.57827, x1 = 2.042962 in equation (6) we get
x3 = 5 - 0.5 x1 - 0.5 x2 = 5 - 0.5 (2.042962) - 0.5 (1.57827) = 3.18938
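The hand iteration above can be checked with a short sketch (assuming the same rearranged, diagonally dominant ordering; the exact solution (43/21, 33/21, 67/21) is obtained by solving the system directly):

```python
def gauss_seidel(A, b, x0, iterations=25):
    """Gauss-Seidel iteration: each newly computed component is used immediately."""
    n = len(A)
    x = list(x0)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# the example's system, reordered so the large coefficients sit on the diagonal
A = [[5.0, 1.0, 1.0],
     [2.0, 3.0, 1.0],
     [1.0, 1.0, 2.0]]
b = [15.0, 12.0, 10.0]
x = gauss_seidel(A, b, [1.0, 1.0, 1.0])
# x approaches (43/21, 33/21, 67/21) ≈ (2.0476, 1.5714, 3.1905)
```

After a few more sweeps than the four shown above, the iterates settle on the exact solution to machine precision.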

7. NUMERICAL SOLUTION OF DIFFERENTIAL EQUATION

Many differential equations cannot be solved exactly. For these DE's we can use numerical
methods to get approximate solutions.
Euler's method is the simplest numerical method, akin to approximating integrals using rectangles, but it contains the basic idea common to all the numerical methods studied here.
7.1. Euler Method (Forward or Explicit Method)
Differential equation: dy/dx = f(x, y)
Here the starting point is (x0, y0), where y0 = y(x0).
Also, xn+1 = xn + h and yn+1 = yn + k.
Iterative formula: yn+1 = yn + h f(xn, yn); here k = h f(xn, yn).
Starting step: y1 = y0 + h f(x0, y0)
NOTE:
1. Also known as the first-order Runge-Kutta method.
2. The local (per-step) error is O(h²); the global error is O(h).
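The forward step can be sketched in Python (a minimal illustration; the test problem dy/dx = y with exact solution e^x is our own choice):

```python
def euler_forward(f, x0, y0, h, steps):
    """Explicit Euler: y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y = y + h * f(x, y)
        x = x + h
    return y

# dy/dx = y, y(0) = 1: the exact solution is e^x, so y(1) = e = 2.71828...
approx = euler_forward(lambda x, y: y, 0.0, 1.0, 0.01, 100)
err = abs(approx - 2.718281828459045)
```

With h = 0.01 over 100 steps the error is about 0.013, i.e., roughly proportional to h, illustrating the O(h) global accuracy.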
7.2. Euler Method (Backward or Implicit Method)
Differential equation: dy/dx = f(x, y)
Here the starting point is (x0, y0), where y0 = y(x0).
Also, xn+1 = xn + h and yn+1 = yn + k.
Iterative formula: yn+1 = yn + h f(xn+1, yn+1); here k = h f(xn+1, yn+1). Note that yn+1 appears on both sides, so an (implicit) equation must be solved at each step.
Starting step: y1 = y0 + h f(x1, y1)
NOTE:
1. A first-order method, like forward Euler, but with better stability.
2. The local (per-step) error is O(h²); the global error is O(h).

7.3. Modified Euler Method (Predictor-Corrector Method)
Differential equation: dy/dx = f(x, y)
Predictor (starting step): y1^(0) = y0 + h f(x0, y0)
Corrector (iterative formula):
y1^(1) = y0 + (h/2)[f(x0, y0) + f(x0 + h, y1^(0))]
y1^(2) = y0 + (h/2)[f(x0, y0) + f(x0 + h, y1^(1))]
……
y1^(n) = y0 + (h/2)[f(x0, y0) + f(x0 + h, y1^(n−1))]

Repeat till the desired accuracy.

Here k = (h/2)[f(x0, y0) + f(x0 + h, y1^(n−1))], so that y1 = y0 + k.

NOTE:
1. Also known as the second-order Runge-Kutta (Heun's) method.
2. The local (per-step) error is O(h³); the global error is O(h²).
7.4. Runge – Kutta Method (fourth order Method)
Differential equation: dy/dx = f(x, y); here y(x0) = y0
k1 = h f(x0, y0)
k2 = h f(x0 + h/2, y0 + k1/2)
k3 = h f(x0 + h/2, y0 + k2/2)
k4 = h f(x0 + h, y0 + k3)
Now k = (1/6)(k1 + 2k2 + 2k3 + k4)
Solution: y1 = y0 + k
NOTE:
Order of error is 𝑂(ℎ4 )
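One step of the scheme can be sketched in Python (a minimal illustration; the test problem dy/dx = y with exact value e^0.1 is our own choice):

```python
def rk4_step(f, x0, y0, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h/2, y0 + k1/2)
    k3 = h * f(x0 + h/2, y0 + k2/2)
    k4 = h * f(x0 + h, y0 + k3)
    return y0 + (k1 + 2*k2 + 2*k3 + k4) / 6

# dy/dx = y, y(0) = 1, one step of h = 0.1: the exact value is e^0.1 = 1.1051709...
y1 = rk4_step(lambda x, y: y, 0.0, 1.0, 0.1)
```

A single step already agrees with e^0.1 to about seven decimal places, consistent with the O(h⁵) per-step error.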
Example: Use the Euler method to solve numerically the initial value problem
u′ = –2tu², u(0) = 1
with h = 0.2 on the interval [0, 1]. Compute u(1.0).
We have
uj+1 = uj – 2h tj uj², j = 0, 1, 2, 3, 4 [here x and y are replaced by t and u respectively]
with h = 0.2. The initial condition gives u0 = 1.
For j = 0: t0 = 0, u0 = 1
u (0.2) = u1 = u0 – 2ht0u02 = 1.0.
For j = 1: t1 = 0.2, u1 = 1
u (0.4) = u2= u1 – 2ht1u12 = 0.92.
For j = 2: t2 = 0.4, u2 = 0.92
u (0.6) = u3 = u2 – 2ht2u22 = 0.78458.
For j = 3: t3 = 0.6, u3 = 0.78458
u(0.8) = u4 = 0.63684.
Similarly, we get
u(1.0) = u5 = 0.50706.

8. REGRESSION ANALYSIS

Regression analysis is a mathematical measure of the average relationship between two or more variables in terms of the original units of the data.
Linear regression
• If the variables in a bivariate distribution are related, we will find the points in the scatter
diagram will cluster around some curve called the “curve of regression”.
• If the curve is a straight line, it is called the line of regression and there is said to be linear
regression between two variables, otherwise regression is said to be curvilinear.
• The line of regression is the line which gives the best estimate to the value of one variable
for any specific value of other variable. Thus the line of regression is the line of “best fit”
and is obtained by the principle of least squares.
• Let us suppose that in the bivariate distribution (xi, yi), i = 1, 2, ……, n, Y is the dependent variable and X is the independent variable. Let the line of regression of Y on X be
y = a + bx.
• Using the least-squares method, the line of regression of Y on X passes through the point (x̄, ȳ):
y – ȳ = k1(x – x̄),
where x̄ and ȳ are the means of the x- and y-values in our sample, and the slope k1, called the regression coefficient, is given by
k1 = sxy / sx²,
with the "sample covariance" sxy given by
sxy = (1/(n−1)) Σ_{j=1}^{n} (xj − x̄)(yj − ȳ) = (1/(n−1)) [ Σ xj yj − (1/n)(Σ xj)(Σ yj) ],

and the "sample variance of the x-values" sx² given by
sx² = (1/(n−1)) Σ_{j=1}^{n} (xj − x̄)² = (1/(n−1)) [ Σ xj² − (1/n)(Σ xj)² ].
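The formulas translate directly into code (a sketch; the data points on y = 2x + 1 are our own test case, for which the least-squares line must recover the slope and intercept exactly):

```python
def linear_regression(xs, ys):
    """Least-squares line y = a + k1*x via sample covariance / sample variance."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / (n - 1)
    sxx = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    k1 = sxy / sxx            # regression coefficient (slope)
    a = ybar - k1 * xbar      # intercept: the line passes through (xbar, ybar)
    return a, k1

# exact-fit check: points lying on y = 2x + 1
a, k1 = linear_regression([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

The (n − 1) factors cancel in k1 = sxy/sx², so the slope is unchanged whether sample or population conventions are used.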

****
