Numerical Method V2
EEE-302
Numerical Method
1.1 Objectives
1. Explain basic features, configurations, and applications of some signals in the digital domain.
2. Become familiar with the practical implementation of digital signals.
3. Explain different digital systems and their properties.
4. Explain a system through a difference equation in the digital domain.
5. Manipulate the frequency response of a system in the digital domain.
6. Justify the system transfer function through different transformation techniques in the digital domain.
7. Recognize the applications of time and frequency domain analysis and advanced signal processing aspects.
1.3 Theory
1.3.1 Starting MATLAB
1. The Command Window: To enter variables, execute commands and to run M-files.
2. Menus:
I. File: from the File menu you can create a new M-file, figure, etc., open any existing file, and access the MATLAB preferences.
II. Edit: cut, copy, paste...etc.
III. Desktop: to control the desktop of MATLAB.
IV. Tip: to restore the default desktop, go to Desktop → Desktop Layout → Default
V. Window: to get access to the windows/files e.g. the open M-files documents.
3. The Current Directory Browser: any files you want to run must either be in the
current directory or on the search path. The Current Directory Browser enables you to
browse all the files saved in the current directory. You can run, rename, delete files, etc.
4. Command History: in the command history you can view the previously used
functions and copy and execute selected lines.
5. Launch Pad: provides easy access to tools, demos, and documentation.
6. Help Browser: to search and view documentation for all your MathWorks products.
7. Workspace Browser: the MATLAB workspace consists of the set of variables
(named arrays) built up during a MATLAB session and stored in memory. To view
the workspace and information about each variable, use the Workspace Browser,
or use the commands who and whos.
8. Array Editor: double click on a variable in the Workspace Browser to see it in the Array
Editor.
9. Editor/Debugger: to create and debug M-files.
List of Equipment:
1. Desktop PC
2. Software MATLAB R2020a
1.4 Procedure
1.4.1 First Steps in MATLAB
When MATLAB starts, the special >> prompt (the command line) appears, indicating that MATLAB is ready
to receive your commands. Try to compute c = a + b, where a = 10 and b = 20.
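A minimal sketch of this first computation at the >> prompt (the values follow the text above):

a = 10;        % first operand
b = 20;        % second operand
c = a + b      % no terminating semicolon, so MATLAB echoes c = 30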
Logarithm commands in MATLAB:
(log10(10))^4    log10(x) gives the base-10 logarithm; example: log10(10)
(log2(5))^4      log2(x) gives the base-2 logarithm
(log(10))^4      log(x) gives the natural (base-e) logarithm (the default)
Matrix algebraic calculations:
X = 0:2:16; Y = 2*X;
Plotting in MATLAB:
T = linspace(0, 2*pi, 100);
X = sin(T); Y = cos(T);    % X and Y sampled over T
plot(X)
plot(Y)
Subplot:
subplot(3,1,1)
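The plotting commands above can be tied together in one short illustrative sketch, using the three-argument linspace call shown above and a 2-by-1 subplot grid purely as an example:

T = linspace(0, 2*pi, 100);    % 100 sample points between 0 and 2*pi
X = sin(T);
Y = cos(T);
subplot(2,1,1);                % upper panel
plot(T, X); title('sin(T)'); xlabel('T');
subplot(2,1,2);                % lower panel
plot(T, Y); title('cos(T)'); xlabel('T');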
Report Questions:
1. $\sin(225°) + \cot(30°) + \tan^{-1}\!\left(\frac{1}{2}\right) + 10e^{-10} + 9\times 10^{-2} + (\log_{10} 10)^3 + (\log_e 10)^5$
2. Take A and B to be the last two digits of your ID:
$$\sin(AB°) + \cot(BA°) + \tan^{-1}\!\left(\frac{B}{A}\right) + Be^{-BA} + A\times 10^{-A} + (\log_{10} B)^{AB} + (\log_e BA)^A$$
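As a sketch, expression 1 can be evaluated in a single command; sind and cotd take their arguments in degrees, and the logarithm commands are the ones listed earlier:

% Report question 1, term by term (angles in degrees)
result = sind(225) + cotd(30) + atan(1/2) + 10*exp(-10) + 9e-2 ...
         + (log10(10))^3 + (log(10))^5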
Experiment 2
Introduction to MATLAB (Part B)
2.1 Objectives
1. Explain basic features, configurations, and applications of some signals in the digital domain.
2. Become familiar with the practical implementation of digital signals.
3. Explain different digital systems and their properties.
4. Explain a system through a difference equation in the digital domain.
5. Manipulate the frequency response of a system in the digital domain.
6. Justify the system transfer function through different transformation techniques in the digital domain.
7. Recognize the applications of time and frequency domain analysis and advanced signal processing aspects.
2.3 Theory
2.3.1 M-Files
M-files provide an easy way to write and execute your commands and programs. For a large
number of commands and for complex problem solving, M-files are a must. An M-file allows you to place
MATLAB commands in a simple text file and then tell MATLAB to open the file and execute the
commands exactly as it would if you typed them at the MATLAB Command Window.
M-Files must end with the extension ‘.m’. For example, homework1.m
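A minimal sketch of what such a script could contain; the contents of homework1.m below are illustrative only:

% homework1.m - example script file; save it, then type homework1 at the prompt
a = 10;
b = 20;
c = a + b;
fprintf('The sum of %d and %d is %d\n', a, b, c);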
2.5 Procedure
2.5.1 Matrix Operations
A+B-C-10
A/C
A*B*C
2*A
A^2
A.^2
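The matrices A, B, and C used above are the ones defined in the lab sheet; the small hypothetical matrix below only illustrates the difference between the matrix power A^2 and the element-wise power A.^2:

A = [1 2; 3 4];    % hypothetical 2x2 matrix, for illustration only
A^2                % matrix power A*A        -> [ 7 10; 15 22]
A.^2               % element-wise power      -> [ 1  4;  9 16]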
Matrix generation:       zeros(3,3), ones(3,2), rand(3,3)
Matrix indexing:         B = [0 20 90; 12 -34 45];  B(2,3), B(1,2), B(2,2)
Transpose:               W = [1 2 3; -4 -5 -6; 0 1 0];  W' is the transpose of W
Determinant and inverse: det(W) is the determinant of W;  inv(W) is the inverse matrix of W
For loop syntax:
    for index = expression
        statement group
    end
Example:
    for i = 1:2:10
        x = i^2
    end
Relational operators:
    >    greater than
    >=   greater than or equal to
    ==   equal to
    ~=   not equal to
If-else structure:
    if <condition>
        disp('unequal')
    else
        disp('equal')
    end
Method 1
A=[3 10 -1;-2 1 -10; 1 1 -1];
b=[0;-2;3];
x=inv(A)*b
x=A\b
Method 2
syms x y z
[x,y,z] = solve([3*x+10*y-z==0, -2*x+y-10*z==-2, x+y-z==3], [x,y,z])
Experiment 3
Iterative Processes for Root Finding (Iterative Method, Aitken’s
∆² Acceleration Method, Bisection Method)
3.1 Objective
1. Explain basic features, configurations, and applications of some signals in the digital domain.
2. Become familiar with the practical implementation of the Iterative Method.
3. Become familiar with the practical implementation of Aitken's ∆² Acceleration Method.
4. Become familiar with the practical implementation of the Bisection Method.
5. Explain different digital systems and their properties.
6. Explain a system through a difference equation in the digital domain.
7. Manipulate the frequency response of a system in the digital domain.
3.3 Theory
Script file:
A formula can be developed for simple fixed-point iteration by rearranging the function $f(x) = 0$ so that $x$ is on the left-hand side of the equation:
$$x = g(x) \tag{1}$$
Eq. (1) can be used to compute a new estimate $x_{i+1}$ as expressed by the iterative formula
$$x_{i+1} = g(x_i) \tag{2}$$
The approximate error for Eq. (2) can be determined using the error estimator
$$|e_a| = \left|\frac{x_{i+1} - x_i}{x_{i+1}}\right| \times 100\%$$
Here,
$$\Delta x_n = x_{n+1} - x_n$$
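A minimal sketch of the fixed-point iteration of Eq. (2) with the error estimator above; the iteration function g(x), the starting guess, and the 1% stopping criterion are assumptions for illustration:

% Fixed-point iteration x_(i+1) = g(x_i), stopped when |e_a| < 1%
g     = @(x) exp(-x);     % assumed example: root of f(x) = x - exp(-x) = 0
x_old = 0.5;              % assumed initial guess
err   = 100;              % approximate percentage error, start large
while err > 1
    x_new = g(x_old);                           % Eq. (2)
    err   = abs((x_new - x_old)/x_new)*100;     % error estimator
    x_old = x_new;
end
root = x_new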
The Bisection method is one of the simplest procedures for finding a root of a function in a
given interval. The procedure is straightforward. The approximate location of the root is first
determined by finding two values that bracket the root (a root is bracketed or enclosed if the
function changes sign at the endpoints). Based on these, a third value is calculated which is
closer to the root than the original two values. A check is made to see if the new value is a root.
Otherwise a new pair of brackets is generated from the three values, and the procedure is
repeated.
Consider a function $d(x)$ and let there be two values of $x$, $x_{low}$ and $x_{up}$ ($x_{up} > x_{low}$), bracketing a root of $d(x)$. The first step is to use the brackets $x_{low}$ and $x_{up}$ to generate a third value that is closer to the root. This new point is calculated as the mid-point between $x_{low}$ and $x_{up}$, namely
$$x_{mid} = \frac{x_{low} + x_{up}}{2}$$
The method therefore gets its name from this bisecting of two values. It is also known as the interval halving method. Test whether $x_{mid}$ is a root of $d(x)$ by evaluating the function at $x_{mid}$. If $x_{mid}$ is not a root, then check the signs: if $d(x_{low}) \cdot d(x_{mid}) < 0$, the root is in the left half of the interval; if $d(x_{low}) \cdot d(x_{mid}) > 0$, the root is in the right half of the interval. Continue subdividing until the interval width $\frac{x_{up} - x_{low}}{2}$ has been reduced to a size smaller than the tolerance. Tip: the tolerance shall be $1 \times 10^{-4}$.
3.4 Procedure
3.4.1 Iterative Process Steps
5. If %error > 1%, then put $x_{old} = x_{new}$ and repeat steps (2) to (4).
6. Continue evaluating steps (2) to (4) until %error has been reduced to a value < 1%.
7. If %error < 1%, then Root = $x_{new}$ and stop evaluating iterations.
3.4.2 Aitken's Method for Acceleration (Δ2Method) Steps
6. Check accuracy = |𝑎 − 𝑥3 |
7. If 𝑎𝑐𝑐𝑢𝑟𝑎𝑐𝑦 > 1 , then put 𝑥1 = 𝑎 and repeat steps (2) to (6)
8. Continue evaluating steps (2) to (6) until accuracy has been reduced to a value < 1
9. If 𝑎𝑐𝑐𝑢𝑟𝑎𝑐𝑦 < 1 , then Aitken’s 𝑅𝑜𝑜𝑡 = 𝑎 and stop evaluating iterations.
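A sketch of how steps (2) to (6) above might look in MATLAB; the iteration function g(x), the starting value, and the tolerance are assumptions. Aitken's Δ² estimate is formed from three successive fixed-point iterates:

% Aitken's Delta-squared acceleration of the iteration x_(i+1) = g(x_i)
g   = @(x) exp(-x);    % assumed example iteration function
x1  = 0.5;             % assumed initial approximation
tol = 1e-4;            % assumed accuracy target
acc = 1;
while acc > tol
    x2  = g(x1);                                 % two ordinary iterations
    x3  = g(x2);
    a   = x3 - (x3 - x2)^2/(x3 - 2*x2 + x1);     % Aitken's Delta^2 estimate
    acc = abs(a - x3);                           % accuracy check (step 6)
    x1  = a;                                     % restart from a (step 7)
end
aitken_root = a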
3.4.3 Bisection Method Steps
1. Choose the initial approximations $x_{up}$ and $x_{low}$.
2. Determine the value of $x_{mid,old} = \frac{x_{up} + x_{low}}{2}$.
8. If 𝑑𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑐𝑒 > 1 then change 𝑥𝑚𝑖𝑑𝑜𝑙𝑑 = 𝑥𝑚𝑖𝑑𝑛𝑒𝑤 and repeat steps (2) to (6)
9. Continue evaluating steps (2) to (6) until difference has been reduced to a value < 1
10. If 𝑑𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑐𝑒 < 1 then 𝑅𝑜𝑜𝑡 = 𝑥𝑚𝑖𝑑𝑛𝑒𝑤 and stop evaluating iterations.
2. Write a MATLAB program for the Bisection Method for the following systems (a minimal starting-point sketch is given after the list):
a) $x^5 + x + 1 = 0$; $x_{up} = 5$, $x_{low} = -5$
b) $3x^3 + 5x^2 + x - 1 = 0$; $x_{up} = 6$, $x_{low} = -7$
c) $x^3 - 3x - 5 = 0$; $x_{up} = 5$, $x_{low} = -5$
d) $x^3 - 0.39x^2 - 10.5x + 11.0 = 0$; $x_{up} = 5$, $x_{low} = -5$
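A minimal starting-point sketch of the bisection loop; the function handle corresponds to system (a), and the 1e-4 tolerance follows the tip in the theory section:

% Bisection method for f(x) = 0 on the bracket [xlow, xup]
f    = @(x) x.^5 + x + 1;    % system (a); swap in the other polynomials as needed
xlow = -5;  xup = 5;         % initial bracket from the problem statement
tol  = 1e-4;
while (xup - xlow)/2 > tol
    xmid = (xlow + xup)/2;           % interval-halving point
    if f(xlow)*f(xmid) < 0           % sign change: root is in the left half
        xup = xmid;
    else                             % otherwise the root is in the right half
        xlow = xmid;
    end
end
root = (xlow + xup)/2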
Experiment 4
Solutions to Non-Linear Equations (Secant Method & Regula
Falsi Method)
4.1 Objectives
1. Become familiar with the practical implementation of precision of root finding for a system.
2. Become familiar with the practical implementation of the Regula Falsi Method.
3. Become familiar with the practical implementation of the Newton-Raphson Method.
4. Become familiar with the practical implementation of the Secant Method.
5. Explain a system through a difference equation in the digital domain.
4.3 Theory
4.3.1 Regula Falsi Method
A shortcoming of the bisection method is that, in dividing the interval from xlow to xup into
equal halves, no account is taken of the magnitude of f ( xlow ) and f ( xup ) . For example, if
f ( xlow ) is much closer to zero than f ( xup ) , it is likely that the root is closer to xlow than to xup
. An alternative method that exploits this graphical insight is to join f ( xlow ) and f ( xup ) by a
straight line. The intersection of this line with the x axis represents an improved estimate of
the root. The fact that the replacement of the curve by a straight line gives the false position
of the root is the origin of the name, method of false position, or in Latin, Regula Falsi. It is
also called the Linear Interpolation Method.
Using similar triangles, the intersection of the straight line with the x axis can be estimated from
$$\frac{f(x_{low})}{x - x_{low}} = \frac{f(x_{up})}{x - x_{up}}$$
That is, solving for the improved estimate $x$,
$$x = x_{up} - \frac{f(x_{up})(x_{low} - x_{up})}{f(x_{low}) - f(x_{up})}$$
This value of $x$ then replaces whichever of the two initial guesses yields a function value with the same sign as $f(x)$, so that the values of $x_{low}$ and $x_{up}$ always bracket the true root. The process is repeated until the root is estimated adequately.
4.3.2 Newton-Raphson Method
The goal is to find $x$ such that $f(x) = 0$.
The method starts with a function f defined over the real numbers x, the function's derivative f ′,
and an initial guess x0 for a root of the function f. If the function satisfies the assumptions made in
the derivation of the formula and the initial guess is close, then a better approximation x1 is
$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$$
Geometrically, $(x_1, 0)$ is the intersection of the x-axis and the tangent of the graph of $f$ at $(x_0, f(x_0))$. The process is repeated as
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
until a sufficiently accurate value is reached.
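A compact sketch of the Newton-Raphson update above; the function, its derivative, the starting guess, and the stopping tolerance are illustrative assumptions:

% Newton-Raphson iteration x_(n+1) = x_n - f(x_n)/f'(x_n)
f   = @(x) x.^3 - 3*x - 5;    % assumed example function
df  = @(x) 3*x.^2 - 3;        % its derivative
x   = 2;                      % assumed initial guess x0
tol = 1e-6;
dx  = inf;
while abs(dx) > tol
    dx = f(x)/df(x);          % Newton step
    x  = x - dx;
end
newton_root = x               % approximately 2.2790 for this example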
The secant method can be coded so that only one new function evaluation is required per
iteration. The formula for the secant method is the same one that was used in the Regula
Falsi method, except that the logical decisions regarding how to define each succeeding term
are different.
In the Secant method, the derivative can be approximated by a backward finite divided difference, as in the figure:
$$f'(x_k) \cong \frac{f(x_{k-1}) - f(x_k)}{x_{k-1} - x_k}$$
Substituting this approximation of $f'(x_k)$ into
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$
gives
$$x_{k+1} = x_k - \frac{f(x_k)(x_{k-1} - x_k)}{f(x_{k-1}) - f(x_k)}$$
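A sketch of the secant update derived above, needing only one new function evaluation per iteration; the test function, the two starting points, and the tolerance are assumptions:

% Secant method: x_(k+1) = x_k - f(x_k)*(x_(k-1) - x_k)/(f(x_(k-1)) - f(x_k))
f    = @(x) x.^3 - 3*x - 5;   % assumed example function
xkm1 = 2;  xk = 3;            % assumed starting points x_(k-1), x_k
fkm1 = f(xkm1);  fk = f(xk);  % their function values
tol  = 1e-6;
while abs(xk - xkm1) > tol
    xk1  = xk - fk*(xkm1 - xk)/(fkm1 - fk);   % secant update
    xkm1 = xk;   fkm1 = fk;                   % shift the history
    xk   = xk1;  fk   = f(xk);                % one new evaluation per step
end
secant_root = xk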
4.4 Procedure
4.4.1 Regula Falsi Method Steps
4. Check if |𝑓(𝑥1 )| < |𝑓(𝑥2 )| then change 𝑥2 = 𝑥1 , 𝑥1 = 𝑥
if |𝑓(𝑥1 )| > |𝑓(𝑥2 )| then change 𝑥1 = 𝑥2 , 𝑥2 = 𝑥
5. Find 𝑑𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑐𝑒 = |𝑥1 − 𝑥2 |
6. If 𝑑𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑐𝑒 > 1 then repeat steps (2) to (5)
7. Continue evaluating steps (2) to (5) until difference has been reduced to a value < 1
8. If 𝑑𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑐𝑒 < 1 then 𝑅𝑜𝑜𝑡 = 𝑥 and stop evaluating iterations.
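A sketch of a Regula Falsi loop using the standard bracket update (the bookkeeping differs slightly from the steps above); the example function, bracket, and tolerance are assumptions:

% Regula Falsi: the chord through (x1,f(x1)) and (x2,f(x2)) cuts the x-axis at x
f   = @(x) x.^3 - 3*x - 5;    % assumed example function
x1  = 2;  x2 = 3;             % assumed bracket with f(x1)*f(x2) < 0
tol = 1e-4;
x = x1;  xprev = x2;
while abs(x - xprev) > tol
    xprev = x;
    x = x2 - f(x2)*(x1 - x2)/(f(x1) - f(x2));   % intersection with the x-axis
    if f(x1)*f(x) < 0        % root lies between x1 and x: move x2
        x2 = x;
    else                     % otherwise move x1
        x1 = x;
    end
end
regula_falsi_root = x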
1. Write MATLAB programs for the Regula Falsi, Secant, and Newton-Raphson methods for the following systems (here $x_1$, $x_2$ are for Regula Falsi and Secant; $x_{initial}$ is for Newton-Raphson):
i. $x^5 + x + 1 = 0$; $x_1 = 5$, $x_2 = -5$ & $x_{initial} = -0.8$
ii. $3x^3 + 5x^2 + x - 1 = 0$; $x_1 = 6$, $x_2 = -7$ & $x_{initial} = -0.7$
iii. $x^3 - 3x - 5 = 0$; $x_1 = 5$, $x_2 = -5$ & $x_{initial} = 2$
iv. $x^3 - 0.39x^2 - 10.5x + 11.0 = 0$; $x_1 = 5$, $x_2 = -5$ & $x_{initial} = 1.8$
Experiment 5
Interpolation by Newton-Gregory forward difference formula
5.1 Objectives
1. Become familiar with the practical implementation of precision of root finding for a system.
2. Become familiar with the practical implementation of interpolation by the Newton-Gregory forward difference formula.
3. Become familiar with the merits and demerits of interpolation by the Newton-Gregory forward difference formula.
4. Explain a system through a difference equation in the digital domain.
5.3 Theory
We are familiar with the analytical method of finding the derivative of a function when the
functional relation between the dependent variable y and the independent variable x is
known. However, in practice, most often functions are defined only by tabulated data, or the
values of y for specified values of x can be found experimentally. Also, in some cases it is not
possible to find the derivative of a function by analytical methods. In such cases the analytical
process of differentiation breaks down and some numerical process has to be invented. The
process of calculating the derivatives of a function by means of a set of given values of that
function is called numerical differentiation. This process consists of replacing a complicated
or an unknown function by an interpolation polynomial and then differentiating this
polynomial as many times as desired.
5.3.1 Forward Difference Formula
$$f(x_{i+1}) = f(x_i) + f'(x_i)h + \frac{f''(x_i)}{2!}h^2 + \cdots \tag{1}$$
From (1),
$$f'(x_i) \cong \frac{f(x_{i+1}) - f(x_i)}{h} + O(h) \tag{2}$$
where $O(h)$ is the truncation error, which consists of terms containing $h$ and higher-order terms of $h$.
$$\text{Total or true error} = \left| f'(x) - \frac{f(x_{i+1}) - f(x_i)}{h} \right| \tag{3}$$
Very often it so happens in practice that the given data set (𝑥𝑖 , 𝑦𝑖 ), 𝑖 = 0,1,2, … , 𝑛 correspond
to a sequence {𝑥𝑖 } of equally spaced points. Here we can assume that
𝑥𝑖 = 𝑥0 + 𝑖ℎ, 𝑖 = 0,1,2, … , 𝑛
where 𝑥0 is the starting point (sometimes, for convenience, the middle data point is taken as
𝑥0 and in such a case the integer 𝑖 is allowed to take both negative and positive values.) and
ℎ is the step size. Further it is enough to calculate simple differences rather than the
divided differences as in the non-uniformly placed data set case. These simple differences
can be forward differences (∆𝑓𝑖 ) or backward differences (∇𝑓𝑖 ). We will first look at forward
differences and the interpolation polynomial based on forward differences.
$$\Delta f_i = f_{i+1} - f_i$$
$$\Delta^2 f_i = \Delta f_{i+1} - \Delta f_i$$
$$\Delta^3 f_i = \Delta^2 f_{i+1} - \Delta^2 f_i$$
$$\Delta^4 f_i = \Delta^3 f_{i+1} - \Delta^3 f_i$$
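A sketch of how the forward difference table and the Newton-Gregory forward polynomial might be evaluated in MATLAB; the node and sample vectors below are placeholders (here f = x.^3), not the experiment's own tabulated data:

% Newton-Gregory forward difference interpolation at a point xp
x  = [1 2 3 4 5 6];               % assumed equally spaced nodes, step h
f  = [1 8 27 64 125 216];         % assumed sample values (f = x.^3)
xp = 4.12;                        % point at which to interpolate

n = length(x);
h = x(2) - x(1);
D = zeros(n, n);                  % forward difference table
D(:,1) = f(:);
for j = 2:n
    for i = 1:n-j+1
        D(i,j) = D(i+1,j-1) - D(i,j-1);   % Delta^(j-1) f_i
    end
end

k = (xp - x(1))/h;                % normalised offset from x0
P = D(1,1);  term = 1;
for j = 1:n-1
    term = term*(k - (j-1))/j;    % k(k-1)...(k-j+1)/j!
    P = P + term*D(1,j+1);        % add the Delta^j f_0 contribution
end
P                                  % interpolated value f(xp)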
5.4 Procedure
Given the following data, estimate $f(4.12)$ using the Newton-Gregory forward difference
interpolation polynomial:
Solution
Here we have six data points, i.e., $i = 0, 1, 2, 3, 4, 5$. Let us first generate the forward difference table.
Here,
$$f(4.12) = 17.391338127360001 \;(\text{Ans.})$$
5.4.1 Steps
5.5 Report Question
Determine the value of $f(4.12)$ using a MATLAB program for the Newton-Gregory forward difference
interpolation formula for the following systems, where h = 1.
Experiment 6
Interpolation by Newton-Gregory backward difference formula
6.1 Objectives
1. Become familiar with the practical implementation of precision of root finding for a system.
2. Become familiar with the practical implementation of interpolation by the Newton-Gregory backward difference formula.
3. Become familiar with the merits and demerits of interpolation by the Newton-Gregory backward difference formula.
4. Explain a system through a difference equation in the digital domain.
6.3 Theory
We are familiar with the analytical method of finding the derivative of a function when the
functional relation between the dependent variable y and the independent variable x is
known. However, in practice, most often functions are defined only by tabulated data, or the
values of y for specified values of x can be found experimentally. Also, in some cases it is not
possible to find the derivative of a function by analytical methods. In such cases the analytical
process of differentiation breaks down and some numerical process has to be invented. The
process of calculating the derivatives of a function by means of a set of given values of that
function is called numerical differentiation. This process consists of replacing a complicated
or an unknown function by an interpolation polynomial and then differentiating this
polynomial as many times as desired.
6.3.1 Backward Difference Formula
$$f(x_{i-1}) = f(x_i) - f'(x_i)h + \frac{f''(x_i)}{2!}h^2 - \cdots \tag{1}$$
From (1),
$$f'(x_i) \cong \frac{f(x_i) - f(x_{i-1})}{h} + O(h) \tag{2}$$
where $O(h)$ is the truncation error, which consists of terms containing $h$ and higher-order terms of $h$.
$$\text{Total or true error} = \left| f'(x) - \frac{f(x_i) - f(x_{i-1})}{h} \right| \tag{3}$$
$$\nabla f_i = f_i - f_{i-1}$$
$$\nabla^2 f_i = \nabla f_i - \nabla f_{i-1}$$
$$\nabla^3 f_i = \nabla^2 f_i - \nabla^2 f_{i-1}$$
$$\nabla^4 f_i = \nabla^3 f_i - \nabla^3 f_{i-1}$$
$$\nabla^k f_i = \nabla^{k-1} f_i - \nabla^{k-1} f_{i-1}$$
6.4 Procedure
Given the following data, estimate $f(4.12)$ using the Newton-Gregory backward difference
interpolation polynomial:
Solution:
Here we have six data points, i.e., $i = 0, 1, 2, 3, 4, 5$. Let us first generate the backward difference table.
Experiment 7
Numerical Differentiation (Based on Newton-Gregory Forward
& Backward Differences)
7.1 Objectives
1. Become familiar with the practical implementation of numerical differentiation for a system.
2. Become familiar with the practical implementation of numerical differentiation by the Newton-Gregory backward difference formula.
3. Become familiar with the merits and demerits of numerical differentiation by the Newton-Gregory backward difference formula.
4. Explain a system through a difference equation in the digital domain.
7.3 Theory
We are familiar with the analytical method of finding the derivative of a function when the
functional relation between the dependent variable y and the independent variable x is
known. However, in practice, most often functions are defined only by tabulated data, or the
values of y for specified values of x can be found experimentally. Also, in some cases it is not
possible to find the derivative of a function by analytical methods. In such cases the analytical
process of differentiation breaks down and some numerical process has to be invented. The
process of calculating the derivatives of a function by means of a set of given values of that
function is called numerical differentiation. This process consists of replacing a complicated
or an unknown function by an interpolation polynomial and then differentiating this
polynomial as many times as desired.
7.4 Forward Difference Formula
All numerical differentiation formulas are derived from the Taylor series expansion:
$$f(x_{i+1}) = f(x_i) + f'(x_i)h + \frac{f''(x_i)}{2!}h^2 + \cdots \tag{1}$$
From (1),
$$f'(x_i) \cong \frac{f(x_{i+1}) - f(x_i)}{h} + O(h) \tag{2}$$
where $O(h)$ is the truncation error, which consists of terms containing $h$ and higher-order terms of $h$.
$$\text{Total or true error} = \left| f'(x) - \frac{f(x_{i+1}) - f(x_i)}{h} \right| \tag{3}$$
Very often it so happens in practice that the given data set (𝑥𝑖 , 𝑦𝑖 ), 𝑖 = 0,1,2, … , 𝑛 correspond
to a sequence {𝑥𝑖 } of equally spaced points. Here we can assume that
𝑥𝑖 = 𝑥0 + 𝑖ℎ, 𝑖 = 0,1,2, … , 𝑛
where 𝑥0 is the starting point (sometimes, for convenience, the middle data point is taken as
𝑥0 and in such a case the integer 𝑖 is allowed to take both negative and positive values.) and
ℎ is the step size. Further it is enough to calculate simple differences rather than the divided
differences as in the non-uniformly placed data set case. These simple differences can be
forward differences (∆𝑓𝑖 ) or backward differences (∇𝑓𝑖 ). We will first look at forward
differences and the interpolation polynomial based on forward differences.
$$\Delta f_i = f_{i+1} - f_i$$
$$\Delta^2 f_i = \Delta f_{i+1} - \Delta f_i$$
The third-order forward difference $\Delta^3 f_i$ is defined as
$$\Delta^3 f_i = \Delta^2 f_{i+1} - \Delta^2 f_i$$
$$\Delta^4 f_i = \Delta^3 f_{i+1} - \Delta^3 f_i$$
7.5 Procedure
Given the following data, estimate $f'(4.12)$ using the Newton-Gregory forward difference
interpolation polynomial:
Solution
Here we have six data points, i.e., $i = 0, 1, 2, 3, 4, 5$. Let us first generate the forward difference
table.
Here,
$$f_x = f_0 + k\Delta f_0 + \frac{k(k-1)}{2!}\Delta^2 f_0 + \frac{k(k-1)(k-2)}{3!}\Delta^3 f_0 + \frac{k(k-1)(k-2)(k-3)}{4!}\Delta^4 f_0 + \frac{k(k-1)(k-2)(k-3)(k-4)}{5!}\Delta^5 f_0$$
$$f(4.12) = 17.391338127360001$$
$$\frac{d}{dx}(f_x) = \frac{d(f_x)}{dk}\cdot\frac{dk}{dx} = \frac{1}{h}\cdot\frac{d(f_x)}{dk}$$
$$\therefore f'_x = \frac{1}{h}\left[\frac{d(k)}{dk}\Delta f_0 + \frac{1}{2}\frac{d\big(k(k-1)\big)}{dk}\Delta^2 f_0 + \frac{1}{6}\frac{d\big(k(k-1)(k-2)\big)}{dk}\Delta^3 f_0 + \frac{1}{24}\frac{d\big(k(k-1)(k-2)(k-3)\big)}{dk}\Delta^4 f_0 + \frac{1}{120}\frac{d\big(k(k-1)(k-2)(k-3)(k-4)\big)}{dk}\Delta^5 f_0\right]$$
$$f'_x = \frac{1}{h}\left[\Delta f_0 + \left(k - \frac{1}{2}\right)\Delta^2 f_0 + \left(\frac{1}{2}k^2 - k + \frac{1}{3}\right)\Delta^3 f_0 + \left(\frac{1}{6}k^3 - \frac{3}{4}k^2 + \frac{11}{12}k - \frac{1}{4}\right)\Delta^4 f_0 + \left(\frac{1}{24}k^4 - \frac{1}{3}k^3 + \frac{7}{8}k^2 - \frac{5}{6}k + \frac{1}{5}\right)\Delta^5 f_0\right]$$
$$f'_x = \frac{1}{h}\left[\nabla f_0 + \left(k + \frac{1}{2}\right)\nabla^2 f_0 + \left(\frac{1}{2}k^2 + k + \frac{1}{3}\right)\nabla^3 f_0 + \left(\frac{1}{6}k^3 + \frac{3}{4}k^2 + \frac{11}{12}k + \frac{1}{4}\right)\nabla^4 f_0 + \left(\frac{1}{24}k^4 + \frac{1}{3}k^3 + \frac{7}{8}k^2 + \frac{5}{6}k + \frac{1}{5}\right)\nabla^5 f_0\right]$$
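A sketch evaluating the first derivative from the forward difference table with the series for f'_x above, truncated at the differences actually available; the data, the step size, and the evaluation point are illustrative assumptions:

% First derivative from Newton-Gregory forward differences at x = xp
x  = [1 2 3 4 5 6];                   % assumed equally spaced nodes
f  = [1 8 27 64 125 216];             % assumed sample values (f = x.^3)
xp = 4.12;                            % assumed evaluation point

n = length(x);   h = x(2) - x(1);
D = zeros(n, n);  D(:,1) = f(:);      % column j holds Delta^(j-1) f
for j = 2:n
    for i = 1:n-j+1
        D(i,j) = D(i+1,j-1) - D(i,j-1);
    end
end

k = (xp - x(1))/h;
% coefficients of Delta f0 ... Delta^5 f0 in the derivative formula above
c = [1, ...
     k - 1/2, ...
     k^2/2 - k + 1/3, ...
     k^3/6 - 3*k^2/4 + 11*k/12 - 1/4, ...
     k^4/24 - k^3/3 + 7*k^2/8 - 5*k/6 + 1/5];
dfdx = (1/h)*sum(c .* D(1, 2:6))      % estimate of f'(xp)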
7.5.1 Steps
a) $i = 1, 2, 3, 4$; $x_i = i + 1$; $f_i = x_i^2 + 1$. Ans. 8.24 (forward), 8.24 (backward)
b) $i = 2, 4, 8, 12$; $x_i = i/2$; $f_i = x_i^2 + |x_i|$. Ans. 26.038933 (forward), 10.5189 (backward)
Experiment 8
Numerical Differentiation (Based on Lagrange Interpolation) &
Numerical Integration (Based on Simple Trapezium Rule)
8.1 Objectives
1. Become familiar with the practical implementation of numerical differentiation and numerical integration for a system.
2. Become familiar with the practical implementation of numerical differentiation based on Lagrange interpolation for a system.
3. Become familiar with the practical implementation of numerical integration based on the Simple Trapezium Rule for a system.
4. Explain a system through a difference equation in the digital domain.
8.3 Theory
8.3.1 Lagrange Polynomial
For the data pairs $(x_i, f_i)$, $i = 0, 1, \ldots, N$, the Lagrange interpolating polynomial is
$$P(x) = \sum_{i=0}^{N} f_i \prod_{\substack{j=0 \\ j\neq i}}^{N} \frac{x - x_j}{x_i - x_j}$$
When constructing interpolating polynomials, there is a tradeoff between having a better fit
and having a smooth well-behaved fitting function. The more data points that are used in the
interpolation, the higher the degree of the resulting polynomial, and therefore the greater
oscillation it will exhibit between the data points. Therefore, a high-degree interpolation may
be a poor predictor of the function between points, although the accuracy at the data points
will be "perfect."
Note that the function $P(x)$ passes through each of the given data points $(x_i, f_i)$.
We know that higher-order differences are negligible for small $h$ in the case of well-behaved functions. Hence, for one strip,
$$\int_{x_0}^{x_0+h} y\,dx = h\left(f_0 + \frac{1}{2}\Delta f_0\right) = \frac{h}{2}(f_1 + f_0)$$
8.4 Procedure
Find 𝒇′ (𝟎. 𝟏𝟐) from the following data using Lagrange Polynomial Interpolation:
x 0.05 0.10 0.20 0.26
𝑓 ′ (0.12) = 0.9927
For a given evaluation point x and a set of (N+1) data pairs (xi, fi), i = 0, 1, ..., N:
Set SUM = 0
N = length of x
DO FOR i = 1 to N
    Set Y = 1
    DO FOR j = 1 to N
        IF j ~= i
            Set Y = Y*(x - x(j))/(x(i) - x(j))
        END IF
    END DO (j)
    Y = Y*f(i)
    SUM = SUM + Y
END DO (i)
SUM now holds the interpolated value P(x).
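A MATLAB rendering of the pseudocode above; the nodes are the x values from the table, while the f values and the evaluation point xp are placeholders only:

% Lagrange interpolation: P(xp) = sum_i f(i) * prod_{j~=i} (xp - x(j))/(x(i) - x(j))
x  = [0.05 0.10 0.20 0.26];     % nodes from the table above
f  = [1 2 3 4];                 % placeholder sample values (not the lab data)
xp = 0.12;                      % evaluation point

N = length(x);
SUM = 0;
for i = 1:N
    Y = 1;
    for j = 1:N
        if j ~= i
            Y = Y*(xp - x(j))/(x(i) - x(j));   % Lagrange basis factor
        end
    end
    SUM = SUM + Y*f(i);         % weight by the sample value f(i)
end
SUM                              % interpolated value P(xp)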
a) Evaluate numerically $\int_0^{0.8} e^{-x^2}\,dx$ using the Simple Trapezium Rule.

Table 2

x     f = e^(-x^2)
0.0   1.0000
0.1   0.9900
0.2   0.9608
0.3   0.9139
0.4   0.8521
0.5   0.7788
0.6   0.6977
0.7   0.6126
0.8   0.5273
We know that higher-order differences are negligible for small $h$ in the case of well-behaved functions. Hence
$$\int_{x_0}^{x_0+h} y\,dx = h\left(f_0 + \frac{1}{2}\Delta f_0\right) = \frac{h}{2}(f_1 + f_0)$$
For one strip (i.e., $h = 0.8$),
$$I_{11} = \frac{h}{2}(f_0 + f_1) = \frac{0.8}{2}(1 + 0.5273) = 0.6109$$
For two strips (i.e., $h = 0.4$),
$$I_{21} = \frac{h}{2}(f_0 + 2f_1 + f_2) = \frac{0.4}{2}(1 + 2 \times 0.8521 + 0.5273) = 0.6463$$
For four strips (i.e., $h = 0.2$),
$$I_{31} = \frac{h}{2}(f_0 + 2f_1 + 2f_2 + 2f_3 + f_4) = \frac{0.2}{2}(1 + 2 \times 0.9608 + 2 \times 0.8521 + 2 \times 0.6977 + 0.5273) = 0.6549$$
For eight strips (i.e., $h = 0.1$),
$$I_{41} = \frac{h}{2}(f_0 + 2f_1 + 2f_2 + 2f_3 + 2f_4 + 2f_5 + 2f_6 + 2f_7 + f_8) = \frac{0.1}{2}(1 + 2 \times 0.9900 + 2 \times 0.9608 + 2 \times 0.9139 + 2 \times 0.8521 + 2 \times 0.7788 + 2 \times 0.6977 + 2 \times 0.6126 + 0.5273) = 0.6570$$
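A sketch reproducing the successive trapezium estimates above from the tabulated values; the vector f simply regenerates the e^(-x^2) values of Table 2:

% Composite trapezium estimates of the integral of exp(-x.^2) from 0 to 0.8
x = 0:0.1:0.8;
f = exp(-x.^2);                          % same values as Table 2
trap = @(xs, fs) (xs(2)-xs(1))/2 * (fs(1) + 2*sum(fs(2:end-1)) + fs(end));
I11 = trap(x(1:8:end), f(1:8:end))       % one strip,    h = 0.8
I21 = trap(x(1:4:end), f(1:4:end))       % two strips,   h = 0.4
I31 = trap(x(1:2:end), f(1:2:end))       % four strips,  h = 0.2
I41 = trap(x,          f)                % eight strips, h = 0.1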
8.4.1 Steps
Experiment 9
Numerical Integration by Romberg Integration and Simpson's
1/3rd, 3/8th Rule
9.1 Objectives
1. Become familiar with the practical implementation of numerical integration for a system.
2. Become familiar with the practical implementation of numerical integration based on Romberg integration for a system.
3. Become familiar with the practical implementation of numerical integration based on Simpson's 1/3rd rule for a system.
4. Become familiar with the practical implementation of numerical integration based on Simpson's 3/8th rule for a system.
5. Explain a system through different features in the digital domain.
9.3 Theory
There are two cases in which engineers and scientists may require the help of numerical
integration techniques: (1) where experimental data are obtained whose integral may be
required, and (2) where a closed-form formula for integrating a function using calculus is
difficult or so complicated as to be almost useless. For example, the integral
$$\int_0^x \frac{t^3}{e^t - 1}\,dt$$
Since there is no analytic expression for this integral, numerical integration techniques must be used to
obtain approximate values of it.
Formulae for numerical integration called quadrature are based on fitting a polynomial
through a specified set of points (experimental data or function values of the complicated
function) and integrating (finding the area under the fitted polynomial) this approximating
function. Any one of the interpolation polynomials studied earlier may be used.
9.3.1 Romberg Integration
$$I_{jn} = \frac{4^{n-1} I_{j(n-1)} - I_{(j-1)(n-1)}}{4^{n-1} - 1}; \quad j = n, n+1, \ldots$$
where the first column $I_{j1}$ contains trapezium estimates obtained with successively halved strip widths.
9.3.2 Simpson's 1/3 Rule
This is based on approximating the function f(x) by fitting quadratics through sets of three
points. For only three points it can be written as
$$\int_{x_1}^{x_1+2h} f(x)\,dx = \frac{h}{3}(f_1 + 4f_2 + f_3)$$
It is evident that the result of integration between $x_1$ and $x_1 + nh$ can be written as
$$\int_{x_1}^{x_1+nh} f(x)\,dx = \sum_{i=1,3,5,\ldots,n-1} \frac{h}{3}(f_i + 4f_{i+1} + f_{i+2}) = \frac{h}{3}(f_1 + 4f_2 + 2f_3 + 4f_4 + 2f_5 + 4f_6 + \cdots + 4f_n + f_{n+1})$$
In using the above formula it is implied that f is known at an odd number of points (n+1 is
odd, where n is the number of subintervals).
9.3.3 Simpson’s 3/8 Rule
This is based on approximating the function f(x) by fitting a cubic interpolating polynomial
through sets of four points. For only four points it can be written as
$$\int_{x_1}^{x_1+3h} f(x)\,dx = \frac{3h}{8}(f_1 + 3f_2 + 3f_3 + f_4)$$
It is evident that the result of integration between $x_1$ and $x_1 + nh$ can be written as
$$\int_{x_1}^{x_1+nh} f(x)\,dx = \sum_{i=1,4,7,\ldots,n-2} \frac{3h}{8}(f_i + 3f_{i+1} + 3f_{i+2} + f_{i+3}) = \frac{3h}{8}(f_1 + 3f_2 + 3f_3 + 2f_4 + 3f_5 + 3f_6 + 2f_7 + \cdots + 2f_{n-2} + 3f_{n-1} + 3f_n + f_{n+1})$$
In using the above formula it is implied that f is known at (n+1) points where n is divisible by 3.
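A sketch of the composite Simpson's 1/3 rule written above; the integrand, the limits, and the (even) number of subintervals are assumptions:

% Composite Simpson's 1/3 rule with n (even) subintervals
fun = @(x) exp(-x.^2);       % assumed integrand
a = 0;  b = 0.8;             % assumed limits
n = 8;                       % number of subintervals, must be even

h = (b - a)/n;
x = a:h:b;                   % n+1 equally spaced points
f = fun(x);
% weights 1, 4, 2, 4, ..., 2, 4, 1
I = (h/3)*(f(1) + 4*sum(f(2:2:end-1)) + 2*sum(f(3:2:end-2)) + f(end))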
9.4 Procedure
9.4.1 Romberg Integration
Using the values given in the following Table 1, find $\int_0^{0.8} e^{-x^2}\,dx$ by Romberg's integration.
Solution: Table 1

x     f = e^(-x^2)
0.0   1
0.1   0.9901
0.2   0.9608
0.3   0.9139
0.4   0.8521
0.5   0.7788
0.6   0.6977
0.7   0.6126
0.8   0.5273
From the Simple Trapezium Rule, we know that higher-order differences are negligible for small $h$ in the case of well-behaved functions. Hence
$$\int_{x_0}^{x_0+h} y\,dx = h\left(f_0 + \frac{1}{2}\Delta f_0\right) = \frac{h}{2}(f_1 + f_0)$$
Here,
$h = x_n - x_0 = 0.8 - 0 = 0.8$ for one strip,
$h = \frac{x_n - x_0}{2} = \frac{0.8 - 0}{2} = 0.4$ for two strips,
$h = \frac{x_n - x_0}{4} = \frac{0.8 - 0}{4} = 0.2$ for four strips,
$h = \frac{x_n - x_0}{8} = \frac{0.8 - 0}{8} = 0.1$ for eight strips.
$$I_{21} = \frac{h}{2}(f_0 + 2f_1 + f_2) = \frac{0.4}{2}(1 + 2 \times 0.8521 + 0.5273) = 0.6463$$
$$I_{41} = \frac{h}{2}(f_0 + 2f_1 + 2f_2 + 2f_3 + 2f_4 + 2f_5 + 2f_6 + 2f_7 + f_8) = \frac{0.1}{2}(1 + 2 \times 0.9900 + 2 \times 0.9608 + 2 \times 0.9139 + 2 \times 0.8521 + 2 \times 0.7788 + 2 \times 0.6977 + 2 \times 0.6126 + 0.5273) = 0.6570$$
Table 2

h     I_j1 (j = 1, 2, 3, 4)
0.8   I11 = 0.6109
0.4   I21 = 0.6463
0.2   I31 = 0.6549
0.1   I41 = 0.6570
$$I_{jn} = \frac{4^{n-1} I_{j(n-1)} - I_{(j-1)(n-1)}}{4^{n-1} - 1}; \quad j = n, n+1, \ldots$$
$$I_{22} = \frac{4I_{21} - I_{11}}{3}, \quad I_{32} = \frac{4I_{31} - I_{21}}{3}, \quad I_{42} = \frac{4I_{41} - I_{31}}{3}$$
$$I_{44} = \frac{4^3 I_{43} - I_{33}}{63}$$

Error order:  h^2    h^4    h^6    h^8
I11
I21   I22
I31   I32   I33
I41   I42   I43   I44
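A sketch of the Romberg table built from successively halved trapezium estimates; the integrand and limits follow the worked example, while the number of levels is an assumption:

% Romberg integration of exp(-x.^2) on [0, 0.8]
fun = @(x) exp(-x.^2);
a = 0;  b = 0.8;
m = 4;                               % number of levels (I11 ... I44)

R = zeros(m, m);
for j = 1:m
    nseg = 2^(j-1);                  % 1, 2, 4, 8 strips
    h = (b - a)/nseg;
    x = a:h:b;
    f = fun(x);
    R(j,1) = h/2*(f(1) + 2*sum(f(2:end-1)) + f(end));   % trapezium column I_j1
end
for n = 2:m                          % Richardson extrapolation columns
    for j = n:m
        R(j,n) = (4^(n-1)*R(j,n-1) - R(j-1,n-1))/(4^(n-1) - 1);
    end
end
R                                    % lower-triangular Romberg table; R(m,m) is the best estimate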
9.4.2 Steps
Evaluate $\int_1^7 \frac{1}{x}\,dx$ using Simpson's composite 1/3 rule.
Solution
Table 1

x     f(x) = 1/x
1     1
2     0.5
3     0.33
4     0.25
5     0.2
6     0.17
7     0.14
$$h = \frac{x_n - x_0}{n - 1}$$
$$\int_1^7 \frac{1}{x}\,dx = \frac{h}{3}(f_1 + 4f_2 + 2f_3 + 4f_4 + 2f_5 + 4f_6 + f_7)$$
Evaluate $\int_1^7 \frac{1}{x}\,dx$ using Simpson's composite 3/8 rule.
Solution: Compare the values with the exact value of the integral.
Table 2

x     f(x) = 1/x
1     1
2     0.5
3     0.33
4     0.25
5     0.2
6     0.17
7     0.14
$$h = \frac{x_n - x_0}{n - 1}$$
$$\int_1^7 \frac{1}{x}\,dx = \frac{3h}{8}(f_1 + 3f_2 + 3f_3 + 2f_4 + 3f_5 + 3f_6 + f_7)$$
Ans.: 1.9661
9.4.5 Steps:
An algorithm for integrating a tabulated function using the composite trapezoidal rule:
Remarks: f1, f2, ..., fn+1 are the tabulated values at x1, x1+h, ..., x1+nh (n+1 points)
1  Read x_1, ..., x_(n+1)
2  n = length(x) - 1
3  h = (x_(n+1) - x_1)/n
4  Read f_1, ..., f_(n+1)
5  sum1 <- (f_1 + f_(n+1))
6  sum2 <- 0
7  for i = 2 to n: sum2 <- sum2 + f_i; end for
8  sum <- sum1 + 2*sum2
9  integral <- (h/2)*sum
2. Write MATLAB programs for numerical integration with Simpson's composite 1/3 and
3/8 rules for the following systems:
a) $i = 1, 2, 3, 4$; $x_i = i + 1$; $f_i = x_i^2 + 1$. Ans. 23.6667, 11.6250
b) $i = 2, 4, 8, 12$; $x_i = i/2$; $f_i = x_i^2 + |x_i|$. Ans. 37.7778, 27.5000
Experiment 10
Solution of system of Linear Equations By a direct method (LU
Decomposition) and an indirect method (Jacobi or Gauss Seidel
Method)
10.1 Objectives
1. Become familiar with the practical implementation of the different techniques of finding the solution of a set of n linear algebraic equations in n unknowns.
2. Become familiar with the practical implementation of LU decomposition for the solution of systems of linear equations.
3. Become familiar with the practical implementation of the Jacobi or Gauss-Seidel method for the solution of systems of linear equations.
4. Explain a system through different features in the digital domain.
10.3 Theory
$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1N}x_N = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2N}x_N = b_2$$
$$\vdots$$
$$a_{M1}x_1 + a_{M2}x_2 + \cdots + a_{MN}x_N = b_M$$
Here the N unknowns xj , j = 1, 2, . . .,N are related by M equations. The coefficients aij with i = 1,
2, . . .,M and j = 1, 2, . . .,N are known numbers, as are the right-hand side quantities bi, i = 1, 2, .
. .,M.
Existence of solution
If N = M then there are as many equations as unknowns, and there is a good chance of solving
for a unique solution set of xj's. Analytically, there can fail to be a unique solution if one or
more of the M equations is a linear combination of the others (this condition is called row
degeneracy), or if all equations contain certain variables only in exactly the same linear
combination (this is called column degeneracy). (For square matrices, a row degeneracy
implies a column degeneracy, and vice versa.) A set of equations that is degenerate is called
singular.
• While not exact linear combinations of each other, some of the equations may be so close
to linearly dependent that round-off errors in the machine render them linearly dependent
at some stage in the solution process. In this case your numerical procedure will fail, and it
can tell you that it has failed.
• Accumulated round off errors in the solution process can swamp the true solution. This
problem particularly emerges if N is too large. The numerical procedure does not fail
algorithmically. However, it returns a set of x’s that are wrong, as can be discovered by direct
substitution back into the original equations. The closer a set of equations is to being singular,
the more likely this is to happen.
10.3.1 Matrices
A·x=b (2)
Here the raised dot denotes matrix multiplication, A is the matrix of coefficients, x is the
column vector of unknowns and b is the right-hand side written as a column vector,
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & & \vdots \\ a_{M1} & a_{M2} & \cdots & a_{MN} \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_M \end{bmatrix}$$
x = A\b;
Here the quantities bi, i = 1, 2, . . .,M’s are replaced by aiN+1, where i=1,2, ….M for simplicity of
understanding the algorithm.
The First Step is to eliminate the first term from Equations (B) and (C). (Dividing (A) by a11 and
multiplying by a21 and subtracting from (B) eliminates x1 from (B) as shown below)
$$\left(a_{21} - \frac{a_{11}}{a_{11}}a_{21}\right)x_1 + \left(a_{22} - \frac{a_{12}}{a_{11}}a_{21}\right)x_2 + \left(a_{23} - \frac{a_{13}}{a_{11}}a_{21}\right)x_3 = a_{24} - \frac{a_{14}}{a_{11}}a_{21}$$
Let $k_2 = \frac{a_{21}}{a_{11}}$; then the coefficient of $x_1$ vanishes. Similarly, multiplying equation (A) by $k_3 = \frac{a_{31}}{a_{11}}$ and subtracting from (C), we eliminate $x_1$ from (C) as well.
Observe that $(a_{21} - k_2 a_{11})$ and $(a_{31} - k_3 a_{11})$ are both zero.
In the steps above it is assumed that a11 is not zero. This case will be considered later in this
experiment.
10.4 Procedure
For triangularizing n equations in n unknowns:
1  for k = 1 to (n - 1) in steps of 1 do
2    for i = (k + 1) to n in steps of 1 do
3      u <- a_ik / a_kk
4      for j = k to (n + 1) in steps of 1 do
5        a_ij <- a_ij - u*a_kj
       endfor
     endfor
   endfor
The next step is to eliminate a32 from the third equation. This is done by multiplying the second
equation by u = a32 / a22 and subtracting the resulting equation from the third, so the same algorithm
can be used.
a33 x3 = a34
From the above upper triangular form of equations, the values of unknowns can be obtained by
back substitution as follows:
x3 = a34 / a33
Algorithmically, the back substitution for n unknowns is shown below:
1  x_n <- a_n(n+1) / a_nn
2  for i = (n - 1) down to 1 in steps of -1 do
3    sum <- 0
4    for j = (i + 1) to n in steps of 1 do
5      sum <- sum + a_ij*x_j
     endfor
6    x_i <- (a_i(n+1) - sum) / a_ii
   endfor
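A compact MATLAB sketch of the triangularization and back-substitution algorithms above (without pivoting), working on the augmented matrix [A b]; the coefficients simply reuse the system solved in Experiment 2:

% Gauss elimination with back substitution on the augmented matrix
A = [3 10 -1; -2 1 -10; 1 1 -1];
b = [0; -2; 3];
n = length(b);
aug = [A b];                         % columns 1 ... n+1, as in the algorithm

for k = 1:n-1                        % triangularization
    for i = k+1:n
        u = aug(i,k)/aug(k,k);
        aug(i,k:n+1) = aug(i,k:n+1) - u*aug(k,k:n+1);
    end
end

x = zeros(n,1);                      % back substitution
x(n) = aug(n,n+1)/aug(n,n);
for i = n-1:-1:1
    s = aug(i,i+1:n)*x(i+1:n);       % sum of a_ij * x_j for j > i
    x(i) = (aug(i,n+1) - s)/aug(i,i);
end
x                                     % solution vector; compare with A\b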
10.5 Pivoting
u <- a_ik / a_kk
Here it is assumed that akk is not zero. If it happens to be zero or nearly zero, the algorithm will
lead to no results or meaningless results. If any of the akk is small it would be necessary to
reorder the equations. It is noted that the value of akk would be modified during the elimination
process and there is no way of predicting their values at the start of the procedure.
The elements akk are called pivot elements. In the elimination procedure the pivot should not be
zero or a small number. In fact for maximum precision the pivot element should be the largest in
absolute value of all the elements below it in its column, i.e., $a_{kk}$ should be picked up as the
maximum of all $a_{mk}$ where $m \geq k$.
So, during the Gauss elimination, amk elements should be searched and the equation with the
maximum value of amk should be interchanged with the current position. For example if during
elimination we have the following situation:
x1 + 2 x2 + 3x3 = 4
0.3x2 + 4 x3 = 5
−8 x2 + 3x3 = 6
As $|-8| > 0.3$, the 2nd and 3rd equations should be interchanged to yield:
x1 + 2 x2 + 3x3 = 4
−8 x2 + 3x3 = 6
0.3x2 + 4 x3 = 5
It should be noted that interchange of equations does not affect the solution.
The algorithm for picking the largest element as the pivot and interchanging the equations is
called pivotal condensation.
10.5.1 Procedure
1   max <- |a_kk|
2   p <- k
3   for m = (k + 1) to n in steps of 1 do
4     if |a_mk| > max then
5       max <- |a_mk|
6       p <- m
7     endif
    endfor
8   if (p ~= k)
9     for q = k to (n + 1) in steps of 1 do
10      temp <- a_kq
11      a_kq <- a_pq
12      a_pq <- temp
      endfor
    endif
There are several iterative methods for the solution of linear systems. One of the efficient
iterative methods is the Gauss-Seidel method.
4 x1 − x2 + x3 = 7
4 x1 − 8 x2 + x3 = −21
−2 x1 + x2 + 5 x3 = 15
$$x_1^{k+1} = \frac{7 + x_2^k - x_3^k}{4}$$
$$x_2^{k+1} = \frac{21 + 4x_1^{k+1} + x_3^k}{8}$$
$$x_3^{k+1} = \frac{15 + 2x_1^{k+1} - x_2^{k+1}}{5}$$
In the very first iteration, $x_2^0, x_3^0, \ldots, x_n^0$ (for n equations) are set equal to zero and $x_1^1$ is
calculated. The main point to observe in the Gauss-Seidel iterative process is that the
latest approximations of the variables are always used in an iteration step.
It is to be noted that in some cases the iteration diverges rather than converges. Both
divergence and convergence can occur even with the same set of equations but with a
change in the order. The sufficient condition for the Gauss-Seidel iteration to converge is
stated below.
The Gauss-Seidel iteration for the solution will converge (if there is any solution) if the matrix
A (as defined previously) is strictly diagonally dominant matrix.
$$|a_{kk}| > \sum_{\substack{j=1 \\ j \neq k}}^{N} |a_{kj}| \quad \text{for } k = 1, 2, \ldots, N$$
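A sketch of the Gauss-Seidel iteration for the 3-by-3 system given above; the stopping tolerance and the iteration cap are assumptions:

% Gauss-Seidel iteration for the diagonally dominant system above
A = [4 -1 1; 4 -8 1; -2 1 5];
b = [7; -21; 15];
x = zeros(3,1);                      % initial guess x^0 = 0
tol = 1e-6;  maxit = 100;
for it = 1:maxit
    x_old = x;
    for i = 1:3
        % use the latest available values of the other unknowns
        s = A(i,:)*x - A(i,i)*x(i);
        x(i) = (b(i) - s)/A(i,i);
    end
    if max(abs(x - x_old)) < tol
        break
    end
end
x                                     % converges to [2; 4; 3]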
2 x1 + 3x2 + 5 x3 = 23
3x1 + 4 x2 + x3 = 14
6 x1 + 7 x2 + 2 x3 = 26
For generalization, you will have to write a program for triangularizing n equations in n
unknowns with back substitution.
(A) 2x1 + 4x2 - 6x3 = -4;  x1 + 5x2 + 3x3 = 10;  x1 + 3x2 + 2x3 = 5
(B) x1 + x2 + 6x3 = 7;  -x1 + 2x2 + 9x3 = 2;  x1 - 2x2 + 3x3 = 10
(C) 4x1 + 8x2 + 4x3 = 8;  x1 + 5x2 + 4x3 - 3x4 = -4;  x1 + 4x2 + 7x3 + 2x4 = 10;  x1 + 3x2 - 2x4 = -4
(A) 8x1 - 3x2 = 10;  -x1 + 4x2 = 6
(B) 4x - y = 15;  x + 5y = 9
(C) 5x1 - x2 + x3 = 10;  2x1 + 8x2 - x3 = 11;  -x1 + x2 + 4x3 = 3
(D) 2x + 8y - z = 11;  5x - y + z = 10;  -x + y + 4z = 3
Updated Lab Manual
Numerical Method
Lab-in-Charge: Nahian Rabbi Ushan
Updated Date: 23 Aug 2023