Numerical Analysis: Lecture Notes
By
Dr. Maan A. Rasheed
2018
Contents
Chapter 1: Introduction
Types and sources of errors
Chapter 2: Numerical Solutions for Nonlinear Equations
Bisection Method
Newton-Raphson Method
Fixed Point Method
Chapter 3: The Numerical Solutions of Linear Systems
Direct Methods
Gauss Elimination Method
Gauss-Jordan Method
LU Method
Indirect Methods
Jacobi Method
Gauss-Seidel Method
Chapter 4: Interpolation, Extrapolation & Numerical Differentiations
Chapter 5: Numerical Integration
Trapezoidal Method
Composite Trapezoidal Method
Simpson Method
Composite Simpson Method
Chapter 6: Numerical Solutions of First Order Ordinary Differential Equations
Euler Method
Modified Euler Method
Runge-Kutta Method of Order 2
Runge-Kutta Method of Order 4
Recommended Books
Chapter One
Introduction
For instance, we can’t find the exact value of the following integral
while we can find an approximate value for this integral, using numerical
methods.
Since any numerical algorithm (the steps of the numerical method) involves a large number
of mathematical calculations, we need to choose a suitable computer language,
such as Matlab or Maple, and write the algorithm as programming steps.
In fact, the accuracy of numerical solutions, for any problem, is controlled by three
criteria:
We can point out the most important types and sources of these errors as follows:
1- Rounding errors: this type of error arises because of the rounding of
numbers in computer programming languages.
Example :-
( )
2- Truncation errors: this type of error arises when an infinite process, such as a series, is cut off after finitely many terms.
If we compute ( ) taking only 4 terms, we get a larger error than taking 10
terms.
3- Total errors:- Since any numerical algorithm consists of iterative processes, the
solution at one step depends on the solution at the previous step. Therefore, for a larger
number of steps we get more errors, and those errors are the total of all the previous
types of errors.
Let x̄ be the approximate value of x. There are two methods that can be used to
measure the error:-
1- Absolute Error:- E = |x − x̄|
2- Relative Error:- E_r = |x − x̄| / |x|
Solution
̅ ,
Remark: Clearly,
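These two error measures can be illustrated with a short code sketch (Python is used here for illustration, alongside the Matlab codes in these notes; the sample values are made up):

```python
def absolute_error(x, x_bar):
    # E = |x - x_bar|
    return abs(x - x_bar)

def relative_error(x, x_bar):
    # E_r = |x - x_bar| / |x|
    return abs(x - x_bar) / abs(x)

# e.g. approximating x = 2/3 by x_bar = 0.667
E = absolute_error(2/3, 0.667)    # about 0.000333
Er = relative_error(2/3, 0.667)   # about 0.0005
```

The relative error is scale-independent, which is why it is usually the more informative of the two.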
Questions
Q1:
Chapter 2
Numerical Solutions for Nonlinear Equations
There are lots of real problems that can be modelled in mathematical form, and these
forms involve nonlinear equations. Mostly, it is difficult to calculate the exact solutions
of these equations; therefore, we study some numerical methods in order to be
able to find approximate solutions for these equations.
Examples
( ) ( ) ( )
nonlinear equations for two variables
Example:
1- ( ) ( )( )
In order to ensure that there exists a root of the equation f(x) = 0 on the
interval [a, b], we have to make sure that f(a)·f(b) < 0; see the following figure:
( )
( )
( ) or
Next, we study some well-known numerical algorithms that can be used to find the
approximate solutions (roots) of nonlinear equations: the Bisection
algorithm, the Newton-Raphson algorithm and the Fixed Point algorithm.
Bisection Algorithm
Let , -
or
While, if f(a)·f(c) < 0, then the exact root belongs to [a, c], and we set
b = c, and then we repeat the same steps.
We repeat these steps until the stopping condition is satisfied: |b − a| < ε or f(c) = 0.
Remarks:
Example: consider that, we have the following equation
( ) [1,2],
3- Since the exact solution of this equation is known, find also the absolute error
at each step.
Solution
, ( )
So,
Absolute error |
n f( ) Absolute
Errors
The iterative error, 1.25, is clearly still too large, so we have to
continue the iterative process until the convergence condition is satisfied.
Write the Matlab code which can be used to find the approximate root of f(x)
on [a, b], taking the stopping tolerance 0.0001:
a=input('a=');
b=input('b=');
x=sym('x');
f=x^2+x-1;
fa=subs(f,x,a);
fb=subs(f,x,b);
k=0;
if fa*fb>0
    fprintf('the function f(x) has no root')
    return;
else
    while abs(b-a)>0.0001
        c=(a+b)/2;
        fc=subs(f,x,c);
        if fc==0
            fprintf('the exact root=%f',c);
            fprintf('the number of iteration=%d',k);
            break;
        end
        if fa*fc>0
            a=c; fa=fc;
        else
            b=c; fb=fc;
        end
        k=k+1;
    end
    fprintf('the approximate root=%f',c);
    fprintf('the number of iteration=%d',k)
end
a=0
b=1
Newton-Raphson Method
This algorithm can be used to find the approximate roots of the equation f(x) = 0
when it is easy to find the derivative.
Let , -
i.e. , -
( )
where, ( )
substitute
( ) ( ) ( ) ( ) ( )
( ) ( )
( ) ( )
Set
x_{n+1} = x_n − f(x_n) / f′(x_n),   n = 0, 1, 2, …
Remark:- In order to guarantee that the iterative process is convergent, the initial
root, x_0, should be chosen close to the exact root.
Newton-Raphson algorithm steps:
1- Choose an initial root x_0 close to the exact root, and input the tolerance ε.
2- Set n = 0.
3- Calculate
x_{n+1} = x_n − f(x_n) / f′(x_n)
4- Set n = n + 1 and continue the iterative process until the stop
condition is satisfied: |x_{n+1} − x_n| < ε.
for two iterative steps (i.e. find x_1 and x_2 only). Also, find the iterative error at each step,
where
Solution
( ) ( )
( )
( ) ( )
( )
( ) ( ) ( )
( )
( ) ( )
( )
Thus
( )
( )
( )
( )
( )
( )
( )
( )
N ( ) ( )
0 -0.2000 0.1976
1 0649.1 0.0225
2 -0.4201 0.0003
3 -169419 0.0001
4 -169410
We can write a Matlab code to find the approximate root of the last example using
N.R. algorithm, as follows:
x0=input('x0=');
x=sym('x');
f=sin(x)-((x+1)/(x-1));
g=diff(f);
fx0=subs(f,x,x0)
gx0=subs(g,x,x0)
k=0;
x1=x0-(fx0/gx0)
fx1=subs(f,x,x1);
while abs(fx0/gx0)>eps;
    fx1=subs(f,x,x1);
    if fx1==0
        fprintf('The exact root=%f',x1);
        break;
    else
        x0=x1;
    end
    k=k+1;
    fx0=subs(f,x,x0)
    gx0=subs(g,x,x0)
    x1=x0-(fx0/gx0)
end
fprintf('the approximate root=%f',x1);
fprintf('the number of iteration=%d',k);
Fixed Point Algorithm
This method depends on the concept of fixed points for one variable functions
Definition :- A point p in the domain of the function g is called
a fixed point of g iff g(p) = p.
( ) ( )
( ) → ( )
Therefore, the problem becomes: we have to look for the fixed point of g rather
than looking for the root of f.
Let g ∈ C[a, b] with g(x) ∈ [a, b] for all x ∈ [a, b].
Then g has a fixed point on [a, b]. Moreover, if g′ exists on (a, b) such that
|g′(x)| ≤ k < 1 for all x ∈ (a, b), then the fixed point is unique.
Remark :- when we choose a certain form for g, we have to make sure that
the conditions of the theorem above hold (in particular |g′(x)| < 1).
Fixed point algorithm steps
3- Set ( ) ( )
Solution
Let us choose two forms for g as follows:
( ) ( )
( ) ……..(2)
( )
( )
( )
It is clear that, ( )| ( )
while | ( )|
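The iteration x_{n+1} = g(x_n) itself can be sketched as follows (a Python illustration alongside the notes' Matlab codes; the function g(x) = cos(x) is a hypothetical example satisfying |g′(x)| < 1 near its fixed point, not one of the forms chosen above):

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until |x_{n+1} - x_n| <= tol."""
    for _ in range(max_iter):
        x1 = g(x0)
        if abs(x1 - x0) <= tol:
            return x1
        x0 = x1
    return x0

# g(x) = cos(x): |g'(x)| = |sin(x)| < 1 near the fixed point,
# so the iteration converges (to about 0.7391)
root = fixed_point(math.cos, 0.5)
```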
( )
. /
( )
. /
Where and
H.W. For the last example, find the iterative errors for three steps.
Example 2:- Write the Matlab code that can be used to find the approximate root of
the following equation.
Before writing the program, let us study the possible forms of g:
( ) ( )
( ) ( )
Since ( ) ( ) ( )
% a, b, x0, f and g are assumed to be defined beforehand, as in the earlier codes
k=0;
if fa*fb>0
    fprintf('the function f has no root');
    return
end
if abs(subs(diff(g),x,x0))>1
    fprintf('the algorithm is divergent' );
    return;
end
x1=subs(g,x,x0);
while abs(x1-x0)>eps
    fx1=subs(f,x,x1);
    if fx1==0
        fprintf('the exact root=%f',x1);
        break;
    end
    k=k+1;
    x0=x1;
    x1=subs(g,x,x0);
end
fprintf('the approximate root=%f',x1)
fprintf('the number of iterations=%d',k)
a=0
b=1
The number of iterations=63
Exercises
Q1:- Find the approximate roots of the following equation
Q2:- Find the approximate value of √ by using Bisection Algorithm (for three
iterative steps) and find the absolute errors at each step.
Hint: consider ( ) √
Chapter 3
The Numerical Solutions of Linear Systems
It is well known from linear algebra that there are many methods used to
find the exact solutions of linear systems Ax = b,
such as Gauss elimination, Gauss-Jordan, or Cramer's method. But using these
methods becomes very difficult when the dimension n of the matrix A is large.
Therefore, we need to compute the solutions numerically by using computers.
In general, there are two types of numerical methods, which can be used to find the
numerical solutions of linear systems: direct methods and indirect methods.
Before starting to study these methods, let's review some equivalent algebraic
facts about the linear system:
Direct methods
These methods can converge very fast, but when the dimension of A is large
it is not recommended to use them, because we need to compute lots of
mathematical operations, which means the errors become bigger.
It is clear that, for solving lower triangular system we use Forward substitutions
and for solving upper triangular system we use Backward substitutions.
In fact, solving lower (upper) triangular system is easier than solving the original
system.
Solution
Firstly, we write the system in matrix form , -, as follows
0 1
[ ]
( )
( )
Finally, we solve the last system by using the backward substitutions, to get
( ) ( )
A=[4,-9,2;2,-4,6;1,-1,3];
b=[5;3;4];
n=3;
for k=1:n-1;
for i=k+1:n
m(i,k)=A(i,k)/A(k,k);
for j=k:n
A(i,j)=A(i,j)-m(i,k)*A(k,j);
end
b(i)=b(i)-m(i,k)*b(k);
end
end
x(n)=b(n)/A(n,n);
for i=n-1:-1:1
s=0;
for j=i+1:n
s=s+A(i,j)*x(j);
end
x(i)=(b(i)-s)/A(i,i);
end
disp(x);
Ax=b
Solution
Firstly, we write the system in matrix form , -, as follows:
( ) [ ]……(3)
-2( )+ [ ]……(4)
Thus, we get
0 1=[ ]
LU Algorithm :
This method is called LU because the matrix A in the linear system Ax = b
decomposes into the product of two matrices, a lower triangular L and an upper
triangular U, and this decomposition works for any vector b; that is, A = LU.
Thus, we get lower triangular and upper triangular systems.
In order to get the solution of the system , we need to solve these two
systems.
Steps of LU Method
1- Decompose , in the form where U is an upper matrix and L is a lower
matrix.
2- Set Ux=y, which leads to Ly=b
3- Solve first the lower triangular system, Ly = b, using the forward substitutions to
get y, and then solve the upper triangular system, Ux = y, using the backward
substitutions to get x.
Remarks:-
1- This method can be considered better than the Gauss and Gauss-Jordan methods,
because the decomposition of the matrix A works for any vector b, while
in Gauss and Gauss-Jordan the mathematical operations which we have to do
on [A:b] must be redone whenever we choose another vector b.
2- In fact, not every matrix A can be decomposed into LU, unless the following
condition (the diagonal dominance condition) is satisfied:
|a_ii| ≥ ∑_{j≠i} |a_ij|,  i = 1, 2, …, n
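The three steps above can be sketched in plain Python as follows (a Doolittle-style factorization with unit diagonal in L, assuming no row exchanges are needed; the 2×2 system at the end is a made-up check, not one of the notes' examples):

```python
def lu_solve(A, b):
    """Solve Ax = b via A = LU (Doolittle: L has unit diagonal),
    then forward substitution Ly = b and backward substitution Ux = y."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):                      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    y = [0.0] * n                                  # forward: Ly = b
    for i in range(n):
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n                                  # backward: Ux = y
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

x = lu_solve([[4.0, 3.0], [6.0, 3.0]], [10.0, 12.0])  # expected x = [1, 2]
```

Once L and U are computed, they can be reused for every new right-hand side b, which is exactly the advantage mentioned in Remark 1.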
A=( )
Solution
Set
A=0 1 =0 10 1
L U
( )( )
It follows that
L=0 1, u=0 1
Set
We need to solve first the system , - by using Forward substitutions
0 10 1 0 1 , thus
( )( ) y= 0 1
Secondly, we solve the system [ ], by using Backward
substitutions
0 10 1 0 1 ( )
( )
Thus [ ]
Indirect methods
In these methods, we don't need to do lots of matrix operations as in the direct
methods, but it is known that indirect methods are slower than direct methods in
convergence. Moreover, the main difference between direct and indirect methods is
that indirect methods need an initial solution, x^(0), in order to
start; depending on this initial solution, we obtain the successive approximations.
which can be written as follows:
+ +……….+ =
+ +………….+ =
+ ………..+
where
Remarks:
1- In case a_ii = 0, we replace equation number i with
another equation in order to get a_ii ≠ 0.
2- On the other hand, in order to get faster convergence, we should make sure that
the diagonal dominance condition is satisfied:
|a_ii| ≥ ∑_{j≠i} |a_ij|
In case one of these two conditions is not satisfied, we can switch the places of the
equations until both conditions are satisfied.
5- Compute the solution iteratively (for k = 0, 1, 2, …), until the
following stop condition is satisfied:
‖x^(k+1) − x^(k)‖_∞ < ε,
where
‖x^(k+1) − x^(k)‖_∞ = max_i |x_i^(k+1) − x_i^(k)|
Example:- Use the Jacobi algorithm to solve the following linear system for
two iterative steps, and find the iterative error at each step.
Note: The exact solution for this system is x=( 1; 2; 3)
Solution
Since we need to switch the places of equation 2 and 3:
It is clear that, the last system satisfies the diagonal control condition.
We can rewrite the last system as follows:
= +
=4 - -
k=0,1,…….
Set k=0
( )
= - ( )- ( )=
( )
= + ( )=
( )
= 4 - (0.8) - ( )=
( ) ( )
* +
( )
( ) ,
( )
( ) ( ) 2.95
( ) ( )
* +
The main difference between this method and the Jacobi method is that, at any iterative
step k, the new approximate values of the components are used directly to
compute the approximate values of the remaining components, while in Jacobi we don't use the new
approximate values until we consider the next iterative step, k + 1.
Remark: The steps of the Gauss-Seidel algorithm are the same as the steps of the Jacobi
method, except that in step (4) we use the Gauss-Seidel iterative system:
( ∑ ∑ )
for
= +
=4 - -
= - -
= +
= 4- -
Set k=0, we get
( )
= - ( )- ( )=
( )
= + ( )=
( )
= 4 - (1.15) - ( )=
( ) ( )
* +
( )
= + (0.9859) =1.993
=4– ( )– ( )
( )
( )
* +
Remark: From the last example, we see that the approximate results we get by
using Gauss-Seidel are more accurate and closer to the exact solution, compared
with the results we got by using the Jacobi method. Moreover, for each k, the
iterative errors arising from the Gauss-Seidel method are smaller than the
iterative errors arising from the Jacobi method, which means the Gauss-Seidel
method is faster than the Jacobi method in convergence.
Matlab Code
Write a Matlab program which can be used to find the approximate solution
of the following system by using
1- Jacobi Method
2- Gauss-Seidel Method
with x^(0) = (0; 0; 0).
Jacobi
A=[9,-4,2;2,-4,1;1,-1,3];
b=[5;3;4];
n=3;
x0=[0;0;0];
r=norm(b-A*x0);
k=0;
while r> 0.01
k=k+1;
for i=1:n
s=0;
for j=1:i-1
s=s+A(i,j)*x0(j);
end
for j=i+1:n
s=s+A(i,j)*x0(j);
end
x(i)=(b(i)-s)/A(i,i);
end
x0=x' ;
r=norm(b-A*x0);
end
disp(x);
disp(k);
0.1205 -0.4005 1.1605 k=19
Gauss-Seidel
A=[9,-4,2;2,-4,1;1,-1,3];
b=[5;3;4];
n=3;
x0=[0;0;0]; x=x0';
r=norm(b-A*x0);
k=0;
while r> 0.01
k=k+1;
for i=1:n
s=0;
for j=1:i-1
s=s+A(i,j)*x(j);
end
for j=i+1:n
s=s+A(i,j)*x0(j);
end
x(i)=(b(i)-s)/A(i,i);
end
x0=x' ;
r=norm(b-A*x0);
end
disp(x);
disp(k);
Exercises
1- Under which condition the above system has a unique solution, in terms of
the elements of A ?
2- Use Gauss Method, with Forward substitution, to find the solution of this
system in terms of the elements of A.
Chapter 4
Interpolation,Extrapolation & Numerical Differentiations
Sometimes, we need to estimate an unknown value depending on known values
(a database). For instance, suppose the number of people living in Iraq is
known for the years 1957, 1967, 1977, 1987, 1997, 2007, 2017. If we would like to estimate the
number of Iraq's people in the year 1975, this operation is called interpolation,
because 1975 belongs to the range of the database, while if we would like to
estimate the number of Iraqi people in the year 2018, this operation is called
extrapolation, because 2018 does not belong to the range of the database.
( ),
..............
...........
, or ,
Next, we will study some finite difference methods that can be used to solve
interpolation problems when the distances between the points in the
database are equal.
Forward Δ, Backward 𝛁 and Centre 𝜹 difference operators.
( ) ( ) ( )
( ) ( )
( ) ( ( )) (( ) ( )) ( ) ( )
( ) ( ) ( ) ( )
( ) ( ) ( )
( ) ( )
Center difference operator
The centre difference operator is defined as follows:
( ) ( ) ( )
( )
Where ,
……………
……………
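Repeated application of these operators produces a difference table. This can be sketched as follows (a Python illustration alongside the notes' Matlab codes, using the database f(4)=1, f(6)=3, f(8)=8, f(10)=20 from the example that follows):

```python
def forward_difference_table(y):
    """Return [y, Δy, Δ²y, ...] where Δy[i] = y[i+1] - y[i]."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# database used in the interpolation example: x = 4, 6, 8, 10
table = forward_difference_table([1, 3, 8, 20])
# table[1] = [2, 5, 12], table[2] = [3, 7], table[3] = [4]
```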
where,
If x is close to the centre of the database, then we use the Newton centre formula,
which takes the form:
( ) ( )
( )
where ,
2- Input
3-set
X 4 6 8 10
( ) 1 3 8 20
Solution
1- Since 4.5 is close to the beginning of the database, we use the forward
Newton formula
( )
( )
( )
x : 4, 6, 8, 10
f(x) : 1, 3, 8, 20
Δf : 2, 5, 12
Δ²f : 3, 7
Δ³f : 4
Thus
( )
( ) ( ) . /. / = 1.2188
2- Since the point 9 is close to the end of the database, we use the
backward Newton formula,
Set,
( )
( ) 𝛻
𝛻
( )
4 1
6 3
8 8
10 20
. /. /
Thus ( ) ( )( ) ( )
,
( ) ( )
( )
( )
( )
4 1
6 3
8 8
10 20
Thus
,( ) - ,( ) -
( ) ( )( ) ( ) ( )( ) ( )
=3.6320
Matlab Codes for Finite Difference Methods
Example: Consider the following database
X 0 0.5 1 1.5
( ) 1 1.25 2 3.25
2- f(1.25) ( Backward )
3- f(0.75) ( Centre )
x=[0,0.5,1,1.5];
y= [1,1.25,2,3.25];
xp=input('xp=');
h=0.5;
p=(xp-x(1))/h;
Dy0=y(2)-y(1);
D2y0=y(3)-2*y(2)+y(1);
yp=y(1)+p*Dy0+(p*(p-1)/2)*D2y0;
fprintf('f(%f)=%f',xp,yp);
( )
x=[0,0.5,1,1.5];
y= [1,1.25,2,3.25];
xp=input('xp=');
h=0.5;
p=(xp-x(2))/h;
q=1-p;
S2y1=y(3)-2*y(2)+y(1);
S2y2=y(4)-2*y(3)+y(2);
yp=p*y(3)+p*(p+1)*(p-1)*S2y2/factorial(3)+q*y(2)+q*(q+1)*(q-1)*S2y1/factorial(3);
fprintf('f(%f)=%f',xp,yp);
( )
x=[0,0.5,1,1.5];
y= [1,1.25,2,3.25];
xp=input('xp=');
h=0.5;
p=(xp-x(4))/h;
By3=y(4)-y(3);
B2y3=y(4)-2*y(3)+y(2);
yp=y(4)+p*By3+(p*(p+1)/2)*B2y3;
fprintf('f(%f)=%f',xp,yp);
xp=1.25
f(1.25)=2.5625
Numerical Differentiations
Let f be a differentiable function on [a, b], and suppose the values of f at the
points of the database are known. So, we have the following database
Let , -
{ }
So,
( ) ( )
( )
( ) ( )
Therefore, Forward, Backward and Center Newton formulas for differentiation take
the forms, respectively:
( ) ( )
( ) . / ( ),
( ) ( )
( ) . / ( ),
( ) ( ) , ( ) -
( ) ( )
Example: Going back to the example that we considered before, find
( ), ( ), ( ) by using the (forward, backward, centre) Newton formulas.
Solution:-
Since 4.5 is close to the beginning of the database, we use the forward formula to find
( )
( )
( ) ( ),
where
( ) ( ( ) )
H.W. in the same way we can find ( ) ( ) by using the backward and
center formulas, respectively.
where
It is well known that
f′(x) = lim_{h→0} [f(x + h) − f(x)] / h ……(1)
f′(x) = lim_{h→0} [f(x) − f(x − h)] / h ……(2)
f′(x) = lim_{h→0} [f(x + h) − f(x − h)] / (2h) ……(3)
Remarks:
1- Equation (3) is more accurate than (1) & (2); therefore, we will use it to find
( ), while we can only use equations (1) & (2) to find
( ), ( ), respectively.
Next, our aim is to derive a formula which can be used to find the second
derivative, f″(x):
f(x + h) = f(x) + h f′(x) + (h²/2) f″(x) + (h³/6) f‴(x) + …
and
f(x − h) = f(x) − h f′(x) + (h²/2) f″(x) − (h³/6) f‴(x) + …
Adding the two expansions gives
f(x + h) + f(x − h) = 2 f(x) + h² f″(x) + O(h⁴),
so that
f″(x) ≈ [f(x + h) − 2 f(x) + f(x − h)] / h²
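These difference formulas for f′(x) and f″(x) can be checked on a function with known derivatives (a Python sketch alongside the notes' Matlab codes; f(x) = x³ is a made-up test function, not from the notes):

```python
def forward_diff(f, x, h):
    # f'(x) ≈ (f(x+h) - f(x)) / h, error O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # f'(x) ≈ (f(x+h) - f(x-h)) / (2h), error O(h²)
    return (f(x + h) - f(x - h)) / (2 * h)

def second_diff(f, x, h):
    # f''(x) ≈ (f(x+h) - 2f(x) + f(x-h)) / h², error O(h²)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# check on f(x) = x³ at x = 2: f'(2) = 12 and f''(2) = 12
f = lambda t: t ** 3
d1 = central_diff(f, 2.0, 0.01)   # close to 12
d2 = second_diff(f, 2.0, 0.01)    # close to 12
```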
1- ( ) ( )
2- ( ) ( )
solution
( ) ( ) ( )
( )
( ) ( )
( )
( ) ( )
( )
( ) ( ) ( )
( )
( ) ( ) ( ) ( )
( )
( ) ( ) ( ) ( )
( )
( ) ( )
( )
( )
( ) ( )
( )
( )
Exercises
a. Find where
b. Use Newton’s formulas to find the approximate values of
( ) ( ) ( ) ( ).
c. Compute the absolute errors at each point.
Chapter 5
Numerical integration
In this chapter, we study some methods used to find the approximate value of the
following definite integral
∫_a^b f(x) dx,  f ∈ C[a, b],
when it is difficult to find the exact value by using known (analytic) integration
methods, such as:
The general idea of the integration methods is to divide the interval [a, b] into n
subintervals:
[x_0, x_1], [x_1, x_2], …, [x_{n−1}, x_n]
( ) ( ) , -
∫ ( ) ∫ ( )
From the last form, we note that, the formula of numerical integration depends on
the way of choosing the polynomial .
∫ ( ) ∫ ( ) ∑ ( ) ( )
where,
* + are called the coefficients
, - , - , - , -
i.e.
Thus ( ) ( ) ( ) , -
( ) ( ( ))
where ∫ ∏ ( ) is the truncation error formula
( )
∫ ( ) ∑ (∫ ( ) ) ( )) ……(1),
which means
(∫ ( ) )
Trapezoidal method
From the general form of integration, with choosing Lagrange polynomial, and
n=1, we get
, ,
( ( ))
∫ ( ) ∑( ∫ ( ) ) ( ) ∫ ( )( )
( ) ( )
Set , ∫ ( ) ∫ ( ) ( )
( ) ( )
∫_{x_0}^{x_1} f(x) dx ≈ (h/2) [f(x_0) + f(x_1)]   (the Trapezoidal formula)
Remark:- We note that, if f is a polynomial of order less than or equal to one, then the
truncation error equals zero, which means:
∫_{x_0}^{x_1} f(x) dx = (h/2) [f(x_0) + f(x_1)]
Example:- Use the Trapezoidal method to find the approximate value of the following
integral
∫_0^1 (x³ + 1) dx
Solution
a = 0, b = 1, h = b − a = 1, f(x) = x³ + 1
∫_a^b f(x) dx ≈ (h/2) (f(a) + f(b))
∫_0^1 (x³ + 1) dx ≈ (1/2) ((0 + 1) + (1 + 1)) = 1.5,
while the exact value is ∫_0^1 (x³ + 1) dx = 1.25.
In order to get more accurate value to the integration, we use the composite
Trapezoidal methods.
[a, b] = [x_0, x_1] ∪ [x_1, x_2] ∪ … ∪ [x_{n−1}, x_n]
Since the summation of all the integrals over the subintervals equals the integral over
the whole interval [a, b], we can apply the Trapezoidal formula on each of the
integrals as follows:
∫ ( ) ∫ ( ) ∫ ( ) ∫ ( )
, ( ( ) ( ))- , ( ( ) ( ))- , ( ( ) ( ))
∑ ( )
where
∫_a^b f(x) dx ≈ (h/2) [f(a) + f(b) + 2 ∑_{i=1}^{n−1} f(x_i)]
1-Input
4-Find
∫_a^b f(x) dx ≈ (h/2) [f(a) + f(b) + 2 ∑_{i=1}^{n−1} f(x_i)]
Example: For the last example, find the approximate value of the integral, using the
composite Trapezoidal method, taking n = 2:
∫_0^1 (x³ + 1) dx
[0, 1] = [0, 1/2] ∪ [1/2, 1], h = 1/2
∫_0^1 f(x) dx = ∫_0^{1/2} f(x) dx + ∫_{1/2}^1 f(x) dx
≈ (1/4) [f(0) + f(1/2)] + (1/4) [f(1/2) + f(1)]
= (1/4) (1 + (1 + 1/8) + (1 + 1/8) + 2) = (1/4)(21/4) = 21/16 = 1.3125
E = |1.3125 − 1.25| = 0.0625
Remark
In last example, it is clear that, from the absolute errors, the result of the composite
Trapezoidal method is more accurate than the result that we get by using normal
Trapezoidal method.
Next, we write down the Matlab code for the last example, taking n = 40:
a=0; b=1; n=40;
h=(b-a)/n; g=0;
x=sym('x');
f=x^3+1;
m=subs(f,x,a)+subs(f,x,b);
for i=1:n-1
d=a+i*h;
g=g+2*subs(f,x,d);
end
T=(h/2)*(m+g);
fprintf('I=%f',T);
I=1.250156
Simpson method
Here, we set n = 2, which means
, - , - , -
( ) ( ) , -
( ( ))
∫ ( ) ∑( ∫ ( ) ) ( ) ∫ ( )( )( )
∫ ( )
∫ ( )
∫ ( )
( ( )) ( )
( ) ∫ ( )( )( ) ( ) ( )
Thus, we get
∫_{x_0}^{x_2} f(x) dx ≈ (h/3) [f(x_0) + 4 f(x_1) + f(x_2)]
Remark:- We note that, if f is a polynomial of order less than or equal to 3, then the
truncation error equals zero, which means:
∫_{x_0}^{x_2} f(x) dx = (h/3) [f(x_0) + 4 f(x_1) + f(x_2)]
Example: find the approximate value of the following integral , using Simpson
method
Solution:
, - , - , -
∫ , | | -
= , ( ) -
∫ ( )
(answer: 1.25)
Apply Simpson formula for each of the pairs of subintervals
∫ ( ) ∫ ( ) ∫ ( ) ∫ ( )
, ( ) ( ) ( )-
, ( ) ( ) ( )- , ( ) ( )
( )
( )- ∑ ( )
where
∫_a^b f(x) dx ≈ (h/3) [f(a) + f(b) + 4 ∑_{i odd} f(x_i) + 2 ∑_{i even} f(x_i)]
1-Input a,b
4-
∫_a^b f(x) dx ≈ (h/3) [f(a) + f(b) + 4 ∑_{i odd} f(x_i) + 2 ∑_{i even} f(x_i)]
Example:- Use the composite Simpson formula to find the value of the
following integral, taking n = 4.
Solution
[ - (, - , -) (, - , -)
∫ ∫ ∫
∫ ( ) ( )
= ( ( ) ) ( ( ) ) 15.0375
H.W. Compare between the two absolute errors, those can arise from using
Simpson method with n=2 and 4, respectively, to find the approximate value
for the following integral:
∫ ( )
Matlab Code for composite Simpson method
Next, we write down the Matlab code which can be used to find the following
integral ∫ ( ), using the composite Simpson method.
I=1.25
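For reference, the composite Simpson computation for ∫_0^1 (x³ + 1) dx can be sketched as follows (a Python version of the method; the notes' own codes are in Matlab):

```python
def composite_simpson(f, a, b, n):
    """∫_a^b f(x) dx ≈ (h/3)[f(a) + f(b) + 4·Σ f(odd nodes) + 2·Σ f(even nodes)],
    where h = (b - a)/n and n is even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h / 3 * s

# Simpson is exact for polynomials of order <= 3, so for f(x) = x³ + 1
# on [0, 1] the result equals the exact value 1.25
I = composite_simpson(lambda x: x ** 3 + 1, 0.0, 1.0, 4)
```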
Note that the result is exact, because f is a polynomial of order three.
Exercises
Q2: Use Trapezoidal method, with n=1, and n=3 to find the approximate value of
the following integral, and find the absolute error in each case.
∫( )
Chapter 6
Numerical Solutions of First Order Ordinary Differential
Equations
while a partial differential equation has an unknown function of two or more variables
and some of its partial derivatives. For instance
The order of a differential equation is the highest derivative appearing in the
equation.
In this chapter, we will study, the numerical solutions for first order ordinary
differential equations, which takes the general form:
( )
( ) ( )
Example:
( )
While, if y is given at more than one point, then the problem is called a
boundary value problem:
Example
( ) ( )
In this chapter, we will only study the numerical solutions of first order
initial value problems.
Our aim is find the approximate solution, for this problem at certain points:
* + , -, which means, we only need to find * +
Next, we will study some important methods that can be used to find the numerical
solutions of initial value problems
Euler Algorithm
The general idea of this method is to divide the interval [a, b] into n
subintervals, as follows:
( )
In order to find the approximate solution of the initial value problem at the point
x_{n+1}, we consider the definition of y′(x):
y′(x) = lim_{h→0} [y(x + h) − y(x)] / h
For small h this gives
y′(x) ≈ [y(x + h) − y(x)] / h
Since y′(x) = f(x, y), we obtain
y(x + h) ≈ y(x) + h f(x, y(x))
that is, the Euler formula:
y_{n+1} = y_n + h f(x_n, y_n),  n = 0, 1, 2, …
Remark:-
The Euler formula can only be used for finding the numerical solutions at the
points x_1, x_2, …, x_n, while if we would like to find the approximate value of
y(x) at a point x ∈ [a, b] which is not one of these points, then we can use an
interpolation method.
1- Input a, b
2- Input x_0, y_0
3- Define f(x, y)
4- Input n, and set h = (b − a)/n
5- Compute, for n = 0, 1, 2, …:
y_{n+1} = y_n + h f(x_n, y_n)
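The steps above can be sketched as follows (a Python illustration of the Euler formula alongside the notes' Matlab codes, applied to the initial value problem y′ = −x·y, y(0) = 0.5 that is used with the Matlab code later in this chapter):

```python
def euler(f, x0, y0, h, n):
    """y_{k+1} = y_k + h·f(x_k, y_k); returns the list [y_0, ..., y_n]."""
    ys = [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        ys.append(y0)
    return ys

# initial value problem from this chapter: y' = -x·y, y(0) = 0.5, h = 0.1
ys = euler(lambda x, y: -x * y, 0.0, 0.5, 0.1, 2)
# ys[1] = 0.5 (since f(0, 0.5) = 0), ys[2] = 0.495
```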
Solution :-
h=
( )
( )( ( )( ))
( )
( )( ( )( ))
We can show that, by using separation of variables, the exact solution of this
problem takes the form:
which leads to ( ) ( )
( )
( )
( ( ) ( ))
where
( ) Normal Euler
In fact, the second equation depends on the first equation, and this approach is called
(Estimation–Correction): from the first equation we get an estimated value
for y_{n+1}, and then from the second equation we get a correction of this value.
The steps of the Modified Euler algorithm are the same as the steps of the normal Euler
algorithm; we only have to add one more step after step 5, which is:
6- Correct the estimated value obtained from the normal Euler formula
by using the modified Euler formula.
n=2
Estimation: ( )
( )( )
Correction: ( ( ) ( ))
( ( )( ))
Estimation: ( )
( )( ( )( ))
Correction: ( ( ) ( ))
( )
( ( )( ) ( )( ))
x 0 0.1 0.2
y 0.5
Next, we compute the absolute errors:
( )
( )
It is clear that these values differ markedly from those obtained using the
normal Euler method; moreover, they are more accurate.
Modified Euler
x0=0; y0=0.5;h=0.1; X(1)=x0+h; X(2)=X(1)+h;
x=sym('x');
y=sym('y');
f=-x*y;
for i=1:2
Ye(i)=y0+h*subs(f,{x,y},{x0,y0});
Yc(i)=y0+(h/2)*(subs(f,{x,y},{x0,y0})+subs(f,{x,y},{X(i),Ye(i)}))
x0=X(i); y0=Yc(i);
end
disp(Yc)
Answer: 0.4975 0.4901
Runge-Kutta Methods
Since modified Euler method needs two steps to get the solutions, it is considered
a two-steps method. Moreover, Euler methods need to approximate the derivatives
by using special forms. Thus, we will use Runge-Kutta method , which is a one-
step method and it can be used to avoid determining higher order derivatives.
Set
k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h, y_n + k_1)
y_{n+1} = y_n + (1/2)(k_1 + k_2)
Set
k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h/2, y_n + (1/2) k_1)
k_3 = h f(x_n + h/2, y_n + (1/2) k_2)
k_4 = h f(x_n + h, y_n + k_3)
y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
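Both schemes can be sketched as single-step update functions (a Python illustration alongside the notes' Matlab codes; the check problem y′ = y, y(0) = 1, whose exact solution is eˣ, is a made-up example):

```python
import math

def rk2_step(f, x, y, h):
    # k1 = h·f(x, y), k2 = h·f(x + h, y + k1), y_next = y + (k1 + k2)/2
    k1 = h * f(x, y)
    k2 = h * f(x + h, y + k1)
    return y + (k1 + k2) / 2

def rk4_step(f, x, y, h):
    # the classical fourth-order formula
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# check on y' = y, y(0) = 1: one step of size 0.1 should be close to e^0.1
y_rk4 = rk4_step(lambda x, y: y, 0.0, 1.0, 0.1)
```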
( )
Solution
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( )
( ( ) ( ) )
( ) ( )
( ) ( )
( ) ( )
( ) ( )
( )
( ( ) ( ) )
Thus ( ) ( )
( )
( )
( )
( )
It is clear that the results of the Runge-Kutta method of order 4 are much more accurate
than the results of the Runge-Kutta method of order 2.
Order 2: 1.3125, 1.7832
Order 4: 1.3180, 1.7974
Exercises
Q1: Consider the following initial value problem:
( ) .
a. Use Runge-Kutta method of order 2 to find the approximate values of
( ) ( ) ( ) ( )
b. Depending on the results of a. , find the approximate value of ( )
c. Find the absolute error at each point in a.