Unit 7 Iterative Methods
7.1 Introduction
7.2 The General Iteration Method
7.3 The Jacobi Iteration Method
7.4 The Gauss-Seidel Iteration Method
7.5 Summary
7.6 Solutions/Answers
7.1 INTRODUCTION
In the previous two units, you have studied direct methods for solving the linear system of equations Ax = b, A being an n x n non-singular matrix. Direct methods provide the exact solution in a finite number of steps, provided exact arithmetic is used and there is no round-off error. Also, direct methods are generally used when the matrix A is dense or filled, that is, when there are few zero elements, and the order of the matrix is not very large, say n < 50.
Iterative methods, on the other hand, start with an initial approximation and, by applying a suitably chosen algorithm, lead to successively better approximations. Even if the process converges, it gives only an approximate solution. These methods are generally used when the matrix A is sparse and the order of the matrix A is very large, say n > 50. Sparse matrices have very few non-zero elements. In most cases these non-zero elements lie on or near the main diagonal, giving rise to tri-diagonal, five-diagonal or band matrix systems. It may be noted that there are no fixed rules to decide when to use direct methods and when to use iterative methods. However, when the coefficient matrix is sparse or large, iterative methods which take advantage of the sparse nature of the matrix are ideally suited for finding the solution.
In this unit we shall discuss two iterative methods, namely, the Jacobi iteration method and the Gauss-Seidel iteration method.
We assume that the diagonal coefficients a_ii != 0, (i = 1, ..., n). If some a_ii = 0, then we rearrange the equations so that this condition holds. We then rewrite system (2) as
x = Hx + c                                                              ... (3)

where

\[
H = \begin{bmatrix}
0 & -a_{12}/a_{11} & -a_{13}/a_{11} & \cdots & -a_{1n}/a_{11} \\
-a_{21}/a_{22} & 0 & -a_{23}/a_{22} & \cdots & -a_{2n}/a_{22} \\
\vdots & & & & \vdots \\
-a_{n1}/a_{nn} & -a_{n2}/a_{nn} & -a_{n3}/a_{nn} & \cdots & 0
\end{bmatrix},
\qquad
c = \begin{bmatrix} b_1/a_{11} \\ b_2/a_{22} \\ \vdots \\ b_n/a_{nn} \end{bmatrix}.
\]
To solve system (3) we make an initial guess x^(0) of the solution vector and substitute it on the right-hand side of (3) to obtain an improved approximation.
In general, we can write the iteration method for solving the linear system of Eqns. (1) in the form

x^(k+1) = H x^(k) + c,   k = 0, 1, 2, ...                               ... (5)

The iteration converges if the error e^(k) = x^(k) - x satisfies

lim_{k -> infinity} e^(k) = 0.
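As an illustration, the general iteration (5) can be sketched in a few lines of Python. This code is not part of the unit; the function name general_iteration, the tolerance and the stopping rule are our own choices.

```python
import numpy as np

def general_iteration(H, c, x0, tol=1e-6, max_iter=100):
    """Iterate x^(k+1) = H x^(k) + c until two successive iterates agree to tol."""
    x = np.asarray(x0, dtype=float)
    c = np.asarray(c, dtype=float)
    for k in range(max_iter):
        x_new = H @ x + c                     # one step of iteration (5)
        if np.max(np.abs(x_new - x)) < tol:   # stop when the change is small
            return x_new, k + 1
        x = x_new
    return x, max_iter                        # tolerance not reached within max_iter steps
```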
Before we discuss the above convergence criteria, let us recall the following
definitions from linear algebra, MTE-02.
The eigenvalues of the matrix A are obtained from the characteristic equation

det (A - λI) = 0,

which is an nth degree polynomial in λ. The roots λ_1, λ_2, ..., λ_n of this polynomial are the eigenvalues of A. The spectral radius ρ(A) is the largest eigenvalue in modulus, that is, ρ(A) = max_i |λ_i|. Therefore, we have
Theorem 1 : An iteration method of the form (5) is convergent for an arbitrary initial approximate vector x^(0) if and only if ρ(H) < 1.
Definition : The number ν = -log_10 ρ(H) is called the rate of convergence of an iteration method.
ii) ||A||_inf = max ||Ax||_inf / ||x||_inf, based on the maximum vector norm ||x||_inf = max_i |x_i|.

In (i) and (ii) above, the maximum is taken over all non-zero n-vectors. The most commonly used norm is the maximum norm ||A||_inf, as it is easier to calculate. These norms can be calculated directly from the entries of A:

||A||_1 = max_k Σ_i |a_ik|   (maximum absolute column sum)

or

||A||_inf = max_i Σ_k |a_ik|   (maximum absolute row sum).
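The matrix norms above, and the spectral radius needed for Theorem 1, are easy to compute numerically. A minimal sketch (assuming numpy; the function name is ours):

```python
import numpy as np

def norms_and_spectral_radius(A):
    """Return ||A||_1, ||A||_inf and the spectral radius rho(A)."""
    A = np.asarray(A, dtype=float)
    norm_1 = np.max(np.sum(np.abs(A), axis=0))    # maximum absolute column sum
    norm_inf = np.max(np.sum(np.abs(A), axis=1))  # maximum absolute row sum
    rho = np.max(np.abs(np.linalg.eigvals(A)))    # largest eigenvalue in modulus
    return norm_1, norm_inf, rho
```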
The norm of a matrix is a non-negative number which, in addition to the property

a) ||A|| > 0 for A != 0, and ||A|| = 0 only if A = 0,

satisfies

b) ||αA|| = |α| ||A||, for all scalars α,
c) ||A + B|| <= ||A|| + ||B||,

where A and B are square matrices of order n.
Theorem 2 : The iteration method of the form (5) for the solution of system (1) converges to the exact solution for any initial vector if ||H|| < 1.
Also note that

||H|| >= ρ(H).

This can be easily proved by considering the eigenvalue problem Ax = λx. Then

||Ax|| = ||λx|| = |λ| ||x||,

so that

|λ| ||x|| = ||Ax|| <= ||A|| ||x||,

i.e., |λ| <= ||A||, since ||x|| != 0. Since this result is true for all eigenvalues, we have ρ(A) <= ||A||. Note, however, that ||H|| < 1 is only a sufficient condition: if this condition is violated, it is not necessary that the iteration diverges.
There is another sufficient condition for convergence, as follows:
Theorem 3 : If the matrix A is strictly diagonally dominant, that is,

|a_ii| > Σ_{j=1, j != i}^{n} |a_ij|,   i = 1, 2, ..., n,

then the iteration method (5) converges for any initial approximation x^(0).
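The sufficient condition of Theorem 3 is also easy to test programmatically. The following sketch (our own helper, not from the unit) checks strict diagonal dominance row by row:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check that |a_ii| > sum over j != i of |a_ij| holds for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diagonal = np.diag(A)
    off_diagonal_sums = A.sum(axis=1) - diagonal
    return bool(np.all(diagonal > off_diagonal_sums))
```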
If no better initial approximation is known, we generally take x^(0) = 0.
We shall mostly use the criterion given in Theorem 1, which is both necessary and
sufficient.
For using the iteration method (5), we need the matrix H and the vector c, which depend on the matrix A and the vector b. The well-known iteration methods are based on the splitting of the matrix A in the form

A = D + L + U,

where D is the diagonal part of A, and L and U are respectively the strictly lower and strictly upper triangular parts of A.
Note that, A being a non-singular matrix, it is possible for us to make all the pivots non-zero. It is only when the matrix A is singular that even complete pivoting may not lead to all non-zero pivots.
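In code, the splitting A = D + L + U and the resulting Jacobi iteration matrix H = -D^(-1)(L + U) and vector c = D^(-1) b (see the summary in Sec. 7.5) can be formed as follows. This is a sketch; the function name is illustrative.

```python
import numpy as np

def jacobi_matrix_form(A, b):
    """Split A = D + L + U and return H = -D^(-1)(L + U) and c = D^(-1) b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(np.diag(A))          # diagonal part of A (assumes a_ii != 0)
    L = np.tril(A, k=-1)             # strictly lower triangular part
    U = np.triu(A, k=1)              # strictly upper triangular part
    H = -np.linalg.solve(D, L + U)   # -D^(-1)(L + U)
    c = np.linalg.solve(D, b)        # D^(-1) b
    return H, c
```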
We rewrite system (2) in the form (3) and define the Jacobi iteration method as

x_1^(k+1) = -(1/a_11) (a_12 x_2^(k) + a_13 x_3^(k) + ... + a_1n x_n^(k) - b_1)
x_2^(k+1) = -(1/a_22) (a_21 x_1^(k) + a_23 x_3^(k) + ... + a_2n x_n^(k) - b_2)
............................................................................
x_n^(k+1) = -(1/a_nn) (a_n1 x_1^(k) + a_n2 x_2^(k) + ... + a_{n,n-1} x_{n-1}^(k) - b_n)

that is,

x_i^(k+1) = -(1/a_ii) ( Σ_{j=1, j != i}^{n} a_ij x_j^(k) - b_i ),   i = 1, 2, ..., n;  k = 0, 1, ...
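The component form above translates directly into code. A minimal sketch of one Jacobi sweep (the function name is ours); repeated calls starting from x^(0) produce the iterates x^(1), x^(2), ... used in the examples that follow.

```python
import numpy as np

def jacobi_step(A, b, x_old):
    """One Jacobi sweep: every component is computed from the previous iterate only."""
    n = len(b)
    x_new = np.zeros(n)
    for i in range(n):
        s = sum(A[i, j] * x_old[j] for j in range(n) if j != i)
        x_new[i] = (b[i] - s) / A[i, i]   # assumes a_ii != 0
    return x_new
```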
Let us now solve a few examples for a better understanding of the method and its convergence.
Determine the rate of convergence of the method and the number of iterations needed to achieve the required accuracy.
Solution : The Jacobi method, when applied to the system of Eqns. (18), gives the iteration matrix

\[
H = \begin{bmatrix} 0 & \tfrac{1}{4} & -\tfrac{1}{4} \\ \tfrac{1}{2} & 0 & \tfrac{1}{8} \\ \tfrac{2}{5} & -\tfrac{1}{5} & 0 \end{bmatrix}.
\]
The eigenvalues of the matrix H are the roots of the characteristic equation

det (H - λI) = 0.

Now

\[
\det(H - \lambda I) = \begin{vmatrix} -\lambda & \tfrac{1}{4} & -\tfrac{1}{4} \\ \tfrac{1}{2} & -\lambda & \tfrac{1}{8} \\ \tfrac{2}{5} & -\tfrac{1}{5} & -\lambda \end{vmatrix} = 0,
\]

which simplifies to

λ^3 - 3/80 = 0.

All three eigenvalues of the matrix H are therefore equal in modulus, with

|λ| = (3/80)^(1/3) = 0.3347.

The spectral radius is ρ(H) = 0.3347.

We obtain the rate of convergence as

ν = -log_10(0.3347) = 0.4753.
The number of iterations k needed to obtain an accuracy of 10^(-m) is given approximately by k ≈ m/ν.
The Jacobi method, when applied to the system of Eqns. (18), becomes

x_1^(k+1) = (1/4) [7 + x_2^(k) - x_3^(k)]
x_2^(k+1) = (1/8) [21 + 4x_1^(k) + x_3^(k)]                              (21)
x_3^(k+1) = (1/5) [15 + 2x_1^(k) - x_2^(k)]

Starting with the initial approximation x^(0) = [1  2  2]^T, we get from Eqn. (21)

x^(1) = [1.75    3.375   3.0]^T
x^(2) = [1.8437  3.875   3.025]^T
x^(3) = [1.9625  3.925   2.9625]^T
x^(4) = [1.9906  3.9766  3.0000]^T
x^(5) = [1.9941  3.9953  3.0009]^T
Thus the condition in Theorem 1 is violated, and the iteration method does not converge.
We now perform a few iterations and see what actually happens. Taking x^(0) = 0 and using the Jacobi method, we obtain successive iterates whose entries grow rapidly in magnitude, which shows that the iterations are diverging fast. You may also try to obtain the solution with other initial approximations.
E1) Perform five iterations of the Jacobi method for solving the system of equations given in Example 4 with x^(0) = [1  1  1]^T.
Let us now consider an example to show that the convergence criterion given in Theorem 3 is only a sufficient condition. That is, there are systems of equations which are not diagonally dominant, but for which the Jacobi iteration method converges.
Example 5 : Perform iterations of the Jacobi method for solving the system of
equations
with x^(0) = [0  1  1]^T. What can you say about the solution obtained if the exact solution is x = [0  1  2]^T?
Solution : The Jacobi method, when applied to the given system of equations, gives updates of the form

x_1^(k+1) = 3 - x_2^(k) - x_3^(k),

and similarly for x_2^(k+1) and x_3^(k+1).
You may notice that the coefficient matrix is not diagonally dominant, but the iterations reach the exact solution after only two iterations.
And now a few exercises for you.
E2) Perform four iterations of the Jacobi method for solving the system of equations
with x^(0) = 0. The exact solution is x = (1  1  ...)^T.
E4) Perform four iterations of the Jacobi method for solving the system of equations
You may notice here that in the first equation of system (24), we substitute the initial approximation (x_2^(0), x_3^(0), ..., x_n^(0)) on the right-hand side. In the second equation, we substitute (x_1^(1), x_3^(0), ..., x_n^(0)) on the right-hand side. In the third equation, we substitute (x_1^(1), x_2^(1), x_4^(0), ..., x_n^(0)) on the right-hand side. We continue in this manner until all the components have been improved. At the end of this first iteration, we will have the improved vector (x_1^(1), x_2^(1), ..., x_n^(1)). The entire process is then repeated. In other words, the method uses an improved component as soon as it becomes available. It is for this reason that the method is also called the method of successive displacements.
We can also write the system of Eqns. (24) as follows:

a_11 x_1^(k+1) = -a_12 x_2^(k) - a_13 x_3^(k) - ... - a_1n x_n^(k) + b_1
a_22 x_2^(k+1) = -a_21 x_1^(k+1) - a_23 x_3^(k) - ... - a_2n x_n^(k) + b_2
..........................................................................
a_nn x_n^(k+1) = -a_n1 x_1^(k+1) - a_n2 x_2^(k+1) - ... - a_{n,n-1} x_{n-1}^(k+1) + b_n.

In matrix form, this is (D + L) x^(k+1) = -U x^(k) + b, i.e.,

x^(k+1) = -(D + L)^(-1) U x^(k) + (D + L)^(-1) b,

where D is the diagonal matrix of the diagonal elements of A, and L and U are respectively the lower and upper triangular matrices with zeros along the diagonal and are of the form

\[
L = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ a_{21} & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix},
\qquad
U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}.
\]
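Because each improved component is used as soon as it is available, a Gauss-Seidel sweep can update the solution vector in place. A sketch under the same assumptions as the Jacobi sketch above:

```python
import numpy as np

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: improved components are used immediately."""
    x = np.array(x, dtype=float)   # work on a copy of the current iterate
    n = len(b)
    for i in range(n):
        # components j < i already hold new values, components j > i still hold old ones
        s = sum(A[i, j] * x[j] for j in range(n) if j != i)
        x[i] = (b[i] - s) / A[i, i]
    return x
```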
Example 6 : Perform four iterations (rounded to four decimal places) using the
Gauss-Seidel method for solving the system of equations
After four iterations we obtain a good approximation to the exact solution x = (-1  -4  -3)^T, with maximum absolute error 0.0034. Comparing with the results obtained in Example 1, we find that the values of x_i, i = 1, 2, 3, obtained here are better approximations to the exact solution than those obtained in Example 1.
The eigenvalues of the matrix H are the roots of the characteristic equation
\[
\det(H - \lambda I) = \begin{vmatrix} -\lambda & \tfrac{1}{4} & -\tfrac{1}{4} \\ 0 & \tfrac{1}{8} - \lambda & 0 \\ 0 & \tfrac{3}{40} & -\left(\tfrac{1}{10} + \lambda\right) \end{vmatrix} = 0.
\]

Expanding the determinant, we have

λ(80λ^2 - 2λ - 1) = 0,

which gives

λ = 0, 0.125, -0.1.

Therefore, we have

ρ(H) = 0.125.

The rate of convergence of the method is given by

ν = -log_10(0.125) = 0.9031.
The number of iterations needed for obtaining the desired accuracy is given by

k = 2/ν = 2/0.9031 ≈ 2.2,

that is, about 3 iterations.
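The arithmetic above is easy to verify numerically. In the sketch below, the coefficient matrix is our reconstruction of the system of Eqns. (29), read off from the Gauss-Seidel formulas in Eqn. (30) below, and the factor 2 in k = 2/ν corresponds to an accuracy of 10^(-2).

```python
import numpy as np

# Coefficient matrix of Eqns. (29), as reconstructed from the update formulas (30).
A = np.array([[ 4.0, -1.0,  1.0],
              [ 4.0, -8.0,  1.0],
              [-2.0,  1.0,  5.0]])

D_plus_L = np.tril(A)                        # D + L
U = np.triu(A, k=1)                          # strictly upper triangular part
H = -np.linalg.solve(D_plus_L, U)            # Gauss-Seidel iteration matrix -(D+L)^(-1) U

rho = np.max(np.abs(np.linalg.eigvals(H)))   # approximately 0.125
nu = -np.log10(rho)                          # approximately 0.9031
k = 2.0 / nu                                 # approximately 2.2, i.e. about 3 iterations
print(rho, nu, k)
```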
The Gauss-Seidel method, when applied to the system of Eqns. (29), becomes

x_1^(k+1) = (1/4) [7 + x_2^(k) - x_3^(k)]
x_2^(k+1) = (1/8) [21 + 4x_1^(k+1) + x_3^(k)]                            (30)
x_3^(k+1) = (1/5) [15 + 2x_1^(k+1) - x_2^(k+1)]
The successive iterations are obtained as
x^(1) = [1.75    3.75    2.95]^T
x^(2) = [1.95    3.9688  2.9863]^T
x^(3) = [1.9956  3.9961  2.9990]^T
which is an approximation to the exact solution after three iterations. Comparing with the results obtained in Example 2, we conclude that the Gauss-Seidel method converges faster than the Jacobi method.
Example 8 : Use the Gauss-Seidel method for solving the following system of
equations.
with x^(0) = [0.5  0.5  0.5  0.5]^T. Compare the results with those obtained in Example 3 after four iterations. The exact solution is x = [1  1  1  1]^T.
Solution : The Gauss-Seidel method, when applied to the system of Eqns. (31), becomes

x_1^(k+1) = (1/2) [1 + x_2^(k)]
x_2^(k+1) = (1/2) [x_1^(k+1) + x_3^(k)]
x_3^(k+1) = (1/2) [x_2^(k+1) + x_4^(k)]
x_4^(k+1) = (1/2) [1 + x_3^(k+1)]
In Example 3, the result obtained after four iterations by the Jacobi method was

x^(4) = [0.8438  0.75  0.75  0.8438]^T.
Remark : The matrix formulations of the Jacobi and Gauss-Seidel methods are used whenever we want to check whether the iteration converges or to find the rate of convergence. If we wish to iterate and find the solution of a system, we use the equation form of the methods.
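As the remark suggests, the matrix formulations are convenient for convergence checks. The following sketch (helper name ours) computes the spectral radii of the Jacobi and Gauss-Seidel iteration matrices built from a coefficient matrix A; for the 3 x 3 system used in the examples above it gives approximately 0.3347 and 0.125, consistent with the values computed there.

```python
import numpy as np

def spectral_radii(A):
    """Spectral radii of the Jacobi and Gauss-Seidel iteration matrices of A."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    H_jacobi = -np.linalg.solve(D, L + U)        # -D^(-1)(L + U)
    H_gauss_seidel = -np.linalg.solve(D + L, U)  # -(D + L)^(-1) U
    radius = lambda M: np.max(np.abs(np.linalg.eigvals(M)))
    return radius(H_jacobi), radius(H_gauss_seidel)
```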
You may now attempt the following exercises.
E7) Perform four iterations of the Gauss-Seidel method for solving the system of
equations given in E2).
E8) Perform four iterations of the Gauss-Seidel method for solving the system of
equations given in E3).
E9) Perform four iterations of the Gauss-Seidel method for solving the system of
equations given in E4).
E10) Set up the matrix formulation of the Gauss-Seidel method for solving the system
of equations given in E5). Perform four iterations of the method.
E11) The Gauss-Seidel method is used to solve the system of equations given in E6). Determine the rate of convergence and the number of iterations needed to make max_i |e_i^(k)| <= 10^(-2). Perform four iterations and compare the results with the exact solution.
We now end this unit by giving a summary of what we have covered in it.
7.5 SUMMARY
In this unit, we have covered the following:
1) Iterative methods for solving the linear system of equations

Ax = b (see Eqn. (1)),

where A is an n x n non-singular matrix. Iterative methods are generally used when the system is large and the matrix A is sparse. The process is started using an initial approximation and leads to successively better approximations.
2) The general iterative method for solving the linear system of Eqn. (1) can be written in the form

x^(k+1) = H x^(k) + c,   k = 0, 1, ... (see Eqn. (5)),

where x^(k) and x^(k+1) are the approximations to the solution vector x at the kth and the (k+1)th iterations respectively. H is the iteration matrix, which depends on A and is generally a constant matrix, and c is a column vector which depends on both A and b.
3) The iterative method of the form given in 2) above converges for any initial vector if ||H|| < 1, which is a sufficient condition for convergence. The necessary and sufficient condition for convergence is ρ(H) < 1, where ρ(H) is the spectral radius of H.
4) In the Jacobi iteration method, or the method of simultaneous displacements,

H = -D^(-1) (L + U);   c = D^(-1) b,

where D is a diagonal matrix, and L and U are respectively the lower and upper triangular matrices with zero diagonal elements.
5) In the Gauss-Seidel iteration method, or the method of successive displacements,

H = -(D + L)^(-1) U and c = (D + L)^(-1) b.
6) If the matrix A in Eqn. (1) is strictly diagonally dominant, then both the Jacobi and Gauss-Seidel methods converge. In general, the Gauss-Seidel method converges faster than the Jacobi method.