
UNIT 7 ITERATIVE METHODS

7.1 Introduction
7.2 The General Iteration Method
7.3 The Jacobi Iteration Method
7.4 The Gauss-Seidel Iteration Method
7.5 Summary
7.6 Solutions/Answers

7.1 INTRODUCTION
In the previous two units, you have studied direct methods for solving the linear system of equations Ax = b, A being an n × n non-singular matrix. Direct methods provide the exact solution in a finite number of steps, provided exact arithmetic is used and there is no round-off error. Direct methods are generally used when the matrix A is dense or filled, that is, when there are few zero elements, and the order of the matrix is not very large, say n < 50.
Iterative methods, on the other hand, start with an initial approximation and, by applying a suitably chosen algorithm, lead to successively better approximations. Even if the process converges, it gives only an approximate solution. These methods are generally used when the matrix A is sparse and the order of the matrix A is very large, say n > 50. Sparse matrices have very few non-zero elements. In most cases these non-zero elements lie on or near the main diagonal, giving rise to tri-diagonal, five-diagonal or banded systems. It may be noted that there are no fixed rules to decide when to use direct methods and when to use iterative methods. However, when the coefficient matrix is sparse or large, iterative methods are ideally suited for finding the solution, since they can take advantage of the sparse structure of the matrix involved.
In this unit we shall discuss two iterative methods, namely, the Jacobi iteration method and the Gauss-Seidel iteration method.
Objectives
After studying this unit, you should be able to:
• obtain the solution of a system of linear equations, Ax = b, when the matrix A is large or sparse, by using an iterative method, viz., the Jacobi method or the Gauss-Seidel method;
• tell whether these iterative methods converge or not;
• obtain the rate of convergence and the approximate number of iterations needed for the required accuracy of these iterative methods.

7.2 THE GENERAL ITERATION METHOD


In iteration methods, as we have already mentioned, we start with some initial approximate solution vector x^(0) and generate a sequence of approximations {x^(k)} which converges to the exact solution vector x as k → ∞. If the method is convergent, each iteration produces a better approximation to the exact solution. We repeat the iterations till the required accuracy is obtained. Therefore, in an iterative method the amount of computation depends on the desired accuracy, whereas in direct methods the amount of computation is fixed. The number of iterations needed to obtain the desired accuracy also depends on the initial approximation: the closer the initial approximation is to the exact solution, the faster the convergence will be.
Consider the system of equations
Ax = b      (1)
Writing the system in expanded form, we get
a11 x1 + a12 x2 + ...... + a1n xn = b1
a21 x1 + a22 x2 + ...... + a2n xn = b2
.................................................
an1 x1 + an2 x2 + ...... + ann xn = bn      (2)
We assume that the diagonal coefficients a_ii ≠ 0, i = 1, ......, n. If some a_ii = 0, then we rearrange the equations so that this condition holds. We then rewrite system (2) as
x1 = (1/a11) (b1 − a12 x2 − a13 x3 − ...... − a1n xn)
x2 = (1/a22) (b2 − a21 x1 − a23 x3 − ...... − a2n xn)
.......................................................................
xn = (1/ann) (bn − an1 x1 − an2 x2 − ...... − a_n,n−1 x_n−1)      (3)
In matrix form, system (3) can be written as
x = Hx + c,
where
H = [    0        −a12/a11    ......    −a1n/a11 ]
    [ −a21/a22       0        ......    −a2n/a22 ]
    [ .......................................... ]
    [ −an1/ann    −an2/ann    ......        0    ]
and c = [ b1/a11   b2/a22   ......   bn/ann ]^T.
To solve system (3), we make an initial guess x^(0) of the solution vector and substitute it into the right-hand side of system (3) to obtain a new approximation. We continue in this manner until the successive iterations x^(k) have converged to the required number of significant figures.
In general, we can write the iteration method for solving the linear system of Eqns. (1) in the form
x^(k+1) = H x^(k) + c,   k = 0, 1, ......      (5)

When the method (5) is convergent, then
lim x^(k) = lim x^(k+1) = x   as k → ∞,
and we obtain from Eqn. (5)
x = Hx + c      (6)
If we define the error vector at the kth iteration as
ε^(k) = x^(k) − x      (7)
then, subtracting Eqn. (6) from Eqn. (5), we obtain
ε^(k+1) = H ε^(k) = H² ε^(k−1) = ...... = H^(k+1) ε^(0).
Thus the method converges if and only if
ε^(k) → 0 as k → ∞.
Before we discuss the above convergence criterion, let us recall the following definitions from linear algebra (MTE-02).
A number λ is called an eigenvalue, or characteristic value, of the matrix A if there exists a non-zero vector x such that Ax = λx.
The eigenvalues of the matrix A are obtained from the characteristic equation
det (A − λI) = 0,
which is an nth degree polynomial in λ. The roots λ1, λ2, ......, λn of this polynomial are the eigenvalues of A. The spectral radius of A is then defined as
ρ(A) = max |λi|, the maximum being taken over i = 1, ......, n.

We now state a theorem on the convergence of iterative methods.

Theorem 1 : An iteration method of the form (5) is convergent for an arbitrary initial approximate vector x^(0) if and only if ρ(H) < 1.

We define the rate of convergence as follows:

Definition : The number v = −log10 ρ(H) is called the rate of convergence of an iteration method.

Obviously, the smaller the value of ρ(H), the larger is the value of v.

Also, the number of iterations k that will be needed to make
max_i |ε_i^(k)| ≤ 10^−m
depends on v; roughly, k ≈ m/v. For a method having a higher rate of convergence, fewer iterations will be needed for a fixed accuracy and a fixed initial approximation.
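As an illustration of this estimate, here is a minimal Python sketch (not part of the original unit) that computes v and the corresponding iteration count for a given spectral radius; the function name and the sample value of ρ(H) are only illustrative.

```python
import math

def iterations_needed(rho, m):
    """Return the rate of convergence v = -log10(rho) and the
    approximate number of iterations k ~ m/v needed to make the
    error smaller than 10**(-m)."""
    v = -math.log10(rho)
    return v, math.ceil(m / v)

# Illustrative values: rho(H) = 0.3347 (the Jacobi spectral radius
# found in Example 2 below) and an accuracy of 10**-2.
v, k = iterations_needed(0.3347, 2)
print(v, k)   # about 0.4753 and 5 iterations
```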
There is another convergence criterion for iterative methods which is based on the norm of a matrix.
The norm of a square matrix A of order n can be defined in the same way as we define the norm of an n-vector, by comparing the size of Ax with the size of x (an n-vector), as follows:
i) ||A||_2 = max ||Ax||_2 / ||x||_2, based on the Euclidean vector norm ||x||_2 = √(|x1|² + |x2|² + ...... + |xn|²), and
ii) ||A||_∞ = max ||Ax||_∞ / ||x||_∞, based on the maximum vector norm ||x||_∞ = max_i |xi|.
In (i) and (ii) above, the maximum is taken over all non-zero n-vectors. The most commonly used norms in practice are the column and row norms, as they are the easiest to calculate:
||A||_1 = max_k Σ_i |a_ik|   (maximum absolute column sum)
or
||A||_∞ = max_i Σ_k |a_ik|   (maximum absolute row sum).
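For concreteness, the following short Python sketch evaluates these two norms with the column-sum and row-sum formulas and checks them against numpy's built-in induced norms; the 3 × 3 matrix used here is an arbitrary illustration, not one of the systems of this unit.

```python
import numpy as np

def column_norm(A):
    """||A||_1 : maximum absolute column sum."""
    return np.abs(A).sum(axis=0).max()

def row_norm(A):
    """||A||_inf : maximum absolute row sum."""
    return np.abs(A).sum(axis=1).max()

A = np.array([[ 5.0, -2.0,  3.0],
              [-3.0,  9.0,  1.0],
              [ 2.0, -1.0, -7.0]])

print(column_norm(A), np.linalg.norm(A, 1))       # 12.0  12.0
print(row_norm(A),    np.linalg.norm(A, np.inf))  # 13.0  13.0
```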
The norm of a matrix is a non-negative number which, in addition to the property
||AB|| ≤ ||A|| ||B||,
satisfies all the properties of a vector norm, viz.,
a) ||A|| ≥ 0, and ||A|| = 0 iff A = 0,
b) ||αA|| = |α| ||A||, for all numbers α,
c) ||A + B|| ≤ ||A|| + ||B||,
where A and B are square matrices of order n.

Theorem 2 : The iteration method of the form (5) for the solution of system (1) converges to the exact solution for any initial vector if ||H|| < 1.
Also note that
||H|| ≥ ρ(H).
This can be easily proved by considering the eigenvalue problem Ax = λx. Then
||Ax|| = ||λx|| = |λ| ||x||,
or
|λ| ||x|| = ||Ax|| ≤ ||A|| ||x||,
i.e., |λ| ≤ ||A||, since ||x|| ≠ 0.
Since this result is true for all eigenvalues, we have ρ(A) ≤ ||A||.
The criterion given in Theorem 2 is only a sufficient condition; it is not necessary. Therefore, for a system of equations for which ||H||_1 ≥ 1 or ||H||_∞ ≥ 1, it does not necessarily follow that the iteration diverges.
There is another sufficient condition for convergence, as follows:

Theorem 3 : If the matrix A is strictly diagonally dominant, that is,
|a_ii| > Σ_{j≠i} |a_ij|,   i = 1, 2, ......, n,
then the iteration method (5) converges for any initial approximation x^(0).

If no better initial approximation is known, we generally take x^(0) = 0.
We shall mostly use the criterion given in Theorem 1, which is both necessary and sufficient.
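The three criteria above are easy to test numerically. The sketch below is only an illustration (the function names are assumptions, not from this unit): it checks the spectral radius of a given iteration matrix H, the norm condition of Theorem 2, and strict diagonal dominance of a coefficient matrix A.

```python
import numpy as np

def spectral_radius(H):
    """rho(H): Theorem 1 -- the iteration converges iff rho(H) < 1."""
    return max(abs(np.linalg.eigvals(H)))

def norm_condition(H):
    """Theorem 2 (sufficient only): some norm of H is less than 1."""
    return min(np.linalg.norm(H, 1), np.linalg.norm(H, np.inf)) < 1.0

def strictly_diagonally_dominant(A):
    """Theorem 3 (sufficient only): |a_ii| > sum of |a_ij|, j != i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    return bool(np.all(diag > A.sum(axis=1) - diag))

# A sample strictly diagonally dominant matrix.
A = np.array([[ 4.0, -1.0, 1.0],
              [ 4.0, -8.0, 1.0],
              [-2.0,  1.0, 5.0]])
print(strictly_diagonally_dominant(A))   # True
```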

For using the iteration method (5), we need the matrix H and the vector c, which depend on the matrix A and the vector b. The well-known iteration methods are based on the splitting of the matrix A in the form
A = D + L + U,
where D is the diagonal part of A, and L and U are respectively the strictly lower and strictly upper triangular parts of A. Using this splitting, we now discuss two iteration methods of the form (5).

7.3 THE JACOBI ITERATION METHOD

We write the system of Eqn. (1) in the form (2), viz.,


a11 x1 + a12 x2 + ...... + a1n xn = b1
a21 x1 + a22 x2 + ...... + a2n xn = b2
.................................................
an1 x1 + an2 x2 + ...... + ann xn = bn


l t c ~ t i v Methods
t

Note that, A being a non-singular matrix, it is possible for us to make all the pivots non-zero. It is only when the matrix A is singular that even complete pivoting may not lead to all non-zero pivots.
We rewrite system (2) in the form (3) and define the Jacobi iteration method as
x1^(k+1) = − (1/a11) (a12 x2^(k) + a13 x3^(k) + ...... + a1n xn^(k) − b1)
x2^(k+1) = − (1/a22) (a21 x1^(k) + a23 x3^(k) + ...... + a2n xn^(k) − b2)
.......................................................................................
xn^(k+1) = − (1/ann) (an1 x1^(k) + an2 x2^(k) + ...... + a_n,n−1 x_n−1^(k) − bn)
that is,
x_i^(k+1) = (1/a_ii) (b_i − Σ_{j≠i} a_ij x_j^(k)),   i = 1, ......, n;   k = 0, 1, ......      (13)
The method (13) can be put in the matrix form as
x^(k+1) = − D^−1 (L + U) x^(k) + D^−1 b,   k = 0, 1, ......      (14)
The method (14) is of the form (5), where
H = − D^−1 (L + U) and c = D^−1 b.

In the Jacobi method, we compute all the components of x^(k+1) from the components of x^(k) and only then replace the entire vector x^(k) on the right side of Eqn. (13) by x^(k+1) to obtain the solution at the next iteration. In other words, all the components of the approximation are changed simultaneously, using only the values from the previous iteration; for this reason the Jacobi method is also called the method of simultaneous displacements.
Let us now solve a few examples for a better understanding of the method and its convergence.
Example 2 : The Jacobi method is used to solve the system of equations
4x1 − x2 + x3 = 7
4x1 − 8x2 + x3 = −21
−2x1 + x2 + 5x3 = 15      (18)
Determine the rate of convergence of the method and the number of iterations needed to make
max_i |ε_i^(k)| ≤ 10^−2.
Perform this number of iterations, starting with the initial approximation x^(0) = [1 2 2]^T, and compare the result with the exact solution [2 4 3]^T.

Solution : The Jacobi method, when applied to the system of Eqns. (18), gives the iteration matrix

H = [  0      1/4   −1/4 ]
    [ 1/2      0     1/8 ]
    [ 2/5    −1/5     0  ]

The eigenvalues of the matrix H are the roots of the characteristic equation det (H − λI) = 0. Now

det (H − λI) = | −λ     1/4   −1/4 |
               | 1/2    −λ     1/8 | = 3/80 − λ³ = 0,
               | 2/5   −1/5    −λ  |

that is, λ³ = 3/80. All three eigenvalues of the matrix H therefore have the same modulus,
|λ| = (3/80)^(1/3) = 0.3347.
The spectral radius is
ρ(H) = 0.3347.
We obtain the rate of convergence as
v = −log10 (0.3347) = 0.4753.
The number of iterations needed for the required accuracy is given by
k = 2/v = 2/0.4753 ≈ 4.2,
so about five iterations are needed.
The Jacobi method, when applied to the system of Eqns. (18), becomes
x1^(k+1) = (1/4) [7 + x2^(k) − x3^(k)]
x2^(k+1) = (1/8) [21 + 4 x1^(k) + x3^(k)]
x3^(k+1) = (1/5) [15 + 2 x1^(k) − x2^(k)],   k = 0, 1, ......      (21)
Starting with the initial approximation x^(0) = [1 2 2]^T, we get from Eqn. (21)
x^(1) = [1.75  3.375  3.0]^T
x^(2) = [1.8437  3.875  3.025]^T
x^(3) = [1.9625  3.925  2.9625]^T
x^(4) = [1.9906  3.9766  3.0000]^T
x^(5) = [1.9941  3.9953  3.0009]^T
which, after five iterations, is close to the exact solution x = [2 4 3]^T.
Thus the condition in Theorem 1 is violated, and the iteration method does not converge.

We now perform a few iterations and see what actually happens. Taking x^(0) = 0 and using the Jacobi method for this system, we obtain iterates whose components grow rapidly in magnitude, which shows that the iterations are diverging fast. You may also try to obtain the solution with other initial approximations.

E1) Perform five iterations of the Jacobi method for solving the system of equations given in Example 4 with x^(0) = [1 1 1]^T.

Let us now consider an example to show that the convergence criterion given in Theorem 3 is only a sufficient condition. That is, there are systems of equations which are not diagonally dominant but for which the Jacobi iteration method converges.

Example 5 : Perform iterations of the Jacobi method for solving the system of equations

with x^(0) = [0 1 1]^T. What can you say about the solution obtained if the exact solution is x = [0 1 2]^T?
Solution : The Jacobi method, when applied to the given system of equations, becomes
x1^(k+1) = [3 − x2^(k) − x3^(k)]
x2^(k+1) = ......
x3^(k+1) = [−1 + 3 x1^(k)],   k = 0, 1, ......


Using x^(0) = [0 1 1]^T, we obtain the successive iterates. You may notice that the coefficient matrix is not diagonally dominant, but the iterations give the exact solution after only two iterations.
And now a few exercises for you.

E2) Perform four iterations of the Jacobi method for solving the system of equations

with x^(0) = 0. The exact solution is x = (1 −1 −1)^T.


E3) Perform four iterations of the Jacobi method for solving the system of equations

with x^(0) = 0. The exact solution is x = (1 1 1)^T.
E4) Perform four iterations of the Jacobi method for solving the system of equations
7.4 THE GAUSS-SEIDEL ITERATION METHOD

You may notice here that in the first equation of system (24), we substitute the initial approximation (x1^(0), x2^(0), ......, xn^(0)) on the right hand side. In the second equation, we substitute (x1^(1), x3^(0), ......, xn^(0)) on the right hand side. In the third equation, we substitute (x1^(1), x2^(1), x4^(0), ......, xn^(0)) on the right hand side. We continue in this manner until all the components have been improved. At the end of this first iteration, we will have an improved vector (x1^(1), x2^(1), ......, xn^(1)). The entire process is then repeated. In other words, the method uses an improved component as soon as it becomes available. It is for this reason that the method is also called the method of successive displacements.

We can also write the system of Eqns. (24) as follows:
a11 x1^(k+1) = − a12 x2^(k) − a13 x3^(k) − ...... − a1n xn^(k) + b1
a21 x1^(k+1) + a22 x2^(k+1) = − a23 x3^(k) − ...... − a2n xn^(k) + b2
.................................................................................
an1 x1^(k+1) + an2 x2^(k+1) + ...... + ann xn^(k+1) = bn
In matrix form, this system can be written as
(D + L) x^(k+1) = − U x^(k) + b      (25)
where D is the diagonal matrix consisting of the diagonal elements of A, and L and U are respectively the lower and upper triangular matrices with zeros along the diagonal, that is, the strictly lower and strictly upper triangular parts of A.

From Eqn. (25), we obtain
x^(k+1) = − (D + L)^−1 U x^(k) + (D + L)^−1 b,
which is of the form (5) with
H = − (D + L)^−1 U and c = (D + L)^−1 b.
It may again be noted here that if A is diagonally dominant, then the iteration always converges.
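A minimal sketch of a Gauss-Seidel sweep in code (again assuming only numpy; the name and interface are illustrative):

```python
import numpy as np

def gauss_seidel(A, b, x0, iterations):
    """Gauss-Seidel iteration: each component of x is overwritten as
    soon as it is computed, so the remaining equations in the same
    sweep already use the improved values (successive displacements)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]   # x[j], j < i, is already new
    return x

# Matrix form: H = -inv(D + L) @ U,  c = inv(D + L) @ b.
```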
The Gauss-Seidel method will generally converge if the Jacobi method converges, and will converge at a faster rate. For an important class of matrices (which includes, for example, the tridiagonal systems that arise in many applications), it can be shown that
ρ(Gauss-Seidel iteration matrix) = [ρ(Jacobi iteration matrix)]².
Hence the rate of convergence of the Gauss-Seidel method is twice the rate of convergence of the Jacobi method. This relation is often a useful guide even for matrices outside this class. We shall illustrate this fact through examples.
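This relation is also easy to check numerically. The sketch below (an illustration, with an assumed tridiagonal test matrix) builds both iteration matrices from the splitting A = D + L + U and compares their spectral radii.

```python
import numpy as np

def iteration_radii(A):
    """Spectral radii of the Jacobi and Gauss-Seidel iteration matrices."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    rho = lambda M: max(abs(np.linalg.eigvals(M)))
    return rho(-np.linalg.inv(D) @ (L + U)), rho(-np.linalg.inv(D + L) @ U)

A = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  2.0]])
rho_j, rho_gs = iteration_radii(A)
print(rho_j, rho_gs, rho_j**2)   # rho_gs equals rho_j**2 (about 0.654)
```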

Example 6 : Perform four iterations (rounded to four decimal places), using the Gauss-Seidel method, for solving the system of equations

with x^(0) = 0. The exact solution is x = (−1 −4 −3)^T.

Solution : The Gauss-Seidel method is written down for the given system in the usual way, with each equation solved for its diagonal unknown and the most recent values used on the right-hand side. Taking x^(0) = 0, we obtain four iterations,
the last of which is a good approximation to the exact solution x = (−1 −4 −3)^T, with maximum absolute error 0.0034. Comparing with the results obtained in Example 1, we find that the values of x_i, i = 1, 2, 3, obtained here are better approximations to the exact solution than those obtained in Example 1.
Example 7 : The Gauss-Seidel method is used to solve the system of equations of Example 2,
4x1 − x2 + x3 = 7
4x1 − 8x2 + x3 = −21
−2x1 + x2 + 5x3 = 15      (29)
Determine the rate of convergence of the method and the number of iterations needed to make max_i |ε_i^(k)| ≤ 10^−2. Perform the iterations, starting with the initial approximation x^(0) = [1 2 2]^T, and compare the results with those obtained by the Jacobi method in Example 2.

Solution : The eigenvalues of the Gauss-Seidel iteration matrix H are the roots of the characteristic equation

det (H − λI) = | −λ      1/4        −1/4      |
               |  0      1/8 − λ     0        | = 0
               |  0      3/40       −1/10 − λ |

We have
λ (80λ² − 2λ − 1) = 0,
which gives
λ = 0, 0.125, −0.1.
Therefore, we have
ρ(H) = 0.125.
The rate of convergence of the method is given by
v = −log10 (0.125) = 0.9031.
The number of iterations needed for obtaining the desired accuracy is given by
k = 2/v = 2/0.9031 ≈ 2.2,
so about three iterations are needed.
The Gauss-Seidel method, when applied to the system of Eqns. (29), becomes
x1^(k+1) = (1/4) [7 + x2^(k) − x3^(k)]
x2^(k+1) = (1/8) [21 + 4 x1^(k+1) + x3^(k)]      (30)
x3^(k+1) = (1/5) [15 + 2 x1^(k+1) − x2^(k+1)],   k = 0, 1, ......
Starting with the initial approximation x^(0) = [1 2 2]^T, the successive iterations are obtained as
x^(1) = [1.75  3.75  2.95]^T
x^(2) = [1.95  3.9688  2.9863]^T
x^(3) = [1.9956  3.9961  2.9990]^T
which is a good approximation to the exact solution after three iterations. Comparing with the results obtained in Example 2, we conclude that the Gauss-Seidel method converges faster than the Jacobi method.
Example 8 : Use the Gauss-Seidel method for solving the following system of equations:
2x1 − x2 = 1
−x1 + 2x2 − x3 = 0
−x2 + 2x3 − x4 = 0
−x3 + 2x4 = 1      (31)
with x^(0) = [0.5 0.5 0.5 0.5]^T. Compare the results with those obtained in Example 3 after four iterations. The exact solution is x = [1 1 1 1]^T.
Solution : The Gauss-Seidel method, when applied to the system of Eqns. (31), becomes
x1^(k+1) = (1/2) [1 + x2^(k)]
x2^(k+1) = (1/2) [x1^(k+1) + x3^(k)]
x3^(k+1) = (1/2) [x2^(k+1) + x4^(k)]
x4^(k+1) = (1/2) [1 + x3^(k+1)],   k = 0, 1, ......


Starting with the initial approximation x^(0) = [0.5 0.5 0.5 0.5]^T, we obtain the following iterates:
x^(1) = [0.75  0.625  0.5625  0.7813]^T
x^(2) = [0.8125  0.6875  0.7344  0.8672]^T
x^(3) = [0.8438  0.7891  0.8282  0.9141]^T
x^(4) = [0.8946  0.8614  0.8878  0.9439]^T

In Example 3, the result obtained after four iterations by the Jacobi method was
x^(4) = [0.8438  0.75  0.75  0.8438]^T.
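The iterates listed above can be reproduced with a few lines of Python (not part of the original unit), following the sweeps written out in the solution:

```python
x1 = x2 = x3 = x4 = 0.5           # x^(0) = [0.5 0.5 0.5 0.5]^T
for k in range(4):
    x1 = 0.5 * (1 + x2)           # uses the old x2
    x2 = 0.5 * (x1 + x3)          # already uses the new x1
    x3 = 0.5 * (x2 + x4)          # already uses the new x2
    x4 = 0.5 * (1 + x3)           # already uses the new x3
    print(k + 1, x1, x2, x3, x4)
# First sweep: 0.75, 0.625, 0.5625, 0.78125 -- i.e. x^(1) above,
# which lists these values rounded to four decimals.
```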
Remark : The matrix formulations of the Jacobi and Gauss-Seidel methods are used whenever we want to check whether the iterations converge or to find the rate of convergence. If we wish to iterate and find solutions of the systems, we use the equation form of the methods.

You may now attempt the following exercises.

E7) Perform four iterations of the Gauss-Seidel method for solving the system of equations given in E2).
E8) Perform four iterations of the Gauss-Seidel method for solving the system of
equations given in E3).
E9) Perform four iterations of the Gauss-Seidel method for solving the system of
equations given in E4).
E10) Set up the matrix formulation of the Gauss-Seidel method for solving the system
of equations given in E5). Perform four iterations of the method.
E11) The Gauss-Seidel method is used to solve the system of equations given in E6). Determine the rate of convergence and the number of iterations needed to make max_i |ε_i^(k)| ≤ 10^−2. Perform four iterations and compare the results with the exact solution.

We now end this unit by giving a summary of what we have covered in it.

7.5 SUMMARY
In this unit, we have covered the following:
1) Iterative methods for solving the linear system of equations
Ax = b (see Eqn. (1))
where A is an n × n non-singular matrix. Iterative methods are generally used when the system is large and the matrix A is sparse. The process is started using an initial approximation and leads to successively better approximations.
2) The general iterative method for solving the linear system of Eqn. (1) can be written in the form
x^(k+1) = H x^(k) + c,   k = 0, 1, ...... (see Eqn. (5))
where x^(k) and x^(k+1) are the approximations to the solution vector x at the kth and the (k+1)th iterations respectively, H is the iteration matrix, which depends on A and is generally a constant matrix, and c is a column vector which depends on both A and b.
3) The iterative method of the form given in 2) above converges for any initial vector if ||H|| < 1, which is a sufficient condition for convergence. The necessary and sufficient condition for convergence is ρ(H) < 1, where ρ(H) is the spectral radius of H.
4) In the Jacobi iteration method, or the method of simultaneous displacements,
H = − D^−1 (L + U); c = D^−1 b,
where D is a diagonal matrix and L and U are respectively the lower and upper triangular matrices with zero diagonal elements.
5) In the Gauss-Seidel iteration method, or the method of successive displacements,
H = − (D + L)^−1 U and c = (D + L)^−1 b.
6) If the matrix A in Eqn. (1) is strictly diagonally dominant, then the Jacobi and Gauss-Seidel methods converge. The Gauss-Seidel method converges faster than the Jacobi method.

7.6 SOLUTIONS/ANSWERS

E1) x^(1) = (−3 7 −3)^T; subsequent iterates include (−15 19 −9)^T and (−63 67 −33)^T.
    The iterations do not converge.


E2) x^(1) = [0.2 −1.2 −0.8]^T
    x^(2) = [1.0 −0.8 −0.64]^T
    x^(3) = [0.776 −1.216 −1.04]^T
    x^(4) = [1.1024 −0.8864 −0.8672]^T

E3) x^(1) = [0.75 0.0 0.25]^T
    x^(2) = [0.75 0.625 0.4375]^T
    x^(3) = [0.9063 0.6719 0.7188]^T
    x^(4) = [0.9180 0.8594 0.7539]^T

E4) x^(1) = [−0.8 1.2 1.6 3.4]^T
    x^(2) = [0.44 1.62 2.36 3.6]^T
    x^(3) = [0.716 1.84 2.732 3.842]^T
    x^(4) = [0.8828 1.9290 2.8796 3.9288]^T

x") = [0.5 0.5 0.5 0.5IT


x'~' = [0.75 0.75 0.75 0.751~
[0.875 0.875 0.875 0.875IT
x(*) = [0.9375 0.9375 0.9375 0.93751~
k=2,6
v

x"' = [0.5 0.3333 -0.54171T


x(*) = [0.7709 0.6945 -0.79001~
x'~' = [0.8950 0.8600 -0.90381~
x'~)= [0.9519 0.9359 -0.95591~
