In20 S2 MA1024
4 Numerical Solutions of Systems of Linear Equations
4.1 Introduction
• A linear equation is an equation that may be put in the form
$$a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + \cdots + a_{1n} x_n = b_1,$$
where $x_1, x_2, x_3, \ldots, x_n$ are variables, the $a$'s are coefficients, and $b_1$ is a constant.
• A system of linear equations (or linear system) is a collection of two or more linear
equations involving the same set of variables.
• A general system of n linear equations with n unknowns can be written as
$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + \cdots + a_{2n} x_n &= b_2 \\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + \cdots + a_{3n} x_n &= b_3 \\
&\;\;\vdots \\
a_{n1} x_1 + a_{n2} x_2 + a_{n3} x_3 + \cdots + a_{nn} x_n &= b_n
\end{aligned} \tag{4.1}$$
• Any linear system can be written in matrix form
$$A\vec{X} = \vec{B}, \tag{4.2}$$
where $A$ is called the coefficient matrix, $\vec{X}$ is the variable (unknown) matrix, and $\vec{B}$ is the constant matrix.
• Thus, the matrix form of the system of linear equations (4.1) is
$$\underbrace{\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{pmatrix}}_{A}
\underbrace{\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}}_{\vec{X}}
=
\underbrace{\begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_n \end{pmatrix}}_{\vec{B}} \tag{4.3}$$
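As a quick sanity check of the matrix form, a small system can be written as $A\vec{X} = \vec{B}$ and solved with NumPy's direct solver. The $2 \times 2$ system below is made up purely for illustration:

```python
import numpy as np

# Hypothetical 2x2 system, for illustration only:
#   4*x1 +   x2 = 9
#     x1 + 3*x2 = 5
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # coefficient matrix A
B = np.array([9.0, 5.0])     # constant vector B

X = np.linalg.solve(A, B)    # solves A X = B directly
# X is approximately [2., 1.]
assert np.allclose(A @ X, B)  # X satisfies the system
```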
4.2 Iterative Techniques
4.2.1 Jacobi Method
• Solve the $i$th equation in $A\vec{X} = \vec{B}$ for $x_i$ to obtain (provided $a_{ii} \neq 0$)
$$x_i = -\sum_{j=1,\, j \neq i}^{n} \frac{a_{ij} x_j}{a_{ii}} + \frac{b_i}{a_{ii}}, \quad i = 1, 2, \ldots, n. \tag{4.4}$$
• For each $k \geq 1$, generate the components $x_i^{(k)}$ of $\vec{X}^{(k)}$ using the components of the previous iteration $\vec{X}^{(k-1)}$ by
$$x_i^{(k)} = \frac{1}{a_{ii}} \left( -\sum_{j=1,\, j \neq i}^{n} a_{ij} x_j^{(k-1)} + b_i \right) \tag{4.5}$$
for each $i = 1, 2, \ldots, n$.
4.2.2 Jacobi Method Algorithm
1. Rearrange the given equations, if possible, such that the system becomes diagonally dominant (see Definition 4.1).
2. Select the initial approximation $x_i^{(0)}$ for $i = 1, 2, \ldots, n$.
3. Rewrite the ith equation as in equation (4.4).
4. Generate the sequence of approximate solutions using equation (4.5) until a good enough approximation is obtained.
• A possible stopping criterion is to iterate until
$$\frac{\|\vec{X}^{(k)} - \vec{X}^{(k-1)}\|}{\|\vec{X}^{(k)}\|} \tag{4.8}$$
is smaller than some prescribed tolerance. For this purpose, any convenient norm can be used, the usual being the $\ell_\infty$ norm (Theorem 4.1).
Definition 4.1 – Diagonally Dominant Matrices. The $n \times n$ matrix $A$ is said to be diagonally dominant when
$$|a_{ii}| \geq \sum_{j=1,\, j \neq i}^{n} |a_{ij}| \tag{4.6}$$
holds for each $i = 1, 2, \ldots, n$.
A diagonally dominant matrix is said to be strictly diagonally dominant when
$$|a_{ii}| > \sum_{j=1,\, j \neq i}^{n} |a_{ij}| \tag{4.7}$$
holds for each $i = 1, 2, \ldots, n$.
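The Jacobi algorithm above, together with the diagonal-dominance test of Definition 4.1 and the relative stopping criterion (4.8) in the $\ell_\infty$ norm, can be sketched in Python as follows. Function names and the $3 \times 3$ test system are illustrative, not from the notes:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check condition (4.7): |a_ii| > sum of |a_ij|, j != i, for every row."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off))

def jacobi(A, b, x0=None, tol=1e-8, max_iter=500):
    """Jacobi iteration, eq. (4.5), with the relative stopping test (4.8)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = np.empty(n)
        for i in range(n):
            # every x_j on the right-hand side comes from the PREVIOUS iterate
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (-s + b[i]) / A[i, i]           # eq. (4.5)
        rel = np.linalg.norm(x_new - x, np.inf) / np.linalg.norm(x_new, np.inf)
        x = x_new
        if rel < tol:                                   # criterion (4.8)
            return x, k
    return x, max_iter

# Strictly diagonally dominant test system (made up for illustration)
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
assert is_strictly_diagonally_dominant(A)
x, iters = jacobi(A, b)
assert np.allclose(A @ x, b, atol=1e-6)
```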
Theorem 4.1. The $\ell_\infty$ norm of a matrix is the maximum of the sums of the magnitudes of the row entries of the matrix. That is, if $A$ is an $n \times n$ matrix, then
$$\|A\|_\infty = \max_{1 \leq i \leq n} \sum_{j=1}^{n} |a_{ij}|. \tag{4.9}$$
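A quick numerical check of Theorem 4.1 (the $\ell_\infty$ norm as the maximum absolute row sum), using an arbitrary $3 \times 3$ matrix chosen for illustration:

```python
import numpy as np

A = np.array([[1.0, -2.0,  3.0],
              [4.0,  5.0, -6.0],
              [7.0, -8.0,  9.0]])

# Row sums of absolute values are [6, 15, 24]; the maximum is the inf-norm.
row_sums = np.abs(A).sum(axis=1)
norm_inf = row_sums.max()
# norm_inf == 24.0

assert norm_inf == np.linalg.norm(A, np.inf)  # NumPy's built-in agrees
```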
• In general, iterative techniques for solving linear systems involve a process that converts the system $A\vec{X} = \vec{B}$ into an equivalent system of the form $\vec{X} = T\vec{X} + \vec{C}$ for some fixed matrix $T$ and vector $\vec{C}$.
• After the initial vector $\vec{X}^{(0)}$ is selected, the sequence of approximate solution vectors is generated by computing
$$\vec{X}^{(k)} = T\vec{X}^{(k-1)} + \vec{C}, \tag{4.10}$$
for each $k = 1, 2, 3, \ldots$.
• The Jacobi method can be written in the form $\vec{X}^{(k)} = T\vec{X}^{(k-1)} + \vec{C}$ by splitting $A$ into its diagonal and off-diagonal parts as given below.
– Let $D$ be the diagonal matrix whose diagonal entries are those of $A$, $-L$ be the strictly lower-triangular part of $A$, and $-U$ be the strictly upper-triangular part of $A$.
– With this notation,
$$A = D - L - U. \tag{4.11}$$
– Then,
$$A\vec{X} = \vec{B} \tag{4.12}$$
$$(D - L - U)\vec{X} = \vec{B} \tag{4.13}$$
$$D\vec{X} = (L + U)\vec{X} + \vec{B} \tag{4.14}$$
– If $a_{ii} \neq 0$, $D^{-1}$ exists and therefore
$$\vec{X} = D^{-1}(L + U)\vec{X} + D^{-1}\vec{B}. \tag{4.15}$$
– This results in the matrix form of the Jacobi iterative technique:
$$\vec{X}^{(k)} = D^{-1}(L + U)\vec{X}^{(k-1)} + D^{-1}\vec{B}, \quad k = 1, 2, 3, \ldots \tag{4.16}$$
– Introducing the notation $T_j = D^{-1}(L + U)$ and $\vec{C}_j = D^{-1}\vec{B}$ gives the Jacobi technique the form
$$\vec{X}^{(k)} = T_j\vec{X}^{(k-1)} + \vec{C}_j, \quad k = 1, 2, 3, \ldots \tag{4.17}$$
– In practice, equation (4.5) is used in computation and equation (4.17) for theoretical purposes.
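The splitting $A = D - L - U$ can be coded directly. The sketch below forms $T_j$ and $\vec{C}_j$ and iterates equation (4.17); the helper name and the $2 \times 2$ test system are illustrative assumptions:

```python
import numpy as np

def jacobi_matrix_form(A, b, x0, num_iter):
    """Jacobi in the form X^(k) = Tj X^(k-1) + Cj, eqs. (4.16)-(4.17)."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))                 # diagonal part of A
    L_plus_U = D - A                        # since A = D - L - U, L + U = D - A
    Tj = np.linalg.solve(D, L_plus_U)       # D^{-1}(L + U)
    Cj = np.linalg.solve(D, np.asarray(b, dtype=float))  # D^{-1} B
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iter):
        x = Tj @ x + Cj                     # eq. (4.17)
    return x

# Strictly diagonally dominant 2x2 system, for illustration
A = np.array([[10.0, -1.0], [-1.0, 10.0]])
b = np.array([9.0, 9.0])
x = jacobi_matrix_form(A, b, np.zeros(2), 50)
assert np.allclose(A @ x, b)  # converges; the exact solution is [1, 1]
```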
• Note that the Jacobi method is slow to converge.
Example 4.1 – Spring-Mass System. The spring-mass system is given in Figure 4.1: an arrangement of four springs in series being depressed with a force of 2000 kg. At equilibrium, force-balance equations can be developed defining the interrelationships between the springs:
$$k_2(x_2 - x_1) = k_1 x_1, \tag{4.18}$$
$$k_3(x_3 - x_2) = k_2(x_2 - x_1), \tag{4.19}$$
$$k_4(x_4 - x_3) = k_3(x_3 - x_2), \tag{4.20}$$
$$F = k_4(x_4 - x_3), \tag{4.21}$$
where the $k$'s are spring constants. If $k_1$ through $k_4$ are 150, 50, 75, and 225 N/m, respectively, compute the $x$'s.
Figure 4.1: The spring-mass system (image not reproduced here).
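One way to set up Example 4.1 is to collect terms in equations (4.18)–(4.21) so that each equation is linear in the $x$'s, giving a tridiagonal system $A\vec{x} = \vec{b}$. The sketch below builds that system and solves it directly with NumPy as a reference answer (the rearrangement is mine; the notes intend the system to be solved iteratively):

```python
import numpy as np

k1, k2, k3, k4 = 150.0, 50.0, 75.0, 225.0   # spring constants (N/m)
F = 2000.0                                   # applied force

# Collecting terms in eqs. (4.18)-(4.21):
#   (k1 + k2) x1 - k2 x2                      = 0
#   -k2 x1 + (k2 + k3) x2 - k3 x3             = 0
#   -k3 x2 + (k3 + k4) x3 - k4 x4             = 0
#   -k4 x3 + k4 x4                            = F
A = np.array([[k1 + k2, -k2,      0.0,      0.0],
              [-k2,      k2 + k3, -k3,      0.0],
              [0.0,     -k3,       k3 + k4, -k4],
              [0.0,      0.0,     -k4,       k4]])
b = np.array([0.0, 0.0, 0.0, F])

x = np.linalg.solve(A, b)   # direct reference solution
# x is approximately [13.33, 53.33, 80.00, 88.89]
```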
4.2.3 Gauss-Seidel Method
• The convergence rate of the Jacobi method can be improved by using the most recently calculated values to compute $x_i^{(k)}$.
• That is, the components $x_1^{(k)}, \ldots, x_{i-1}^{(k)}$ have already been computed when computing $x_i^{(k)}$, and therefore they can be used to compute $x_i^{(k)}$ as follows:
$$x_i^{(k)} = \frac{1}{a_{ii}} \left( -\sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + b_i \right) \tag{4.22}$$
for each $i = 1, 2, \ldots, n$.
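The component form (4.22) can be sketched in Python; the key difference from Jacobi is that updated components are reused immediately within each sweep. The function name, tolerances, and test system are illustrative assumptions:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=500):
    """Gauss-Seidel iteration, eq. (4.22): components x_1..x_{i-1} of the
    current sweep are used as soon as they are available."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s_new = sum(A[i, j] * x[j] for j in range(i))             # x^(k)
            s_old = sum(A[i, j] * x_old[j] for j in range(i + 1, n))  # x^(k-1)
            x[i] = (-s_new - s_old + b[i]) / A[i, i]                  # eq. (4.22)
        if np.linalg.norm(x - x_old, np.inf) / np.linalg.norm(x, np.inf) < tol:
            return x, k
    return x, max_iter

# Strictly diagonally dominant test system (made up for illustration)
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
x, iters = gauss_seidel(A, b)
assert np.allclose(A @ x, b, atol=1e-6)
```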
• This modification is called the Gauss-Seidel iterative technique.
• As in the Jacobi method, the Gauss-Seidel method can be written in matrix form as follows:
$$(D - L)\vec{X} = U\vec{X} + \vec{B} \tag{4.23}$$
and
$$\vec{X}^{(k)} = (D - L)^{-1} U\vec{X}^{(k-1)} + (D - L)^{-1}\vec{B}, \quad k = 1, 2, 3, \ldots \tag{4.24}$$
[Note: For the lower-triangular matrix $D - L$ to be nonsingular, it is necessary and sufficient that $a_{ii} \neq 0$, for each $i = 1, 2, \ldots, n$.]
• Introducing the notation $T_g = (D - L)^{-1} U$ and $\vec{C}_g = (D - L)^{-1}\vec{B}$ gives the Gauss-Seidel technique the form
$$\vec{X}^{(k)} = T_g\vec{X}^{(k-1)} + \vec{C}_g, \quad k = 1, 2, 3, \ldots \tag{4.25}$$
Example 4.2. Apply the Gauss-Seidel method to find the solution of the system given in Example 4.1.
4.2.4 Convergence of Iterative Method
Theorem 4.2. For any $\vec{X}^{(0)} \in \mathbb{R}^n$, the sequence $\{\vec{X}^{(k)}\}_{k=0}^{\infty}$ defined by
$$\vec{X}^{(k)} = T\vec{X}^{(k-1)} + \vec{C}, \quad \text{for each } k \geq 1, \tag{4.26}$$
converges to the unique solution of $\vec{X} = T\vec{X} + \vec{C}$ if and only if $\rho(T) < 1$ (see Definition 4.2).
Definition 4.2 – Spectral Radius. The spectral radius $\rho(A)$ of a matrix $A$ is defined by
$$\rho(A) = \max |\lambda|, \tag{4.27}$$
where $\lambda$ is an eigenvalue of $A$. (For complex $\lambda = \alpha + \beta i$, we define $|\lambda| = \sqrt{\alpha^2 + \beta^2}$.)
Corollary 4.1. If $\|T\| < 1$ for any natural matrix norm and $\vec{C}$ is a given vector, then the sequence $\{\vec{X}^{(k)}\}_{k=0}^{\infty}$ defined by $\vec{X}^{(k)} = T\vec{X}^{(k-1)} + \vec{C}$ converges, for any $\vec{X}^{(0)} \in \mathbb{R}^n$, to a vector $\vec{X} \in \mathbb{R}^n$ with $\vec{X} = T\vec{X} + \vec{C}$, and the following error bounds hold:
(i) $\|\vec{X} - \vec{X}^{(k)}\| \leq \|T\|^k \, \|\vec{X} - \vec{X}^{(0)}\|$
(ii) $\|\vec{X} - \vec{X}^{(k)}\| \leq \dfrac{\|T\|^k}{1 - \|T\|} \, \|\vec{X}^{(1)} - \vec{X}^{(0)}\|$
Theorem 4.3. If $A$ is strictly diagonally dominant, then for any choice of $\vec{X}^{(0)}$, both the Jacobi and Gauss-Seidel methods give sequences $\{\vec{X}^{(k)}\}_{k=0}^{\infty}$ that converge to the unique solution of $A\vec{X} = \vec{B}$.
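The condition $\rho(T) < 1$ of Theorem 4.2 can be checked numerically: form $T_j$ and $T_g$ for a given $A$ and compute their spectral radii with NumPy's eigenvalue routine. Helper names are mine; the test matrix is strictly diagonally dominant, so Theorem 4.3 predicts both radii fall below 1:

```python
import numpy as np

def spectral_radius(T):
    """rho(T) = max |lambda| over the eigenvalues of T (Definition 4.2)."""
    return max(abs(np.linalg.eigvals(T)))

def iteration_matrices(A):
    """Return the Jacobi and Gauss-Seidel iteration matrices Tj and Tg."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                 # A = D - L - U
    U = -np.triu(A, 1)
    Tj = np.linalg.solve(D, L + U)      # D^{-1}(L + U)
    Tg = np.linalg.solve(D - L, U)      # (D - L)^{-1} U
    return Tj, Tg

# Strictly diagonally dominant matrix, so both methods must converge
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
Tj, Tg = iteration_matrices(A)
assert spectral_radius(Tj) < 1 and spectral_radius(Tg) < 1
```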