
1. Greeting

Numerical Analysis
Solving Linear Systems of Equations

OL Say

[email protected]

Institute of Technology of Cambodia

September 21, 2023



2. Outline

1 Linear Systems of Equations

2 Gauss Elimination and Backward Substitution

3 LU Decompositions

4 Gauss-Jordan Elimination

5 Symmetric and Banded Coefficient Matrices

6 Gauss Elimination with Scaled Row Pivoting

7 The Jacobi and Gauss-Seidel Iterative Techniques

8 Relaxation Techniques



1. Linear Systems of Equations

1 A system of algebraic equations has the form

$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\ \qquad\vdots \\ a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n \end{cases}$$

2 In matrix notation the equations are written as

$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$$

or simply AX = B.



1. Linear Systems of Equations

3 A particularly useful representation of the equations for computational purposes is the augmented coefficient matrix, obtained by adjoining the constant vector B to the coefficient matrix A in the following fashion:

$$(A|B) = \left(\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} & b_n \end{array}\right)$$
4 In the first few sections, we discuss direct methods for solving the system. The three popular methods are listed below:

  Method                      Initial form    Final form
  Gauss elimination           AX = B          UX = C
  LU decomposition            AX = B          LUX = B
  Gauss-Jordan elimination    AX = B          IX = C
1. Linear Systems of Equations

1 U represents an upper triangular matrix

$$U = \begin{pmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{pmatrix}$$

2 L is a lower triangular matrix

$$L = \begin{pmatrix} l_{11} & 0 & 0 & \cdots & 0 \\ l_{21} & l_{22} & 0 & \cdots & 0 \\ l_{31} & l_{32} & l_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & 0 \\ l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn} \end{pmatrix}$$

3 And A = LU.
2. Gauss Elimination and Backward Substitution

Example 1
Represent the linear system

$$\begin{cases} x_1 - x_2 + 2x_3 - x_4 = -8, \\ 2x_1 - 2x_2 + 3x_3 - 3x_4 = -20, \\ x_1 + x_2 + x_3 = -2, \\ x_1 - x_2 + 4x_3 + 3x_4 = 4, \end{cases}$$

as an augmented matrix and use Gaussian elimination to find its solution.
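For reference, the elimination (which requires one row interchange, since the pivot position of the second row becomes zero) produces the triangular augmented matrix

$$\left(\begin{array}{cccc|c} 1 & -1 & 2 & -1 & -8 \\ 0 & 2 & -1 & 1 & 6 \\ 0 & 0 & -1 & -1 & -4 \\ 0 & 0 & 0 & 2 & 4 \end{array}\right)$$

which is the system solved by backward substitution on the next slide.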



2. Gauss Elimination and Backward Substitution

Use backward substitution on the triangular system produced by the Gaussian elimination:

$$x_4 = \frac{4}{2} = 2, \qquad x_3 = \frac{-4 - (-1)x_4}{-1} = 2, \qquad x_2 = \frac{6 - x_4 - (-1)x_3}{2} = 3, \qquad x_1 = \frac{-8 - (-1)x_4 - 2x_3 - (-1)x_2}{1} = -7.$$

The system has a unique solution X = (−7, 3, 2, 2)^T.



2. Gauss Elimination and Backward Substitution

At stage k of the elimination, the active (not yet reduced) part of the coefficient matrix is

$$\begin{array}{c|ccccc}
 & j = k & j = k+1 & j = k+2 & \cdots & j = n \\
\hline
 & a_{k,k} & a_{k,k+1} & a_{k,k+2} & \cdots & a_{k,n} \\
i = k+1 & a_{k+1,k} & a_{k+1,k+1} & a_{k+1,k+2} & \cdots & a_{k+1,n} \\
i = k+2 & a_{k+2,k} & a_{k+2,k+1} & a_{k+2,k+2} & \cdots & a_{k+2,n} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
i = n & a_{n,k} & a_{n,k+1} & a_{n,k+2} & \cdots & a_{n,n}
\end{array}$$

with pivot a_{k,k}; the entries below it in column k are reduced to zero.

Gaussian Elimination and Backward Substitution


To solve the n × n linear system AX = B.
INPUT ??
OUTPUT ??
Body ??
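The INPUT, OUTPUT and Body entries above are left as an exercise in the slides. One possible answer is the following Python sketch (assuming every pivot a_kk encountered is nonzero, so no row interchanges are needed; pivoting is treated in Section 6):

```python
import numpy as np

def gauss_solve(A, B):
    """Solve AX = B by Gauss elimination followed by backward substitution.

    Assumes every pivot A[k, k] encountered is nonzero, so no row
    interchanges are needed.
    """
    A = A.astype(float)
    B = B.astype(float)
    n = len(B)
    # Elimination phase: reduce AX = B to the triangular form UX = C.
    for k in range(n - 1):
        for i in range(k + 1, n):
            r = A[i, k] / A[k, k]        # multiplier for row i
            A[i, k:] -= r * A[k, k:]     # R_i <- R_i - r * R_k
            B[i] -= r * B[k]
    # Backward substitution phase.
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):
        X[i] = (B[i] - A[i, i + 1:] @ X[i + 1:]) / A[i, i]
    return X
```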
3. LU Decompositions

1 It is possible to show that any square matrix A (possibly after row interchanges) can be expressed as a product of a lower triangular matrix L and an upper triangular matrix U: A = LU.
2 The process of computing L and U for a given A is known as LU
decomposition or LU factorization.
3 LU decomposition is not unique (the combinations of L and U for
a prescribed A are endless), unless certain constraints are placed
on L or U.
4 These constraints distinguish one type of decomposition from
another.
5 Three commonly used decompositions are listed below:

  Name                            Constraints
  Doolittle LU decomposition      lii = 1, i = 1, 2, ..., n
  Doolittle LDLt decomposition    U = DL^T, lii = 1, i = 1, 2, ..., n
  Cholesky LLt decomposition      U = L^T
  Crout LU decomposition          uii = 1, i = 1, 2, ..., n
3.1 Doolittle LU Decomposition

1 Consider a 3 × 3 matrix A and assume that there exist triangular matrices

$$L = \begin{pmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{pmatrix}, \quad U = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix}, \quad \text{such that } A = LU.$$

2 After completing the multiplication on the right-hand side, we get

$$A = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ u_{11}l_{21} & u_{12}l_{21} + u_{22} & u_{13}l_{21} + u_{23} \\ u_{11}l_{31} & u_{12}l_{31} + u_{22}l_{32} & u_{13}l_{31} + u_{23}l_{32} + u_{33} \end{pmatrix}$$

3 R_2 ← R_2 − l_21 R_1 (eliminates A_21) and R_3 ← R_3 − l_31 R_1 (eliminates A_31):

$$A_1 = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & u_{22}l_{32} & u_{23}l_{32} + u_{33} \end{pmatrix}$$
3.1 Doolittle LU Decomposition

4 R_3 ← R_3 − l_32 R_2 gives

$$A_2 = \begin{pmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{pmatrix} = U$$

Example 2
Use Doolittle's decomposition method to solve the equations AX = B, where

$$A = \begin{pmatrix} 1 & 4 & 1 \\ 1 & 6 & -1 \\ 2 & -1 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} 7 \\ 13 \\ 5 \end{pmatrix}$$
1 Decompose A into LU by Gaussian Elimination method.
2 Solve LY = B for Y by Forward Substitution method where Y = UX.
3 Solve UX = Y for X by Backward Substitution method.
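Steps 2 and 3 translate directly into code. A minimal Python sketch of the two substitution phases (assuming L has 1's on its diagonal, as in Doolittle's method):

```python
import numpy as np

def forward_substitution(L, B):
    """Step 2: solve LY = B, with L unit lower triangular (Doolittle)."""
    n = len(B)
    Y = np.zeros(n)
    for i in range(n):
        Y[i] = B[i] - L[i, :i] @ Y[:i]   # l_ii = 1, so no division
    return Y

def backward_substitution(U, Y):
    """Step 3: solve UX = Y, with U upper triangular."""
    n = len(Y)
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):
        X[i] = (Y[i] - U[i, i + 1:] @ X[i + 1:]) / U[i, i]
    return X
```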
3.1 Doolittle LU Decomposition

Doolittle’s Decomposition Algorithm

Decompose an n × n square matrix A into LU with lii = 1 for i = 1, 2, ..., n.
INPUT Square matrix A of size n × n
OUTPUT Lower triangular matrix L and upper triangular matrix U
1 Set U ← A
2 For i = 1 to n set lii ← 1
3 For k = 1 to n − 1 do
a For i = k + 1 to n do
i Set r ← uik /ukk ; lik ← r; uik ← 0
ii For j = k + 1 to n set uij ← uij − r · ukj

4 OUTPUT L and U
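The algorithm translates directly into Python. A sketch (assuming, as the algorithm itself does, that no zero pivot u_kk occurs):

```python
import numpy as np

def doolittle(A):
    """Doolittle LU decomposition: A = LU with 1's on the diagonal of L.

    Assumes no zero pivot U[k, k] arises (the algorithm performs no
    row interchanges).
    """
    n = A.shape[0]
    U = A.astype(float)          # step 1: U <- A
    L = np.eye(n)                # step 2: l_ii <- 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            r = U[i, k] / U[k, k]             # multiplier
            L[i, k] = r                       # l_ik <- r
            U[i, k] = 0.0                     # u_ik <- 0
            U[i, k + 1:] -= r * U[k, k + 1:]  # u_ij <- u_ij - r * u_kj
    return L, U
```

For the matrix of Example 2 this gives l21 = 1, l31 = 2, l32 = −4.5 and a U with diagonal (1, 2, −9); the two substitution routines above then produce X = (5, 1, −2)^T.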



3.2 Doolittle LDLt Decomposition

Definition 3 (Positive Definite)

A matrix A is positive definite if it is symmetric and if X^T AX > 0 for every n-dimensional vector X ≠ 0.

Example 4

The symmetric matrix $A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}$ is positive definite because

$$X^T AX = (x_1, x_2, x_3) \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = x_1^2 + (x_1 + x_2)^2 + (x_2 + x_3)^2 + x_3^2 > 0, \quad \text{for all } X \neq 0.$$



3.2 Doolittle LDLt Decomposition

Definition 5 (Leading Principal Submatrix)

A leading principal submatrix of an n × n matrix A is a matrix of the form

$$A_k = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kk} \end{pmatrix}$$

for some 1 ≤ k ≤ n.

Theorem 6

A symmetric matrix A is positive definite if and only if each of its leading principal submatrices has a positive determinant.



3.2 Doolittle LDLt Decomposition

Example 7

The symmetric matrix $A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix}$ is positive definite because

$$\det A_1 = \det(2) = 2 > 0, \qquad \det A_2 = \det\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} = 3 > 0, \qquad \det A_3 = \det\begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 2 \end{pmatrix} = 4 > 0.$$



3.2 Doolittle LDLt Decomposition

Theorem 8

The symmetric matrix A is positive definite if and only if Gaussian elimination without row interchanges can be performed on the linear system AX = B with all pivot elements positive. Moreover, in this case, the computations are stable with respect to the growth of round-off errors.

Corollary 9 (Doolittle LDLt Decomposition)

The matrix A is positive definite if and only if A can be factored in the form LDL^T, where L is lower triangular with 1's on its diagonal and D is a diagonal matrix with positive diagonal entries.

Corollary 10 (Cholesky LLt Decomposition Existence)

The matrix A is positive definite if and only if A can be factored in the form LL^T, where L is lower triangular with nonzero diagonal entries.



3.2 Doolittle LDLt Decomposition

1 Consider a 4 × 4 symmetric positive definite matrix A, a lower triangular matrix L and a diagonal matrix D such that A = LDL^T, where L, D and A are denoted respectively as

$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & l_{32} & 1 & 0 \\ l_{41} & l_{42} & l_{43} & 1 \end{pmatrix}, \quad \begin{pmatrix} d_1 & 0 & 0 & 0 \\ 0 & d_2 & 0 & 0 \\ 0 & 0 & d_3 & 0 \\ 0 & 0 & 0 & d_4 \end{pmatrix}, \quad \begin{pmatrix} a_{11} & & & \text{(sym)} \\ a_{21} & a_{22} & & \\ a_{31} & a_{32} & a_{33} & \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}$$

2 The equality L(DL^T) = A can be displayed as

$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & l_{32} & 1 & 0 \\ l_{41} & l_{42} & l_{43} & 1 \end{pmatrix} \begin{pmatrix} d_1 & d_1 l_{21} & d_1 l_{31} & d_1 l_{41} \\ 0 & d_2 & d_2 l_{32} & d_2 l_{42} \\ 0 & 0 & d_3 & d_3 l_{43} \\ 0 & 0 & 0 & d_4 \end{pmatrix} = \begin{pmatrix} a_{11} & & & \text{(sym)} \\ a_{21} & a_{22} & & \\ a_{31} & a_{32} & a_{33} & \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}$$

3 After completing the matrix multiplication on the left-hand side, equate the elements in each row.
4 First row: d_1 = a_11
3.2 Doolittle LDLt Decomposition

1 Second row

$$d_1 l_{21} = a_{21} \Rightarrow l_{21} = a_{21}/d_1$$
$$d_1 l_{21}^2 + d_2 = a_{22} \Rightarrow d_2 = a_{22} - d_1 l_{21}^2$$

2 Third row

$$d_1 l_{31} = a_{31} \Rightarrow l_{31} = a_{31}/d_1$$
$$d_1 l_{31} l_{21} + d_2 l_{32} = a_{32} \Rightarrow l_{32} = (a_{32} - d_1 l_{31} l_{21})/d_2$$
$$d_1 l_{31}^2 + d_2 l_{32}^2 + d_3 = a_{33} \Rightarrow d_3 = a_{33} - d_1 l_{31}^2 - d_2 l_{32}^2$$

3 Fourth row

$$d_1 l_{41} = a_{41} \Rightarrow l_{41} = a_{41}/d_1$$
$$d_1 l_{41} l_{21} + d_2 l_{42} = a_{42} \Rightarrow l_{42} = (a_{42} - d_1 l_{41} l_{21})/d_2$$
$$d_1 l_{41} l_{31} + d_2 l_{42} l_{32} + d_3 l_{43} = a_{43} \Rightarrow l_{43} = (a_{43} - d_1 l_{41} l_{31} - d_2 l_{42} l_{32})/d_3$$
$$d_1 l_{41}^2 + d_2 l_{42}^2 + d_3 l_{43}^2 + d_4 = a_{44} \Rightarrow d_4 = a_{44} - d_1 l_{41}^2 - d_2 l_{42}^2 - d_3 l_{43}^2$$
3.2 Doolittle LDLt Decomposition

Doolittle LDLt Decomposition Algorithm

Decompose an n × n symmetric positive definite matrix A into LDL^T, where L is a lower triangular matrix with 1's along the diagonal and D is a diagonal matrix with positive entries on the diagonal:
INPUT the dimension n; entries aij, for 1 ≤ i, j ≤ n of A.
OUTPUT the entries lij, for 1 ≤ j ≤ i and 1 ≤ i ≤ n of L, and di for 1 ≤ i ≤ n.
1 Set $d_1 = a_{11}$; $l_{21} = a_{21}/d_1$; $d_2 = a_{22} - d_1 l_{21}^2$
2 For i = 3 to n do
  a Set $l_{i1} = a_{i1}/d_1$
  b For j = 2 to i − 1 set $l_{ij} = \left(a_{ij} - \sum_{k=1}^{j-1} d_k l_{ik} l_{jk}\right)/d_j$
  c Set $d_i = a_{ii} - \sum_{k=1}^{i-1} d_k l_{ik}^2$
3 OUTPUT lij for 1 ≤ j ≤ i and 1 ≤ i ≤ n and di for 1 ≤ i ≤ n
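A Python sketch of this algorithm (the unified loop below also covers the i = 1, 2 start-up cases; it assumes A really is symmetric positive definite, so all d_i > 0):

```python
import numpy as np

def ldlt(A):
    """Doolittle LDL^T decomposition of a symmetric positive definite A.

    Returns the unit lower triangular L and the diagonal d of D.
    A sketch: no check is made that A actually is symmetric positive
    definite (which is what guarantees d_i > 0).
    """
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        for j in range(i):
            # l_ij = (a_ij - sum_{k<j} d_k l_ik l_jk) / d_j
            L[i, j] = (A[i, j] - np.sum(d[:j] * L[i, :j] * L[j, :j])) / d[j]
        # d_i = a_ii - sum_{k<i} d_k l_ik^2
        d[i] = A[i, i] - np.sum(d[:i] * L[i, :i] ** 2)
    return L, d
```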



3.3 Cholesky LLt Decomposition Existence

1 Consider a 4 × 4 symmetric positive definite matrix A and a lower triangular matrix L such that LL^T = A:

$$\begin{pmatrix} l_{11} & 0 & 0 & 0 \\ l_{21} & l_{22} & 0 & 0 \\ l_{31} & l_{32} & l_{33} & 0 \\ l_{41} & l_{42} & l_{43} & l_{44} \end{pmatrix} \begin{pmatrix} l_{11} & l_{21} & l_{31} & l_{41} \\ 0 & l_{22} & l_{32} & l_{42} \\ 0 & 0 & l_{33} & l_{43} \\ 0 & 0 & 0 & l_{44} \end{pmatrix} = \begin{pmatrix} a_{11} & & & \text{(sym)} \\ a_{21} & a_{22} & & \\ a_{31} & a_{32} & a_{33} & \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}$$

2 After completing the matrix multiplication on the left-hand side, equate the elements in each row.
3 First row: $l_{11}^2 = a_{11} \Rightarrow l_{11} = \sqrt{a_{11}}$
4 Second row

$$l_{21} l_{11} = a_{21} \Rightarrow l_{21} = a_{21}/l_{11}$$
$$l_{21}^2 + l_{22}^2 = a_{22} \Rightarrow l_{22} = \sqrt{a_{22} - l_{21}^2}$$



3.3 Cholesky LLt Decomposition

5 Third row

$$l_{31} l_{11} = a_{31} \Rightarrow l_{31} = a_{31}/l_{11}$$
$$l_{31} l_{21} + l_{32} l_{22} = a_{32} \Rightarrow l_{32} = (a_{32} - l_{31} l_{21})/l_{22}$$
$$l_{31}^2 + l_{32}^2 + l_{33}^2 = a_{33} \Rightarrow l_{33} = \sqrt{a_{33} - l_{31}^2 - l_{32}^2}$$

6 Fourth row

$$l_{41} l_{11} = a_{41} \Rightarrow l_{41} = a_{41}/l_{11}$$
$$l_{41} l_{21} + l_{42} l_{22} = a_{42} \Rightarrow l_{42} = (a_{42} - l_{41} l_{21})/l_{22}$$
$$l_{41} l_{31} + l_{42} l_{32} + l_{43} l_{33} = a_{43} \Rightarrow l_{43} = (a_{43} - l_{41} l_{31} - l_{42} l_{32})/l_{33}$$
$$l_{41}^2 + l_{42}^2 + l_{43}^2 + l_{44}^2 = a_{44} \Rightarrow l_{44} = \sqrt{a_{44} - l_{41}^2 - l_{42}^2 - l_{43}^2}$$



3.3 Cholesky LLt Decomposition

Cholesky LLt Decomposition Algorithm

Decompose an n × n symmetric positive definite matrix A into LL^T.
INPUT the dimension n; entries aij, for 1 ≤ i, j ≤ n of A.
OUTPUT the entries lij, for 1 ≤ j ≤ i and 1 ≤ i ≤ n of L.
(The entries of U = L^T are uij = lji, for 1 ≤ i ≤ j ≤ n.)
1 Set $l_{11} = \sqrt{a_{11}}$; $l_{21} = a_{21}/l_{11}$; $l_{22} = \sqrt{a_{22} - l_{21}^2}$
2 For i = 3 to n do
  a Set $l_{i1} = a_{i1}/l_{11}$
  b For j = 2 to i − 1 set $l_{ij} = \left(a_{ij} - \sum_{k=1}^{j-1} l_{ik} l_{jk}\right)/l_{jj}$
  c Set $l_{ii} = \sqrt{a_{ii} - \sum_{k=1}^{i-1} l_{ik}^2}$
3 OUTPUT lij for 1 ≤ j ≤ i and 1 ≤ i ≤ n.
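A Python sketch of the algorithm (positive definiteness is what guarantees that every quantity under the square root is positive):

```python
import numpy as np

def cholesky(A):
    """Cholesky decomposition A = L L^T of a symmetric positive definite A."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            # l_ij = (a_ij - sum_{k<j} l_ik l_jk) / l_jj
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        # l_ii = sqrt(a_ii - sum_{k<i} l_ik^2)
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
    return L
```

Applied to the matrix of Example 11 below, this gives a triangular factor with diagonal (2, 1, 1), and the two substitution phases then yield X = (3, 1, −1)^T.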



3.3 Cholesky LLt Decomposition

Example 11
Use Cholesky's decomposition method to solve the equations AX = B, where

$$A = \begin{pmatrix} 4 & -2 & 2 \\ -2 & 2 & -4 \\ 2 & -4 & 11 \end{pmatrix}, \quad B = \begin{pmatrix} 8 \\ 0 \\ -9 \end{pmatrix}$$



3.4 Crout’s Decomposition

Follow Doolittle's decomposition method to derive Crout's decomposition method; one possible answer is sketched after the algorithm skeleton below.
Crout’s Decomposition

Decompose an n × n square matrix A into LU with uii = 1 for i = 1, 2, ..., n.
INPUT ??
OUTPUT ??
Body ??
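A Python sketch of one possible answer (like Doolittle's algorithm, it assumes no zero pivot l_jj arises):

```python
import numpy as np

def crout(A):
    """Crout LU decomposition: A = LU with 1's on the diagonal of U."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        # Column j of L: l_ij = a_ij - sum_{k<j} l_ik u_kj
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Row j of U: u_ji = (a_ji - sum_{k<j} l_jk u_ki) / l_jj
        for i in range(j + 1, n):
            U[j, i] = (A[j, i] - L[j, :j] @ U[:j, i]) / L[j, j]
    return L, U
```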



4. Gauss-Jordan Elimination

1 The Gauss-Jordan method is essentially Gauss elimination taken to its limit.
2 In the Gauss elimination method, only the equations that lie below the pivot equation are transformed.
3 In the Gauss-Jordan method the elimination is also carried out on equations above the pivot equation, resulting in a diagonal coefficient matrix.
4 The main disadvantage of Gauss-Jordan elimination is that it involves about n^3/2 long operations, which is 1.5 times the number required in Gauss elimination.
5 For this reason, we will not discuss the method in more detail.



5. Symmetric and Banded Coefficient Matrices

1 Engineering problems often lead to coefficient matrices that are sparsely populated, meaning that most elements of the matrix are zero.
2 If all the nonzero terms are clustered about the leading diagonal, then the matrix is said to be banded.
3 Tridiagonal and pentadiagonal matrices are examples of banded matrices:

$$\begin{pmatrix}
d_1 & e_1 & 0 & 0 & 0 & \cdots & 0 \\
c_1 & d_2 & e_2 & 0 & 0 & \cdots & 0 \\
0 & c_2 & d_3 & e_3 & 0 & \cdots & 0 \\
0 & 0 & c_3 & d_4 & e_4 & \cdots & 0 \\
0 & 0 & 0 & c_4 & d_5 & \ddots & \vdots \\
\vdots & \vdots & \vdots & \vdots & \ddots & \ddots & e_{n-1} \\
0 & 0 & 0 & 0 & \cdots & c_{n-1} & d_n
\end{pmatrix}, \qquad
\begin{pmatrix}
d_1 & e_1 & f_1 & 0 & 0 & \cdots & 0 \\
c_1 & d_2 & e_2 & f_2 & 0 & \cdots & 0 \\
b_1 & c_2 & d_3 & e_3 & f_3 & \cdots & 0 \\
0 & b_2 & c_3 & d_4 & e_4 & \ddots & \vdots \\
0 & 0 & b_3 & c_4 & d_5 & \ddots & f_{n-2} \\
\vdots & \vdots & \vdots & \ddots & \ddots & \ddots & e_{n-1} \\
0 & 0 & 0 & \cdots & b_{n-2} & c_{n-1} & d_n
\end{pmatrix}$$



5.1 LU Decomposition for Tridiagonal Matrix

Consider the tridiagonal matrix mentioned above. Let us now apply LU decomposition to the coefficient matrix.
1 We reduce row k by getting rid of c_{k−1} with the elementary operation R_k ← R_k − 𝜆R_{k−1}, 𝜆 = c_{k−1}/d_{k−1}, k = 2, 3, ..., n.
2 The corresponding change in d_k is d_k ← d_k − 𝜆e_{k−1}, whereas e_k is not affected.
3 To finish up with Doolittle's decomposition of the form (L\U), we store the multiplier 𝜆 = c_{k−1}/d_{k−1} in the location previously occupied by c_{k−1}: c_{k−1} ← 𝜆.
4 The resulting factors L and U are



5.1 LU Decomposition for Tridiagonal Matrix

$$L = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & \cdots & 0 \\
c_1 & 1 & 0 & 0 & 0 & \cdots & 0 \\
0 & c_2 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & c_3 & 1 & 0 & \cdots & 0 \\
0 & 0 & 0 & c_4 & 1 & \ddots & \vdots \\
\vdots & \vdots & \vdots & \vdots & \ddots & \ddots & 0 \\
0 & 0 & 0 & 0 & \cdots & c_{n-1} & 1
\end{pmatrix}, \qquad
U = \begin{pmatrix}
d_1 & e_1 & 0 & 0 & 0 & \cdots & 0 \\
0 & d_2 & e_2 & 0 & 0 & \cdots & 0 \\
0 & 0 & d_3 & e_3 & 0 & \cdots & 0 \\
0 & 0 & 0 & d_4 & e_4 & \cdots & 0 \\
0 & 0 & 0 & 0 & d_5 & \ddots & \vdots \\
\vdots & \vdots & \vdots & \vdots & \ddots & \ddots & e_{n-1} \\
0 & 0 & 0 & 0 & \cdots & 0 & d_n
\end{pmatrix}$$

where the values of ci and di are those modified by the Doolittle's decomposition above.



5.1 LU Decomposition for Tridiagonal Matrix

4 Solve LY = B for Y by the forward substitution method:

$$y_1 = b_1, \qquad y_i = b_i - c_{i-1}y_{i-1}, \quad i = 2, 3, ..., n.$$

5 Solve UX = Y for X by the backward substitution method:

$$x_n = y_n/d_n, \qquad x_i = (y_i - e_i x_{i+1})/d_i, \quad i = n - 1, n - 2, ..., 1.$$

Doolittle’s Decomposition for Tridiagonal Matrix


Decompose an n × n square tridiagonal matrix A into LU with Lii = 1
for i = 1, 2, ... , n.
INPUT ??
OUTPUT ??
Body ??
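One possible body, as a Python sketch working directly on the three bands (it assumes no pivoting is required, which holds, for example, when the matrix is diagonally dominant):

```python
import numpy as np

def tridiag_solve(c, d, e, b):
    """Solve a tridiagonal system by Doolittle LU and the two substitutions.

    c: subdiagonal (length n-1), d: diagonal (length n),
    e: superdiagonal (length n-1), b: right-hand side (length n).
    Assumes no pivoting is needed (e.g. a diagonally dominant matrix).
    """
    c, d, b = c.astype(float), d.astype(float), b.astype(float)
    n = len(d)
    # Decomposition: the multiplier lambda overwrites c_{k-1}.
    for k in range(1, n):
        lam = c[k - 1] / d[k - 1]
        d[k] -= lam * e[k - 1]
        c[k - 1] = lam
    # Forward substitution: y_i = b_i - c_{i-1} y_{i-1}.
    for k in range(1, n):
        b[k] -= c[k - 1] * b[k - 1]
    # Backward substitution: x_i = (y_i - e_i x_{i+1}) / d_i.
    b[n - 1] /= d[n - 1]
    for k in range(n - 2, -1, -1):
        b[k] = (b[k] - e[k] * b[k + 1]) / d[k]
    return b
```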



5. Symmetric and Banded Coefficient Matrices

Example 12 (LU Decomposition)

Determine the L and U that result from Doolittle's decomposition of a tridiagonal matrix A, and solve AX = B for X, where

$$A = \begin{pmatrix} 1 & -2 & 0 \\ 2 & -2 & 3 \\ 0 & 6 & 7 \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ -1 \\ -1 \end{pmatrix}$$



5.2 LU Decomposition for Symmetric Pentadiagonal Matrix

1 We encounter pentadiagonal (bandwidth = 5) coefficient matrices in the solution of fourth-order ordinary differential equations by finite differences.
2 Often these matrices are symmetric, in which case an n × n coefficient matrix has the form

$$\begin{pmatrix}
d_1 & e_1 & f_1 & 0 & 0 & \cdots & 0 \\
e_1 & d_2 & e_2 & f_2 & 0 & \cdots & 0 \\
f_1 & e_2 & d_3 & e_3 & f_3 & \cdots & 0 \\
0 & f_2 & e_3 & d_4 & e_4 & \ddots & \vdots \\
0 & 0 & f_3 & e_4 & d_5 & \ddots & f_{n-2} \\
\vdots & \vdots & \vdots & \ddots & \ddots & \ddots & e_{n-1} \\
0 & 0 & 0 & \cdots & f_{n-2} & e_{n-1} & d_n
\end{pmatrix}$$



5.2 LU Decomposition for Symmetric Pentadiagonal Matrix

3 Let us now look at the solution of the equations AX = B by Doolittle's decomposition.
4 The first step is to transform A to upper triangular form by Gauss elimination.
5 If elimination has progressed to the stage where the k-th row has become the pivot row, we have the following situation:

$$A \to \begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \\
\cdots & 0 & d_k & e_k & f_k & 0 & 0 & \cdots \\
\cdots & 0 & e_k & d_{k+1} & e_{k+1} & f_{k+1} & 0 & \cdots \\
\cdots & 0 & f_k & e_{k+1} & d_{k+2} & e_{k+2} & f_{k+2} & \cdots \\
\cdots & 0 & 0 & f_{k+1} & e_{k+2} & d_{k+3} & e_{k+3} & \cdots \\
 & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$



5.2 LU Decomposition for Symmetric Pentadiagonal Matrix

6 The elements e_k and f_k below the pivot row (the k-th row) are eliminated by the operations

$$R_{k+1} \leftarrow R_{k+1} - \lambda_1 R_k, \quad \lambda_1 = e_k/d_k$$
$$R_{k+2} \leftarrow R_{k+2} - \lambda_2 R_k, \quad \lambda_2 = f_k/d_k$$

7 The only terms (other than those being eliminated) that are changed by the operations are

$$d_{k+1} \leftarrow d_{k+1} - \lambda_1 e_k, \qquad e_{k+1} \leftarrow e_{k+1} - \lambda_1 f_k, \qquad d_{k+2} \leftarrow d_{k+2} - \lambda_2 f_k$$

8 Storage of the multipliers in the upper triangular portion of the matrix results in

$$e_k \leftarrow \lambda_1 = e_k/d_k, \qquad f_k \leftarrow \lambda_2 = f_k/d_k$$
5.2 LU Decomposition for Symmetric Pentadiagonal Matrix

9 Applying the above operations for k = 1, 2, ..., n − 2, the matrix takes the form (do not confuse d, e, and f with the original contents of A)

$$U^* = \begin{pmatrix}
d_1 & e_1 & f_1 & 0 & 0 & \cdots & 0 \\
0 & d_2 & e_2 & f_2 & 0 & \cdots & 0 \\
0 & 0 & d_3 & e_3 & f_3 & \cdots & 0 \\
0 & 0 & 0 & d_4 & e_4 & \ddots & \vdots \\
0 & 0 & 0 & 0 & \ddots & \ddots & f_{n-2} \\
\vdots & \vdots & \vdots & \vdots & \ddots & d_{n-1} & e_{n-1} \\
0 & 0 & 0 & 0 & \cdots & e_{n-1} & d_n
\end{pmatrix}$$

10 One last step:

$$\lambda_1 \leftarrow e_{n-1}/d_{n-1}, \qquad d_n \leftarrow d_n - \lambda_1 e_{n-1}, \qquad e_{n-1} \leftarrow \lambda_1.$$
5.2 LU Decomposition for Symmetric Pentadiagonal Matrix

11 Now comes the solution phase. The equations LY = B have the augmented coefficient matrix

$$(L|B) = \left(\begin{array}{ccccccc|c}
1 & 0 & 0 & 0 & 0 & \cdots & 0 & b_1 \\
e_1 & 1 & 0 & 0 & 0 & \cdots & 0 & b_2 \\
f_1 & e_2 & 1 & 0 & 0 & \cdots & 0 & b_3 \\
0 & f_2 & e_3 & 1 & 0 & \ddots & \vdots & b_4 \\
0 & 0 & f_3 & e_4 & 1 & \ddots & 0 & \vdots \\
\vdots & \vdots & \vdots & \ddots & \ddots & \ddots & 0 & b_{n-1} \\
0 & 0 & 0 & \cdots & f_{n-2} & e_{n-1} & 1 & b_n
\end{array}\right)$$



5.2 LU Decomposition for Symmetric Pentadiagonal Matrix

12 Solution by forward substitution yields

$$y_1 = b_1, \qquad y_2 = b_2 - e_1 y_1, \qquad y_k = b_k - e_{k-1}y_{k-1} - f_{k-2}y_{k-2}, \quad k = 3, 4, ..., n.$$



5.2 LU Decomposition for Symmetric Pentadiagonal Matrix

13 The equations to be solved by back substitution, namely UX = Y, have the augmented coefficient matrix

$$(U|Y) = \left(\begin{array}{ccccccc|c}
d_1 & d_1 e_1 & d_1 f_1 & 0 & 0 & \cdots & 0 & y_1 \\
0 & d_2 & d_2 e_2 & d_2 f_2 & 0 & \cdots & 0 & y_2 \\
0 & 0 & d_3 & d_3 e_3 & d_3 f_3 & \cdots & 0 & y_3 \\
0 & 0 & 0 & d_4 & d_4 e_4 & \ddots & \vdots & y_4 \\
\vdots & \vdots & \vdots & \ddots & \ddots & \ddots & d_{n-2}f_{n-2} & \vdots \\
0 & 0 & 0 & \cdots & 0 & d_{n-1} & d_{n-1}e_{n-1} & y_{n-1} \\
0 & 0 & 0 & \cdots & 0 & 0 & d_n & y_n
\end{array}\right)$$



5.2 LU Decomposition for Symmetric Pentadiagonal Matrix

14 The solution is then obtained by back substitution:

$$x_n = y_n/d_n, \qquad x_{n-1} = y_{n-1}/d_{n-1} - e_{n-1}x_n, \qquad x_k = y_k/d_k - e_k x_{k+1} - f_k x_{k+2}, \quad k = n-2, n-3, ..., 1.$$

Doolittle’s Decomposition for Pentadiagonal Matrix


Decompose an n × n square symmetric pentadiagonal matrix A into
LU with Lii = 1 for i = 1, 2, ... , n.
INPUT ??
OUTPUT ??
Body ??
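One possible body, as a Python sketch on the bands d, e, f of a symmetric pentadiagonal matrix (assumes n ≥ 3 and that no pivoting is required):

```python
import numpy as np

def penta_solve(d, e, f, b):
    """Solve a symmetric pentadiagonal system by Doolittle's decomposition.

    d: diagonal (length n), e: first superdiagonal (length n-1),
    f: second superdiagonal (length n-2), b: right-hand side.
    """
    d, e, f, b = (a.astype(float) for a in (d, e, f, b))
    n = len(d)
    # Elimination: multipliers lambda_1, lambda_2 overwrite e_k, f_k.
    for k in range(n - 2):
        lam1, lam2 = e[k] / d[k], f[k] / d[k]
        d[k + 1] -= lam1 * e[k]
        e[k + 1] -= lam1 * f[k]
        d[k + 2] -= lam2 * f[k]
        e[k], f[k] = lam1, lam2
    lam1 = e[n - 2] / d[n - 2]       # the "one last step" above
    d[n - 1] -= lam1 * e[n - 2]
    e[n - 2] = lam1
    # Forward substitution: y_k = b_k - e_{k-1} y_{k-1} - f_{k-2} y_{k-2}.
    b[1] -= e[0] * b[0]
    for k in range(2, n):
        b[k] -= e[k - 1] * b[k - 1] + f[k - 2] * b[k - 2]
    # Back substitution: x_k = y_k/d_k - e_k x_{k+1} - f_k x_{k+2}.
    b[n - 1] /= d[n - 1]
    b[n - 2] = b[n - 2] / d[n - 2] - e[n - 2] * b[n - 1]
    for k in range(n - 3, -1, -1):
        b[k] = b[k] / d[k] - e[k] * b[k + 1] - f[k] * b[k + 2]
    return b
```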



5. Symmetric and Banded Coefficient Matrices

Example 13 (LDLt Decomposition)

Determine L and D such that A = LDL^T, and solve AX = B for X, provided that A is a symmetric pentadiagonal matrix with

$$A = \begin{pmatrix} 3 & 6 & 3 \\ 6 & 14 & 4 \\ 3 & 4 & 9 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 \\ -2 \\ 6 \end{pmatrix}$$



6. Gauss Elimination with Scaled Row Pivoting

Definition 14 (Diagonal Dominance)

An n × n matrix A is said to be diagonally dominant if each diagonal element is larger than the sum of the other elements in the same row (we are talking here about absolute values). Thus diagonal dominance requires that

$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}|, \quad i = 1, 2, ..., n.$$

Example 15

Matrix A is not diagonally dominant, but matrix B, obtained from A by rearranging its rows in the following manner, is diagonally dominant:

$$A = \begin{pmatrix} 1 & 4 & -2 \\ 2 & 0 & -3 \\ 3 & -1 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 3 & -1 & 1 \\ 1 & 4 & -2 \\ 2 & 0 & -3 \end{pmatrix}$$



6. Gauss Elimination with Scaled Row Pivoting

1 Consider the solution of AX = B by Gauss elimination with row pivoting.
2 Pivoting aims at improving the diagonal dominance of the coefficient matrix.
3 That is, making the pivot element as large as possible in comparison to the other elements in the pivot row.
4 The comparison is made easier if we establish an array s with the elements $s_i = \max_j |a_{ij}|$, i = 1, 2, ..., n.
5 Thus s_i, called the scale factor of row i, contains the absolute value of the largest element in the i-th row of A.
6 The relative size of an element a_ij (that is, relative to the largest element in the i-th row) is defined as the ratio r_ij = |a_ij|/s_i.
7 Suppose that the elimination phase has reached the stage where the k-th row has become the pivot row.
8 The augmented coefficient matrix at this point is shown in the following matrix:
6. Gauss Elimination with Scaled Row Pivoting

$$\left(\begin{array}{cccccc|c}
a_{11} & a_{12} & a_{13} & a_{14} & \cdots & a_{1n} & b_1 \\
0 & a_{22} & a_{23} & a_{24} & \cdots & a_{2n} & b_2 \\
0 & 0 & a_{33} & a_{34} & \cdots & a_{3n} & b_3 \\
\vdots & \vdots & \vdots & \vdots & \cdots & \vdots & \vdots \\
0 & \cdots & 0 & a_{kk} & \cdots & a_{kn} & b_k \\
\vdots & \cdots & \vdots & \vdots & \cdots & \vdots & \vdots \\
0 & \cdots & 0 & a_{nk} & \cdots & a_{nn} & b_n
\end{array}\right)$$

9 We do not automatically accept a_kk as the next pivot element, but look in the k-th column below a_kk for a "better" pivot.
10 The best choice is the element a_pk that has the largest relative size; that is, we choose p such that $r_{pk} = \max_{i \ge k} r_{ik}$.
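A Python sketch of the full procedure (elimination with scaled row pivoting followed by backward substitution; assumes A is nonsingular):

```python
import numpy as np

def gauss_scaled_pivot(A, B):
    """Gauss elimination with scaled row pivoting, then backward substitution."""
    A, B = A.astype(float), B.astype(float)
    n = len(B)
    s = np.max(np.abs(A), axis=1)            # scale factors s_i
    for k in range(n - 1):
        # Pick the row p >= k with the largest relative size r_ik = |a_ik|/s_i.
        p = k + int(np.argmax(np.abs(A[k:, k]) / s[k:]))
        if p != k:                           # interchange rows k and p
            A[[k, p]] = A[[p, k]]
            B[[k, p]] = B[[p, k]]
            s[[k, p]] = s[[p, k]]
        for i in range(k + 1, n):
            r = A[i, k] / A[k, k]
            A[i, k:] -= r * A[k, k:]
            B[i] -= r * B[k]
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):           # backward substitution
        X[i] = (B[i] - A[i, i + 1:] @ X[i + 1:]) / A[i, i]
    return X
```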



6. Gauss Elimination with Scaled Row Pivoting

Example 16
Employ Gauss elimination with scaled row pivoting to solve the equations AX = B, where

$$A = \begin{pmatrix} 2 & -3 & 4 & -2 \\ -2 & 4 & 1 & -3 \\ 5 & -4 & 3 & -2 \\ 3 & 2 & -4 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} 9 \\ -27 \\ 17 \\ 4 \end{pmatrix}.$$



7. The Jacobi and Gauss-Seidel Iterative Techniques

Definition 17 (Jacobi Method)

The Jacobi iterative method is obtained by solving the i-th equation in AX = B for x_i to obtain (provided a_ii ≠ 0)

$$x_i = \frac{b_i}{a_{ii}} - \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{a_{ij}}{a_{ii}} x_j, \quad i = 1, 2, ..., n.$$

For each k ≥ 1, generate the components $x_i^{(k)}$ of $X^{(k)}$ from the components of $X^{(k-1)}$ by

$$x_i^{(k)} = \frac{b_i}{a_{ii}} - \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{a_{ij}}{a_{ii}} x_j^{(k-1)}, \quad i = 1, 2, ..., n.$$
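A Python sketch of the iteration (it assumes a_ii ≠ 0 and uses a fixed iteration count rather than a convergence test):

```python
import numpy as np

def jacobi(A, B, x0, iterations):
    """Jacobi iteration: every component of x^(k) is computed from the
    full previous iterate x^(k-1)."""
    n = len(B)
    x = x0.astype(float)
    for _ in range(iterations):
        x_new = np.empty(n)
        for i in range(n):
            # sum of a_ij * x_j^(k-1) over j != i
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_new[i] = (B[i] - sigma) / A[i, i]
        x = x_new
    return x
```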



7. The Jacobi and Gauss-Seidel Iterative Techniques

As an illustration, consider a 3 × 3 system of linear equations solved for x_1, x_2, x_3:

$$\begin{cases} x_1 = \dfrac{b_1}{a_{11}} - \dfrac{a_{12}}{a_{11}} x_2 - \dfrac{a_{13}}{a_{11}} x_3 \\[2mm] x_2 = \dfrac{b_2}{a_{22}} - \dfrac{a_{21}}{a_{22}} x_1 - \dfrac{a_{23}}{a_{22}} x_3 \\[2mm] x_3 = \dfrac{b_3}{a_{33}} - \dfrac{a_{31}}{a_{33}} x_1 - \dfrac{a_{32}}{a_{33}} x_2 \end{cases}$$

The Jacobi iteration is defined to be

$$\begin{cases} x_1^{(k)} = \dfrac{b_1}{a_{11}} - \dfrac{a_{12}}{a_{11}} x_2^{(k-1)} - \dfrac{a_{13}}{a_{11}} x_3^{(k-1)} \\[2mm] x_2^{(k)} = \dfrac{b_2}{a_{22}} - \dfrac{a_{21}}{a_{22}} x_1^{(k-1)} - \dfrac{a_{23}}{a_{22}} x_3^{(k-1)} \\[2mm] x_3^{(k)} = \dfrac{b_3}{a_{33}} - \dfrac{a_{31}}{a_{33}} x_1^{(k-1)} - \dfrac{a_{32}}{a_{33}} x_2^{(k-1)} \end{cases}$$



7. The Jacobi and Gauss-Seidel Iterative Techniques

1 In the Jacobi method, the components of $X^{(k-1)}$ are used to compute all the components $x_i^{(k)}$ of $X^{(k)}$.
2 But, for i > 1, the components $x_1^{(k)}, ..., x_{i-1}^{(k)}$ of $X^{(k)}$ have already been computed and are expected to be better approximations to the actual solutions $x_1, ..., x_{i-1}$ than are $x_1^{(k-1)}, ..., x_{i-1}^{(k-1)}$.
3 It seems reasonable, then, to compute $x_i^{(k)}$ using these most recently calculated values. That is, to use

$$\begin{cases} x_1^{(k)} = \dfrac{b_1}{a_{11}} - \dfrac{a_{12}}{a_{11}} x_2^{(k-1)} - \dfrac{a_{13}}{a_{11}} x_3^{(k-1)} \\[2mm] x_2^{(k)} = \dfrac{b_2}{a_{22}} - \dfrac{a_{21}}{a_{22}} x_1^{(k)} - \dfrac{a_{23}}{a_{22}} x_3^{(k-1)} \\[2mm] x_3^{(k)} = \dfrac{b_3}{a_{33}} - \dfrac{a_{31}}{a_{33}} x_1^{(k)} - \dfrac{a_{32}}{a_{33}} x_2^{(k)} \end{cases}$$



7. The Jacobi and Gauss-Seidel Iterative Techniques

Definition 18 (Gauss-Seidel Method)

The Gauss-Seidel iterative method is a modification of Jacobi obtained by replacing $x_j^{(k-1)}$ with $x_j^{(k)}$ for j = 1, 2, ..., i − 1 in the i-th equation:

$$x_i^{(k)} = \frac{b_i}{a_{ii}} - \sum_{j=1}^{i-1} \frac{a_{ij}}{a_{ii}} x_j^{(k)} - \sum_{j=i+1}^{n} \frac{a_{ij}}{a_{ii}} x_j^{(k-1)}, \quad i = 1, 2, ..., n.$$
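A Python sketch of the iteration; updating x in place is exactly what makes each sweep use the most recently computed components:

```python
import numpy as np

def gauss_seidel(A, B, x0, iterations):
    """Gauss-Seidel iteration: components already updated in sweep k are
    used immediately for the remaining components.

    Assumes a_ii != 0 and a fixed iteration count.
    """
    n = len(B)
    x = x0.astype(float)
    for _ in range(iterations):
        for i in range(n):
            # x[:i] holds the new x_j^(k); x[i+1:] still holds x_j^(k-1).
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (B[i] - sigma) / A[i, i]
    return x
```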



7. The Jacobi and Gauss-Seidel Iterative Techniques

Example 19

Solve AX = B for X = (x_1, x_2, x_3)^T, where

$$A = \begin{pmatrix} 2 & -1 & 2 \\ 4 & -2 & 2 \\ 3 & -1 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 3 \\ 4 \\ 4 \end{pmatrix}$$

1 using the Jacobi method with two iterations, and
2 using the Gauss-Seidel method with two iterations.



8. Relaxation Techniques

1 We introduce some notation for the diagonal and off-diagonal parts of the coefficient matrix A of the equation AX = B, as follows:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix} - \begin{pmatrix} 0 & 0 & 0 \\ -a_{21} & 0 & 0 \\ -a_{31} & -a_{32} & 0 \end{pmatrix} - \begin{pmatrix} 0 & -a_{12} & -a_{13} \\ 0 & 0 & -a_{23} \\ 0 & 0 & 0 \end{pmatrix} = D - L - U$$

2 The equation AX = B can be rewritten as (D − L − U)X = B, and

$$DX = (L + U)X + B \;\Rightarrow\; X = D^{-1}(L + U)X + D^{-1}B$$
$$(D - L)X = UX + B \;\Rightarrow\; X = (D - L)^{-1}UX + (D - L)^{-1}B$$



8. Relaxation Techniques
3 The Jacobi and Gauss-Seidel methods can be written in the form $X^{(k)} = D^{-1}(L + U)X^{(k-1)} + D^{-1}B = T_J X^{(k-1)} + C_J$ and $X^{(k)} = (D - L)^{-1}UX^{(k-1)} + (D - L)^{-1}B = T_G X^{(k-1)} + C_G$, respectively.
4 To study the convergence of general iteration techniques, we need to analyze the formula $X^{(k)} = TX^{(k-1)} + C$, for k = 1, 2, ..., where $X^{(0)}$ is arbitrary.
5 The Relaxation method represents a slight modification of the Gauss-Seidel method that is designed to enhance convergence.
6 After each new value of x is computed using the Gauss-Seidel formula, that value is modified by a weighted average of the results of the previous and the present iterations:

$$x_i^{(k)} = \omega\left(\frac{b_i}{a_{ii}} - \sum_{j=1}^{i-1} \frac{a_{ij}}{a_{ii}} x_j^{(k)} - \sum_{j=i+1}^{n} \frac{a_{ij}}{a_{ii}} x_j^{(k-1)}\right) + (1 - \omega)x_i^{(k-1)}$$
8. Relaxation Techniques

7 The relaxation factor 𝜔 is chosen to be positive: if 0 < 𝜔 < 1, the method is called under-relaxation; if 𝜔 > 1, it is called over-relaxation; and if 𝜔 = 1, there is no relaxation (the unmodified Gauss-Seidel method).
8 To determine the matrix form of the Relaxation method, we rewrite the formula as

$$a_{ii} x_i^{(k)} - \omega \sum_{j=1}^{i-1} (-a_{ij}) x_j^{(k)} = (1 - \omega)a_{ii} x_i^{(k-1)} + \omega \sum_{j=i+1}^{n} (-a_{ij}) x_j^{(k-1)} + \omega b_i$$
$$(D - \omega L)X^{(k)} = [(1 - \omega)D + \omega U]X^{(k-1)} + \omega B$$
$$X^{(k)} = T_\omega X^{(k-1)} + C_\omega$$

where

$$T_\omega = (D - \omega L)^{-1}[(1 - \omega)D + \omega U], \qquad C_\omega = \omega(D - \omega L)^{-1}B$$
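A Python sketch of the componentwise iteration (𝜔 = 1 reproduces the Gauss-Seidel method):

```python
import numpy as np

def relaxation(A, B, x0, omega, iterations):
    """Relaxation (SOR) iteration: a weighted average of the previous
    value and the Gauss-Seidel update, with relaxation factor omega."""
    n = len(B)
    x = x0.astype(float)
    for _ in range(iterations):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (B[i] - sigma) / A[i, i]            # Gauss-Seidel value
            x[i] = omega * gs + (1 - omega) * x[i]   # relaxed update
    return x
```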
8. Relaxation Techniques

Definition 20 (Spectral Radius)

The spectral radius 𝜌(A) of a square matrix A is defined by 𝜌(A) = max |𝜆|, where 𝜆 is an eigenvalue of A and |𝜆| is the absolute value or modulus of 𝜆.

Theorem 21

For any $X^{(0)} \in \mathbb{R}^n$, the sequence $\{X^{(k)}\}_{k=0}^{\infty}$ defined by $X^{(k)} = TX^{(k-1)} + C$, for each k ≥ 1, converges to the unique solution of X = TX + C if and only if 𝜌(T) < 1.



8. Relaxation Techniques

Theorem 22 (Kahan)

If aii ≠ 0 for each i = 1, 2, ..., n, then 𝜌(T𝜔) ≥ |𝜔 − 1|. This implies that the Relaxation method can converge only if 0 < 𝜔 < 2.

Theorem 23 (Ostrowski-Reich)

If A is a positive definite matrix and 0 < 𝜔 < 2, then the Relaxation method converges for any choice of initial approximate vector X^(0).

Theorem 24

If A is positive definite and tridiagonal, then 𝜌(TG) = [𝜌(TJ)]² < 1, and the optimal choice of 𝜔 for the Relaxation method is

$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_J)]^2}}.$$

With this choice of 𝜔, we have 𝜌(T𝜔) = 𝜔 − 1.


8. Relaxation Techniques

Example 25 (Relaxation)
Consider an equation AX = B with

$$A = \begin{pmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 24 \\ 30 \\ -24 \end{pmatrix}.$$

1 Show that A is positive definite (|Ak| > 0: all of its leading principal submatrices have positive determinant).
2 Split A = D − L − U and compute TJ = D⁻¹(L + U).
3 Find all eigenvalues of TJ, then determine the spectral radius 𝜌(TJ) of the Jacobi matrix TJ and deduce the optimal value of the relaxation factor 𝜔.
4 Use a computer program to solve the equation using the Relaxation method with initial guess X^(0) = (1, 1, 1)^T, as sketched below.
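One possible program for part 4, a Python sketch that also carries out parts 2 and 3 numerically (using the identity $T_J = D^{-1}(L + U) = I - D^{-1}A$):

```python
import numpy as np

# Data of Example 25.
A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
B = np.array([24.0, 30.0, -24.0])

# Part 2: T_J = D^{-1}(L + U) = I - D^{-1} A.
T_J = np.eye(3) - np.diag(1.0 / np.diag(A)) @ A

# Part 3: spectral radius of T_J and the optimal relaxation factor.
rho = max(abs(np.linalg.eigvals(T_J)))
omega = 2.0 / (1.0 + np.sqrt(1.0 - rho ** 2))
print("rho(T_J) =", rho, " optimal omega =", omega)

# Part 4: Relaxation iteration with X^(0) = (1, 1, 1)^T.
x = np.array([1.0, 1.0, 1.0])
for _ in range(25):
    for i in range(3):
        sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = omega * (B[i] - sigma) / A[i, i] + (1 - omega) * x[i]
print("X =", x)
```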
