ACM_lecture-2-1

The document provides an overview of computational methods, including error analysis, numerical techniques, and Gaussian elimination. It details the steps involved in forward elimination and back substitution for solving simultaneous linear equations. Additionally, it includes examples of applying these methods to velocity data and polynomial approximation.

Uploaded by Gebrekiros Araya

Summary

 Introduction about Computational Methods
 Computation and Error Analysis
 True Error
 Relative True Error
 Approximate Error
 Relative Approximate Error
 Causes of Errors
 Taylor Series
 Numerical Techniques
 Graphical Method
 Bisection Method / Bracketing Methods
 Newton-Raphson Method
 Secant Method
 False Position Method

Outline

 Gaussian Elimination
 Gauss-Seidel Method
 LU Decomposition
Gaussian Elimination
Naïve Gaussian Elimination

A method to solve simultaneous linear equations of the form [A][X] = [C].

Two steps:
1. Forward Elimination
2. Back Substitution
Forward Elimination
The goal of forward elimination is to transform the coefficient matrix into an upper triangular matrix:

[ 25   5  1] [x1]   [106.8]
[ 64   8  1] [x2] = [177.2]
[144  12  1] [x3]   [279.2]

becomes

[25    5     1   ] [x1]   [106.8 ]
[ 0  −4.8  −1.56 ] [x2] = [−96.21]
[ 0    0    0.7  ] [x3]   [0.735 ]
Forward Elimination
A set of n equations and n unknowns:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
...
an1x1 + an2x2 + an3x3 + ... + annxn = bn

Forward elimination takes (n-1) steps.
Forward Elimination
Step 1
For Equation 2, divide Equation 1 by a11 and multiply by a21:

(a21/a11)(a11x1 + a12x2 + a13x3 + ... + a1nxn = b1)

a21x1 + (a21/a11)a12x2 + ... + (a21/a11)a1nxn = (a21/a11)b1
Forward Elimination
Subtract the result from Equation 2:

   a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
− [a21x1 + (a21/a11)a12x2 + ... + (a21/a11)a1nxn = (a21/a11)b1]
_________________________________________________
(a22 − (a21/a11)a12)x2 + ... + (a2n − (a21/a11)a1n)xn = b2 − (a21/a11)b1

or

a'22x2 + ... + a'2nxn = b'2
Forward Elimination
Repeat this procedure for the remaining equations to reduce the set of equations as:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
        a'22x2 + a'23x3 + ... + a'2nxn = b'2
        a'32x2 + a'33x3 + ... + a'3nxn = b'3
        ...
        a'n2x2 + a'n3x3 + ... + a'nnxn = b'n

End of Step 1
Forward Elimination
Step 2
Repeat the same procedure for the 3rd term of Equation 3:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
        a'22x2 + a'23x3 + ... + a'2nxn = b'2
                a"33x3 + ... + a"3nxn = b"3
                ...
                a"n3x3 + ... + a"nnxn = b"n

End of Step 2
Forward Elimination
At the end of (n-1) steps of forward elimination, the system of equations will look like:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
        a'22x2 + a'23x3 + ... + a'2nxn = b'2
                a"33x3 + ... + a"3nxn = b"3
                ...
                        ann^(n-1) xn = bn^(n-1)

End of Step (n-1)

Matrix Form at End of Forward Elimination

[a11  a12   a13  ...  a1n      ] [x1]   [b1      ]
[ 0   a'22  a'23 ...  a'2n     ] [x2]   [b'2     ]
[ 0    0    a"33 ...  a"3n     ] [x3] = [b"3     ]
[ :    :     :         :       ] [: ]   [:       ]
[ 0    0     0   0  ann^(n-1)  ] [xn]   [bn^(n-1)]
Back Substitution
Solve each equation starting from the last equation:

[25    5     1   ] [x1]   [106.8 ]
[ 0  −4.8  −1.56 ] [x2] = [−96.21]
[ 0    0    0.7  ] [x3]   [0.735 ]

Example of a system of 3 equations
Back Substitution Starting Eqns

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
        a'22x2 + a'23x3 + ... + a'2nxn = b'2
                a"33x3 + ... + a"3nxn = b"3
                ...
                        ann^(n-1) xn = bn^(n-1)
Back Substitution
Start with the last equation because it has only one unknown:

xn = bn^(n-1) / ann^(n-1)

Then solve the remaining equations from the bottom up:

xi = (bi^(i-1) − Σ[j=i+1..n] aij^(i-1) xj) / aii^(i-1),   for i = n−1, ..., 1
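The bottom-up formula above translates directly into code. A minimal Python sketch (illustrative; `back_substitute` is not from the lecture):

```python
def back_substitute(U, c):
    """Solve an upper triangular system [U][x] = [c], last equation first."""
    n = len(U)
    x = [0.0] * n
    x[n - 1] = c[n - 1] / U[n - 1][n - 1]   # x_n = b_n / a_nn
    for i in range(n - 2, -1, -1):          # i = n-1, ..., 1 in the slide's numbering
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# Triangular system from the worked example in this lecture:
U = [[25.0, 5.0, 1.0], [0.0, -4.8, -1.56], [0.0, 0.0, 0.7]]
c = [106.8, -96.208, 0.76]
x = back_substitute(U, c)   # approximately [0.290472, 19.6905, 1.08571]
```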
THE END
Naïve Gauss Elimination
Example

Example 1

Table 1. Velocity vs. time data.

Time, t (s)    Velocity, v (m/s)
5              106.8
8              177.2
12             279.2

The velocity data is approximated by a polynomial as:

v(t) = a1t^2 + a2t + a3,   5 ≤ t ≤ 12.

Find the velocity at t = 6 seconds.
Example 1 Cont.
Assume

v(t) = a1t^2 + a2t + a3,   5 ≤ t ≤ 12.

This results in a matrix template of the form:

[t1^2  t1  1] [a1]   [v1]
[t2^2  t2  1] [a2] = [v2]
[t3^2  t3  1] [a3]   [v3]

Using data from Table 1, the matrix becomes:

[ 25   5  1] [a1]   [106.8]
[ 64   8  1] [a2] = [177.2]
[144  12  1] [a3]   [279.2]
Example 1 Cont.

[ 25   5  1] [a1]   [106.8]        [ 25   5  1 | 106.8]
[ 64   8  1] [a2] = [177.2]   →    [ 64   8  1 | 177.2]
[144  12  1] [a3]   [279.2]        [144  12  1 | 279.2]

1. Forward Elimination
2. Back Substitution
Forward Elimination
Number of Steps of Forward Elimination

Number of steps of forward elimination is (n − 1) = (3 − 1) = 2.
Forward Elimination: Step 1

[ 25   5  1 | 106.8]
[ 64   8  1 | 177.2]       Divide Equation 1 by 25 and multiply it by 64: 64/25 = 2.56.
[144  12  1 | 279.2]

[25  5  1 | 106.8] × 2.56 gives [64  12.8  2.56 | 273.408]

Subtract the result from Equation 2:

  [64    8     1    | 177.2  ]
− [64   12.8   2.56 | 273.408]
  ____________________________
  [ 0  −4.8  −1.56  | −96.208]

Substitute the new equation for Equation 2:

[ 25    5     1    | 106.8  ]
[  0  −4.8  −1.56  | −96.208]
[144   12     1    | 279.2  ]
Forward Elimination: Step 1 (cont.)

[ 25    5     1    | 106.8  ]
[  0  −4.8  −1.56  | −96.208]       Divide Equation 1 by 25 and multiply it by 144: 144/25 = 5.76.
[144   12     1    | 279.2  ]

[25  5  1 | 106.8] × 5.76 gives [144  28.8  5.76 | 615.168]

Subtract the result from Equation 3:

  [144   12     1    | 279.2  ]
− [144   28.8   5.76 | 615.168]
  ______________________________
  [  0  −16.8  −4.76 | −335.968]

Substitute the new equation for Equation 3:

[25    5      1    | 106.8   ]
[ 0  −4.8   −1.56  | −96.208 ]
[ 0  −16.8  −4.76  | −335.968]
Forward Elimination: Step 2

[25    5      1    | 106.8   ]
[ 0  −4.8   −1.56  | −96.208 ]       Divide Equation 2 by −4.8 and multiply it by −16.8: −16.8 / −4.8 = 3.5.
[ 0  −16.8  −4.76  | −335.968]

[0  −4.8  −1.56 | −96.208] × 3.5 gives [0  −16.8  −5.46 | −336.728]

Subtract the result from Equation 3:

  [0  −16.8  −4.76 | −335.968]
− [0  −16.8  −5.46 | −336.728]
  ____________________________
  [0    0     0.7  |    0.76 ]

Substitute the new equation for Equation 3:

[25    5     1    | 106.8  ]
[ 0  −4.8  −1.56  | −96.208]
[ 0    0    0.7   |   0.76 ]
Back Substitution

[25    5     1    | 106.8  ]       [25    5     1   ] [a1]   [ 106.8  ]
[ 0  −4.8  −1.56  | −96.208]   →   [ 0  −4.8  −1.56 ] [a2] = [ −96.208]
[ 0    0    0.7   |   0.76 ]       [ 0    0    0.7  ] [a3]   [  0.76  ]

Solving for a3:
0.7 a3 = 0.76
a3 = 0.76 / 0.7 = 1.08571
Back Substitution (cont.)

[25    5     1   ] [a1]   [ 106.8  ]
[ 0  −4.8  −1.56 ] [a2] = [ −96.208]
[ 0    0    0.7  ] [a3]   [  0.76  ]

Solving for a2:
−4.8 a2 − 1.56 a3 = −96.208
a2 = (−96.208 + 1.56 a3) / (−4.8)
   = (−96.208 + 1.56 × 1.08571) / (−4.8)
   = 19.6905
Back Substitution (cont.)

[25    5     1   ] [a1]   [ 106.8  ]
[ 0  −4.8  −1.56 ] [a2] = [ −96.208]
[ 0    0    0.7  ] [a3]   [  0.76  ]

Solving for a1:
25 a1 + 5 a2 + a3 = 106.8
a1 = (106.8 − 5 a2 − a3) / 25
   = (106.8 − 5 × 19.6905 − 1.08571) / 25
   = 0.290472
Naïve Gaussian Elimination Solution

[ 25   5  1] [a1]   [106.8]
[ 64   8  1] [a2] = [177.2]
[144  12  1] [a3]   [279.2]

[a1]   [0.290472]
[a2] = [ 19.6905]
[a3]   [ 1.08571]
Example 1 Cont.
Solution
The solution vector is:

[a1]   [0.290472]
[a2] = [ 19.6905]
[a3]   [ 1.08571]

The polynomial that passes through the three data points is then:

v(t) = a1t^2 + a2t + a3
     = 0.290472 t^2 + 19.6905 t + 1.08571,   5 ≤ t ≤ 12

v(6) = 0.290472 × 6^2 + 19.6905 × 6 + 1.08571
     = 129.686 m/s.
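The whole worked example, forward elimination followed by back substitution, can be reproduced in a few lines. A hedged Python sketch (names are illustrative, not from the lecture):

```python
def naive_gauss_solve(A, b):
    """Naive Gauss elimination: forward elimination, then back substitution."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies; leave the caller's data intact
    b = b[:]
    for k in range(n - 1):                  # forward elimination
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

a1, a2, a3 = naive_gauss_solve(
    [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]],
    [106.8, 177.2, 279.2])
v6 = a1 * 6**2 + a2 * 6 + a3    # about 129.686 m/s, matching the slide
```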
THE END
Naïve Gauss Elimination
Pitfalls

Pitfall #1: Division by zero

10x2 − 7x3 = 3
6x1 + 2x2 + 3x3 = 11
5x1 − x2 + 5x3 = 9

[0  10  −7] [x1]   [ 3]
[6   2   3] [x2] = [11]
[5  −1   5] [x3]   [ 9]

Is division by zero an issue here? Yes: a11 = 0, so the very first elimination step fails.

12x1 + 10x2 − 7x3 = 15
6x1 + 5x2 + 3x3 = 14
5x1 − x2 + 5x3 = 9

[12  10  −7] [x1]   [15]
[ 6   5   3] [x2] = [14]
[ 5  −1   5] [x3]   [ 9]

Is division by zero an issue here? YES: after the first step, the new pivot a'22 = 5 − (6/12)(10) = 0.

12x1 + 10x2 − 7x3 = 15
6x1 + 5x2 + 3x3 = 14
24x1 − x2 + 5x3 = 28

[12  10  −7] [x1]   [15]       [12   10   −7 ] [x1]   [ 15]
[ 6   5   3] [x2] = [14]   →   [ 0    0   6.5] [x2] = [6.5]
[24  −1   5] [x3]   [28]       [ 0  −21   19 ] [x3]   [ −2]

Division by zero is a possibility at any step of forward elimination.
Pitfall #2: Large Round-off Errors

[20    15     10] [x1]   [45   ]
[−3  −2.249    7] [x2] = [1.751]
[ 5     1      3] [x3]   [9    ]

Exact solution:
[x1]   [1]
[x2] = [1]
[x3]   [1]

Solve it on a computer using 6 significant digits with chopping:
[x1]   [0.9625  ]
[x2] = [1.05    ]
[x3]   [0.999995]

Solve it on a computer using 5 significant digits with chopping:
[x1]   [0.625  ]
[x2] = [1.5    ]
[x3]   [0.99995]

Is there a way to reduce the round-off error?
Avoiding Pitfalls
Increase the number of significant digits
• Decreases round-off error
• Does not avoid division by zero
Avoiding Pitfalls

Gaussian Elimination with Partial Pivoting
• Avoids division by zero
• Reduces round-off error
THE END
Gauss Elimination
with Partial Pivoting
Pitfalls of Naïve Gauss
Elimination
Possible division by zero
Large round-off errors
Avoiding Pitfalls
Increase the number of significant digits
• Decreases round-off error
• Does not avoid division by zero
Avoiding Pitfalls

Gaussian Elimination with Partial Pivoting
• Avoids division by zero
• Reduces round-off error
What is Different About Partial Pivoting?

At the beginning of the kth step of forward elimination, find the maximum of

|akk|, |ak+1,k|, ..., |ank|

If the maximum of these values is |apk| in the pth row, k ≤ p ≤ n, then switch rows p and k.
Matrix Form at Beginning of 2nd Step of Forward Elimination

[a11  a12   a13  ...  a1n ] [x1]   [b1 ]
[ 0   a'22  a'23 ...  a'2n] [x2]   [b'2]
[ 0   a'32  a'33 ...  a'3n] [x3] = [b'3]
[ :    :     :         :  ] [: ]   [:  ]
[ 0   a'n2  a'n3 ...  a'nn] [xn]   [b'n]
Example (2nd step of FE)

[6  14  5.1  3.7   6] [x1]   [5]
[0   7   6    1    2] [x2]   [6]
[0   4  12    1   11] [x3] = [8]
[0   9  23    6    8] [x4]   [9]
[0  17  12   11   43] [x5]   [3]

Which two rows would you switch?

[6  14  5.1  3.7   6] [x1]   [5]
[0  17  12   11   43] [x2]   [3]
[0   4  12    1   11] [x3] = [8]
[0   9  23    6    8] [x4]   [9]
[0   7   6    1    2] [x5]   [6]

Switched rows: the entry of largest magnitude in column 2 (17) moves to the pivot row.
Gaussian Elimination with Partial Pivoting
A method to solve simultaneous linear equations of the form [A][X] = [C].

Two steps:
1. Forward Elimination
2. Back Substitution

Forward Elimination

Same as the naïve Gauss elimination method, except that we switch rows before each of the (n-1) steps of forward elimination.
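The row-switching rule amounts to one extra search-and-swap before each elimination step. A hedged Python sketch (illustrative names; not code from the lecture):

```python
def gauss_partial_pivot(A, b):
    """Gauss elimination with partial pivoting: before each elimination step k,
    swap the pivot row with the row holding the largest |A[i][k]|, i >= k."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # find row p (k <= p < n) with the largest magnitude in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Same system as the lecture's Example 2; pivoting swaps rows 1/3, then 2/3:
x = gauss_partial_pivot([[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]],
                        [106.8, 177.2, 279.2])
```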
Example: Matrix Form at Beginning of 2nd Step of Forward Elimination

[a11  a12   a13  ...  a1n ] [x1]   [b1 ]
[ 0   a'22  a'23 ...  a'2n] [x2]   [b'2]
[ 0   a'32  a'33 ...  a'3n] [x3] = [b'3]
[ :    :     :         :  ] [: ]   [:  ]
[ 0   a'n2  a'n3 ...  a'nn] [xn]   [b'n]
Matrix Form at End of Forward Elimination

[a11  a12   a13  ...  a1n      ] [x1]   [b1      ]
[ 0   a'22  a'23 ...  a'2n     ] [x2]   [b'2     ]
[ 0    0    a"33 ...  a"3n     ] [x3] = [b"3     ]
[ :    :     :         :       ] [: ]   [:       ]
[ 0    0     0   0  ann^(n-1)  ] [xn]   [bn^(n-1)]
Back Substitution Starting Eqns

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
        a'22x2 + a'23x3 + ... + a'2nxn = b'2
                a"33x3 + ... + a"3nxn = b"3
                ...
                        ann^(n-1) xn = bn^(n-1)

Back Substitution

xn = bn^(n-1) / ann^(n-1)

xi = (bi^(i-1) − Σ[j=i+1..n] aij^(i-1) xj) / aii^(i-1),   for i = n−1, ..., 1
THE END
Gauss Elimination with Partial Pivoting
Example

Example 2
Solve the following set of equations by Gaussian elimination with partial pivoting:

[ 25   5  1] [a1]   [106.8]
[ 64   8  1] [a2] = [177.2]
[144  12  1] [a3]   [279.2]
Example 2 Cont.

[ 25   5  1] [a1]   [106.8]        [ 25   5  1 | 106.8]
[ 64   8  1] [a2] = [177.2]   →    [ 64   8  1 | 177.2]
[144  12  1] [a3]   [279.2]        [144  12  1 | 279.2]

1. Forward Elimination
2. Back Substitution
Forward Elimination
Number of Steps of Forward Elimination

Number of steps of forward elimination is (n − 1) = (3 − 1) = 2.
Forward Elimination: Step 1
• Examine absolute values of the first column, first row and below: 25, 64, 144.
• Largest absolute value is 144 and exists in row 3.
• Switch row 1 and row 3.

[ 25   5  1 | 106.8]       [144  12  1 | 279.2]
[ 64   8  1 | 177.2]   →   [ 64   8  1 | 177.2]
[144  12  1 | 279.2]       [ 25   5  1 | 106.8]
Forward Elimination: Step 1 (cont.)

[144  12  1 | 279.2]
[ 64   8  1 | 177.2]       Divide Equation 1 by 144 and multiply it by 64: 64/144 = 0.4444.
[ 25   5  1 | 106.8]

[144  12  1 | 279.2] × 0.4444 gives [63.99  5.333  0.4444 | 124.1]

Subtract the result from Equation 2:

  [64     8      1      | 177.2]
− [63.99  5.333  0.4444 | 124.1]
  ______________________________
  [ 0     2.667  0.5556 | 53.10]

Substitute the new equation for Equation 2:

[144  12     1      | 279.2]
[  0  2.667  0.5556 | 53.10]
[ 25   5     1      | 106.8]
Forward Elimination: Step 1 (cont.)

[144  12     1      | 279.2]
[  0  2.667  0.5556 | 53.10]       Divide Equation 1 by 144 and multiply it by 25: 25/144 = 0.1736.
[ 25   5     1      | 106.8]

[144  12  1 | 279.2] × 0.1736 gives [25.00  2.083  0.1736 | 48.47]

Subtract the result from Equation 3:

  [25     5      1      | 106.8]
− [25.00  2.083  0.1736 | 48.47]
  ______________________________
  [ 0     2.917  0.8264 | 58.33]

Substitute the new equation for Equation 3:

[144  12     1      | 279.2]
[  0  2.667  0.5556 | 53.10]
[  0  2.917  0.8264 | 58.33]
Forward Elimination: Step 2
• Examine absolute values of the second column, second row and below: 2.667, 2.917.
• Largest absolute value is 2.917 and exists in row 3.
• Switch row 2 and row 3.

[144  12     1      | 279.2]       [144  12     1      | 279.2]
[  0  2.667  0.5556 | 53.10]   →   [  0  2.917  0.8264 | 58.33]
[  0  2.917  0.8264 | 58.33]       [  0  2.667  0.5556 | 53.10]
Forward Elimination: Step 2 (cont.)

[144  12     1      | 279.2]
[  0  2.917  0.8264 | 58.33]       Divide Equation 2 by 2.917 and multiply it by 2.667: 2.667/2.917 = 0.9143.
[  0  2.667  0.5556 | 53.10]

[0  2.917  0.8264 | 58.33] × 0.9143 gives [0  2.667  0.7556 | 53.33]

Subtract the result from Equation 3:

  [0  2.667  0.5556 | 53.10]
− [0  2.667  0.7556 | 53.33]
  __________________________
  [0    0   −0.2    | −0.23]

Substitute the new equation for Equation 3:

[144  12     1      | 279.2]
[  0  2.917  0.8264 | 58.33]
[  0   0    −0.2    | −0.23]
Back Substitution

[144  12     1      | 279.2]       [144  12     1     ] [a1]   [279.2]
[  0  2.917  0.8264 | 58.33]   →   [  0  2.917  0.8264] [a2] = [58.33]
[  0   0    −0.2    | −0.23]       [  0   0    −0.2   ] [a3]   [−0.23]

Solving for a3:
−0.2 a3 = −0.23
a3 = −0.23 / −0.2 = 1.15
Back Substitution (cont.)

[144  12     1     ] [a1]   [279.2]
[  0  2.917  0.8264] [a2] = [58.33]
[  0   0    −0.2   ] [a3]   [−0.23]

Solving for a2:
2.917 a2 + 0.8264 a3 = 58.33
a2 = (58.33 − 0.8264 a3) / 2.917
   = (58.33 − 0.8264 × 1.15) / 2.917
   = 19.67
Back Substitution (cont.)

[144  12     1     ] [a1]   [279.2]
[  0  2.917  0.8264] [a2] = [58.33]
[  0   0    −0.2   ] [a3]   [−0.23]

Solving for a1:
144 a1 + 12 a2 + a3 = 279.2
a1 = (279.2 − 12 a2 − a3) / 144
   = (279.2 − 12 × 19.67 − 1.15) / 144
   = 0.2917
Gaussian Elimination with Partial Pivoting Solution

[ 25   5  1] [a1]   [106.8]
[ 64   8  1] [a2] = [177.2]
[144  12  1] [a3]   [279.2]

[a1]   [0.2917]
[a2] = [ 19.67]
[a3]   [  1.15]
Gauss Elimination with Partial Pivoting
Another Example

Partial Pivoting: Example
Consider the system of equations:

10x1 − 7x2 = 7
−3x1 + 2.099x2 + 6x3 = 3.901
5x1 − x2 + 5x3 = 6

In matrix form:

[10   −7      0] [x1]   [7    ]
[−3   2.099   6] [x2] = [3.901]
[ 5   −1      5] [x3]   [6    ]

Solve using Gaussian elimination with partial pivoting, using five significant digits with chopping.
Partial Pivoting: Example
Forward Elimination: Step 1
Examining the values of the first column: |10|, |−3|, and |5|, or 10, 3, and 5.
The largest absolute value is 10, which means, to follow the rules of partial pivoting, we switch row 1 with row 1 (no switch needed).
Performing forward elimination:

[10   −7      0] [x1]   [7    ]       [10   −7      0] [x1]   [7    ]
[−3   2.099   6] [x2] = [3.901]   →   [ 0  −0.001   6] [x2] = [6.001]
[ 5   −1      5] [x3]   [6    ]       [ 0   2.5     5] [x3]   [2.5  ]
Partial Pivoting: Example
Forward Elimination: Step 2
Examining the values of the second column, second row and below: |−0.001| and |2.5|, or 0.001 and 2.5.
The largest absolute value is 2.5, so row 2 is switched with row 3.

Performing the row swap:

[10   −7      0] [x1]   [7    ]       [10   −7      0] [x1]   [7    ]
[ 0  −0.001   6] [x2] = [6.001]   →   [ 0   2.5     5] [x2] = [2.5  ]
[ 0   2.5     5] [x3]   [2.5  ]       [ 0  −0.001   6] [x3]   [6.001]
Partial Pivoting: Example

Forward Elimination: Step 2

Performing the forward elimination results in:

[10   −7     0    ] [x1]   [7    ]
[ 0   2.5    5    ] [x2] = [2.5  ]
[ 0   0      6.002] [x3]   [6.002]
Partial Pivoting: Example

Back Substitution
Solving the equations through back substitution:

[10   −7     0    ] [x1]   [7    ]
[ 0   2.5    5    ] [x2] = [2.5  ]
[ 0   0      6.002] [x3]   [6.002]

x3 = 6.002 / 6.002 = 1

x2 = (2.5 − 5x3) / 2.5 = −1

x1 = (7 + 7x2 − 0x3) / 10 = 0
Partial Pivoting: Example
Compare the calculated and exact solutions.
The fact that they are exactly equal is a coincidence, but it does illustrate the advantage of partial pivoting:

              [x1]   [ 0]             [x1]   [ 0]
Xcalculated = [x2] = [−1]    Xexact = [x2] = [−1]
              [x3]   [ 1]             [x3]   [ 1]
THE END
Determinant of a Square Matrix
Using Naïve Gauss Elimination
Example

Theorem of Determinants

If a multiple of one row of [A]nxn is added to or subtracted from another row of [A]nxn to give [B]nxn, then det(A) = det(B).

Theorem of Determinants

The determinant of an upper triangular matrix [A]nxn is given by

det(A) = a11 × a22 × ... × aii × ... × ann = Π[i=1..n] aii
Forward Elimination of a Square Matrix

Using forward elimination to transform [A]nxn to an upper triangular matrix [U]nxn:

[A]nxn → [U]nxn
det(A) = det(U)
Example
Using naïve Gaussian elimination, find the determinant of the following square matrix:

[ 25   5  1]
[ 64   8  1]
[144  12  1]
Forward Elimination
Forward Elimination: Step 1

[ 25   5  1]
[ 64   8  1]       Divide Equation 1 by 25 and multiply it by 64: 64/25 = 2.56.
[144  12  1]

[25  5  1] × 2.56 gives [64  12.8  2.56]

Subtract the result from Equation 2:

  [64    8     1   ]
− [64   12.8   2.56]
  __________________
  [ 0  −4.8  −1.56 ]

Substitute the new equation for Equation 2:

[ 25    5     1   ]
[  0  −4.8  −1.56 ]
[144   12     1   ]
Forward Elimination: Step 1 (cont.)

[ 25    5     1   ]
[  0  −4.8  −1.56 ]       Divide Equation 1 by 25 and multiply it by 144: 144/25 = 5.76.
[144   12     1   ]

[25  5  1] × 5.76 gives [144  28.8  5.76]

Subtract the result from Equation 3:

  [144   12     1   ]
− [144   28.8   5.76]
  ___________________
  [  0  −16.8  −4.76]

Substitute the new equation for Equation 3:

[25    5      1   ]
[ 0  −4.8   −1.56 ]
[ 0  −16.8  −4.76 ]
Forward Elimination: Step 2

[25    5      1   ]
[ 0  −4.8   −1.56 ]       Divide Equation 2 by −4.8 and multiply it by −16.8: −16.8 / −4.8 = 3.5.
[ 0  −16.8  −4.76 ]

[0  −4.8  −1.56] × 3.5 gives [0  −16.8  −5.46]

Subtract the result from Equation 3:

  [0  −16.8  −4.76]
− [0  −16.8  −5.46]
  _________________
  [0    0     0.7 ]

Substitute the new equation for Equation 3:

[25    5     1   ]
[ 0  −4.8  −1.56 ]
[ 0    0    0.7  ]
Finding the Determinant
After forward elimination:

[ 25   5  1]       [25    5     1   ]
[ 64   8  1]   →   [ 0  −4.8  −1.56 ]
[144  12  1]       [ 0    0    0.7  ]

det(A) = u11 × u22 × u33
       = 25 × (−4.8) × 0.7
       = −84.00
Summary

-Forward Elimination
-Back Substitution
-Pitfalls
-Improvements
-Partial Pivoting
-Determinant of a Matrix
THE END
Gauss-Seidel Method

An iterative method.

Basic Procedure:
- Algebraically solve each linear equation for xi
- Assume an initial guess solution array
- Solve for each xi and repeat
- Use the absolute relative approximate error after each iteration to check whether the error is within a pre-specified tolerance.
Gauss-Seidel Method

Why?
The Gauss-Seidel Method allows the user to control round-off error.

Elimination methods such as Gaussian Elimination and LU Decomposition are prone to round-off error.

Also: if the physics of the problem are understood, a close initial guess can be made, decreasing the number of iterations needed.
Gauss-Seidel Method
Algorithm
A set of n equations and n unknowns:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
...
an1x1 + an2x2 + an3x3 + ... + annxn = bn

If the diagonal elements are non-zero, rewrite each equation solving for the corresponding unknown. For example: solve the first equation for x1, the second equation for x2, and so on.
Gauss-Seidel Method
Algorithm
Rewriting each equation:

x1 = (c1 − a12x2 − a13x3 − ... − a1nxn) / a11                                From Equation 1
x2 = (c2 − a21x1 − a23x3 − ... − a2nxn) / a22                                From Equation 2
...
x(n-1) = (c(n-1) − a(n-1),1x1 − a(n-1),2x2 − ... − a(n-1),n xn) / a(n-1),(n-1)    From Equation n-1
xn = (cn − an1x1 − an2x2 − ... − an,(n-1) x(n-1)) / ann                      From Equation n
Gauss-Seidel Method
Algorithm
General form of each equation:

x1 = (c1 − Σ[j=1..n, j≠1] a1j xj) / a11
x2 = (c2 − Σ[j=1..n, j≠2] a2j xj) / a22
...
x(n-1) = (c(n-1) − Σ[j=1..n, j≠n-1] a(n-1),j xj) / a(n-1),(n-1)
xn = (cn − Σ[j=1..n, j≠n] anj xj) / ann
Gauss-Seidel Method
Algorithm
General form for any row 'i':

xi = (ci − Σ[j=1..n, j≠i] aij xj) / aii,   i = 1, 2, ..., n.

How or where can this equation be used?
Gauss-Seidel Method

Solve for the unknowns

Assume an initial guess for [X]:

[x1  ]
[x2  ]
[ :  ]
[xn-1]
[xn  ]

Use the rewritten equations to solve for each value of xi.
Important: remember to use the most recent value of xi, which means applying values calculated in the current iteration to the calculations remaining in that iteration.
Gauss-Seidel Method

Calculate the absolute relative approximate error:

|εa|i = |(xi_new − xi_old) / xi_new| × 100

So when has the answer been found?

The iterations are stopped when the absolute relative approximate error is less than a prespecified tolerance for all unknowns.
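The whole procedure, rewrite each equation, sweep with the most recent values, and stop on the error criterion, can be sketched compactly. An illustrative Python sketch (names and the tolerance default are my own, not from the lecture):

```python
def gauss_seidel(A, c, x0, tol=1e-4, max_iter=100):
    """Gauss-Seidel iteration: solve each equation for its own unknown,
    always using the most recently updated values within a sweep.
    Stops when every absolute relative approximate error (in %) is below tol."""
    n = len(A)
    x = x0[:]
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (c[i] - s) / A[i][i]
            if x_new != 0.0:
                max_err = max(max_err, abs((x_new - x[i]) / x_new) * 100)
            x[i] = x_new   # used immediately by the remaining equations
        if max_err < tol:
            break
    return x

# Diagonally dominant system from Example 2 below; converges to about (1, 3, 4):
x = gauss_seidel([[12.0, 3.0, -5.0], [1.0, 5.0, 3.0], [3.0, 7.0, 13.0]],
                 [1.0, 28.0, 76.0], [1.0, 0.0, 1.0])
```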
Gauss-Seidel Method: Example 1
The upward velocity of a rocket is given at three different times.

Table 1. Velocity vs. time data.

Time, t (s)    Velocity, v (m/s)
5              106.8
8              177.2
12             279.2

The velocity data is approximated by a polynomial as:

v(t) = a1t^2 + a2t + a3,   5 ≤ t ≤ 12.
Gauss-Seidel Method: Example 1
Using a matrix template of the form:

[t1^2  t1  1] [a1]   [v1]
[t2^2  t2  1] [a2] = [v2]
[t3^2  t3  1] [a3]   [v3]

the system of equations becomes:

[ 25   5  1] [a1]   [106.8]
[ 64   8  1] [a2] = [177.2]
[144  12  1] [a3]   [279.2]

Initial guess: assume an initial guess of

[a1]   [1]
[a2] = [2]
[a3]   [5]
Gauss-Seidel Method: Example 1

Rewriting each equation:

[ 25   5  1] [a1]   [106.8]
[ 64   8  1] [a2] = [177.2]
[144  12  1] [a3]   [279.2]

a1 = (106.8 − 5a2 − a3) / 25
a2 = (177.2 − 64a1 − a3) / 8
a3 = (279.2 − 144a1 − 12a2) / 1
Gauss-Seidel Method: Example 1
Applying the initial guess and solving for ai:

Initial guess:
[a1]   [1]
[a2] = [2]
[a3]   [5]

a1 = (106.8 − 5(2) − (5)) / 25 = 3.6720
a2 = (177.2 − 64(3.6720) − (5)) / 8 = −7.8510
a3 = (279.2 − 144(3.6720) − 12(−7.8510)) / 1 = −155.36

When solving for a2, how many of the initial guess values were used?
Gauss-Seidel Method: Example 1
Finding the absolute relative approximate error:

|εa|i = |(xi_new − xi_old) / xi_new| × 100

At the end of the first iteration:

[a1]   [ 3.6720]
[a2] = [−7.8510]
[a3]   [−155.36]

|εa|1 = |(3.6720 − 1.0000) / 3.6720| × 100 = 72.76%
|εa|2 = |(−7.8510 − 2.0000) / (−7.8510)| × 100 = 125.47%
|εa|3 = |(−155.36 − 5.0000) / (−155.36)| × 100 = 103.22%

The maximum absolute relative approximate error is 125.47%.
Gauss-Seidel Method: Example 1
Iteration #2
Using the values of ai from iteration #1:

[a1]   [ 3.6720]
[a2] = [−7.8510]
[a3]   [−155.36]

the values of ai are found:

a1 = (106.8 − 5(−7.8510) − (−155.36)) / 25 = 12.056
a2 = (177.2 − 64(12.056) − (−155.36)) / 8 = −54.882
a3 = (279.2 − 144(12.056) − 12(−54.882)) / 1 = −798.34
Gauss-Seidel Method: Example 1
Finding the absolute relative approximate error at the end of the second iteration:

[a1]   [ 12.056]
[a2] = [−54.882]
[a3]   [−798.34]

|εa|1 = |(12.056 − 3.6720) / 12.056| × 100 = 69.543%
|εa|2 = |(−54.882 − (−7.8510)) / (−54.882)| × 100 = 85.695%
|εa|3 = |(−798.34 − (−155.36)) / (−798.34)| × 100 = 80.540%

The maximum absolute relative approximate error is 85.695%.
Gauss-Seidel Method: Example 1
Repeating more iterations, the following values are obtained:

Iteration   a1       |εa|1 %   a2        |εa|2 %   a3         |εa|3 %
1           3.6720   72.767    −7.8510   125.47    −155.36    103.22
2           12.056   69.543    −54.882   85.695    −798.34    80.540
3           47.182   74.447    −255.51   78.521    −3448.9    76.852
4           193.33   75.595    −1093.4   76.632    −14440     76.116
5           800.53   75.850    −4577.2   76.112    −60072     75.963
6           3322.6   75.906    −19049    75.972    −249580    75.931

Notice: the relative errors are not decreasing at any significant rate.

Also, the solution is not converging to the true solution of

[a1]   [0.29048]
[a2] = [ 19.690]
[a3]   [ 1.0857]
Gauss-Seidel Method: Pitfall
What went wrong?
Even though done correctly, the answer is not converging to the correct answer.
This example illustrates a pitfall of the Gauss-Seidel method: not all systems of equations will converge.

Is there a fix?
One class of system of equations always converges: one with a diagonally dominant coefficient matrix.

Diagonally dominant: [A] in [A][X] = [C] is diagonally dominant if:

|aii| ≥ Σ[j=1..n, j≠i] |aij|   for all 'i'

and

|aii| > Σ[j=1..n, j≠i] |aij|   for at least one 'i'
Gauss-Seidel Method: Pitfall
Diagonally dominant: the magnitude of the coefficient on the diagonal must be at least equal to the sum of the magnitudes of the other coefficients in that row, with at least one row whose diagonal coefficient is strictly greater than that sum.
Which coefficient matrix is diagonally dominant?

        [  2   5.81  34]          [124  34   56 ]
[A] =   [ 45   43     1]    [B] = [ 23  53    5 ]
        [123   16     1]          [ 96  34  129 ]

Most physical systems do result in simultaneous linear equations that have diagonally dominant coefficient matrices.
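The two conditions can be checked mechanically. A hedged Python sketch (the function name is my own), demonstrated on the coefficient matrix of Example 2 below, which is diagonally dominant:

```python
def is_diagonally_dominant(A):
    """Check |a_ii| >= sum of |a_ij| (j != i) for every row,
    with strict inequality in at least one row."""
    strictly = False
    for i, row in enumerate(A):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off:
            return False
        if abs(row[i]) > off:
            strictly = True
    return strictly

# Coefficient matrix of Example 2: every row satisfies the condition strictly.
A = [[12.0, 3.0, -5.0], [1.0, 5.0, 3.0], [3.0, 7.0, 13.0]]
```

Reordering the same rows as in Example 3 (with the 3, 7, 13 row first) makes the check fail, matching the divergence shown later in the lecture.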
Gauss-Seidel Method: Example 2
Given the system of equations:

12x1 + 3x2 − 5x3 = 1
x1 + 5x2 + 3x3 = 28
3x1 + 7x2 + 13x3 = 76

The coefficient matrix is:

        [12  3  −5]
[A] =   [ 1  5   3]
        [ 3  7  13]

With an initial guess of

[x1]   [1]
[x2] = [0]
[x3]   [1]

will the solution converge using the Gauss-Seidel method?
Gauss-Seidel Method: Example 2
Checking if the coefficient matrix is diagonally dominant:

|a11| = |12| = 12 ≥ |a12| + |a13| = |3| + |−5| = 8
|a22| = |5| = 5 ≥ |a21| + |a23| = |1| + |3| = 4
|a33| = |13| = 13 ≥ |a31| + |a32| = |3| + |7| = 10

The inequalities are all true and at least one row satisfies the strict inequality.
Therefore: the solution should converge using the Gauss-Seidel method.
Gauss-Seidel Method: Example 2
Rewriting each equation, with an initial guess of

[12  3  −5] [x1]   [ 1]        [x1]   [1]
[ 1  5   3] [x2] = [28]        [x2] = [0]
[ 3  7  13] [x3]   [76]        [x3]   [1]

x1 = (1 − 3x2 + 5x3) / 12 = (1 − 3(0) + 5(1)) / 12 = 0.50000
x2 = (28 − x1 − 3x3) / 5 = (28 − 0.50000 − 3(1)) / 5 = 4.9000
x3 = (76 − 3x1 − 7x2) / 13 = (76 − 3(0.50000) − 7(4.9000)) / 13 = 3.0923
Gauss-Seidel Method: Example 2
The absolute relative approximate error:

|εa|1 = |(0.50000 − 1.0000) / 0.50000| × 100 = 100.00%
|εa|2 = |(4.9000 − 0) / 4.9000| × 100 = 100.00%
|εa|3 = |(3.0923 − 1.0000) / 3.0923| × 100 = 67.662%

The maximum absolute relative error after the first iteration is 100%.
Gauss-Seidel Method: Example 2
After Iteration #1:

[x1]   [0.5000]
[x2] = [4.9000]
[x3]   [3.0923]

Substituting the x values into the equations:

x1 = (1 − 3(4.9000) + 5(3.0923)) / 12 = 0.14679
x2 = (28 − 0.14679 − 3(3.0923)) / 5 = 3.7153
x3 = (76 − 3(0.14679) − 7(3.7153)) / 13 = 3.8118

After Iteration #2:

[x1]   [0.14679]
[x2] = [3.7153 ]
[x3]   [3.8118 ]
Gauss-Seidel Method: Example 2
Iteration #2 absolute relative approximate error:

|εa|1 = |(0.14679 − 0.50000) / 0.14679| × 100 = 240.61%
|εa|2 = |(3.7153 − 4.9000) / 3.7153| × 100 = 31.889%
|εa|3 = |(3.8118 − 3.0923) / 3.8118| × 100 = 18.874%

The maximum absolute relative error after the second iteration is 240.61%.

This is much larger than the maximum absolute relative error obtained in iteration #1. Is this a problem?
Gauss-Seidel Method: Example 2
Repeating more iterations, the following values are obtained:

Iteration   x1        |εa|1 %   x2       |εa|2 %    x3       |εa|3 %
1           0.50000   100.00    4.9000   100.00     3.0923   67.662
2           0.14679   240.61    3.7153   31.889     3.8118   18.876
3           0.74275   80.236    3.1644   17.408     3.9708   4.0042
4           0.94675   21.546    3.0281   4.4996     3.9971   0.65772
5           0.99177   4.5391    3.0034   0.82499    4.0001   0.074383
6           0.99919   0.74307   3.0001   0.10856    4.0001   0.00101

The solution obtained

[x1]   [0.99919]
[x2] = [3.0001 ]
[x3]   [4.0001 ]

is close to the exact solution of

[x1]   [1]
[x2] = [3]
[x3]   [4]
Gauss-Seidel Method: Example 3
Given the system of equations:

3x1 + 7x2 + 13x3 = 76
x1 + 5x2 + 3x3 = 28
12x1 + 3x2 − 5x3 = 1

With an initial guess of

[x1]   [1]
[x2] = [0]
[x3]   [1]

Rewriting the equations:

x1 = (76 − 7x2 − 13x3) / 3
x2 = (28 − x1 − 3x3) / 5
x3 = (1 − 12x1 − 3x2) / (−5)
Gauss-Seidel Method: Example 3
Conducting six iterations, the following values are obtained:

Iteration   x1            |εa|1 %   x2            |εa|2 %   x3             |εa|3 %
1           21.000        95.238    0.80000       100.00    50.680         98.027
2           −196.15       110.71    14.421        94.453    −462.30        110.96
3           −1995.0       109.83    −116.02       112.43    4718.1         109.80
4           −20149        109.90    1204.6        109.63    −47636         109.90
5           2.0364×10^5   109.89    −12140        109.92    4.8144×10^5    109.89
6           −2.0579×10^6  109.89    1.2272×10^5   109.89    −4.8653×10^6   109.89

The values are not converging.

Does this mean that the Gauss-Seidel method cannot be used?
Gauss-Seidel Method

The Gauss-Seidel method can still be used.

The coefficient matrix is not diagonally dominant:

        [ 3  7  13]
[A] =   [ 1  5   3]
        [12  3  −5]

But this is the same set of equations used in Example #2, which did converge:

        [12  3  −5]
[A] =   [ 1  5   3]
        [ 3  7  13]

If a system of linear equations is not diagonally dominant, check to see if rearranging the equations can form a diagonally dominant matrix.
Gauss-Seidel Method

Not every system of equations can be rearranged to have a diagonally dominant coefficient matrix.
Observe the set of equations:

x1 + x2 + x3 = 3
2x1 + 3x2 + 4x3 = 9
x1 + 7x2 + x3 = 9

Which equation(s) prevent this set of equations from having a diagonally dominant coefficient matrix?
Gauss-Seidel Method

Summary
-Advantages of the Gauss-Seidel Method
-Algorithm for the Gauss-Seidel Method
-Pitfalls of the Gauss-Seidel Method
THE END
LU Decomposition

LU decomposition is another method to solve a set of simultaneous linear equations.

Which is better, Gauss elimination or LU decomposition?

To answer this, a closer look at LU decomposition is needed.
LU Decomposition
Method
For any non-singular matrix [A] on which one can conduct the forward elimination steps of naïve Gauss elimination, one can always write it as

[A] = [L][U]

where
[L] = lower triangular matrix
[U] = upper triangular matrix

How does LU decomposition work?

If solving a set of linear equations [A][X] = [C] and [A] = [L][U], then [L][U][X] = [C].
Multiply by [L]^-1, which gives [L]^-1[L][U][X] = [L]^-1[C].
Remember [L]^-1[L] = [I], which leads to [I][U][X] = [L]^-1[C].
Now, since [I][U] = [U], then [U][X] = [L]^-1[C].
Now, let [L]^-1[C] = [Z].
This ends with [L][Z] = [C]   (1)
and [U][X] = [Z]              (2)
LU Decomposition
How can this be used?

Given [A][X] = [C]:
1. Decompose [A] into [L] and [U]
2. Solve [L][Z] = [C] for [Z] by forward substitution
3. Solve [U][X] = [Z] for [X] by back substitution
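The three-step procedure can be sketched with a Doolittle-style factorization (1s on the diagonal of [L], as in this lecture). The function names below are illustrative, not from the lecture:

```python
def lu_decompose(A):
    """Doolittle-style [L][U] factorization via forward elimination:
    [U] is the upper triangular result; [L] stores the multipliers, with 1s on its diagonal."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m                 # the multiplier goes into [L]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, c):
    """Solve [L][Z] = [C] by forward substitution, then [U][X] = [Z] by back substitution."""
    n = len(L)
    z = [0.0] * n
    for i in range(n):
        z[i] = c[i] - sum(L[i][j] * z[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]])
# L = [[1,0,0],[2.56,1,0],[5.76,3.5,1]], U = [[25,5,1],[0,-4.8,-1.56],[0,0,0.7]]
x = lu_solve(L, U, [106.8, 177.2, 279.2])
```

This reproduces the [L] and [U] matrices worked out by hand in the slides that follow.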
Is LU Decomposition better than Gaussian Elimination?
Solve [A][X] = [B]

T = clock cycle time and n×n = size of the matrix

Gaussian Elimination:
  Forward Elimination    CT|FE = T(8n^3/3 + 8n^2 − 32n/3)
  Back Substitution      CT|BS = T(4n^2 + 12n)

LU Decomposition:
  Decomposition to LU    CT|DE = T(8n^3/3 + 4n^2 − 20n/3)
  Forward Substitution   CT|FS = T(4n^2 − 4n)
  Back Substitution      CT|BS = T(4n^2 + 12n)
Is LU Decomposition better than Gaussian Elimination?
To solve [A][X] = [B], the time taken by each method is:

Gaussian Elimination:   T(8n^3/3 + 12n^2 + 4n/3)
LU Decomposition:       T(8n^3/3 + 12n^2 + 4n/3)

T = clock cycle time and n×n = size of the matrix

So both methods are equally efficient.


To find inverse of [A]
Time taken by Gaussian Elimination     Time taken by LU Decomposition
= n x (CT|FE + CT|BS)                  = CT|DE + n x CT|FS + n x CT|BS
= T(8n^4/3 + 12n^3 + 4n^2/3)           = T(32n^3/3 + 12n^2 - 20n/3)
Table 1. Comparing computational times of finding the inverse of a matrix
using LU decomposition and Gaussian elimination.

n                                 10      100     1000    10000
CT|inverse GE / CT|inverse LU     3.288   25.84   250.8   2501
For large n, CT|inverse GE / CT|inverse LU ≈ n/4
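Evaluating the slide formulas directly reproduces Table 1 and the n/4 limit; a short script (T cancels in the ratio):

```python
# Ratio of inverse costs: Gaussian Elimination repeats the full forward
# elimination for every right-hand side, while LU decomposes once and then
# only re-runs the two substitutions per column.

def inverse_ge(n):   # n solves: n * (CT|FE + CT|BS)
    return n * ((8*n**3/3 + 8*n**2 - 32*n/3) + (4*n**2 + 12*n))

def inverse_lu(n):   # decompose once: CT|DE + n*CT|FS + n*CT|BS
    return (8*n**3/3 + 4*n**2 - 20*n/3) + n*(4*n**2 - 4*n) + n*(4*n**2 + 12*n)

for n in (10, 100, 1000, 10000):
    print(n, inverse_ge(n) / inverse_lu(n))
```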
Method: [A] Decomposes to [L] and [U]

               [1    0    0] [u11  u12  u13]
[A] = [L][U] = [l21  1    0] [0    u22  u23]
               [l31  l32  1] [0    0    u33]

[U] is the same as the coefficient matrix at the end of the forward elimination step.
[L] is obtained using the multipliers that were used in the forward elimination process.
Finding the [U] matrix
Using the Forward Elimination Procedure of Gauss Elimination

      [ 25   5   1]
[A] = [ 64   8   1]
      [144  12   1]

Step 1:
                                         [ 25   5      1   ]
64/25 = 2.56;   Row2 - 2.56 x Row1  ->   [  0  -4.8   -1.56]
                                         [144  12      1   ]

                                         [ 25   5      1   ]
144/25 = 5.76;  Row3 - 5.76 x Row1  ->   [  0  -4.8   -1.56]
                                         [  0 -16.8   -4.76]
Finding the [U] Matrix
                      [25    5      1   ]
Matrix after Step 1:  [ 0   -4.8   -1.56]
                      [ 0  -16.8   -4.76]

Step 2:
                                             [25   5     1   ]
-16.8/-4.8 = 3.5;  Row3 - 3.5 x Row2  ->     [ 0  -4.8  -1.56]
                                             [ 0   0     0.7 ]

      [25   5     1   ]
[U] = [ 0  -4.8  -1.56]
      [ 0   0     0.7 ]
Finding the [L] matrix
      [1    0    0]
[L] = [l21  1    0]
      [l31  l32  1]

Using the multipliers used during the Forward Elimination Procedure

From the first step       [ 25   5   1]     l21 = a21/a11 = 64/25  = 2.56
of forward elimination:   [ 64   8   1]     l31 = a31/a11 = 144/25 = 5.76
                          [144  12   1]
Finding the [L] Matrix
From the second step      [25    5      1   ]
of forward elimination:   [ 0   -4.8   -1.56]    l32 = a'32/a'22 = -16.8/-4.8 = 3.5
                          [ 0  -16.8   -4.76]

      [1     0    0]
[L] = [2.56  1    0]
      [5.76  3.5  1]
Does [L][U] = [A]?

         [1     0    0] [25   5     1   ]
[L][U] = [2.56  1    0] [ 0  -4.8  -1.56]  = ?
         [5.76  3.5  1] [ 0   0     0.7 ]
Using LU Decomposition to solve SLEs

Solve the following set of linear equations using LU Decomposition:

[ 25   5  1] [x1]   [106.8]
[ 64   8  1] [x2] = [177.2]
[144  12  1] [x3]   [279.2]

Using the procedure for finding the [L] and [U] matrices:

               [1     0    0] [25   5     1   ]
[A] = [L][U] = [2.56  1    0] [ 0  -4.8  -1.56]
               [5.76  3.5  1] [ 0   0     0.7 ]
Example
Set [L][Z] = [C]:

[1     0    0] [z1]   [106.8]
[2.56  1    0] [z2] = [177.2]
[5.76  3.5  1] [z3]   [279.2]

Solve for [Z]:
z1 = 106.8
2.56 z1 + z2 = 177.2
5.76 z1 + 3.5 z2 + z3 = 279.2
Example
Complete the forward substitution to solve for [Z]:

z1 = 106.8
z2 = 177.2 - 2.56 z1 = 177.2 - 2.56(106.8) = -96.21
z3 = 279.2 - 5.76 z1 - 3.5 z2 = 279.2 - 5.76(106.8) - 3.5(-96.21) = 0.735

      [z1]   [106.8 ]
[Z] = [z2] = [-96.21]
      [z3]   [ 0.735]
Example
Set [U][X] = [Z]:

[25   5     1   ] [x1]   [106.8 ]
[ 0  -4.8  -1.56] [x2] = [-96.21]
[ 0   0     0.7 ] [x3]   [ 0.735]

Solve for [X]. The 3 equations become:
25 a1 + 5 a2 + a3 = 106.8
-4.8 a2 - 1.56 a3 = -96.21
0.7 a3 = 0.735
Example
From the 3rd equation:
0.7 a3 = 0.735
a3 = 0.735/0.7 = 1.050

Substituting a3 into the second equation:
-4.8 a2 - 1.56 a3 = -96.21
a2 = (-96.21 + 1.56 a3)/(-4.8) = (-96.21 + 1.56(1.050))/(-4.8) = 19.70
Example
Substituting a3 and a2 into the first equation:
25 a1 + 5 a2 + a3 = 106.8
a1 = (106.8 - 5 a2 - a3)/25 = (106.8 - 5(19.70) - 1.050)/25 = 0.2900

Hence the solution vector is:
[a1]   [0.2900]
[a2] = [19.70 ]
[a3]   [1.050 ]
Finding the inverse of a square matrix

The inverse [B] of a square matrix [A] is defined as

[A][B] = [I] = [B][A]
Finding the inverse of a square matrix
How can LU Decomposition be used to find the inverse?
Assume the first column of [B] to be [b11 b21 ... bn1]^T
Using this and the definition of matrix multiplication:

First column of [B]          Second column of [B]

    [b11]   [1]                  [b12]   [0]
[A] [b21] = [0]              [A] [b22] = [1]
    [...]   [.]                  [...]   [.]
    [bn1]   [0]                  [bn2]   [0]

The remaining columns in [B] can be found in the same manner.
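A sketch of this column-by-column procedure, using the numeric [L] and [U] factors of the 3x3 [A] from these slides: for each column of the identity, one forward substitution and one back substitution.

```python
# Column-by-column inverse: for column j of [I], solve [L][Z] = e_j
# (forward substitution), then [U][X] = [Z] (back substitution);
# [X] is then the j-th column of the inverse.
L = [[1.0, 0.0, 0.0], [2.56, 1.0, 0.0], [5.76, 3.5, 1.0]]
U = [[25.0, 5.0, 1.0], [0.0, -4.8, -1.56], [0.0, 0.0, 0.7]]
n = 3

def solve_column(e):
    z = []
    for i in range(n):                             # forward substitution
        z.append(e[i] - sum(L[i][j] * z[j] for j in range(i)))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (z[i] - s) / U[i][i]
    return x

cols = [solve_column([1.0 if i == j else 0.0 for i in range(n)])
        for j in range(n)]
A_inv = [[cols[j][i] for j in range(n)] for i in range(n)]   # columns -> rows
```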
Example: Inverse of a Matrix
Find the inverse of a square matrix [A]

      [ 25   5  1]
[A] = [ 64   8  1]
      [144  12  1]

Using the decomposition procedure, the [L] and [U] matrices are found to be

               [1     0    0] [25   5     1   ]
[A] = [L][U] = [2.56  1    0] [ 0  -4.8  -1.56]
               [5.76  3.5  1] [ 0   0     0.7 ]
Example: Inverse of a Matrix
Solving for each column of [B] requires two steps:
1) Solve [L][Z] = [C] for [Z]
2) Solve [U][X] = [Z] for [X]

                      [1     0    0] [z1]   [1]
Step 1: [L][Z] = [C]  [2.56  1    0] [z2] = [0]
                      [5.76  3.5  1] [z3]   [0]

This generates the equations:
z1 = 1
2.56 z1 + z2 = 0
5.76 z1 + 3.5 z2 + z3 = 0
Example: Inverse of a Matrix
Solving for [Z]:

z1 = 1
z2 = 0 - 2.56 z1 = 0 - 2.56(1) = -2.56
z3 = 0 - 5.76 z1 - 3.5 z2 = 0 - 5.76(1) - 3.5(-2.56) = 3.2

      [z1]   [ 1   ]
[Z] = [z2] = [-2.56]
      [z3]   [ 3.2 ]
Example: Inverse of a Matrix
Solving [U][X] = [Z] for [X]:

[25   5     1   ] [b11]   [ 1   ]
[ 0  -4.8  -1.56] [b21] = [-2.56]
[ 0   0     0.7 ] [b31]   [ 3.2 ]

25 b11 + 5 b21 + b31 = 1
-4.8 b21 - 1.56 b31 = -2.56
0.7 b31 = 3.2
Example: Inverse of a Matrix
Using Backward Substitution:

b31 = 3.2/0.7 = 4.571
b21 = (-2.56 + 1.560 b31)/(-4.8) = (-2.56 + 1.560(4.571))/(-4.8) = -0.9524
b11 = (1 - 5 b21 - b31)/25 = (1 - 5(-0.9524) - 4.571)/25 = 0.04762

So the first column of the inverse of [A] is:
[b11]   [ 0.04762]
[b21] = [-0.9524 ]
[b31]   [ 4.571  ]
Example: Inverse of a Matrix
Repeating for the second and third columns of the inverse:

Second Column                        Third Column
[ 25   5  1] [b12]   [0]             [ 25   5  1] [b13]   [0]
[ 64   8  1] [b22] = [1]             [ 64   8  1] [b23] = [0]
[144  12  1] [b32]   [0]             [144  12  1] [b33]   [1]

[b12]   [-0.08333]                   [b13]   [ 0.03571]
[b22] = [ 1.417  ]                   [b23] = [-0.4643 ]
[b32]   [-5.000  ]                   [b33]   [ 1.429  ]
Example: Inverse of a Matrix
The inverse of [A] is

         [ 0.04762  -0.08333   0.03571]
[A]^-1 = [-0.9524    1.417    -0.4643 ]
         [ 4.571    -5.000     1.429  ]

To check your work do the following operation:

[A][A]^-1 = [I] = [A]^-1[A]
Exercise

Xylene, styrene, toluene and benzene are to be separated with the array of
distillation columns shown below, where F, D, B, D1, B1, D2 and B2 are the
molar flow rates in mol/min.

Calculate the molar flow rates of streams D1, D2, B1 and B2.
Figure 2 Separation Train
THE END
