ACM_lecture-2-1
Topics: Secant Method, Gaussian Elimination, Gauss-Seidel Method, LU Decomposition
Gaussian Elimination
Naïve Gaussian Elimination
Two steps
1. Forward Elimination
2. Back Substitution
Forward Elimination
The goal of forward elimination is to transform the coefficient matrix into an upper triangular matrix. For example:

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

Step 1: multiply Equation 1 by $a_{21}/a_{11}$:

$$\frac{a_{21}}{a_{11}}\left(a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1\right)$$

$$a_{21}x_1 + \frac{a_{21}}{a_{11}}a_{12}x_2 + \dots + \frac{a_{21}}{a_{11}}a_{1n}x_n = \frac{a_{21}}{a_{11}}b_1$$
Forward Elimination
Subtract the result from Equation 2:

$$\left(a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n = b_2\right)$$
$$-\left(a_{21}x_1 + \frac{a_{21}}{a_{11}}a_{12}x_2 + \dots + \frac{a_{21}}{a_{11}}a_{1n}x_n = \frac{a_{21}}{a_{11}}b_1\right)$$

which gives

$$\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \dots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1$$

or

$$a'_{22}x_2 + \dots + a'_{2n}x_n = b'_2$$
Forward Elimination
Repeat this procedure for the remaining equations to reduce the set of equations. At the end of Step 1:

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$$
$$a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n = b'_2$$
$$a'_{32}x_2 + a'_{33}x_3 + \dots + a'_{3n}x_n = b'_3$$
$$\vdots$$
$$a'_{n2}x_2 + a'_{n3}x_3 + \dots + a'_{nn}x_n = b'_n$$

Repeating with Equation 2 as the pivot eliminates $x_2$ from Equations 3 through $n$, so that the last equation reads $a''_{n3}x_3 + \dots + a''_{nn}x_n = b''_n$. End of Step 2.
Forward Elimination
At the end of $(n-1)$ forward elimination steps, the system of equations will look like

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$$
$$a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n = b'_2$$
$$a''_{33}x_3 + \dots + a''_{3n}x_n = b''_3$$
$$\vdots$$
$$a_{nn}^{(n-1)}x_n = b_n^{(n-1)}$$
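The forward-elimination pass described above can be sketched in plain Python. This is a minimal sketch (no pivoting, so it fails on a zero pivot); the function name and variables are illustrative, not from the lecture.

```python
# Naive forward elimination: reduce [A]{x} = {b} to upper-triangular form.
def forward_eliminate(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):              # step k zeros column k below the pivot
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]       # multiplier a_ik / a_kk
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    return A, b

# The 3x3 system from the slide above:
A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
b = [106.8, 177.2, 279.2]
U, c = forward_eliminate(A, b)
# U is upper triangular: [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]]
```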
Back Substitution
Start with the last equation because it has only one unknown:

$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$

Then substitute back up through the remaining equations, one unknown at a time.
Example 1: Back Substitution

After forward elimination the system is

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.208 \\ 0.76 \end{bmatrix}$$

Solving for $a_3$:

$$0.7a_3 = 0.76 \quad\Rightarrow\quad a_3 = \frac{0.76}{0.7} = 1.08571$$
Solving for $a_2$:

$$-4.8a_2 - 1.56a_3 = -96.208$$
$$a_2 = \frac{-96.208 + 1.56a_3}{-4.8} = \frac{-96.208 + 1.56(1.08571)}{-4.8} = 19.6905$$
Back Substitution (cont.)

Solving for $a_1$:

$$25a_1 + 5a_2 + a_3 = 106.8$$
$$a_1 = \frac{106.8 - 5a_2 - a_3}{25} = \frac{106.8 - 5(19.6905) - 1.08571}{25} = 0.290472$$
Naïve Gaussian Elimination Solution

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.290472 \\ 19.6905 \\ 1.08571 \end{bmatrix}$$
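As a sanity check, substituting the computed solution back into the original equations should reproduce the right-hand side to within rounding. A minimal plain-Python sketch:

```python
# Residual check: r_i = (sum_j A[i][j] * a[j]) - b[i] should be near zero.
A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
b = [106.8, 177.2, 279.2]
a = [0.290472, 19.6905, 1.08571]   # solution from back substitution
residuals = [sum(A[i][j] * a[j] for j in range(3)) - b[i] for i in range(3)]
# all residuals are tiny (the solution was rounded to 6 significant digits)
```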
Example 1 Cont.

The solution vector is

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.290472 \\ 19.6905 \\ 1.08571 \end{bmatrix}$$

The polynomial that passes through the three data points (note that the matrix rows are $[t^2, t, 1]$ for $t = 5, 8, 12$) is then $v(t) = 0.290472t^2 + 19.6905t + 1.08571$; evaluating it at $t = 6$, for example, gives 129.686 m/s.
THE END
Naïve Gauss Elimination
Pitfalls
Pitfall#1. Division by zero
$$10x_2 - 7x_3 = 3$$
$$6x_1 + 2x_2 + 3x_3 = 11$$
$$5x_1 - x_2 + 5x_3 = 9$$

$$\begin{bmatrix} 0 & 10 & -7 \\ 6 & 2 & 3 \\ 5 & -1 & 5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 3 \\ 11 \\ 9 \end{bmatrix}$$

Is division by zero an issue here? ($a_{11} = 0$, so the very first step fails.)
$$12x_1 + 10x_2 - 7x_3 = 15$$
$$6x_1 + 5x_2 + 3x_3 = 14$$
$$5x_1 - x_2 + 5x_3 = 9$$

$$\begin{bmatrix} 12 & 10 & -7 \\ 6 & 5 & 3 \\ 5 & -1 & 5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 15 \\ 14 \\ 9 \end{bmatrix}$$

Is division by zero an issue here? YES. After the first step of forward elimination, $a'_{22} = 5 - \frac{6}{12}(10) = 0$, so the second step divides by zero. Similarly, the system

$$12x_1 + 10x_2 - 7x_3 = 15$$
$$6x_1 + 5x_2 + 3x_3 = 14$$
$$24x_1 - x_2 + 5x_3 = 28$$

has the exact solution

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

yet naïve Gaussian elimination still breaks down on it.
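The hidden zero pivot in the second system can be demonstrated in two lines of Python; the variable names are illustrative.

```python
# a11 = 12 is nonzero, yet step 1 of forward elimination produces a zero
# pivot in position (2,2): a'_22 = 5 - (6/12)*10 = 0.
A = [[12.0, 10.0, -7.0], [6.0, 5.0, 3.0], [5.0, -1.0, 5.0]]
m = A[1][0] / A[0][0]            # multiplier 6/12 = 0.5
a22_new = A[1][1] - m * A[0][1]  # 5 - 0.5*10
```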
Pitfall#2. Large Round-off Errors
Large round-off errors arise because each elimination step propagates the rounding error of the previous ones; the effect is worst when pivot elements are small relative to the entries below them.

Summary of the method — two steps:
1. Forward Elimination
2. Back Substitution

Forward elimination reduces the system to upper-triangular form, ending with $a_{nn}^{(n-1)}x_n = b_n^{(n-1)}$. Back substitution then proceeds as

$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$

$$x_i = \frac{b_i^{(i-1)} - \sum_{j=i+1}^{n} a_{ij}^{(i-1)}x_j}{a_{ii}^{(i-1)}} \quad\text{for } i = n-1, \dots, 1$$
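The back-substitution formula above translates directly into code. A minimal plain-Python sketch, using the upper-triangular system from Example 1:

```python
# Back substitution for an upper-triangular system [U]{x} = {c}:
# x_i = (c_i - sum_{j>i} u_ij x_j) / u_ii, for i = n, ..., 1.
def back_substitute(U, c):
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

U = [[25.0, 5.0, 1.0], [0.0, -4.8, -1.56], [0.0, 0.0, 0.7]]
c = [106.8, -96.208, 0.76]
x = back_substitute(U, c)   # approximately [0.290476, 19.6905, 1.08571]
```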
THE END
Gauss Elimination
with Partial Pivoting
Example 2
Solve the set of equations from Example 1 by Gaussian elimination with partial pivoting:

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

After forward elimination with partial pivoting (row 3 is swapped to the top first, since $|144|$ is the largest entry in column 1), the system becomes

$$\begin{bmatrix} 144 & 12 & 1 \\ 0 & 2.917 & 0.8264 \\ 0 & 0 & 0.2 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 279.2 \\ 58.33 \\ 0.23 \end{bmatrix}$$

Back Substitution. Solving for $a_3$:

$$0.2a_3 = 0.23 \quad\Rightarrow\quad a_3 = \frac{0.23}{0.2} = 1.15$$
Solving for $a_2$:

$$2.917a_2 + 0.8264a_3 = 58.33$$
$$a_2 = \frac{58.33 - 0.8264a_3}{2.917} = \frac{58.33 - 0.8264(1.15)}{2.917} = 19.67$$
Back Substitution (cont.)

Solving for $a_1$:

$$144a_1 + 12a_2 + a_3 = 279.2$$
$$a_1 = \frac{279.2 - 12a_2 - a_3}{144} = \frac{279.2 - 12(19.67) - 1.15}{144} = 0.2917$$
Gaussian Elimination with Partial Pivoting: Solution

After the first step of forward elimination:

$$\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix}$$
Partial Pivoting: Example
Forward Elimination: Step 2

Examining the values of the second column below the pivot: $|-0.001|$ and $|2.5|$, or 0.001 and 2.5. The largest absolute value is 2.5, so row 2 is switched with row 3:

$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}$$
Partial Pivoting: Example
Back Substitution

Solving the equations through back substitution:

$$x_3 = \frac{6.002}{6.002} = 1 \qquad x_2 = \frac{2.5 - 5x_3}{2.5} = -1 \qquad x_1 = \frac{7 + 7x_2 - 0x_3}{10} = 0$$

Compare the calculated and exact solution. The fact that they are equal is a coincidence, but it does illustrate the advantage of partial pivoting:

$$X_{calculated} = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix} \qquad X_{exact} = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}$$
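The whole procedure (pivot selection, elimination, back substitution) fits in a short function. A minimal sketch in plain Python; the function name is illustrative.

```python
# Gaussian elimination with partial pivoting: before each elimination step,
# swap in the row whose pivot-column entry has the largest absolute value.
def gauss_partial_pivot(A, b):
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                       # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The system from the example above; exact solution is (0, -1, 1).
x = gauss_partial_pivot([[10.0, -7.0, 0.0], [-3.0, 2.099, 6.0], [5.0, -1.0, 5.0]],
                        [7.0, 3.901, 6.0])
```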
THE END
Determinant of a Square Matrix Using Naïve Gauss Elimination
Theorem of Determinants

If $A$ is an $n \times n$ triangular matrix, its determinant is the product of its diagonal entries:

$$\det(A) = \prod_{i=1}^{n} a_{ii}$$

Forward Elimination of a Square Matrix

Forward elimination transforms $A_{n \times n}$ into an upper triangular matrix $U_{n \times n}$ using only row operations that add a multiple of one row to another, which do not change the determinant (row swaps would flip its sign). Therefore

$$\det(A) = \det(U)$$
Example
Using naïve Gaussian elimination find the
determinant of the following square
matrix.
$$[A] = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}$$

Forward Elimination: Step 1

Divide Row 1 by 25 and multiply it by 64, i.e. multiply Row 1 by $\frac{64}{25} = 2.56$:

$$2.56 \times \begin{bmatrix} 25 & 5 & 1 \end{bmatrix} = \begin{bmatrix} 64 & 12.8 & 2.56 \end{bmatrix}$$

Subtract the result from Row 2:

$$\begin{bmatrix} 64 & 8 & 1 \end{bmatrix} - \begin{bmatrix} 64 & 12.8 & 2.56 \end{bmatrix} = \begin{bmatrix} 0 & -4.8 & -1.56 \end{bmatrix}$$

Substitute the new row for Row 2:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 144 & 12 & 1 \end{bmatrix}$$
Forward Elimination: Step 1 (cont.)

Divide Row 1 by 25 and multiply it by 144, i.e. multiply Row 1 by $\frac{144}{25} = 5.76$:

$$5.76 \times \begin{bmatrix} 25 & 5 & 1 \end{bmatrix} = \begin{bmatrix} 144 & 28.8 & 5.76 \end{bmatrix}$$

Subtract the result from Row 3:

$$\begin{bmatrix} 144 & 12 & 1 \end{bmatrix} - \begin{bmatrix} 144 & 28.8 & 5.76 \end{bmatrix} = \begin{bmatrix} 0 & -16.8 & -4.76 \end{bmatrix}$$

Substitute the new row for Row 3:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{bmatrix}$$
Forward Elimination: Step 2

Divide Row 2 by −4.8 and multiply it by −16.8, i.e. multiply Row 2 by $\frac{-16.8}{-4.8} = 3.5$:

$$3.5 \times \begin{bmatrix} 0 & -4.8 & -1.56 \end{bmatrix} = \begin{bmatrix} 0 & -16.8 & -5.46 \end{bmatrix}$$

Subtract the result from Row 3:

$$\begin{bmatrix} 0 & -16.8 & -4.76 \end{bmatrix} - \begin{bmatrix} 0 & -16.8 & -5.46 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0.7 \end{bmatrix}$$

Substitute the new row for Row 3:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$
Finding the Determinant

After forward elimination:

$$A = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \rightarrow U = \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

$$\det(A) = u_{11}u_{22}u_{33} = (25)(-4.8)(0.7) = -84.00$$
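The determinant-by-elimination procedure can be sketched directly in code (a minimal sketch, no pivoting, so no sign bookkeeping for row swaps is needed):

```python
# det(A) via naive forward elimination: the determinant equals the product
# of the diagonal of the resulting upper-triangular matrix.
def det_by_elimination(A):
    n = len(A)
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    d = 1.0
    for i in range(n):
        d *= U[i][i]
    return d

d = det_by_elimination([[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]])
# d is approximately -84.0, i.e. 25 * (-4.8) * 0.7
```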
Summary
-Forward Elimination
-Back Substitution
-Pitfalls
-Improvements
-Partial Pivoting
-Determinant of a Matrix
THE END
Gauss-Seidel Method
An iterative method.
Basic Procedure:
-Algebraically solve each linear equation for xi
-Assume an initial guess solution array
-Solve for each xi and repeat
-Use absolute relative approximate error after each iteration
to check if error is within a pre-specified tolerance.
Gauss-Seidel Method
Why?
The Gauss-Seidel Method allows the user to control round-off error.
Rewriting each equation for its diagonal unknown:

$$x_1 = \frac{c_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n}{a_{11}} \qquad x_2 = \frac{c_2 - a_{21}x_1 - a_{23}x_3 - \dots - a_{2n}x_n}{a_{22}}$$

$$\vdots$$

$$x_{n-1} = \frac{c_{n-1} - \sum_{j \ne n-1} a_{n-1,j}x_j}{a_{n-1,n-1}} \qquad x_n = \frac{c_n - a_{n1}x_1 - a_{n2}x_2 - \dots - a_{n,n-1}x_{n-1}}{a_{nn}}$$
Gauss-Seidel Method
Algorithm
General form for any row $i$:

$$x_i = \frac{c_i - \sum_{j=1,\, j \ne i}^{n} a_{ij}x_j}{a_{ii}}, \quad i = 1, 2, \dots, n.$$

The absolute relative approximate error after each iteration is

$$\left|\epsilon_a\right|_i = \left|\frac{x_i^{new} - x_i^{old}}{x_i^{new}}\right| \times 100$$
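One full sweep of the update formula above can be sketched in a few lines of Python. This is a minimal sketch; the function name is illustrative. It is applied here to the 3×3 system from the earlier Gaussian elimination example, starting from the guess (1, 2, 5).

```python
# One Gauss-Seidel sweep: update each x_i in place using the newest values,
# and return the largest absolute relative approximate error (in percent).
def gauss_seidel_sweep(A, c, x):
    n = len(A)
    max_err = 0.0
    for i in range(n):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        new = (c[i] - s) / A[i][i]
        err = abs((new - x[i]) / new) * 100.0
        max_err = max(max_err, err)
        x[i] = new   # the newest value is used immediately by later rows
    return max_err

A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
c = [106.8, 177.2, 279.2]
x = [1.0, 2.0, 5.0]
err = gauss_seidel_sweep(A, c, x)
# x becomes roughly [3.672, -7.851, -155.36]; err is about 125.47 (%)
```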
So when has the answer been found?
The iteration stops when the maximum absolute relative approximate error falls below a pre-specified tolerance.

Gauss-Seidel Method: Example 1

For the system used earlier, each unknown is written in terms of the others:

$$a_1 = \frac{106.8 - 5a_2 - a_3}{25} \qquad a_2 = \frac{177.2 - 64a_1 - a_3}{8} \qquad a_3 = \frac{279.2 - 144a_1 - 12a_2}{1}$$

Applying the initial guess $\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}$ and solving for the $a_i$:

$$a_1 = \frac{106.8 - 5(2) - (5)}{25} = 3.6720$$
$$a_2 = \frac{177.2 - 64(3.6720) - (5)}{8} = -7.8510$$
$$a_3 = \frac{279.2 - 144(3.6720) - 12(-7.8510)}{1} = -155.36$$

When solving for $a_2$, how many of the initial guess values were used?
Gauss-Seidel Method:
Example 1
Finding the absolute relative approximate error:

$$\left|\epsilon_a\right|_1 = \left|\frac{3.6720 - 1.0000}{3.6720}\right| \times 100 = 72.76\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{-7.8510 - 2.0000}{-7.8510}\right| \times 100 = 125.47\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{-155.36 - 5.0000}{-155.36}\right| \times 100 = 103.22\%$$

At the end of the first iteration,

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{bmatrix}$$

and the maximum absolute relative approximate error is 125.47%.
Gauss-Seidel Method:
Example 1
Iteration #2

Using the values from iteration #1,

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{bmatrix}$$

the values of $a_i$ are found:

$$a_1 = \frac{106.8 - 5(-7.8510) - (-155.36)}{25} = 12.056$$
$$a_2 = \frac{177.2 - 64(12.056) - (-155.36)}{8} = -54.882$$
$$a_3 = \frac{279.2 - 144(12.056) - 12(-54.882)}{1} = -798.34$$
Gauss-Seidel Method:
Example 1
Finding the absolute relative approximate error:

$$\left|\epsilon_a\right|_1 = \left|\frac{12.056 - 3.6720}{12.056}\right| \times 100 = 69.543\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{-54.882 - (-7.8510)}{-54.882}\right| \times 100 = 85.695\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{-798.34 - (-155.36)}{-798.34}\right| \times 100 = 80.540\%$$

At the end of the second iteration,

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 12.056 \\ -54.882 \\ -798.34 \end{bmatrix}$$

and the maximum absolute relative approximate error is 85.695%.
Gauss-Seidel Method:
Example 1
Repeating more iterations, the following values are obtained
Iteration    a1        |ϵa|1 %    a2         |ϵa|2 %    a3         |ϵa|3 %
1 3.6720 72.767 −7.8510 125.47 −155.36 103.22
2 12.056 69.543 −54.882 85.695 −798.34 80.540
3 47.182 74.447 −255.51 78.521 −3448.9 76.852
4 193.33 75.595 −1093.4 76.632 −14440 76.116
5 800.53 75.850 −4577.2 76.112 −60072 75.963
6 3322.6 75.906 −19049 75.972 −249580 75.931
Notice – the relative errors are not decreasing at any significant rate. Also, the solution is not converging to the true solution of

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.29048 \\ 19.690 \\ 1.0857 \end{bmatrix}$$
Gauss-Seidel Method:
Pitfall
What went wrong? Even though carried out correctly, the iteration is not converging to the correct answer. This example illustrates a pitfall of the Gauss-Seidel method: not all systems of equations will converge.

Is there a fix? One class of systems always converges: those with a diagonally dominant coefficient matrix.
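A matrix is diagonally dominant when, in every row, the magnitude of the diagonal entry is at least the sum of the magnitudes of the other entries, with strict inequality in at least one row. A small checker sketch (the function name and the two test matrices are illustrative):

```python
# Diagonal dominance check: |a_ii| >= sum of off-diagonal magnitudes in
# every row, and strictly greater in at least one row.
def is_diagonally_dominant(A):
    strict = False
    for i, row in enumerate(A):
        off = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off:
            return False
        if abs(row[i]) > off:
            strict = True
    return strict

dominant = is_diagonally_dominant(
    [[12.0, 3.0, -5.0], [1.0, 5.0, 3.0], [3.0, 7.0, 13.0]])       # True
not_dominant = is_diagonally_dominant(
    [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]])     # False
```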
Gauss-Seidel Method: Example 2

Given the diagonally dominant system

$$12x_1 + 3x_2 - 5x_3 = 1$$
$$x_1 + 5x_2 + 3x_3 = 28$$
$$3x_1 + 7x_2 + 13x_3 = 76$$

with initial guess $\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$, the first iteration gives

$$x_1 = \frac{1 - 3(0) + 5(1)}{12} = 0.50000 \qquad x_2 = \frac{28 - (0.50000) - 3(1)}{5} = 4.9000$$
$$x_3 = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923$$
Gauss-Seidel Method:
Example 2
The absolute relative approximate errors:

$$\left|\epsilon_a\right|_1 = \left|\frac{0.50000 - 1.0000}{0.50000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{4.9000 - 0}{4.9000}\right| \times 100 = 100.00\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.0923 - 1.0000}{3.0923}\right| \times 100 = 67.662\%$$

The maximum absolute relative error after the first iteration is 100%.
Gauss-Seidel Method:
Example 2
After Iteration #1:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.5000 \\ 4.9000 \\ 3.0923 \end{bmatrix}$$

Substituting the x values into the equations:

$$x_1 = \frac{1 - 3(4.9000) + 5(3.0923)}{12} = 0.14679$$
$$x_2 = \frac{28 - (0.14679) - 3(3.0923)}{5} = 3.7153$$
$$x_3 = \frac{76 - 3(0.14679) - 7(3.7153)}{13} = 3.8118$$

After Iteration #2:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.14679 \\ 3.7153 \\ 3.8118 \end{bmatrix}$$
Gauss-Seidel Method:
Example 2
Iteration #2 absolute relative approximate errors:

$$\left|\epsilon_a\right|_1 = \left|\frac{0.14679 - 0.50000}{0.14679}\right| \times 100 = 240.61\%$$
$$\left|\epsilon_a\right|_2 = \left|\frac{3.7153 - 4.9000}{3.7153}\right| \times 100 = 31.889\%$$
$$\left|\epsilon_a\right|_3 = \left|\frac{3.8118 - 3.0923}{3.8118}\right| \times 100 = 18.874\%$$

The maximum absolute relative error after the second iteration is 240.61%. This is much larger than the maximum absolute relative error obtained in iteration #1. Is this a problem?
Gauss-Seidel Method:
Example 2
Repeating more iterations, the following values are obtained
Iteration    x1         |ϵa|1 %    x2        |ϵa|2 %    x3        |ϵa|3 %
1 0.50000 100.00 4.9000 100.00 3.0923 67.662
2 0.14679 240.61 3.7153 31.889 3.8118 18.876
3 0.74275 80.236 3.1644 17.408 3.9708 4.0042
4 0.94675 21.546 3.0281 4.4996 3.9971 0.65772
5 0.99177 4.5391 3.0034 0.82499 4.0001 0.074383
6 0.99919 0.74307 3.0001 0.10856 4.0001 0.00101
By contrast, a rearranged version of the equations whose coefficient matrix is not diagonally dominant diverges:

Iteration    x1          |ϵa|1 %    x2          |ϵa|2 %    x3          |ϵa|3 %
1 21.000 95.238 0.80000 100.00 50.680 98.027
2 −196.15 110.71 14.421 94.453 −462.30 110.96
3 −1995.0 109.83 −116.02 112.43 4718.1 109.80
4 −20149 109.90 1204.6 109.63 −47636 109.90
5 2.0364×105 109.89 −12140 109.92 4.8144×105 109.89
6 −2.0579×105 109.89 1.2272×105 109.89 −4.8653×106 109.89
$$x_1 + x_2 + x_3 = 3$$
$$2x_1 + 3x_2 + 4x_3 = 9$$
$$x_1 + 7x_2 + x_3 = 9$$

Which equation(s) prevent this set of equations from having a diagonally dominant coefficient matrix?
Gauss-Seidel Method
Summary

LU Decomposition

[A] = [L][U]
where
[L] = lower triangular matrix
[U] = upper triangular matrix
How does LU Decomposition work?
Comparing computational time for solving one set of equations, with T = clock cycle time and n×n = size of the matrix:

$$CT_{GE} = T\left(\frac{8n^3}{3} + 12n^2 + \frac{4n}{3}\right) \qquad CT_{LU} = T\left(\frac{8n^3}{3} + 12n^2 + \frac{4n}{3}\right)$$

For a single right-hand side the two methods take the same time; LU decomposition pays off when the same [A] is reused with many right-hand sides.
Finding the [U] Matrix

Use forward elimination on

$$[A] = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}$$

Step 1:

$$\frac{64}{25} = 2.56;\ \text{Row 2} \leftarrow \text{Row 2} - 2.56 \times \text{Row 1} \quad\Rightarrow\quad \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 144 & 12 & 1 \end{bmatrix}$$

$$\frac{144}{25} = 5.76;\ \text{Row 3} \leftarrow \text{Row 3} - 5.76 \times \text{Row 1} \quad\Rightarrow\quad \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{bmatrix}$$

Step 2:

$$\frac{-16.8}{-4.8} = 3.5;\ \text{Row 3} \leftarrow \text{Row 3} - 3.5 \times \text{Row 2} \quad\Rightarrow\quad \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

$$[U] = \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$
Finding the [L] matrix

$$[L] = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \end{bmatrix}$$

The entries of [L] are the multipliers used in forward elimination:

$$\ell_{21} = \frac{64}{25} = 2.56 \qquad \ell_{31} = \frac{144}{25} = 5.76$$

From the second step of forward elimination:

$$\ell_{32} = \frac{a'_{32}}{a'_{22}} = \frac{-16.8}{-4.8} = 3.5$$

$$[L] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}$$
Does [L][U] = [A]?

$$[L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix} = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} = [A]$$

Using LU Decomposition to solve SLEs

Using the procedure above for finding the [L] and [U] matrices:

$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$
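A Doolittle-style LU decomposition (1s on the diagonal of [L]) can be sketched by storing each elimination multiplier as it is computed. A minimal plain-Python sketch; `lu_decompose` is an illustrative name.

```python
# LU decomposition without pivoting: [U] comes from forward elimination,
# and each multiplier m = u_ik / u_kk becomes the (i,k) entry of [L].
def lu_decompose(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m                 # store the multiplier
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

A = [[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]]
L, U = lu_decompose(A)
# L = [[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]] and L @ U reproduces A
```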
Example

Set [L][Z] = [C]:

$$\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

Solving by forward substitution:

$$z_1 = 106.8$$
$$z_2 = 177.2 - 2.56z_1 = 177.2 - 2.56(106.8) = -96.208$$
$$z_3 = 279.2 - 5.76z_1 - 3.5z_2 = 279.2 - 5.76(106.8) - 3.5(-96.208) = 0.76$$

$$[Z] = \begin{bmatrix} 106.8 \\ -96.208 \\ 0.76 \end{bmatrix}$$
Example (cont.)

Set [U][X] = [Z]:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.208 \\ 0.76 \end{bmatrix}$$

Back substitution then gives the same solution as before: $x_3 = 1.0857$, $x_2 = 19.691$, $x_1 = 0.29048$.
Example: Inverse of a Matrix
Solving for each column of [B] requires two steps:
1) Solve [L][Z] = [C] for [Z]
2) Solve [U][X] = [Z] for [X]
Step 1:

$$[L][Z] = [C] \quad\Rightarrow\quad \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$

This generates the equations:

$$z_1 = 1$$
$$2.56z_1 + z_2 = 0$$
$$5.76z_1 + 3.5z_2 + z_3 = 0$$

Solving for [Z]:

$$z_1 = 1$$
$$z_2 = 0 - 2.56z_1 = 0 - 2.56(1) = -2.56$$
$$z_3 = 0 - 5.76z_1 - 3.5z_2 = 0 - 5.76(1) - 3.5(-2.56) = 3.2$$

$$[Z] = \begin{bmatrix} 1 \\ -2.56 \\ 3.2 \end{bmatrix}$$
Example: Inverse of a Matrix
Solving [U][X] = [Z] for [X]:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix} = \begin{bmatrix} 1 \\ -2.56 \\ 3.2 \end{bmatrix}$$

This generates the equations:

$$25b_{11} + 5b_{21} + b_{31} = 1$$
$$-4.8b_{21} - 1.56b_{31} = -2.56$$
$$0.7b_{31} = 3.2$$
Example: Inverse of a Matrix
Using backward substitution, $b_{31} = 4.571$, $b_{21} = -0.9524$, $b_{11} = 0.04762$, which is the first column of the inverse. Repeating the two steps with $[C] = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$ and $[C] = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$ gives the other two columns, so

$$[A]^{-1} = \begin{bmatrix} 0.04762 & -0.08333 & 0.03571 \\ -0.9524 & 1.417 & -0.4643 \\ 4.571 & -5.000 & 1.429 \end{bmatrix}$$

To check your work, verify that $[A][A]^{-1} = [I]$.
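The column-by-column inversion can be sketched as follows, assuming the [L] and [U] computed earlier; `invert_via_lu` is an illustrative name.

```python
# Invert [A] one column at a time: for each column e_k of the identity,
# forward-solve [L]{z} = e_k, then back-solve [U]{x} = {z}.
def invert_via_lu(L, U):
    n = len(L)
    inv = [[0.0] * n for _ in range(n)]
    for k in range(n):
        e = [1.0 if i == k else 0.0 for i in range(n)]
        z = [0.0] * n
        for i in range(n):                      # forward substitution
            z[i] = e[i] - sum(L[i][j] * z[j] for j in range(i))
        x = [0.0] * n
        for i in range(n - 1, -1, -1):          # back substitution
            s = sum(U[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (z[i] - s) / U[i][i]
        for i in range(n):
            inv[i][k] = x[i]                    # column k of the inverse
    return inv

L = [[1.0, 0.0, 0.0], [2.56, 1.0, 0.0], [5.76, 3.5, 1.0]]
U = [[25.0, 5.0, 1.0], [0.0, -4.8, -1.56], [0.0, 0.0, 0.7]]
Ainv = invert_via_lu(L, U)
# first column is approximately [0.04762, -0.9524, 4.571]
```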