
Gaussian Elimination with Pivoting

Partial Pivoting

Abdellatif Serghini

Department of Mathematics and Computer Science


Laurentian University

October 1st, 2019


Why is our basic GE naive?

The reduction of a matrix to its row echelon form may necessitate row
interchanges, as the following example shows.
Example
◮ A matrix having a zero pivot:

    A = [ 0  1 ]
        [ 1  1 ]

◮ The corresponding linear system is

    0x1 + x2 = 1
     x1 + x2 = 2

◮ After interchanging the two rows we obtain x1 = 1, x2 = 1 as the
unique solution of the linear system.
◮ To avoid division by zero, swap the row having the zero pivot with
one of the rows below it.
Why is our basic GE naive?

Example
◮ Let us modify the matrix in the above example by replacing a11 = 0
with the small number 0.0001, and consider the linear system

    0.0001x1 + x2 = 1
        x1 + x2 = 2

whose exact solution has x1 and x2 both close to 1.
◮ Let us use Gaussian elimination to solve this system in floating-point
(rounded) arithmetic with base 10 and precision 3 (in real-life
problems, large systems of linear equations must be solved on
computers).
Why is our basic GE naive?
Example
◮ Subtract 1/0.0001 = 10000 times the first row from the second row,
R2 ← −(1/0.0001)R1 + R2.
The result of this operation is:

    [A(1) | b(1)] = [ 0.0001     1   |    1  ]
                    [    0    −9999  | −9998 ]

◮ Since we are using base 10 and precision 3 in the rounded
arithmetic, the system will be stored as

    0.1 × 10^−3 x1 + x2 = 1
        −0.1 × 10^5 x2 = −0.1 × 10^5

◮ Back substitution gives x2 = 1 and then x1 = 0, a completely
wrong solution of the given linear system (the exact values
x1 = 1/(1 − 0.0001) and x2 = 2 − 1/(1 − 0.0001) are both close to 1).
◮ We can trace the reason for this anomaly to the pivot 0.0001
being too small in the system

    0.0001x1 + x2 = 1
        x1 + x2 = 2
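The failure above can be reproduced mechanically. The sketch below (my own, not from the slides) simulates base-10, precision-3 arithmetic with a hypothetical helper `chop` that rounds every intermediate result to 3 significant digits, and runs the elimination without pivoting:

```python
def chop(x):
    """Round x to 3 significant decimal digits (base-10, precision-3)."""
    return float(f"{x:.2e}")

# 0.0001*x1 + x2 = 1,  x1 + x2 = 2, eliminated WITHOUT pivoting.
a11, a12, b1 = 1e-4, 1.0, 1.0
a21, a22, b2 = 1.0, 1.0, 2.0

m = chop(a21 / a11)                 # multiplier 1/0.0001 = 10000
a22 = chop(a22 - chop(m * a12))     # 1 - 10000 = -9999, rounds to -1.00e4
b2 = chop(b2 - chop(m * b1))        # 2 - 10000 = -9998, rounds to -1.00e4

x2 = chop(b2 / a22)                 # back substitution: x2 = 1
x1 = chop(chop(b1 - chop(a12 * x2)) / a11)
print(x1, x2)                       # 0.0 1.0: x1 is completely wrong
```

The small pivot produces the huge multiplier 10000, which wipes out the information in the second equation.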
Why is our basic GE naive?

Example
◮ Although 0.0001 ≠ 0 and we can in theory use it as a pivot in
Gaussian elimination, it is not advisable to do so in practice.
◮ Using small pivots means dividing rows by small numbers during
elimination.
◮ This may introduce underflow, overflow and roundoff errors.
◮ Therefore, to minimize the effect of roundoff, at each step it is
advisable to choose the row that puts the largest (in absolute
value) pivot element on the diagonal, i.e., find ip such that

    |a_{ip,i}| = max |a_{k,i}| for k = i, ..., n

Why is our basic GE naive?
Example
◮ A subtle example is the following backward instability. Take

    A = [ 1   1    1 ]
        [ 2  2+ε   5 ]
        [ 4   6    8 ]

with small ε.
◮ LU factorization (without pivoting) results in

    M1 A = [ 1  1  1 ]
           [ 0  ε  3 ]
           [ 0  2  4 ]

and

    M2 M1 A = [ 1  1     1     ]
              [ 0  ε     3     ] = U.
              [ 0  0  4 − 6/ε  ]

◮ The multipliers were

    L = [ 1    0   0 ]
        [ 2    1   0 ]
        [ 4  2/ε   1 ]
Why is our basic GE naive?
Example
◮ Now we assume that a right-hand side b is given as
b = [1, 0, 0]^T, and we attempt to solve Ax = b via
◮ Solve Ly = b.
◮ Solve Ux = y.
◮ If ε is on the order of machine accuracy, then the 4 in the entry
4 − 6/ε of U is insignificant. Therefore we actually compute

    Ũ = [ 1  1    1   ]
        [ 0  ε    3   ]   and   L̃ = L
        [ 0  0  −6/ε  ]

◮ which leads to

    L̃ Ũ = [ 1   1    1 ]
          [ 2  2+ε   5 ]  ≠ A
          [ 4   6    4 ]

◮ In fact, the product is significantly different from A. Thus, using L̃
and Ũ we are not able to solve a "nearby problem", and thus LU
factorization (without pivoting) is not backward stable.
Why is our basic GE naive?

Example
◮ Using L̃ and Ũ with the above right-hand side b, we obtain

    x̃ = [ 11/3 − 2ε/3 ]     [ 11/3 ]
        [     −2      ]  ≈  [  −2  ]
        [ 2ε/3 − 2/3  ]     [ −2/3 ]

◮ Whereas if we were to use the exact factorization A = LU, then
we get the exact answer

    x = [ (4ε − 7)/(2ε − 3)  ]     [  7/3 ]
        [    2/(2ε − 3)      ]  ≈  [ −2/3 ]
        [ −2(ε − 1)/(2ε − 3) ]     [ −2/3 ]

◮ Even though L̃ and Ũ are close to L and U, the product L̃Ũ is not
close to LU = A, and the computed solution x̃ is worthless.
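In the slides ε is symbolic; to watch the same mechanism numerically in double precision, here is a minimal 2×2 cousin of the example (my own construction, not from the text). The pivot ε is so small that the update 1 − 1/ε rounds to −1/ε, and the product L·U then loses the (2,2) entry of A entirely:

```python
import numpy as np

eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])

# LU without pivoting: L = [[1,0],[1/eps,1]], U = [[eps,1],[0, 1 - 1/eps]]
n = A.shape[0]
L, U = np.eye(n), A.copy()
for k in range(n - 1):
    for i in range(k + 1, n):
        L[i, k] = U[i, k] / U[k, k]          # huge multiplier 1/eps
        U[i, k:] -= L[i, k] * U[k, k:]       # 1 - 1/eps rounds to -1/eps

print((L @ U)[1, 1], A[1, 1])   # 0.0 versus 1.0: L@U is far from A
```

L̃ and Ũ are each the correctly rounded factors, yet their product is not close to A: exactly the backward instability described above.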
Partial Pivoting

Example
◮ Let us carry out Gaussian elimination with partial pivoting: we
interchange the two rows and solve the system

    x1 + x2 = 2
    0.0001x1 + x2 = 1

◮ Subtract 0.0001 times the first row from the second row,
R2 ← −0.0001R1 + R2.
The result of this operation is:

    [A(1) | b(1)] = [ 1     1    |   2    ]
                    [ 0  0.9999  | 0.9998 ]

◮ In our rounded arithmetic the equation 0.9999x2 = 0.9998 is stored
as 0.1 × 10^1 x2 = 0.1 × 10^1, which gives x2 = 1.
◮ The equation x1 + x2 = 2 then gives x1 = 1.
◮ Hence, we obtain a reasonably correct solution x1 = 1, x2 = 1 of
the given system (in our arithmetic).
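Repeating the precision-3 simulation (with the same hypothetical `chop` helper, rounding to 3 significant digits) on the interchanged system shows the cure:

```python
def chop(x):
    """Round x to 3 significant decimal digits (base-10, precision-3)."""
    return float(f"{x:.2e}")

# Rows interchanged first: x1 + x2 = 2,  0.0001*x1 + x2 = 1.
a11, a12, b1 = 1.0, 1.0, 2.0
a21, a22, b2 = 1e-4, 1.0, 1.0

m = chop(a21 / a11)                 # small multiplier 0.0001
a22 = chop(a22 - chop(m * a12))     # 0.9999 rounds to 1.00
b2 = chop(b2 - chop(m * b1))        # 0.9998 rounds to 1.00

x2 = chop(b2 / a22)                 # x2 = 1
x1 = chop(chop(b1 - chop(a12 * x2)) / a11)
print(x1, x2)                       # 1.0 1.0: a reasonably correct answer
```

With the large pivot on the diagonal the multiplier is tiny, so the rounding errors stay small.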
Gaussian Elimination

◮ The diagonal elements of U are called pivots. The k-th pivot is the
coefficient of the k-th variable in the k-th equation at the k-th
step of the elimination.
◮ The computation of the multipliers requires divisions by the
pivots. Consequently, the algorithm cannot be carried out if any
of the pivots is zero. (Intuition should tell us that it is a bad idea
to complete the computation if any of the pivots is nearly zero.)
◮ If some pivot (in Gaussian elimination without pivoting) is exactly
equal to 0, then the elimination fails (division by 0), and if some
pivot is very small in magnitude relative to other numbers in the
matrix A, then the computation may be numerically unstable.
◮ The use of nonzero pivots is sufficient for the theoretical
correctness of simple Gaussian elimination, but more care must
be taken if one is to obtain reliable results.
Partial and complete pivoting
Gaussian elimination with partial pivoting:

    A = [ 2  4   3   2 ]
        [ 3  6   5   2 ]
        [ 2  5   2  −3 ]
        [ 4  5  14  14 ]

◮ step 1:
Interchange rows

    P1 = [ 0 0 0 1 ]            [ 4  5  14  14 ]
         [ 0 1 0 0 ] ,  P1 A =  [ 3  6   5   2 ]
         [ 0 0 1 0 ]            [ 2  5   2  −3 ]
         [ 1 0 0 0 ]            [ 2  4   3   2 ]

Elimination

    M1 = [   1   0 0 0 ]               [ 4   5    14     14  ]
         [ −3/4  1 0 0 ] ,  M1 P1 A =  [ 0  9/4  −11/2 −17/2 ]
         [ −1/2  0 1 0 ]               [ 0  5/2   −5    −10  ]
         [ −1/2  0 0 1 ]               [ 0  3/2   −4    −5   ]
Partial and complete pivoting

Gaussian elimination with partial pivoting:

◮ step 2:
Interchange rows

    P2 = [ 1 0 0 0 ]                  [ 4   5    14     14  ]
         [ 0 0 1 0 ] ,  P2 M1 P1 A =  [ 0  5/2   −5    −10  ]
         [ 0 1 0 0 ]                  [ 0  9/4  −11/2 −17/2 ]
         [ 0 0 0 1 ]                  [ 0  3/2   −4    −5   ]

Elimination

    M2 = [ 1    0    0 0 ]                     [ 4   5   14   14 ]
         [ 0    1    0 0 ] ,  M2 P2 M1 P1 A =  [ 0  5/2  −5  −10 ]
         [ 0  −9/10  1 0 ]                     [ 0   0   −1  1/2 ]
         [ 0  −3/5   0 1 ]                     [ 0   0   −1   1  ]
Partial and complete pivoting

Gaussian elimination with partial pivoting:

◮ step 3:
Interchange rows (no interchange is needed at this step, so P3 = I)

    P3 = [ 1 0 0 0 ]                     [ 4   5   14   14 ]
         [ 0 1 0 0 ] ,  P3 M2 P2 M1 P1 A = [ 0  5/2  −5  −10 ]
         [ 0 0 1 0 ]                     [ 0   0   −1  1/2 ]
         [ 0 0 0 1 ]                     [ 0   0   −1   1  ]

Elimination

    M3 = [ 1 0  0 0 ]                        [ 4   5   14   14 ]
         [ 0 1  0 0 ] ,  M3 P3 M2 P2 M1 P1 A = [ 0  5/2  −5  −10 ]
         [ 0 0  1 0 ]                        [ 0   0   −1  1/2 ]
         [ 0 0 −1 1 ]                        [ 0   0    0  1/2 ]
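The whole sequence above can be checked numerically. The sketch below (my own) builds the Pi and Mi matrices from the three steps and verifies that M3 P3 M2 P2 M1 P1 A is the upper triangular U:

```python
import numpy as np

A = np.array([[2.0, 4, 3, 2],
              [3, 6, 5, 2],
              [2, 5, 2, -3],
              [4, 5, 14, 14]])

I = np.eye(4)
P1 = I[[3, 1, 2, 0]]                        # swap rows 1 and 4
M1 = np.eye(4); M1[1:, 0] = [-3/4, -1/2, -1/2]
P2 = I[[0, 2, 1, 3]]                        # swap rows 2 and 3
M2 = np.eye(4); M2[2:, 1] = [-9/10, -3/5]
P3 = I.copy()                               # step 3 needs no interchange
M3 = np.eye(4); M3[3, 2] = -1.0

U = M3 @ P3 @ M2 @ P2 @ M1 @ P1 @ A
print(U)
```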
Partial and complete pivoting

◮ Notice:
Let

    P = P3 P2 P1 = [ 0 0 0 1 ]             [ 4  5  14  14 ]
                   [ 0 0 1 0 ] ,  Â = PA = [ 2  5   2  −3 ]
                   [ 0 1 0 0 ]             [ 3  6   5   2 ]
                   [ 1 0 0 0 ]             [ 2  4   3   2 ]

Apply Gaussian elimination without pivoting to Â:

    [   1   0 0 0 ]        [ 4   5    14     14  ]
    [ −1/2  1 0 0 ]  Â  =  [ 0  5/2   −5    −10  ] = A(1)
    [ −3/4  0 1 0 ]        [ 0  9/4  −11/2 −17/2 ]
    [ −1/2  0 0 1 ]        [ 0  3/2   −4    −5   ]
Partial and complete pivoting

    [ 1    0    0 0 ]          [ 4   5   14   14 ]
    [ 0    1    0 0 ]  A(1) =  [ 0  5/2  −5  −10 ] = A(2)
    [ 0  −9/10  1 0 ]          [ 0   0   −1  1/2 ]
    [ 0  −3/5   0 1 ]          [ 0   0   −1   1  ]

◮ Continuing the elimination on Â without pivoting:

    [ 1 0  0 0 ]          [ 4   5   14   14 ]
    [ 0 1  0 0 ]  A(2) =  [ 0  5/2  −5  −10 ] = Û
    [ 0 0  1 0 ]          [ 0   0   −1  1/2 ]
    [ 0 0 −1 1 ]          [ 0   0    0  1/2 ]

◮ Û is identical to the upper triangular matrix U determined by
Gaussian elimination with partial pivoting.
Partial and complete pivoting
If we were to make the appropriate row interchanges in A to form a
new matrix Â, and then apply Gaussian elimination without row
interchanges to Â, we would get exactly the same upper triangular
matrix U that is computed by Gaussian elimination with partial
pivoting applied to A.
Theorem
Suppose Gaussian elimination with partial pivoting reduces A to
upper triangular form via

    A(1) = M1 P1 A
    A(2) = M2 P2 A(1)
    A(3) = M3 P3 A(2)
    ...
    A(n−1) = Mn−1 Pn−1 A(n−2) = U

Let Â = Pn−1 Pn−2 · · · P2 P1 A.
If Gaussian elimination without pivoting is applied to Â, giving an
upper triangular Û, then Û = U.
Partial and complete pivoting

The above Theorem also proves the following result.


Theorem
For any n × n nonsingular matrix A, there exists a permutation P such
that PA has an LU factorization.

PA = LU or A = P^T LU

◮ Note
The MATLAB function lu uses Gaussian elimination with partial
pivoting.
Execution of
[L, U, P] = lu(A)
determines matrices L, U and P such that PA = LU.
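A Python analogue of MATLAB's [L, U, P] = lu(A) is `scipy.linalg.lu` (assuming SciPy is available). Note a convention difference: SciPy returns factors with A = Ps·L·U, so MATLAB's P in PA = LU is the transpose of SciPy's permutation:

```python
import numpy as np
from scipy.linalg import lu   # assumes SciPy is installed

A = np.array([[2.0, 4, 3, 2],
              [3, 6, 5, 2],
              [2, 5, 2, -3],
              [4, 5, 14, 14]])

Ps, L, U = lu(A)   # SciPy convention: A = Ps @ L @ U
P = Ps.T           # MATLAB convention: P @ A = L @ U
print(np.allclose(P @ A, L @ U))
```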
Partial and complete pivoting
◮ Unless A has special properties (e.g., A is positive definite or
diagonally dominant), pivoting must be done to ensure stability.
◮ A program that reduces A to upper triangular form must keep
track of any interchanges made in order to solve Ax = b.
◮ The matrices L and U can be stored in one n × n array (as the 1's
on the diagonal of L do not have to be stored):

    [ u1,1  u1,2   u1,3   · · ·   u1,n ]
    [ m2,1  u2,2   u2,3   · · ·   u2,n ]
    [ m3,1  m3,2   u3,3   · · ·   u3,n ]
    [  ...   ...           ...    ...  ]
    [ mn,1  mn,2   · · ·  mn,n−1  un,n ]
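This packed storage is easy to sketch (my own code, not from the slides): overwrite a copy of the matrix so the upper triangle holds U and the strict lower triangle holds the multipliers, with L's unit diagonal left implicit. The sketch assumes no pivoting is needed, e.g. on the permuted matrix Â from the earlier example:

```python
import numpy as np

def lu_packed(A):
    """Doolittle LU without pivoting, overwriting a copy of A:
    strict lower triangle = multipliers m[i,k], upper triangle = U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 1):
        A[k+1:, k] /= A[k, k]                         # store multipliers
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A

# The permuted matrix from the earlier example needs no pivoting:
Ahat = np.array([[4.0, 5, 14, 14],
                 [2, 5, 2, -3],
                 [3, 6, 5, 2],
                 [2, 4, 3, 2]])
packed = lu_packed(Ahat)
L = np.tril(packed, -1) + np.eye(4)   # restore the implicit unit diagonal
U = np.triu(packed)
```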
Partial Pivoting

◮ Partial pivoting: exchange only rows.
◮ Exchanging rows does not affect the order of the xi.
◮ For increased numerical stability, make sure the largest possible
pivot element is used. This requires searching in the partial
column below the pivot element.
◮ Partial pivoting is usually sufficient.
Gaussian elimination with pivoting

◮ The reduced matrix obtained after k − 1 steps of the forward
elimination is:

    A(k−1) = [ a1,1  a1,2      · · ·                  · · ·  a1,n         ]
             [       a2,2^(1)  · · ·                  · · ·  a2,n^(1)     ]
             [                  . . .                        ...          ]
             [                 ak−1,k−1^(k−2)  ak−1,k^(k−2)  ak−1,n^(k−2) ]
             [   O             ak,k^(k−1)      · · ·         ak,n^(k−1)   ]
             [                 ...                           ...          ]
             [                 an,k^(k−1)      · · ·         an,n^(k−1)   ]

◮ We have two common pivoting strategies:
◮ Partial pivoting
◮ Complete pivoting (or total pivoting)
Partial and complete pivoting
◮ Partial pivoting:
Choose a_{m,k}^(k−1) as the pivot for step k, where

    | a_{m,k}^(k−1) | = max | a_{i,k}^(k−1) | over k ≤ i ≤ n

If m ≠ k, then interchange rows m and k.
◮ Matrix formulation of partial pivoting at step 1:

    A(1) = M1 P1 A

where
◮ M1 is the elementary elimination matrix at step 1
◮ P1 is a permutation matrix that performs the appropriate row
interchange at step 1.
◮ After n − 1 steps, we obtain:

    U = A(n−1) = Mn−1 Pn−1 Mn−2 Pn−2 · · · M1 P1 A

◮ Here the matrix Mn−1 Pn−1 Mn−2 Pn−2 · · · M1 P1 is not lower triangular
→ this is not an LU decomposition of A
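Although the product of the Mi and Pi is not lower triangular, applying each row interchange to the previously stored multipliers as well yields a genuine factorization PA = LU. A sketch of this standard bookkeeping (my own implementation):

```python
import numpy as np

def lu_partial(A):
    """GE with partial pivoting: returns P, L, U with P @ A = L @ U.
    Each row interchange is also applied to the multipliers stored so
    far; that is what turns the non-triangular product
    M_{n-1} P_{n-1} ... M_1 P_1 into the triangular factor L."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P, L = np.eye(n), np.zeros((n, n))
    for k in range(n - 1):
        m = k + int(np.argmax(np.abs(A[k:, k])))   # partial-pivot row
        if m != k:                                  # swap rows k and m
            A[[k, m]] = A[[m, k]]
            P[[k, m]] = P[[m, k]]
            L[[k, m]] = L[[m, k]]
        L[k+1:, k] = A[k+1:, k] / A[k, k]
        A[k+1:, k:] -= np.outer(L[k+1:, k], A[k, k:])
    return P, L + np.eye(n), np.triu(A)

A0 = np.array([[2.0, 4, 3, 2],
               [3, 6, 5, 2],
               [2, 5, 2, -3],
               [4, 5, 14, 14]])
P, L, U = lu_partial(A0)
```

On the 4×4 example above this reproduces the same U as the step-by-step computation.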
Partial Pivoting: Usually sufficient, but not always

◮ Partial pivoting is usually sufficient.
◮ Consider

    [A | b] = [ 2  2α | 2α ]
              [ 1   1 |  2 ]

◮ With partial pivoting, the first row is the pivot row:

    [A(1) | b(1)] = [ 2   2α  |  2α  ]
                    [ 0  1−α  | 2−α  ]

◮ and for large α, 1 − α ≈ −α and 2 − α ≈ −α:

    [ 2  2α | 2α ]
    [ 0  −α | −α ]

◮ so that x2 = 1 and x1 = 0 (the exact solution is x1 = α/(α−1) ≈ 1,
x2 = (α−2)/(α−1) ≈ 1)
◮ The pivot is selected as the largest in the column, but it should
be the largest relative to the full submatrix.
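One common remedy for this (scaled partial pivoting, not covered explicitly in the slides) is to compare each candidate pivot to the largest entry of its own row, so a row of uniformly huge coefficients does not win by size alone. A sketch under that assumption:

```python
import numpy as np

def scaled_pivot_row(A, k, s):
    """Pick the pivot row by |A[i,k]| / s[i], where s[i] = max_j |A[i,j]|
    is a scale factor computed once from the original matrix."""
    return k + int(np.argmax(np.abs(A[k:, k]) / s[k:]))

alpha = 1e16
A = np.array([[2.0, 2 * alpha],
              [1.0, 1.0]])
s = np.abs(A).max(axis=1)                 # scale factors [2*alpha, 1]

print(int(np.argmax(np.abs(A[:, 0]))))    # 0: plain partial pivoting keeps row 1
print(scaled_pivot_row(A, 0, s))          # 1: scaled pivoting picks row 2
```

Scaled pivoting chooses row 2 here, which is exactly the interchange the failed example needs.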
Total pivoting

◮ k-th step:
In the case of total pivoting (or complete pivoting), we search for
the largest number (in absolute value) in the entire remaining array,
◮ i.e., in the submatrix that remains after eliminating rows 1 to k − 1
and columns 1 to k − 1,
◮ instead of just the column (a_{k,k}^(k−1), a_{k+1,k}^(k−1), a_{k+2,k}^(k−1), · · ·, a_{n,k}^(k−1))^T.
◮ We shall probably need to interchange the columns as well as
the rows.
◮ When solving a system of equations using complete pivoting:
◮ each row interchange is equivalent to interchanging two equations,
◮ each column interchange is equivalent to interchanging two
unknowns.
◮ At the k-th step:

    |ak,k| = max |ai,j| for i = k, k + 1, · · · , n and j = k, k + 1, · · · , n
Complete pivoting
◮ Complete pivoting:
Choose a_{m,p}^(k−1) as the pivot for step k, where

    | a_{m,p}^(k−1) | = max | a_{i,j}^(k−1) | over k ≤ i ≤ n, k ≤ j ≤ n

If m ≠ k or p ≠ k, then interchange rows m and k and columns p
and k.
◮ Matrix formulation of complete pivoting at step 1:

    A(1) = M1 P1 A Q1

where
◮ M1 is the elementary elimination matrix at step 1
◮ P1 is a permutation matrix that performs the appropriate row
interchange at step 1.
◮ Q1 is a permutation matrix that performs the appropriate column
interchange at step 1.
◮ After n − 1 steps, we obtain:

    U = A(n−1) = Mn−1 Pn−1 Mn−2 Pn−2 · · · M1 P1 A Q1 Q2 · · · Qn−2 Qn−1

◮ Here the matrix Mn−1 Pn−1 Mn−2 Pn−2 · · · M1 P1 is not lower triangular
◮ Partial pivoting is usually sufficient.
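The same bookkeeping as for partial pivoting extends to complete pivoting by also accumulating the column permutations Qi. A sketch (my own implementation) returning P, L, U, Q with PAQ = LU:

```python
import numpy as np

def lu_complete(A):
    """GE with complete pivoting: returns P, L, U, Q with P @ A @ Q = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P, Q, L = np.eye(n), np.eye(n), np.zeros((n, n))
    for k in range(n - 1):
        # largest |entry| of the trailing (n-k) x (n-k) submatrix
        i, j = divmod(int(np.argmax(np.abs(A[k:, k:]))), n - k)
        m, p = k + i, k + j
        A[[k, m]] = A[[m, k]]; P[[k, m]] = P[[m, k]]; L[[k, m]] = L[[m, k]]
        A[:, [k, p]] = A[:, [p, k]]; Q[:, [k, p]] = Q[:, [p, k]]
        L[k+1:, k] = A[k+1:, k] / A[k, k]
        A[k+1:, k:] -= np.outer(L[k+1:, k], A[k, k:])
    return P, L + np.eye(n), np.triu(A), Q

A0 = np.array([[2.0, 4, 3, 2],
               [3, 6, 5, 2],
               [2, 5, 2, -3],
               [4, 5, 14, 14]])
P, L, U, Q = lu_complete(A0)
```

Row swaps commute with the row operations already performed, and column swaps only touch columns k, ..., n, which is why the invariant PAQ = LU is maintained.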
Full (or Complete) Pivoting:

◮ Exchange both rows and columns.
◮ Column exchange requires changing the order of the xi.
◮ For increased numerical stability, make sure the largest possible
pivot element is used. This requires searching in the pivot row,
and in all rows below the pivot row, starting at the pivot column.
◮ Full pivoting is less susceptible to roundoff, but the increase in
stability comes at the cost of more complex programming (not a
problem if you use a library routine) and an increase in the work
associated with searching and data movement.
Complete pivoting
◮ The following MATLAB function solves the system of linear
equations Ax = b using total pivoting.
◮ MATLAB commands:
≫ [L U P Q] = lucp(A);
where
◮ L is a lower triangular matrix with ones on the diagonal;
◮ U is an upper triangular matrix;
◮ P and Q are permutation matrices such that PAQ = LU.
◮ Complete pivoting requires N elements to be examined in total:

    N = n^2 + (n − 1)^2 + · · · + 2^2 + 1^2 = n(n + 1)(2n + 1)/6 ≈ n^3/3,

for large enough n.
◮ It offers little advantage over partial pivoting and is significantly
slower.
◮ It is rarely used in practice (it adds complexity to the computer
program).
◮ For getting good results, partial pivoting has proven to be a very
reliable procedure.
Complete pivoting

◮ There are times when the partial pivoting procedure is
inadequate. When some rows have coefficients that are very
large in comparison to those in other rows, partial pivoting may
not give a correct solution.
◮ When in doubt, use total pivoting.
◮ No amount of pivoting will remove inherent "ill-conditioning"
(discussed in Chapter II) from the system, but it helps to ensure
that no further ill-conditioning is introduced in the course of the
computation.
Gaussian elimination

Numerical Notes
Let A be an n × n matrix, with n large enough.
1. Computing an LU factorization of A takes about 2n^3/3 flops,
whereas finding A^−1 requires about 2n^3 flops.
2. Solving Ly = b and Ux = y requires about 2n^2 flops.
3. Partial pivoting requires M elements to be examined in total:

    M = (n − 1) + (n − 2) + · · · + 2 + 1 = n(n − 1)/2 ≈ n^2/2,

for large enough n.
4. Complete pivoting requires N elements to be examined in total:

    N = n^2 + (n − 1)^2 + · · · + 2^2 + 1^2 = n(n + 1)(2n + 1)/6 ≈ n^3/3,

for large enough n.
