LU Decomposition: Partial and Total Pivoting
Partial Pivoting
Abdellatif Serghini
The reduction of a matrix to its row echelon form may necessitate row
interchanges, as the following example shows.
Example
◮ A row having a zero pivot:

A =
[ 0 1
  1 1 ]

◮ To avoid division by zero, swap the row having the zero pivot with
one of the rows below it.
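The swap can be sketched in a few lines of Python (a NumPy sketch, not part of the original notes; the matrix is the one from the example above):

```python
import numpy as np

A = np.array([[0., 1.],
              [1., 1.]])

# the naive multiplier A[1,0]/A[0,0] would divide by zero, so
# swap row 0 with the first row below it having a nonzero entry in column 0
if A[0, 0] == 0.0:
    m = np.flatnonzero(A[:, 0])[0]
    A[[0, m]] = A[[m, 0]]

print(A[0, 0])  # the pivot is now 1.0
```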
Why is our basic GE naive?
Example
Let us modify the matrix in the above example by replacing a11 = 0 by
a small number 0.0001 and consider the following linear system

0.0001 x1 + x2 = 1
x1 + x2 = 2

◮ After interchanging the two rows, we obtain x1 ≈ 1, x2 ≈ 1 as the
unique solution of the linear system.
Example
◮ Although 0.0001 ≠ 0, so that in theory we can use it as a pivot in
Gaussian elimination, it is not advisable to do so in practice.
◮ Using small pivots means dividing rows by small numbers for
elimination.
◮ This may introduce errors of underflow, overflow and roundoff.
◮ Therefore, to minimize the effect of roundoff, at each step it is
advisable to choose the row that puts the largest (in absolute
value) pivot element on the diagonal, i.e., find ip such that

| a_{ip,k}^{(k−1)} | = max_{k ≤ i ≤ n} | a_{i,k}^{(k−1)} |
◮ which leads to

L̃ Ũ =
[ 1   1    1
  2  2+ε   5
  4   6    4 ]  ≠ A

◮ In fact, the product is significantly different from A. Thus, using L̃
and Ũ we are not able to solve a "nearby problem", and so this LU
factorization is not backward stable.
Example
◮ Using L̃ and Ũ with the above right-hand side b, we obtain

x̃ = ( 11/(2 − 3ε),  −2,  2/(3ε − 3) )^T ≈ ( 11/2,  −2,  −2/3 )^T
Example
◮ Let us carry out Gaussian elimination with partial pivoting:
interchange the two rows and solve the above system

x1 + x2 = 2
0.0001 x1 + x2 = 1

◮ Subtract 0.0001 times the first row from the second row:

R2 ← −0.0001 R1 + R2

The result of this operation is:

[A(1) | b(1)] =
[ 1    1    |   2
  0  0.9999 | 0.9998 ]
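The effect of a tiny pivot can be reproduced with a short Python sketch (ge_solve is an illustrative helper, not library code; the pivot is exaggerated from 0.0001 to 1e−20 so the roundoff is visible even in double precision):

```python
import numpy as np

def ge_solve(A, b, pivot=False):
    """Gaussian elimination with back substitution; optional partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        if pivot:  # bring the largest |entry| of column k onto the diagonal
            m = k + np.argmax(np.abs(A[k:, k]))
            A[[k, m]] = A[[m, k]]
            b[[k, m]] = b[[m, k]]
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[1e-20, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])
print(ge_solve(A, b, pivot=False))  # x1 is lost to roundoff
print(ge_solve(A, b, pivot=True))   # close to [1, 1]
```

Without pivoting, the subtraction 1 − 10^20 swallows the 1 entirely, and the computed x1 is wildly wrong; one row interchange fixes it.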
Example
◮ Let us carry out Gaussian elimination with partial pivoting on

A =
[ 2  4   3   2
  3  6   5   2
  2  5   2  −3
  4  5  14  14 ]

◮ Step 1:
Interchange rows:

P1 =
[ 0 0 0 1
  0 1 0 0
  0 0 1 0
  1 0 0 0 ]

P1 A =
[ 4  5  14  14
  3  6   5   2
  2  5   2  −3
  2  4   3   2 ]

Elimination:

M1 =
[   1   0 0 0
  −3/4  1 0 0
  −1/2  0 1 0
  −1/2  0 0 1 ]

M1 P1 A =
[ 4   5    14     14
  0  9/4  −11/2  −17/2
  0  5/2   −5    −10
  0  3/2   −4     −5 ]
Partial and complete pivoting
◮ Step 2:
Interchange rows: the largest pivot candidate in column 2 is 5/2
(row 3), so P2 interchanges rows 2 and 3.
Elimination:

M2 =
[ 1    0    0 0
  0    1    0 0
  0  −9/10  1 0
  0  −3/5   0 1 ]

M2 P2 M1 P1 A =
[ 4   5   14   14
  0  5/2  −5  −10
  0   0   −1   1/2
  0   0   −1    1 ]
◮ Step 3:
The two pivot candidates in column 3 have the same modulus, so no
interchange is needed: P3 = I.
Elimination:

M3 =
[ 1 0  0 0
  0 1  0 0
  0 0  1 0
  0 0 −1 1 ]

M3 P3 M2 P2 M1 P1 A =
[ 4   5   14   14
  0  5/2  −5  −10
  0   0   −1   1/2
  0   0    0   1/2 ]
◮ Notice:
Let

P = P3 P2 P1 =
[ 0 0 0 1
  0 0 1 0
  0 1 0 0
  1 0 0 0 ]

Â = P A =
[ 4  5  14  14
  2  5   2  −3
  3  6   5   2
  2  4   3   2 ]

◮ In particular,

A(2) = M2 P2 A(1) =
[ 4   5   14   14
  0  5/2  −5  −10
  0   0   −1   1/2
  0   0   −1    1 ]
A(1) = M1 P1 A
A(2) = M2 P2 A(1)
A(3) = M3 P3 A(2)
...
A(n−1) = Mn−1 Pn−1 A(n−2) = U
PA = LU or A = P^T LU
◮ Note
The MATLAB function lu uses Gaussian elimination with partial
pivoting.
Execution of
[L, U, P] = lu(A)
determines matrices L, U and P such that PA = LU.
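For comparison, the same PA = LU factorization can be sketched in NumPy (plu is an illustrative helper, not the MATLAB routine), checked here on the 4 × 4 matrix of the example above:

```python
import numpy as np

def plu(A):
    """LU with partial pivoting (sketch): returns P, L, U with P @ A = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        m = k + np.argmax(np.abs(U[k:, k]))  # pivot row
        U[[k, m], k:] = U[[m, k], k:]        # interchange rows of the active part
        L[[k, m], :k] = L[[m, k], :k]        # carry the stored multipliers along
        P[[k, m]] = P[[m, k]]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

A = np.array([[2., 4., 3., 2.],
              [3., 6., 5., 2.],
              [2., 5., 2., -3.],
              [4., 5., 14., 14.]])
P, L, U = plu(A)
print(np.allclose(P @ A, L @ U))  # True
```

SciPy exposes a library version as scipy.linalg.lu, with the convention A = P L U.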
Partial and complete pivoting
The construction above also proves the following result.
Theorem
For any n × n nonsingular matrix A, there exists a permutation P such
that PA has an LU factorization.
PA = LU or A = P^T LU
At step k, the pivot row m is chosen such that

| a_{m,k}^{(k−1)} | = max_{k ≤ i ≤ n} | a_{i,k}^{(k−1)} |
A(1) = M1 P1 A
where
◮ M1 is the elementary matrix of type 1 at step 1
◮ P1 is a permutation matrix that does the appropriate row
interchange at step 1.
◮ After n − 1 steps, we obtain Mn−1 Pn−1 Mn−2 Pn−2 · · · M1 P1 A = U.
◮ However, the matrix Mn−1 Pn−1 Mn−2 Pn−2 · · · M1 P1 is not lower triangular,
so this is not yet an LU decomposition of A.
Partial Pivoting: Usually sufficient, but not always
◮ k-th step:
In the case of total pivoting (or complete pivoting), we search for
the largest number (in absolute value) in the entire remaining array,
◮ i.e., in the submatrix left after eliminating rows 1 to k − 1 and
columns 1 to k − 1,
◮ instead of just the column (a_{k,k}^{(k−1)}, a_{k+1,k}^{(k−1)}, a_{k+2,k}^{(k−1)}, · · · , a_{n,k}^{(k−1)})^T.
◮ We shall probably need to interchange the columns as well as
the rows.
◮ When solving a system of equations using complete pivoting
◮ each row interchange is equivalent to interchanging two equations
◮ each column interchange is equivalent to interchanging the two
unknowns.
◮ At the k-th step, the remaining (n − k + 1) × (n − k + 1) submatrix must
be searched, so in total

N = n^2 + (n − 1)^2 + · · · + 2^2 + 1^2 = n(n + 1)(2n + 1)/6 ≈ n^3/3,

for large enough n.
◮ It offers little advantage over partial pivoting and is significantly
slower.
◮ It is rarely used in practice (adding complexity to the computer
program)
◮ For getting good results, partial pivoting has proven to be a very
reliable procedure.
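A sketch of complete pivoting in Python (NumPy; ge_complete is an illustrative name, and the test matrix is hypothetical): column interchanges reorder the unknowns, so their order must be restored after back substitution:

```python
import numpy as np

def ge_complete(A, b):
    """Gaussian elimination with complete pivoting (sketch)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    perm = np.arange(n)  # perm[j] = original index of the unknown in column j
    for k in range(n - 1):
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        r, c = k + i, k + j
        A[[k, r]] = A[[r, k]]        # row swap = interchange two equations
        b[[k, r]] = b[[r, k]]
        A[:, [k, c]] = A[:, [c, k]]  # column swap = interchange two unknowns
        perm[[k, c]] = perm[[c, k]]
        for i2 in range(k + 1, n):
            f = A[i2, k] / A[k, k]
            A[i2, k:] -= f * A[k, k:]
            b[i2] -= f * b[k]
    y = np.zeros(n)
    for i2 in range(n - 1, -1, -1):  # back substitution
        y[i2] = (b[i2] - A[i2, i2 + 1:] @ y[i2 + 1:]) / A[i2, i2]
    x = np.zeros(n)
    x[perm] = y                      # undo the column interchanges
    return x

A = np.array([[2., 4., 3.], [3., 6., 5.], [4., 5., 14.]])
b = A @ np.array([1., -1., 2.])
print(np.allclose(ge_complete(A, b), [1., -1., 2.]))  # True
```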
Numerical Notes
Let A be an n × n matrix. We assume n is large enough.
1. Computing an LU factorization of A takes about 2n3 /3 flops,
whereas finding A−1 requires about 2n3 flops.
2. Solving Ly = b and Ux = y requires about 2n2 flops.
3. Partial pivoting requires M elements to be examined in total.
M = (n − 1) + (n − 2) + · · · + 2 + 1 = n(n − 1)/2 ≈ n^2/2,
for large enough n.
4. Complete pivoting requires N elements to be examined in total.
N = n^2 + (n − 1)^2 + · · · + 2^2 + 1^2 = n(n + 1)(2n + 1)/6 ≈ n^3/3,
for large enough n.
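The two closed-form counts are easy to verify directly (a small Python check; the function names are illustrative):

```python
def partial_count(n):
    """(n-1) + (n-2) + ... + 1 comparisons over the n-1 elimination steps."""
    return sum(n - k for k in range(1, n))

def complete_count(n):
    """n^2 + (n-1)^2 + ... + 1^2 entries examined in total."""
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

for n in (5, 50, 500):
    assert partial_count(n) == n * (n - 1) // 2
    assert complete_count(n) == n * (n + 1) * (2 * n + 1) // 6
print("closed forms verified")
```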