Numerical Recipes
which (peeling off the C's one at a time) implies a solution

    x = C1 · C2 · C3 · b                                          (2.1.8)

Notice the essential difference between equation (2.1.8) and equation (2.1.6). In the latter case, the C's must be applied to b in the reverse order from that in which they become known. That is, they must all be stored along the way. This requirement greatly reduces the usefulness of column operations, generally restricting them to simple permutations, for example in support of full pivoting.

CITED REFERENCES AND FURTHER READING:

Wilkinson, J.H. 1965, The Algebraic Eigenvalue Problem (New York: Oxford University Press). [1]

Carnahan, B., Luther, H.A., and Wilkes, J.O. 1969, Applied Numerical Methods (New York: Wiley), Example 5.2, p. 282.

Bevington, P.R. 1969, Data Reduction and Error Analysis for the Physical Sciences (New York: McGraw-Hill), Program B-2, p. 298.

Westlake, J.R. 1968, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations (New York: Wiley).

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §9.3-1.

2.2 Gaussian Elimination with Backsubstitution
Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5) Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machinereadable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to [email protected] (outside North America).
Backsubstitution
But how do we solve for the x's? The last x (x4 in this example) is already isolated, namely

    x4 = b'4 / a'44                                               (2.2.2)

With the last x known we can move to the penultimate x,

    x3 = (1 / a'33) [ b'3 − x4 a'34 ]                             (2.2.3)
and then proceed with the x before that one. The typical step is

    xi = (1 / a'ii) [ b'i − Σ_{j=i+1}^{N} a'ij xj ]               (2.2.4)
The procedure defined by equation (2.2.4) is called backsubstitution. The combination of Gaussian elimination and backsubstitution yields a solution to the set of equations.

The advantage of Gaussian elimination and backsubstitution over Gauss-Jordan elimination is simply that the former is faster in raw operations count: The innermost loops of Gauss-Jordan elimination, each containing one subtraction and one multiplication, are executed N³ and N²M times (where there are N equations and M unknowns). The corresponding loops in Gaussian elimination are executed only (1/3)N³ times (only half the matrix is reduced, and the increasing numbers of predictable zeros reduce the count to one-third), and (1/2)N²M times, respectively. Each backsubstitution of a right-hand side is (1/2)N² executions of a similar loop (one multiplication plus one subtraction). For M ≪ N (only a few right-hand sides) Gaussian elimination thus has about a factor three advantage over Gauss-Jordan. (We could reduce this advantage to a factor 1.5 by not computing the inverse matrix as part of the Gauss-Jordan scheme.)

For computing the inverse matrix (which we can view as the case of M = N right-hand sides, namely the N unit vectors which are the columns of the identity matrix), Gaussian elimination and backsubstitution at first glance require (1/3)N³ (matrix reduction) + (1/2)N³ (right-hand side manipulations) + (1/2)N³ (N backsubstitutions) = (4/3)N³ loop executions, which is more than the N³ for Gauss-Jordan. However, the unit vectors are quite special in containing all zeros except for one element. If this is taken into account, the right-side manipulations can be reduced to only (1/6)N³ loop executions, and, for matrix inversion, the two methods have identical efficiencies.

Both Gaussian elimination and Gauss-Jordan elimination share the disadvantage that all right-hand sides must be known in advance.
The LU decomposition method in the next section does not share that deficiency, and also has an equally small operations count, both for solution with any number of right-hand sides, and for matrix inversion. For this reason we will not implement the method of Gaussian elimination as a routine.
CITED REFERENCES AND FURTHER READING:

Ralston, A., and Rabinowitz, P. 1978, A First Course in Numerical Analysis, 2nd ed. (New York: McGraw-Hill), §9.3-1.

Isaacson, E., and Keller, H.B. 1966, Analysis of Numerical Methods (New York: Wiley), §2.1.

Johnson, L.W., and Riess, R.D. 1982, Numerical Analysis, 2nd ed. (Reading, MA: Addison-Wesley), §2.2.1.

Westlake, J.R. 1968, A Handbook of Numerical Matrix Inversion and Solution of Linear Equations (New York: Wiley).
2.3 LU Decomposition and Its Applications

Suppose we are able to write the matrix A as a product of two matrices,

    L · U = A                                                     (2.3.1)

where L is lower triangular (has elements only on the diagonal and below) and U is upper triangular (has elements only on the diagonal and above). For the case of a 4 × 4 matrix A, for example, equation (2.3.1) would look like this:

    [ α11   0    0    0  ]   [ β11  β12  β13  β14 ]   [ a11  a12  a13  a14 ]
    [ α21  α22   0    0  ] · [  0   β22  β23  β24 ] = [ a21  a22  a23  a24 ]
    [ α31  α32  α33   0  ]   [  0    0   β33  β34 ]   [ a31  a32  a33  a34 ]
    [ α41  α42  α43  α44 ]   [  0    0    0   β44 ]   [ a41  a42  a43  a44 ]
                                                                  (2.3.2)

We can use a decomposition such as (2.3.1) to solve the linear set

    A · x = (L · U) · x = L · (U · x) = b                         (2.3.3)

by first solving for the vector y such that

    L · y = b                                                     (2.3.4)

and then solving

    U · x = y                                                     (2.3.5)

What is the advantage of breaking up one linear set into two successive ones? The advantage is that the solution of a triangular set of equations is quite trivial, as we have already seen in §2.2 (equation 2.2.4). Thus, equation (2.3.4) can be solved by forward substitution as follows,

    y1 = b1 / α11
    yi = (1/αii) [ bi − Σ_{j=1}^{i−1} αij yj ],    i = 2, 3, ..., N       (2.3.6)

while (2.3.5) can then be solved by backsubstitution exactly as in equations (2.2.2)–(2.2.4),

    xN = yN / βNN
    xi = (1/βii) [ yi − Σ_{j=i+1}^{N} βij xj ],    i = N−1, N−2, ..., 1   (2.3.7)