CHAPTER III
LINEAR ALGEBRAIC EQUATIONS
3.1 Introduction
In this chapter, we will deal with the case of determining the values of x1, x2, ..., xn that
simultaneously satisfy the set of equations:

f1(x1, x2, ..., xn) = 0
f2(x1, x2, ..., xn) = 0
...                                                            (3.1)
fn(x1, x2, ..., xn) = 0
In particular we will consider linear algebraic equations which are of the form:
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...                                                            (3.2)
an1 x1 + an2 x2 + ... + ann xn = bn
where the a's are constant coefficients, the b's are constants, and n is the number of
equations.
The above system of linear equations may also be written in matrix form as:
[A]{X } = {B} (3.3)
Cramer's rule expresses each unknown as a ratio of two determinants. For example, the
first unknown of a three-equation system is

        | b1  a12  a13 |
        | b2  a22  a23 |
        | b3  a32  a33 |
x1 = ---------------------
               D

where D is the determinant of the coefficient matrix [A].
For more than three equations, Cramer's rule becomes impractical because, as the number
of equations increases, the determinants are time-consuming to evaluate by hand (or by
computer). Consequently, more efficient alternatives are used.
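As a quick illustration, the following is a minimal Python sketch (using NumPy) of Cramer's rule for a 3×3 system; the coefficient values are assumed purely for demonstration and do not come from the text.

```python
import numpy as np

def cramer_3x3(A, b):
    """Solve a 3x3 system [A]{X} = {B} with Cramer's rule."""
    D = np.linalg.det(A)              # determinant of the coefficient matrix
    if abs(D) < 1e-12:
        raise ValueError("determinant is (near) zero; system is singular or ill-conditioned")
    x = np.empty(3)
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b                  # replace column i with the right-hand-side vector
        x[i] = np.linalg.det(Ai) / D
    return x

# Illustrative system (values assumed)
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(cramer_3x3(A, b))               # approximately [3.0, -2.5, 7.0]
```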
3.2.3 Elimination of Unknowns
The basic strategy is to multiply the equations by constants so that one of the unknowns
will be eliminated when the equations are combined. The result is a single equation that
can be solved for the remaining unknown. This can then be substituted into either of the
original equations to compute the other variable.
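To make this concrete, here is a minimal Python sketch of eliminating one unknown from a system of two equations as described above; the coefficient values are illustrative assumptions, not taken from the text.

```python
# Two equations:  a11*x1 + a12*x2 = b1
#                 a21*x1 + a22*x2 = b2
a11, a12, b1 = 3.0, 2.0, 18.0
a21, a22, b2 = -1.0, 2.0, 2.0

# Scale the equations so the x1 terms match, then subtract to eliminate x1;
# the single remaining equation can be solved directly for x2.
x2 = (a11 * b2 - a21 * b1) / (a11 * a22 - a21 * a12)

# Back-substitute x2 into the first original equation to compute x1.
x1 = (b1 - a12 * x2) / a11

print(x1, x2)   # expected: x1 = 4.0, x2 = 3.0
```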
The elimination of unknowns can be extended to systems with more than two or three
equations. However, the numerous calculations required for larger systems make the
method extremely tedious to implement by hand. The technique can, however, be
formalized and readily programmed for the computer.
For the foregoing steps, Eq.(3.5a) is called the pivot equation, and a11 is the pivot
coefficient or element.
Now repeat the above to eliminate the second unknown from Eqs. (3.7c) through (3.7d).
To do this, multiply Eq. (3.7b) by a'32 / a'22 and subtract the result from Eq. (3.7c).
Perform a similar elimination for the remaining equations to yield the modified system,
where the double prime indicates that the elements have been modified twice.
The procedure can be continued using the remaining pivot equations. The final
manipulation in the sequence is to use the (n − 1)th equation to eliminate the x n −1 term
from the nth equation. At this point, the system will have been transformed to an upper
triangular system:
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1                   (3.8a)
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2               (3.8b)
                  a''33 x3 + ... + a''3n xn = b''3             (3.8c)
                           .
                           .
                           .
                        ann^(n-1) xn = bn^(n-1)                (3.8d)

xn = bn^(n-1) / ann^(n-1)                                      (3.9)
This result can be back substituted into the (n − 1)th equation to solve for x n −1 . The
procedure, which is repeated to evaluate the remaining x 's, can be represented by the
following formula:
        bi^(i-1) − ∑ (from j = i+1 to n) aij^(i-1) xj
xi = --------------------------------------------------    for i = n−1, n−2, ..., 1    (3.10)
                        aii^(i-1)
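The following is a minimal Python sketch of naive Gauss elimination, combining the forward-elimination pass that produces the upper triangular system of Eq. (3.8) with the back-substitution formula of Eqs. (3.9) and (3.10); the test system is an illustrative assumption.

```python
import numpy as np

def naive_gauss(A, b):
    """Solve [A]{X} = {B} by naive Gauss elimination (no pivoting)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: reduce to the upper triangular system of Eq. (3.8).
    for k in range(n - 1):                 # k indexes the pivot equation
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]     # fails if the pivot element is zero
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # Back substitution, Eqs. (3.9) and (3.10).
    x = np.zeros(n)
    x[n - 1] = b[n - 1] / A[n - 1, n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Illustrative system (values assumed)
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(naive_gauss(A, b))   # approximately [3.0, -2.5, 7.0]
```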
3.3.2 Pitfalls of Gauss Elimination
Whereas there are many systems of equations that can be solved with naive Gauss
elimination, there are some pitfalls that must be explored before writing a general
computer program to implement the method.
i, Division by Zero
The primary reason that the foregoing technique is called "naive" is that during both
elimination and back-substitution phases, it is possible that a division by zero can occur.
Problems also arise when the coefficient is very close to zero. The technique of pivoting
(to be discussed later) has been developed to partially avoid these problems.
ii, Round-off Errors
The problem of round-off errors can become particularly important when large numbers
of equations are to be solved. A rough rule of thumb is that round-off errors may be
important when dealing with 100 or more equations. In any event, one should always
substitute the answers back into the original equations to check whether a substantial
error has occurred.
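As a minimal illustration of this check, the residual of a computed solution can be evaluated as follows; the solver call and coefficient values are assumptions carried over from the earlier sketch.

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
x = np.linalg.solve(A, b)      # any computed solution, e.g. from Gauss elimination

# Substitute the answers back into the original equations: a small residual
# suggests that no substantial round-off error has occurred.
residual = A @ x - b
print(residual)                # entries should be close to zero
```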
Gauss-Jordan Elimination
Gauss-Jordan is a variation of the Gauss elimination. The major difference is that when
an unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other
equations rather than just the subsequent ones. In addition, all rows are normalized by
dividing them by their pivot elements. Thus, the elimination step results in an identity
matrix rather than a triangular matrix. Thus, back-substitution is not necessary.
The method is attributed to Johann Carl Friedrich Gauss (1777-1855) and Wilhelm
Jordan (1842-1899).
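Below is a minimal Python sketch of the Gauss-Jordan procedure described above: each pivot row is normalized by its pivot element and the pivot variable is eliminated from all other rows, leaving an identity matrix on the left. No pivoting is included, and the test values are assumptions.

```python
import numpy as np

def gauss_jordan(A, b):
    """Solve [A]{X} = {B} by Gauss-Jordan elimination (no pivoting)."""
    n = len(b)
    # Form the augmented matrix [A | b].
    aug = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])

    for k in range(n):
        aug[k] /= aug[k, k]                   # normalize the pivot row by its pivot element
        for i in range(n):
            if i != k:                        # eliminate x_k from ALL other rows,
                aug[i] -= aug[i, k] * aug[k]  # not just the ones below the pivot
    # The left block is now the identity matrix, so the last column is the solution.
    return aug[:, -1]

# Illustrative system (values assumed)
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_jordan(A, b))   # approximately [3.0, -2.5, 7.0]
```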
Example Use the Gauss-Jordan elimination method to solve the linear system
LU-Decomposition
Gauss elimination is a sound way to solve systems of algebraic equations of the form
[A]{X} = {B}. However, it becomes inefficient when solving several sets of equations with
the same coefficients [A] but different right-hand-side constants {B}.
Eq. (3.13) can also be expressed in matrix notation and rearranged to give:
[U ]{X } − {D} = 0 (3.14)
Now, assume that there is a lower triangular matrix with 1's on the diagonal,

        | 1    0    0 |
[L] =   | l21  1    0 |                                        (3.15)
        | l31  l32  1 |
that has the property that when Eq. (3.14) is premultiplied by it, Eq. (3.12) is the result.
That is,
[L]{[U]{X} − {D}} = [A]{X} − {B}                               (3.16)
If this equation holds, it follows that
[L][U ] = [A] (3.17)
and
[L]{D} = {B} (3.18)
A two-step strategy (see Fig. 3.1) for obtaining solutions can be based on Eqs. (3.14),
(3.17) and (3.18):
1. LU decomposition step. [A] is factored or decomposed into the lower [L] and upper
[U ] triangular matrices.
2. Substitution step. [L] and [U ] are used to determine a solution {X } for a right-hand
side {B}. This step itself consists of two steps. First, Eq. (3.18) is used to generate an
intermediate vector by forward substitution. Then, the result is substituted into Eq.
(3.14) which can be solved by back substitution for {X }.
Fig. 3.1 The steps in solving [A]{X} = {B} by LU decomposition: (a) decomposition of
[A] into [L] and [U]; (b) forward substitution of [L]{D} = {B} to find {D}; (c) back
substitution of [U]{X} = {D} to find {X}.
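The following is a minimal Python sketch of this two-step strategy, using a Doolittle-style decomposition (1's on the diagonal of [L], as in Eq. (3.15)) followed by forward and back substitution. The routine omits pivoting, and the test system is an illustrative assumption.

```python
import numpy as np

def lu_decompose(A):
    """Factor [A] = [L][U] with 1's on the diagonal of [L] (no pivoting)."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # store the elimination factor in [L]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    """Substitution step: forward substitution for {D}, then back substitution for {X}."""
    n = len(b)
    d = np.zeros(n)
    for i in range(n):                     # [L]{D} = {B}, Eq. (3.18)
        d[i] = b[i] - L[i, :i] @ d[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # [U]{X} = {D}, Eq. (3.14)
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Illustrative system (values assumed); once [L] and [U] are known, additional
# right-hand sides {B} can be processed without repeating the decomposition.
A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
L, U = lu_decompose(A)
print(lu_solve(L, U, b))   # approximately [3.0, -2.5, 7.0]
```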
Gauss-Seidel Method
Iteration is a popular technique for finding roots of equations. A generalization of
fixed-point iteration can be applied to systems of linear equations to produce accurate
results. The Gauss-Seidel method is the most common iterative method and is attributed
to Philipp Ludwig von Seidel (1821-1896).
Consider that the n×n square matrix A is split into three parts: a diagonal matrix D
containing the main-diagonal elements, a matrix −L containing the elements below the
diagonal, and a matrix −U containing the elements above the diagonal, so that
A = D − L − U.
The solution to the linear system AX = B can be obtained by starting with an initial
vector P0 and using the iteration scheme

P_{k+1} = T P_k + C        for k = 0, 1, 2, ...

where

T = (D − L)^(-1) U

and

C = (D − L)^(-1) B
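The following is a minimal NumPy sketch of this matrix form of the iteration; the splitting follows A = D − L − U as defined above, and the iteration count is an assumption. The system used corresponds to Example 2 below.

```python
import numpy as np

def gauss_seidel_matrix(A, B, P0, iterations=10):
    """Iterate P_{k+1} = (D - L)^(-1) (U P_k + B) with the split A = D - L - U."""
    D = np.diag(np.diag(A))        # main-diagonal part of A
    L = -np.tril(A, k=-1)          # below-diagonal part (sign convention A = D - L - U)
    U = -np.triu(A, k=1)           # above-diagonal part
    P = P0.astype(float).copy()
    for _ in range(iterations):
        P = np.linalg.solve(D - L, U @ P + B)
    return P

# System of Example 2 below
A = np.array([[ 5.0, -1.0, 1.0],
              [ 1.0,  3.0, 1.0],
              [-1.0,  1.0, 4.0]])
B = np.array([4.0, 2.0, 3.0])
print(gauss_seidel_matrix(A, B, np.zeros(3)))   # approaches [0.65625, 0.15625, 0.875]
```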
For the purpose of hand calculation, let us consider a set of three linear equations in
three unknowns.
If the diagonal elements are all nonzero, the first equation can be solved for x1 , the
second for x 2 and the third for x3 to yield:
x1 = (b1 − a12 x2 − a13 x3) / a11                              (a)

x2 = (b2 − a21 x1 − a23 x3) / a22                              (b)

x3 = (b3 − a31 x1 − a32 x2) / a33                              (c)
Steps to be followed:
1. Assume initial values for the unknowns (commonly x2 = x3 = 0).
2. Compute x1 from Eq. (a) using the current values of x2 and x3.
3. Compute x2 from Eq. (b), immediately using the newly computed x1.
4. Compute x3 from Eq. (c), using the newly computed x1 and x2.
5. Repeat steps 2-4 until the values no longer change appreciably.
Example 2 Use the Gauss-Seidel method to obtain the solution of the following system of
linear equations.
5x1 − x2 + x3 = 4
x1 + 3x2 + x3 = 2
−x1 + x2 + 4x3 = 3
Solving for x1 from eq. 1:    x1 = (4 + x2 − x3) / 5
Solving for x2 from eq. 2:    x2 = (2 − x1 − x3) / 3
Solving for x3 from eq. 3:    x3 = (3 + x1 − x2) / 4
Executing the above steps repeatedly, starting from the initial guess x2 = x3 = 0, we
obtain the following results.
Iteration    x1          x2          x3
1            0.8         0.4         0.85
2            0.71        0.146667    0.890833
3            0.651167    0.152667    0.874625
4            0.655608    0.156589    0.874755
5            0.656367    0.156293    0.875019
6            0.656255    0.156242    0.875003
7            0.656248    0.15625     0.875
8            0.65625     0.15625     0.875
9            0.65625     0.15625     0.875
As we can see, the values start to repeat after the 8th iteration; hence we can stop the
calculation and take the final values as the solution of the linear system of equations.
Hence, x1 = 0.65625
x 2 = 0.15625
x3 = 0.875
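As a check, the hand calculation above can be reproduced with a short Python sketch of the component-by-component Gauss-Seidel sweep; the convergence tolerance and maximum iteration count are assumptions.

```python
def gauss_seidel_example2(tol=1e-9, max_iter=50):
    """Gauss-Seidel iteration for the system of Example 2."""
    x1 = x2 = x3 = 0.0                      # initial guess
    for k in range(1, max_iter + 1):
        x1_old, x2_old, x3_old = x1, x2, x3
        x1 = (4 + x2 - x3) / 5              # eq. 1 solved for x1
        x2 = (2 - x1 - x3) / 3              # eq. 2 solved for x2, using the new x1
        x3 = (3 + x1 - x2) / 4              # eq. 3 solved for x3, using the new x1 and x2
        if max(abs(x1 - x1_old), abs(x2 - x2_old), abs(x3 - x3_old)) < tol:
            break
    return x1, x2, x3, k

print(gauss_seidel_example2())   # approximately (0.65625, 0.15625, 0.875, ...)
```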