lec4_gaussian

The document outlines the process of Naive Gaussian Elimination for solving linear systems, detailing both the forward elimination and back substitution procedures. It emphasizes the importance of error and residual vectors in assessing the accuracy of computed solutions. Additionally, it provides pseudocode for implementing the elimination process and examples of how to apply it.


Naive Gaussian Elimination

The examples in this section give some insight into the difficulty of assessing the accuracy of computed solutions of linear systems.

Summary

(1) The basic forward elimination procedure, using equation $k$ to operate on equations $k+1, k+2, \ldots, n$, is

$$a_{ij} \leftarrow a_{ij} - (a_{ik}/a_{kk})\,a_{kj} \qquad (k \le j \le n,\; k < i \le n)$$

$$b_i \leftarrow b_i - (a_{ik}/a_{kk})\,b_k$$

Here we assume $a_{kk} \ne 0$. The basic back substitution procedure is

$$x_i = \frac{1}{a_{ii}}\left(b_i - \sum_{j=i+1}^{n} a_{ij} x_j\right) \qquad (i = n-1, n-2, \ldots, 1)$$

(2) When solving the linear system $Ax = b$, if the true or exact solution is $x$ and the approximate or computed solution is $\tilde{x}$, then important quantities are

error vector: $e = \tilde{x} - x$

residual vector: $r = A\tilde{x} - b$
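To make the error and residual vectors concrete, here is a small numerical sketch in Python; the $2 \times 2$ system and the perturbed "computed" solution are invented purely for illustration:

```python
# Hypothetical 2x2 system Ax = b with known exact solution x = (1, 1).
A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [3.0, 4.0]
x_exact = [1.0, 1.0]
x_tilde = [1.001, 0.999]   # a slightly perturbed "computed" solution

# Error vector e = x_tilde - x_exact
e = [xt - xe for xt, xe in zip(x_tilde, x_exact)]

# Residual vector r = A*x_tilde - b
r = [sum(A[i][j] * x_tilde[j] for j in range(2)) - b[i]
     for i in range(2)]
```

Here both $e$ and $r$ are small, but in general a small residual does not guarantee a small error, which is exactly the difficulty alluded to above.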

Problems 7.1

1. Show that the system of equations

$$x_1 + 4x_2 + \alpha x_3 = 6$$
Forward Elimination

250 Chapter 7 Systems of Linear Equations

Just prior to the kth step in the forward elimination, the system will appear as follows:
$$
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & \cdots & \cdots & \cdots & \cdots & a_{1n}\\
0 & a_{22} & a_{23} & \cdots & \cdots & \cdots & \cdots & \cdots & a_{2n}\\
0 & 0 & a_{33} & \cdots & \cdots & \cdots & \cdots & \cdots & a_{3n}\\
\vdots & \vdots & \vdots & \ddots & & & & & \vdots\\
0 & 0 & 0 & \cdots & a_{kk} & \cdots & a_{kj} & \cdots & a_{kn}\\
\vdots & \vdots & \vdots & & \vdots & \ddots & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & a_{ik} & \cdots & a_{ij} & \cdots & a_{in}\\
\vdots & \vdots & \vdots & & \vdots & & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & a_{nk} & \cdots & a_{nj} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_k\\ \vdots\\ x_i\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ b_2\\ b_3\\ \vdots\\ b_k\\ \vdots\\ b_i\\ \vdots\\ b_n \end{bmatrix}
$$

Here, a wedge of 0 coefficients has been created, and the first $k$ equations have been processed and are now fixed. Using the $k$th equation as the pivot equation, we select multipliers to create 0's as coefficients for each $x_i$ below the $a_{kk}$ coefficient. Hence, we compute for each remaining equation ($k + 1 \le i \le n$)

$$a_{ij} \leftarrow a_{ij} - \left(\frac{a_{ik}}{a_{kk}}\right) a_{kj} \qquad (k \le j \le n)$$

$$b_i \leftarrow b_i - \left(\frac{a_{ik}}{a_{kk}}\right) b_k$$

Obviously, we must assume that all the divisors in this algorithm are nonzero.
Forward Elimination: Pseudocode

When $j = k$, we have

$$a_{ik} \leftarrow a_{ik} - (a_{ik}/a_{kk})\,a_{kk}$$

Since we expect this to be 0, no purpose is served in computing it. The location where the 0 is being created is a good place to store the multiplier. If these remarks are put into practice, the pseudocode will look like this:

integer i, j, k; real xmult; real array (a_ij)_{1:n×1:n}, (b_i)_{1:n}

for k = 1 to n − 1 do
    for i = k + 1 to n do
        xmult ← a_ik/a_kk
        a_ik ← xmult
        for j = k + 1 to n do
            a_ij ← a_ij − (xmult)a_kj
        end for
        b_i ← b_i − (xmult)b_k
    end for
end for
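A direct translation of this pseudocode into Python can be sketched as follows (in-place modification of `a` and `b`, no pivoting, so a nonzero a_kk is assumed throughout; the function name is ours, not the book's):

```python
def forward_eliminate(a, b):
    """Naive forward elimination, modifying a and b in place.

    Mirrors the pseudocode above: the multiplier is stored in a[i][k],
    the position where a zero is being created. Assumes a[k][k] != 0.
    """
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            xmult = a[i][k] / a[k][k]
            a[i][k] = xmult          # store the multiplier
            for j in range(k + 1, n):
                a[i][j] -= xmult * a[k][j]
            b[i] -= xmult * b[k]
    return a, b

# Small check on a 2x2 system: 2x1 + x2 = 3, 4x1 + 3x2 = 7.
a = [[2.0, 1.0], [4.0, 3.0]]
b = [3.0, 7.0]
forward_eliminate(a, b)
# Row 2: multiplier 2.0 stored in a[1][0]; a[1][1] = 3 - 2*1 = 1; b[1] = 7 - 2*3 = 1
```

Storing the multiplier in place of the created zero costs nothing and is exactly what an LU factorization keeps for later use.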
Back Substitution

After forward elimination, the $n$th equation reads

$$a_{nn} x_n = b_n$$

where the $a_{ij}$'s and $b_i$'s are not the original ones from System (6) but instead are the ones that have been altered by the elimination process. The back substitution starts by solving the $n$th equation for $x_n$:

$$x_n = \frac{b_n}{a_{nn}}$$

Then, using the $(n-1)$th equation, we solve for $x_{n-1}$:

$$x_{n-1} = \frac{1}{a_{n-1,n-1}}\left(b_{n-1} - a_{n-1,n}\,x_n\right)$$

We continue working upward, recovering each $x_i$ by the formula

$$x_i = \frac{1}{a_{ii}}\left(b_i - \sum_{j=i+1}^{n} a_{ij} x_j\right) \qquad (i = n-1, n-2, \ldots, 1) \tag{7}$$

Here is pseudocode to do this:

integer i, j, n; real sum; real array (a_ij)_{1:n×1:n}, (x_i)_{1:n}

x_n ← b_n/a_nn
for i = n − 1 to 1 step −1 do
    sum ← b_i
    for j = i + 1 to n do
        sum ← sum − a_ij x_j
    end for
    x_i ← sum/a_ii
end for
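The back substitution pseudocode can likewise be sketched in Python (again a hypothetical helper name; it assumes forward elimination has already made the relevant part of `a` upper triangular):

```python
def back_substitute(a, b):
    """Solve the upper-triangular system left by forward elimination.

    Only entries a[i][j] with j >= i are read, so any multipliers
    stored below the diagonal are ignored. Assumes a[i][i] != 0.
    """
    n = len(b)
    x = [0.0] * n
    x[n - 1] = b[n - 1] / a[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= a[i][j] * x[j]
        x[i] = s / a[i][i]
    return x

# Upper-triangular example: 2*x1 + x2 = 3 and x2 = 1, giving x = [1, 1].
x = back_substitute([[2.0, 1.0], [0.0, 1.0]], [3.0, 1.0])
```

Because only the super-diagonal entries are read, this pairs cleanly with the forward elimination variant that stores multipliers below the diagonal.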
program bgauss
! Example of basic Gaussian elimination
! Ref. Kincaid
  parameter (n = 4)
  real, dimension (n,n) :: a
  real, dimension (n) :: b,x
  data (a(1,j),j=1,n) / 6.0,  -2.0, 2.0,   4.0/
  data (a(2,j),j=1,n) /12.0,  -8.0, 6.0,  10.0/
  data (a(3,j),j=1,n) / 3.0, -13.0, 9.0,   3.0/
  data (a(4,j),j=1,n) /-6.0,   4.0, 1.0, -18.0/
  data (b(j),j=1,n) /1.1, 2.1, 2.0, 3.0/

  print *
  print *,' Basic gaussian elimination'
  print *,' Section 4.3, Kincaid-Cheney'
  print *
  call ngauss(4,a,b,x)
  print *, (x(j), j=1,4)
end program bgauss

! ---------------------------
subroutine ngauss(n,a,b,x)
  real, dimension (n,n) :: a
  real, dimension (n) :: b,x
  ! Forward elimination
  do k=1,n-1
    do i=k+1,n
      xmult = a(i,k)/a(k,k)
      do j=k+1,n
        a(i,j) = a(i,j) - xmult*a(k,j)
      end do
      b(i) = b(i) - xmult*b(k)
    end do
  end do
  ! Back substitution
  x(n) = b(n)/a(n,n)
  do i=n-1,1,-1
    sum = b(i)
    do j=i+1,n
      sum = sum - a(i,j)*x(j)
    end do
    x(i) = sum/a(i,i)
  end do
end subroutine ngauss
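For comparison, the same computation can be sketched in Python; the matrix and right-hand side below are taken from the data statements of the Fortran program, the function name is ours, and we verify the result through the residual $r = Ax - b$ rather than quoting the solution values:

```python
def naive_gauss(a, b):
    """Forward elimination (multipliers stored in place) then back substitution.

    No pivoting is done, so nonzero pivots a[k][k] are assumed throughout.
    """
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            xmult = a[i][k] / a[k][k]
            a[i][k] = xmult
            for j in range(k + 1, n):
                a[i][j] -= xmult * a[k][j]
            b[i] -= xmult * b[k]
    x = [0.0] * n
    x[n - 1] = b[n - 1] / a[n - 1][n - 1]
    for i in range(n - 2, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= a[i][j] * x[j]
        x[i] = s / a[i][i]
    return x

# Data from the Fortran example above
A = [[ 6.0,  -2.0, 2.0,   4.0],
     [12.0,  -8.0, 6.0,  10.0],
     [ 3.0, -13.0, 9.0,   3.0],
     [-6.0,   4.0, 1.0, -18.0]]
b = [1.1, 2.1, 2.0, 3.0]

# Pass copies so A and b survive for the residual check
x = naive_gauss([row[:] for row in A], b[:])

# Residual r = A x - b should be near machine precision
r = [sum(A[i][j] * x[j] for j in range(4)) - b[i] for i in range(4)]
```

A tiny residual here confirms the arithmetic, but, as the summary notes, the residual and error vectors need not be small together for ill-conditioned systems.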
