Linear Algebra
Linear algebra has become as basic and as applicable as calculus, and fortunately it is easier.
-Gilbert Strang
SCALARS
What you're used to dealing with. Scalars have magnitude, but no direction.
VECTORS
Vectors represent both a magnitude and a direction. They can be added, subtracted, or multiplied by scalars, and combined via dot or cross products.
THE MATRIX
It's an m x n array that holds a set of numerical values. Especially useful in solving certain types of equations. Operations: transpose, scalar multiply, matrix add, matrix multiply.
EIGENVALUES
You can choose a matrix A, a vector x, and a scalar s so that Ax = sx, meaning the matrix just scales the vector. Here x is called an eigenvector, and s is its eigenvalue.
CHARACTERISTIC EQUATION
det(M - tI) = 0, where M is the matrix, I is the identity, and the roots t are the eigenvalues.
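For a 2 x 2 matrix the characteristic equation is just a quadratic, so the eigenvalues can be computed directly (example matrix chosen here for illustration, not from the slides):

```python
import math

M = [[4.0, 1.0],
     [2.0, 3.0]]

# For a 2x2 matrix, det(M - tI) = t^2 - trace(M) t + det(M).
trace = M[0][0] + M[1][1]                        # 7
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]      # 10
disc = math.sqrt(trace**2 - 4 * det)
eigenvalues = [(trace + disc) / 2, (trace - disc) / 2]   # [5.0, 2.0]

# Check Mx = sx for the eigenvector x = (1, 1) with s = 5.
x = [1.0, 1.0]
Mx = [M[0][0] * x[0] + M[0][1] * x[1],
      M[1][0] * x[0] + M[1][1] * x[1]]           # [5.0, 5.0] = 5 * x
```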
CAYLEY-HAMILTON THEOREM
Every square matrix satisfies its own characteristic equation: if p(t) = det(A - tI), then p(A) = 0 (the zero matrix).
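A quick numeric check for a 2 x 2 example (matrix chosen here for illustration): p(t) = t^2 - trace(A) t + det(A), and substituting A in place of t should give the zero matrix:

```python
A = [[4, 1],
     [2, 3]]
trace = A[0][0] + A[1][1]                    # 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 10

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
# p(A) = A^2 - trace(A) * A + det(A) * I
p_of_A = [[A2[i][j] - trace * A[i][j] + (det if i == j else 0)
           for j in range(2)] for i in range(2)]
print(p_of_A)   # [[0, 0], [0, 0]]
```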
IN THE BEGINNING
(Grassmann's Linear Algebra)
Grassmann is considered to be the father of linear algebra. He developed the idea of a linear algebra in which the symbols representing geometric objects can be manipulated. Several of his operations: the interior product, the exterior product, and the multiproduct.
VECTOR SPACE
Another idea closely tied to Grassmann. A vector space is a set of vectors that contains the origin and is closed under addition and scalar multiplication; it is usually infinite. A subspace is a subset of a vector space that is itself a vector space.
Cholesky Decomposition
Method developed by André-Louis Cholesky. It takes a symmetric positive-definite matrix and factors it into a triangular matrix times its transpose: A = R^T R. Useful for matrix applications, and becomes even more worthwhile in parallel.
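A minimal pure-Python sketch of the factorization (illustrative only; production code would call an optimized library routine):

```python
import math

def cholesky(A):
    """Return upper-triangular R with A = R^T R, for symmetric
    positive-definite A (no validity checks in this sketch)."""
    n = len(A)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            # Subtract the contributions of rows above row i.
            s = A[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            R[i][j] = math.sqrt(s) if i == j else s / R[i][i]
    return R

A = [[4.0, 2.0],
     [2.0, 5.0]]
R = cholesky(A)   # R = [[2.0, 1.0], [0.0, 2.0]]
```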
BIBLIOGRAPHY
Hermann Grassmann. Online. http://members.fortunecity.com/johnhays/grassmann.htm
Abstract Linear Spaces. Online. http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Abstract_linear_spaces.html
Liberman, M. Linear Algebra Review. Online. http://www.ling.upenn.edu/courses/ling525/linear_algebra_review.html
Cholesky Factorization. Online. http://www.netlib.org/utk/papers/factor/node9.html
Carl Friedrich Gauss. Born: April 30, 1777 (Germany). Died: Feb 23, 1855 (Germany).
Gaussian Elimination
LU Factorization
Operation Count
Instability of Gaussian Elimination without Pivoting
Gaussian Elimination with Partial Pivoting
Linear systems
A linear system of equations (n equations with n unknowns) can be written:
a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn
Using matrices, the above system of linear equations can be written Ax = b, where A is the n x n matrix of coefficients, x is the vector of unknowns, and b is the vector of right-hand sides.
LU Factorization
Gaussian elimination transforms a full linear system into an upper-triangular one by applying simple linear transformations on the left. Let A be a square matrix. The idea is to transform A into an upper-triangular matrix U by introducing zeros below the diagonal.
This elimination process is equivalent to multiplying by a sequence of lower-triangular matrices Lk on the left: Lm-1 ... L2 L1 A = U
Setting L = (L1)^-1 (L2)^-1 ... (Lm-1)^-1, we obtain an LU factorization of A: A = LU
In order to find a general solution of a system of equations, it is helpful to simplify the system as much as possible. Gauss elimination is a standard method for doing this (which has the advantage of being easy to implement on a computer). Gauss elimination uses elementary operations. We can:
interchange any two equations
multiply an equation by a (nonzero) constant
add a multiple of one equation to any other one
and aim to reduce the system to triangular form. The system obtained after each operation is equivalent to the original one, meaning that they have the same solutions.
Algorithm of Gaussian Elimination without Pivoting
U = A, L = I
for k = 1 to m-1
    for j = k+1 to m
        ljk = ujk / ukk
        uj,k:m = uj,k:m - ljk uk,k:m
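A minimal pure-Python sketch of the algorithm above (illustrative; it assumes every pivot ukk is nonzero):

```python
# Gaussian elimination without pivoting: factor A = L U.
def lu_no_pivot(A):
    m = len(A)
    U = [row[:] for row in A]                        # U = A (working copy)
    L = [[1.0 if i == j else 0.0 for j in range(m)]  # L = I
         for i in range(m)]
    for k in range(m - 1):
        for j in range(k + 1, m):
            L[j][k] = U[j][k] / U[k][k]              # multiplier l_jk
            for c in range(k, m):
                U[j][c] -= L[j][k] * U[k][c]         # u_j,k:m -= l_jk u_k,k:m
    return L, U

A = [[2.0, 1.0],
     [6.0, 4.0]]
L, U = lu_no_pivot(A)   # L = [[1, 0], [3, 1]], U = [[2, 1], [0, 1]]
```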
Operation Count
There are 3 loops in the previous algorithm, and 2 flops per entry. For each value of k, the inner loop is repeated for rows k+1, ..., m. The work for Gaussian elimination is ~ (2/3) m^3 flops.
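The count can be checked empirically with a small sketch (the `flops` helper is invented here for illustration):

```python
# Count the flops of the two elimination loops and compare with ~(2/3) m^3.
def flops(m):
    total = 0
    for k in range(1, m):                 # k = 1 .. m-1
        for j in range(k + 1, m + 1):     # rows k+1 .. m
            total += 1 + 2 * (m - k + 1)  # one division + 2 flops per entry
    return total

m = 200
print(flops(m) / (2 * m**3 / 3))   # close to 1 for large m
```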
Without pivoting the method can be unstable. For example, for

A2 = | 10^-20  1 |
     |   1     1 |

elimination uses the tiny entry 10^-20 as the pivot and produces the enormous multiplier 10^20, destroying the accuracy of the computed factors in floating-point arithmetic.
Pivoting
Pivots
Partial Pivoting Example
Complete Pivoting
Pivot
The pivot is the entry used to eliminate the entries below it. The multipliers ljk = ujk/ukk blow up when the pivot ukk is small.
Partial Pivoting
At step k, interchange rows so that the largest entry (in absolute value) in column k, on or below the diagonal, becomes the pivot. Every multiplier then has magnitude at most 1.
Example
A =
| 2 1 1 0 |
| 4 3 3 1 |
| 8 7 9 5 |
| 6 7 9 8 |

The largest entry in column 1 is 8, so partial pivoting interchanges rows 1 and 3:

P1 A =
| 8 7 9 5 |
| 4 3 3 1 |
| 2 1 1 0 |
| 6 7 9 8 |

Elimination then subtracts multiples 1/2, 1/4, 3/4 of row 1 from the rows below it:

L1 =
|   1           |
| -1/2  1       |
| -1/4     1    |
| -3/4        1 |

L1 P1 A =
| 8   7     9     5   |
| 0  -1/2  -3/2  -3/2 |
| 0  -3/4  -5/4  -5/4 |
| 0   7/4   9/4  17/4 |
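The pivot-and-eliminate step on the example matrix above can be sketched in pure Python (illustrative; exact fractions keep the entries readable):

```python
from fractions import Fraction as F

# The 4x4 example matrix.
A = [[F(2), F(1), F(1), F(0)],
     [F(4), F(3), F(3), F(1)],
     [F(8), F(7), F(9), F(5)],
     [F(6), F(7), F(9), F(8)]]

k = 0
# Partial pivoting: bring the largest entry of column k to the pivot row (P1).
p = max(range(k, 4), key=lambda i: abs(A[i][k]))
A[k], A[p] = A[p], A[k]                   # swaps rows 1 and 3

# Eliminate below the pivot with multipliers 1/2, 1/4, 3/4 (L1).
for j in range(k + 1, 4):
    m = A[j][k] / A[k][k]
    for c in range(k, 4):
        A[j][c] -= m * A[k][c]

# A now holds L1 P1 A; its last row is [0, 7/4, 9/4, 17/4].
```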
Reference:
http://www.maths.soton.ac.uk/teaching/units/ma273/node8.html
http://www.maths.soton.ac.uk/teaching/units/ma273/node9.html
Numerical Linear Algebra by Lloyd Trefethen and David Bau, III
http://www.sosmath.com/matrix/system1/system1.html
What I'll Be Covering
How computers made numerical linear algebra relevant. LAPACK. Solving dense systems on parallel computers.
What is LAPACK?
Linear Algebra PACKage Software package designed specifically for linear algebra applications. The original goal of the LAPACK project was to make the widely used EISPACK and LINPACK libraries run efficiently on shared-memory vector and parallel processors.
LAPACK continued
LAPACK is written in Fortran 77 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular value problems. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.
Striped Partitioning
Matrix is divided into groups of complete rows or columns, and each processor is assigned one such group.
In column-wise (block) striping: P(I) contains the columns with indices (n/p)I, (n/p)I + 1, ..., (n/p)(I+1) - 1.
In row-wise (cyclic) striping: P(I) contains the rows with indices I, I+p, I+2p, ..., I+n-p.
Checkerboard Partitioning
The matrix is divided into smaller square or rectangular blocks or submatrices that are distributed among processors.
Square matrix blocks are treated as indivisible units, and whole blocks are communicated instead of individual elements. Then do a local rearrangement within the blocks.
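The index arithmetic can be sketched as follows (function name and setup invented here for illustration): checkerboard partitioning maps each b x b block of an n x n matrix to one processor in a q x q grid.

```python
def owner(i, j, n, q):
    """Grid coordinates of the processor owning entry (i, j) of an
    n x n matrix distributed in square blocks over a q x q grid."""
    b = n // q                 # block size, assuming q divides n evenly
    return (i // b, j // b)

# A 4 x 4 matrix on a 2 x 2 grid: entry (0, 3) lives on processor (0, 1).
print(owner(0, 3, 4, 2))
```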
Conclusion
Linear algebra is flourishing in the age of computers, where its applications are limitless. LAPACK exists as an efficient code library for processing large systems of equations on parallel computers, and parallel computers are very well suited to these kinds of problems.
Useful Links
http://www.crpc.rice.edu/CRPC/brochure/res_la.html
http://citeseer.nj.nec.com/26050.html
http://www.maa.org/features/cowen.html
http://www.nersc.gov/~dhbailey/cs267/Lectures/Lect_10_2000.pdf
http://www.cacr.caltech.edu/ASAP/news/specialevents/tutorialnla.htm
http://www.netlib.org/scalapack/
http://citeseer.nj.nec.com/125513.html
http://discolab.rutgers.edu/classes/cs528/lectures/lecture7/
http://www.cse.uiuc.edu/cse302/lec20/lec-matrix/lecmatrix.html