Linear Algebra
Chapter 1. Vectors in R^n and C^n; spatial vectors

A vector space V is closed under its two operations, vector addition and scalar multiplication: for u, v in V and scalars α, β,
α(u + v) = αu + αv, (α + β)u = αu + βu, α(βu) = (αβ)u are all in V.
e.g. (4, 3) is a vector in R^2.
If u = (u_1, u_2, ..., u_n) is a row vector, then v = u^t = [u_1; u_2; ...; u_n] (the same components written as a column) is the transpose of the vector u. Conversely, v^t = u.
In terms of unit vectors i_1, i_2, ..., i_n,
u = i_1 u_1 + i_2 u_2 + ... + i_k u_k + ... + i_n u_n = Σ_{k=1}^{n} i_k u_k
Therefore, v̂ = v / ||v|| is a unit vector.
The distance between points P and Q, with position vectors u and v, is
PQ = [Σ_{i=1}^{n} (u_i − v_i)^2]^{1/2}
The dot (inner) product is
u·v = ⟨u|v⟩ = u_1 v_1 + u_2 v_2 + ... + u_n v_n = Σ_i u_i v_i
In terms of the physics metaphor, a norm measures the length (magnitude) of a vector. Several norms are in use:
l_1 norm: ||u||_1 = Σ_{i=1}^{n} |u_i|
l_2 norm: ||u||_2 = ||u|| = (Σ_i u_i^2)^{1/2}, as we have defined it
l_p norm: ||u||_p = (Σ_{i=1}^{n} |u_i|^p)^{1/p}
l_∞ norm: ||u||_∞ = max_i |u_i|
For our sake, we'd mostly consider the l_2 norm.
1. ⟨u|u⟩ ≥ 0, with ⟨u|u⟩ = 0 iff u = 0 (positivity)
2. ⟨u|v⟩ = ⟨v|u⟩ (symmetry)
3. ⟨u + w|v⟩ = ⟨u|v⟩ + ⟨w|v⟩
4. ⟨αu|v⟩ = α⟨u|v⟩, with α a constant
5. √⟨u+v|u+v⟩ ≤ √⟨u|u⟩ + √⟨v|v⟩ (triangle inequality)
6. |⟨u|v⟩| ≤ √⟨u|u⟩ √⟨v|v⟩ (Cauchy-Schwarz inequality)
ex. u = (−2, 0, −1), v = (1, 3, −2)
⟨u|u⟩ = 4 + 0 + 1 = 5, ⟨v|v⟩ = 1 + 9 + 4 = 14, and
⟨u|v⟩ = (−2)(1) + (0)(3) + (−1)(−2) = 0 ≤ √5 √14, consistent with Cauchy-Schwarz.
ex. Let V = R^2 be an inner product space where the dot product is defined in the following terms:
⟨(a, b), (c, d)⟩ = ac + bd
ex. On the space of polynomials p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n and q(x) = b_0 + b_1 x + b_2 x^2 + ... + b_n x^n, an inner product can be defined coefficientwise, ⟨p|q⟩ = Σ_i a_i b_i.
ex. On functions over [0, 2π] with ⟨f|g⟩ = ∫ f(t)g(t) dt, the functions
v̂_1 = 1/√(2π), v̂_2 = (1/√π) cos t, v̂_3 = (1/√π) sin t, ...,
v̂_{2n} = (1/√π) cos nt, v̂_{2n+1} = (1/√π) sin nt
form an orthonormal set, since
∫ cos mt sin nt dt = 0 for all m, n,
∫ cos mt cos nt dt = 0 for m ≠ n, and
∫ sin mt sin nt dt = 0 for m ≠ n.
A weighted inner product is also possible:
⟨u|v⟩ = ω_1 u_1 v_1 + ω_2 u_2 v_2 + ... + ω_n u_n v_n, with ω_1 + ω_2 + ... + ω_n = 1.
Neural networks are developed on such weighted sums.
More observations.
⟨i_j | i_k⟩ = δ_{jk} (Kronecker delta), where δ_{jk} = 1 if j = k, 0 otherwise.
e_1 = (1, 0, 0), e_2 = (0, 1, 0) and e_3 = (0, 0, 1).
Any three-dimensional vector can be expressed in this basis, e.g.
u_1 = (1, −2, 1) = e_1 − 2e_2 + e_3, u_2 = (0, 3, 2) = 3e_2 + 2e_3 and u_3 = (2, 1, −1) = 2e_1 + e_2 − e_3.
Then, the distance is d(u, v) = ||u − v||, and the angle θ between u and v satisfies
cos θ = ⟨u|v⟩ / (||u|| ||v||)
The projection of u along v has length
proj_v(u) = ||u|| cos θ = u·v / ||v||
Lecture 2. About matrices.
e.g. A_{2×5} = [1 1 0 1 1; 1 1 1 0 0] is a 2×5 matrix.
A diagonal matrix: D = [d_11 0 0; 0 d_22 0; 0 0 d_33]. Notice that all d_ij = 0 if i ≠ j.
The identity matrix: I = [1 0 0; 0 1 0; 0 0 1].
The trace of a matrix, Tr(A), is the sum of all its diagonal elements: Tr(A) = Σ_{i=1}^{n} a_ii.
e.g. C_{3×3} = [2i 0 2; 1−i i 1+i; 2−i 3i 3]
C^t = D_{3×3} = [2i 1−i 2−i; 0 i 3i; 2 1+i 3]
A matrix that equals its conjugate transpose is Hermitian, e.g. [1 i; −i 1].
The Pauli matrices σ_1 = [0 1; 1 0], σ_2 = [0 −i; i 0], σ_3 = [1 0; 0 −1] are all Hermitian.
An upper triangular matrix has u_ij = a_ij for i ≤ j, and u_ij = 0 otherwise.
e.g. U = [3 1 2 0 9; 0 2 2 1 4; 0 0 3 0 1; 0 0 0 1 9; 0 0 0 0 4]
A lower triangular matrix has l_ij = a_ij for i ≥ j, and l_ij = 0 otherwise.
e.g. L = [3 0 0 0 0; 1 2 0 0 0; 3 2 3 0 0; 8 3 1 1 0; 1 1 2 1 4]
e.g. a matrix with mostly zero (sparse) entries:
A = [0 2 0 0 1; 5 0 0 0 3; 0 0 0 0 6; 2 0 0 0 0; 0 0 0 0 1]
X = [A B; C D], where each element unit is itself a matrix (a block), such as
A = [2 3; 4 1], B = [4 3 2; 4 0 5], C = [1 1; 2 2; 3 2], D = [3 0 3; 2 2 0; 3 0 1]
with the original X = [2 3 4 3 2; 4 1 4 0 5; 1 1 3 0 3; 2 2 2 2 0; 3 2 3 0 1]
Thus, if A = [2 0 3; 1 −1 9; 4 3 5], then −A = [−2 0 −3; −1 1 −9; −4 −3 −5].
Summation notation on an n-component vector:
a. Σ_j u_j = u_1 + u_2 + u_3 + ... + u_n
b. Σ_{j=1}^{n−1} (j + 1) u_j = 2u_1 + 3u_2 + 4u_3 + ... + n u_{n−1}
c. Σ_{k=3}^{n} u_k v_{k−2} = u_3 v_1 + u_4 v_2 + ... + u_n v_{n−2}
Similarly, on matrices:
d. Σ_i a_ij = a_1j + a_2j + a_3j + ... + a_mj (a column sum)
e. Σ_{j≥2} a_ij = a_i2 + a_i3 + a_i4 + ... + a_in (a row sum skipping the first column)
f. Σ_{k=1}^{n} a_ik b_kj = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj = ⟨a_i· | b_·j⟩,
the dot product of the i-th row of A, (a_i1 a_i2 a_i3 ... a_in), with the j-th column of B, [b_1j; b_2j; b_3j; ...; b_nj].
3.1 Matrix multiplication
e.g. A = [1 2 3; 2 1 1; 2 0 2] and B = [1 0 1; 2 2 3; 1 1 2]
Then C = AB = [8 7 13; 5 3 7; 4 2 6], while BA = [3 2 5; 12 6 14; 7 3 8], so AB ≠ BA in general.
AI = IA = A if I is an n×n identity matrix.
Block multiplication works the same way:
[A_1 A_2; A_3 A_4][B_1 B_2; B_3 B_4] = [A_1B_1 + A_2B_3, A_1B_2 + A_2B_4; A_3B_1 + A_4B_3, A_3B_2 + A_4B_4]
If AB = I = BA, then B is the inverse of A, written B = A^{−1}.
For a diagonal matrix
D = [d_11 0 0 0; 0 d_22 0 0; 0 0 .. 0; 0 0 0 d_nn]
the inverse is
D^{−1} = [1/d_11 0 0 0; 0 1/d_22 0 0; 0 0 .. 0; 0 0 0 1/d_nn]
e.g.
a. A = [a_11 a_12; a_21 a_22]; we want its inverse B = [b_11 b_12; b_21 b_22] such that AB = I.
Now [a_11 a_12; a_21 a_22][b_11 b_12; b_21 b_22] = [1 0; 0 1] implies that
a_11 b_11 + a_12 b_21 = 1
a_21 b_12 + a_22 b_22 = 1
a_11 b_12 + a_12 b_22 = 0
a_21 b_11 + a_22 b_21 = 0
b. A = [2 1 1; 0 1 1; 1 1 2]. Assume A^{−1} = [x_1 x_2 x_3; y_1 y_2 y_3; z_1 z_2 z_3]. Then
AA^{−1} = [2 1 1; 0 1 1; 1 1 2][x_1 x_2 x_3; y_1 y_2 y_3; z_1 z_2 z_3] = I_3
Solving column by column gives
A^{−1} = [0.5 −0.5 0; 0.5 1.5 −1; −0.5 −0.5 1]
Some observations: the elementary row operations kL_i + L_j → L_j and L_i ↔ L_j do not change the solution set of a linear system.
e.g. 2x + 3y = 5, 3x − y = 2: here we have the unique solution x = 1, y = 1.
If one equation is a multiple of another equation, we have infinitely many solutions, e.g.
2x + 3y = 5
4x + 6y = 10
The equations a_11 x_1 + a_12 x_2 = b_1 and a_21 x_1 + a_22 x_2 = b_2 have a unique solution exactly when
|A| = det[a_11 a_12; a_21 a_22] = a_11 a_22 − a_12 a_21 ≠ 0.
(Work out an example.)
e.g. [0 1 4 0 0 4; 0 0 0 1 0 2; 0 0 0 0 1 3] is in row-canonical form.
L1: 2x_1 + x_2 + x_3 = 4
L2: x_1 + 2x_2 + 3x_3 = 6
L3: x_1 + x_2 − x_3 = 1
The augmented matrix reduces as
[2 1 1 | 4; 1 2 3 | 6; 1 1 −1 | 1] → [2 1 1 | 4; 0 1 4 | 5; 0 0 −7 | −7]
Back substitution gives x_3 = 1, x_2 = 1, x_1 = 1.
With a very small pivot, naive elimination in finite precision gives x_2 = 1.001 (tolerable) but x_1 = 10.00 (outrageous); with partial pivoting (choosing the largest available pivot in the column) we get x_1 = 1.000, x_2 = 1.000.
Failure of partial pivoting. Partial pivoting itself is defeated by bad scaling: if we multiply equation 1 by 10^4, its coefficients dominate the column and it is picked as pivot row anyway. The remedy is scaled partial pivoting, with a scale s_j = max_i |a_ji| attached to each row:
step 2: choose k such that |a_{k1}| / s_k = max_{j=1..n} |a_{j1}| / s_j
Augmented equation.
[3 −13 9 3 | −19; −6 4 1 −18 | −34; 6 −2 2 4 | 16; 12 −8 6 10 | 26]
The last column of the matrix is the right-hand side augmented to the original matrix.
Initially the index and scale vectors are
l = (1, 2, 3, 4), s = (13, 18, 6, 12), where s_i = max_j |a_ij|.
Compute now the ratios |a_{l_i,1}| / s_{l_i} for i = 1, ..., 4: 3/13, 6/18, 6/6, 12/12.
The largest ratio occurs at row 3, so row 3 is the first pivot row. Eliminating column 1 from the other rows:
A = [0 −12 8 1 | −27; 0 2 3 −14 | −18; 6 −2 2 4 | 16; 0 −4 2 2 | −6]
Now we are to select pivot number 2, given that the candidate rows are l_i = 1, 2, 4 with ratios
|a_{l_i,2}| / s_{l_i} = 12/13, 2/18, 4/12
Row 1 wins. Eliminating column 2 from rows 2 and 4:
A = [0 −12 8 1 | −27; 0 0 13/3 −83/6 | −45/2; 6 −2 2 4 | 16; 0 0 −2/3 5/3 | 3]
We select the final pivot now, pivot 3. The vectors are now l = (3, 1, 2, 4) and s = (13, 18, 6, 12); comparing (13/3)/18 with (2/3)/12, row 2 is the pivot row. Eliminating from row 4:
A = [0 −12 8 1 | −27; 0 0 13/3 −83/6 | −45/2; 6 −2 2 4 | 16; 0 0 0 −6/13 | −6/13]
Back substitution (in the pivot order) gives x_4 = 1, x_3 = −2, x_2 = 1, x_1 = 3.
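A minimal MATLAB sketch of the scheme above (here with physical row swaps instead of the index vector l; variable names are ours):

% Gaussian elimination with scaled partial pivoting
Aug = [ 3 -13  9   3 -19;
       -6   4  1 -18 -34;
        6  -2  2   4  16;
       12  -8  6  10  26];
n = 4;
s = max(abs(Aug(:,1:n)), [], 2);      % scale of each row, s_i = max_j |a_ij|
for k = 1:n-1
    [~, p] = max(abs(Aug(k:n,k)) ./ s(k:n));   % best ratio |a_ik|/s_i
    p = p + k - 1;
    Aug([k p],:) = Aug([p k],:);      % bring the pivot row up
    s([k p])     = s([p k]);
    for i = k+1:n                     % eliminate column k below the pivot
        m = Aug(i,k) / Aug(k,k);
        Aug(i,:) = Aug(i,:) - m*Aug(k,:);
    end
end
x = zeros(n,1);                       % back substitution
for i = n:-1:1
    x(i) = (Aug(i,n+1) - Aug(i,i+1:n)*x(i+1:n)) / Aug(i,i);
end
x                                     % returns (3, 1, -2, 1)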
Jacobi's method solves each equation for its diagonal unknown using only the previous iterate, e.g.
x^{(k)} = 4 − 1.5 y^{(k−1)} − 0.5 z^{(k−1)}
y^{(k)} = 1 − x^{(k−1)} − 2 z^{(k−1)}
z^{(k)} = 2 − 0.3333 x^{(k−1)} − 0.3333 y^{(k−1)}
In general,
x_i^{(k+1)} = (1 / a_ii) (b_i − Σ_{j≠i} a_ij x_j^{(k)})
Split A into its diagonal, strictly lower and strictly upper parts, A = D + L + U:
[2 5 3; 1 6 4; 3 2 1] = [2 0 0; 0 6 0; 0 0 1] + [0 0 0; 1 0 0; 3 2 0] + [0 5 3; 0 0 4; 0 0 0]
e.g. for the system x_1 + x_2 − x_3 = 0, −x_1 + 3x_2 = 2, −x_1 + 2x_3 = 3:
D = [1 0 0; 0 3 0; 0 0 2], L+U = [0 1 −1; −1 0 0; −1 0 0]
Now, D^{−1} = [1 0 0; 0 0.3333 0; 0 0 0.5]
Therefore, the iteration matrix is
T = −D^{−1}(L+U) = [0 −1 1; 0.3333 0 0; 0.5 0 0]
and D^{−1}b = D^{−1}[0; 2; 3] = [0; 0.6666; 1.5]
x^{(k+1)} = T x^{(k)} + D^{−1}b
Starting from x^{(0)} = (0, 0, 0), the successive iterates are
0         0.666667  1.5
0.833333  0.944444  1.91667
0.972222  0.990741  1.98611
0.99537   0.998457  1.99769
0.999228  0.999743  1.99961
0.999871  0.999957  1.99994
0.999979  0.999993  1.99999
0.999996  0.999999  2
0.999999  1         2
1         1         2
converging to x = (1, 1, 2). (These tabulated iterates in fact reuse each newly computed component within a sweep, which is exactly the Gauss-Seidel update described next; pure Jacobi converges to the same limit, just a little more slowly.)
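A minimal MATLAB sketch of the fixed-point iteration, assuming the system as recovered above (names are ours):

% Fixed-point iteration x^(k+1) = T x^(k) + c (Jacobi splitting)
A = [1 1 -1; -1 3 0; -1 0 2];
b = [0; 2; 3];
D = diag(diag(A));            % diagonal part of A
T = -(D \ (A - D));           % iteration matrix -D^{-1}(L+U)
c = D \ b;                    % constant term     D^{-1}b
x = zeros(3,1);
for k = 1:30
    x = T*x + c;              % repeat until the change is negligible
end
x                             % approaches (1, 1, 2)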
Gauss-Seidel's Iteration Method: writing A = D − L − U (D diagonal, −L strictly lower, −U strictly upper),
(D − L)x = Ux + b, or x = (D − L)^{−1} U x + (D − L)^{−1} b
x^{(k+1)} = (D − L)^{−1} U x^{(k)} + (D − L)^{−1} b
Componentwise, the newest values are used as soon as they are available:
x_i^{(k+1)} = (1 / a_ii) (b_i − Σ_{j=1}^{i−1} a_ij x_j^{(k+1)} − Σ_{j=i+1}^{n} a_ij x_j^{(k)})
Define the error e^{(k)} = x − x^{(k)}. For any iteration x^{(k)} = T x^{(k−1)} + c whose fixed point satisfies x = Tx + c,
e^{(k)} = x − x^{(k)} = (Tx + c) − (T x^{(k−1)} + c) = T(x − x^{(k−1)}) = T e^{(k−1)}
Thus, the magnitude of the error
||e^{(k)}|| = ||T e^{(k−1)}|| ≤ ||T|| ||e^{(k−1)}||
This is guaranteed to be reduced only if ||T|| < 1.
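A Gauss-Seidel sketch for the same system, which reproduces the table above (each new component is used immediately):

% Gauss-Seidel sweeps
A = [1 1 -1; -1 3 0; -1 0 2];
b = [0; 2; 3];
x = zeros(3,1);
for k = 1:9
    for i = 1:3
        j = [1:i-1, i+1:3];                   % the off-diagonal columns
        x(i) = (b(i) - A(i,j)*x(j)) / A(i,i); % newest values already in x
    end
    fprintf('%10.6f %10.6f %10.6f\n', x);
end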
Linear mappings. A mapping f : V → W is linear if f(v_1 + v_2) = f(v_1) + f(v_2) and f(αv) = αf(v), where v_1, v_2, v ∈ V.
example: f : M_{2×2} → R^2 picking out the first row, f([a b; c d]) = [a; b].
Verification:
f([a b; c d] + [e f; g h]) = f([a+e b+f; c+g d+h]) = [a+e; b+f] = f([a b; c d]) + f([e f; g h])
Also, f(α[a b; c d]) = [αa; αb] = α f([a b; c d]).
Likewise, the mapping (a, b) → (a, a+b, b) is linear:
(a, a+b, b) + (c, c+d, d) = (a+c, a+b+c+d, b+d), the image of (a+c, b+d),
and α(a, a+b, b) = (αa, α(a+b), αb), the image of (αa, αb).
Consider a linear map f : R^3 → M_{2×2} with
f(1,0,0) = [1 1; 0 1], f(0,1,0) = [0 1; 1 0] and f(0,0,1) = [0 0; 0 1]
Then by linearity
f(a, b, c) = a[1 1; 0 1] + b[0 1; 1 0] + c[0 0; 0 1] = [a a+b; b a+c]
That is, every vector in the space is subject to the same linear transformation: a linear mapping f : R^n → R^m is determined by its action on a basis.
Let v_1, ..., v_n be a basis of V and w_1, ..., w_m a basis of W, and write
v = Σ_{i=1}^{n} a_i v_i and w = Σ_{j=1}^{m} b_j w_j
Then
f(v) = f(Σ_{i=1}^{n} a_i v_i) = Σ_{i=1}^{n} a_i f(v_i) = Σ_{j=1}^{m} b_j w_j
This is possible only when each f(v_i) = Σ_{j=1}^{m} c_ji w_j.
Check: Σ_{i=1}^{n} a_i f(v_i) = Σ_{i=1}^{n} a_i Σ_{j=1}^{m} c_ji w_j
But Σ_{i=1}^{n} a_i c_ji is some b_j. Therefore, f(v) = Σ_{j=1}^{m} b_j w_j.
For V, take the basis p_1(x) = 1, p_2(x) = x, p_3(x) = x^2, p_4(x) = x^3.
The function f = d/dx transforms vectors as f : V → V:
(d/dx) p_1(x) = 0, (d/dx) p_2(x) = 1 = p_1(x),
(d/dx) p_3(x) = 2x = 2 p_2(x), (d/dx) p_4(x) = 3x^2 = 3 p_3(x)
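Relative to this basis, f = d/dx is therefore represented by the matrix whose j-th column holds the coordinates of f(p_j). A small MATLAB check (the sample polynomial is ours):

% d/dx on span{1, x, x^2, x^3}, coordinates (a0, a1, a2, a3)
Df = [0 1 0 0;
      0 0 2 0;
      0 0 0 3;
      0 0 0 0];
p = [5; -1; 2; 4];   % 5 - x + 2x^2 + 4x^3
Df * p               % gives (-1, 4, 12, 0), i.e. -1 + 4x + 12x^2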
f : (a, b, c) → (a, a+b, b+c, a+b+c)
Consider the basis vectors in R^3. How are they transformed?
f(e_1) = (1, 1, 0, 1), f(e_2) = (0, 1, 1, 1), f(e_3) = (0, 0, 1, 1)
Thus the vector [a; b; c] maps into [a; a+b; b+c; a+b+c] as
[a; a+b; b+c; a+b+c] = [1 0 0; 1 1 0; 0 1 1; 1 1 1][a; b; c]
and the columns of the transformation matrix are exactly the images of the basis vectors.
Example. A mapping f : (a, b) → (a + 3b, 5b) is thus the matrix transformation
[a + 3b; 5b] = [1 3; 0 5][a; b]
Linearity is checked from f((a, b) + (c, d)) = f(a + c, b + d) and f(α(a, b)) = αf(a, b).
http://turnbull.mcs.st-and.ac.uk/~sophieh/LinearAlg/SHlintran.pdf
More on mapping.
A mapping f : U → V is onto (or surjective) if every element v ∈ V is the image of one or more elements of U.
A hashing function, where many addresses are hashed onto the same hashed address, is an example of such a many-to-one onto mapping. A cluster is another example (e.g. a neural-network pattern classifier maps many input patterns onto the same output, and a fingerprinting office serves many customers with one record system).
Identity mapping: f : V → V with f(v) = v for every v.
Example. f : R → R, f(x) = 2e^x is a one-to-one mapping: for every x ∈ R there is exactly one y = f(x), and distinct x give distinct y. But this is not onto: we don't have the situation that every y ∈ R comes from some x (no x gives y ≤ 0).
g : R → R, where g(x) = 2x + 3, is a bijective mapping: for every x ∈ R there is a unique y, and for every y ∈ R there is a unique x.
Kernel. The kernel of f is the set of vectors mapped to the zero vector.
Consider y = Ax, where A = [2 3 4; 1 2 8] maps x = [x_1; x_2; x_3] ∈ R^3 to
y = [2x_1 + 3x_2 + 4x_3; x_1 + 2x_2 + 8x_3] ∈ R^2
Applying A to the basis vectors:
Ae_1 = A[1; 0; 0] = [2; 1], Ae_2 = A[0; 1; 0] = [3; 2], and Ae_3 = [4; 8]
Indeed, these are all columns of the transformation matrix A.
Suppose A = [2 3 4; 1 −2 3; 0 −7 2]. The rowspace of A is the subspace of R^3 spanned by the vectors
u_1 = (2, 3, 4), u_2 = (1, −2, 3) and u_3 = (0, −7, 2)
and, for invertible P and Q,
Colrank(A) = Colrank(PA) = Colrank(AQ)
e.g. A = [2 1 1 2; 1 2 0 1; 6 6 6 6]
Revisit Ax = b. Factor A = LU; then Ax = L(Ux) = b. First solve Ly = b by forward substitution:
L_11 y_1 = b_1 → y_1 = b_1 / L_11
L_21 y_1 + L_22 y_2 = b_2 → y_2 = (b_2 − L_21 y_1) / L_22
....
then solve Ux = y by back substitution.
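In MATLAB the two triangular solves look as follows (a sketch; for this A no row interchanges occur, otherwise use the three-output form [L,U,P] = lu(A)):

% Solve Ax = b via A = LU
A = [2 1 1; 0 1 1; 1 1 2];   % the earlier example matrix
b = [1; 2; 3];               % a sample right-hand side (ours)
[L, U] = lu(A);
n = length(b);
y = zeros(n,1);
for i = 1:n                  % forward substitution, Ly = b
    y(i) = (b(i) - L(i,1:i-1)*y(1:i-1)) / L(i,i);
end
x = zeros(n,1);
for i = n:-1:1               % back substitution, Ux = y
    x(i) = (y(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
end
[x, A\b]                     % the two columns agree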
Cholesky decomposition. If A is symmetric positive definite (x*Ax > 0 for all x ≠ 0, where x* = x̄^t is the complex-conjugate transpose), then A = LL^t with L lower triangular:
A = [l_11 0 0 0; l_21 l_22 0 0; .. .. .. ..; l_n1 l_n2 .. l_nn][l_11 l_21 .. l_n1; 0 l_22 .. l_n2; .. .. .. ..; 0 0 0 l_nn]
Matching coefficients gives
l_ii = (a_ii − Σ_{k=1}^{i−1} l_ik^2)^{1/2} and l_ji = (a_ji − Σ_{k=1}^{i−1} l_jk l_ik) / l_ii
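These recurrences translate directly into code. A sketch, using the symmetric positive-definite matrix from the assignment below as a sample:

% Cholesky factorization A = L*L'
A = [4 2 2; 2 10 5; 2 5 9];
n = size(A,1);
L = zeros(n);
for i = 1:n
    L(i,i) = sqrt(A(i,i) - L(i,1:i-1)*L(i,1:i-1)');
    for j = i+1:n
        L(j,i) = (A(j,i) - L(j,1:i-1)*L(i,1:i-1)') / L(i,i);
    end
end
L, L*L'              % L*L' reproduces A; compare chol(A,'lower')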
Matrix inversion problem.
Let P = [a b c; d e f; g h i] and form the matrix of cofactors Q = [A B C; D E F; G H I], where
A = det[e f; h i], B = −det[d f; g i], C = det[d e; g h] ... etc.
Then the inverse of P is
P^{−1} = (1 / det(P)) Q^t
(the transpose of the cofactor matrix is the adjugate of P).
e.g. I_3 = [1 0 0; 0 1 0; 0 0 1]
We can generate a number of elementary matrices from it:
a. E_1 = [0 0 1; 0 1 0; 1 0 0]  b. E_2 = [0 1 1; 0 1 0; 1 0 0]
c. E_3 = [1 0 0; 2 1 0; 0 0 1]  d. E_4 = [1 0 0; 0 1 0; 1 0 1]
Let A = [R_1; R_2; R_3] (rows). For instance,
E_1 A = [R_3; R_2; R_1] (R_1, R_3 interchange)
E_2 A = [R_2 + R_3; R_2; R_1] (R_1 is replaced by R_2 + R_3, R_3 by R_1)
E_3 A = [R_1; 2R_1 + R_2; R_3] (replace R_2 by 2R_1 + R_2)
and E_4 A = [R_1; R_2; R_1 + R_3]
Thus, if a sequence of such operations reduces A to the identity,
E_m E_l E_k ... E_2 E_1 A = [1 0 0; 0 1 0; 0 0 1]
then E_m E_l E_k ... E_2 E_1 = A^{−1}.
Observe that the following are equivalent:
1. A has an inverse.
2. Ax = b has a unique solution for any b.
3. A is row-equivalent to I_n.
4. A can be expressed as a product of elementary matrices.
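The whole reduction can be scripted; accumulating the elementary matrices yields the inverse. A sketch for the earlier 3×3 example, which needs no pivot interchanges:

% Reduce A to I with elementary matrices; their product is inv(A)
A = [2 1 1; 0 1 1; 1 1 2];
E = eye(3);                                  % running product E_m...E_1
M = A;
for k = 1:3
    Ek = eye(3); Ek(k,k) = 1/M(k,k);         % scale the pivot row
    M = Ek*M;  E = Ek*E;
    for i = [1:k-1, k+1:3]
        Ek = eye(3); Ek(i,k) = -M(i,k);      % clear entry (i,k)
        M = Ek*M;  E = Ek*E;
    end
end
M, E*A               % M = I and E*A = I, so E = inv(A)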
The world of eigenvalues-eigenfunctions
Ax = λx
Suppose the operator is A = x(d/dx). A operating on x^n produces
A x^n = x (d/dx) x^n = n x^n
Therefore, the operator A has an eigenvalue n corresponding to eigenfunction x^n.
Let A be the matrix [2 1; −2 5] and we want to compute its eigenvalues and eigenfunctions. Its characteristic equation (CE) is:
det[2−λ 1; −2 5−λ] = 0, i.e. (2 − λ)(5 − λ) + 2 = 0
This gives λ^2 − 7λ + 12 = 0, i.e. (λ − 3)(λ − 4) = 0.
Let the eigenfunction be the vector x = [x_1; x_2] corresponding to e-value 3. Then
[2 1; −2 5][x_1; x_2] = 3[x_1; x_2]
Therefore, we have 2x_1 + x_2 = 3x_1, yielding x_1 = x_2. Also, we get −2x_1 + 5x_2 = 3x_2, which gives us no new result. Therefore, we can arbitrarily take the following solution: e_1 = [1; 1] corresponding to e-value 3 for the matrix A.
For A = [12 6 6; 6 16 2; 6 2 16], define P_1 = A, p_1 = trace(A) = 12 + 16 + 16 = 44, and
P_2 = A(P_1 − p_1 I) = A [−32 6 6; 6 −28 2; 6 2 −28]
Continuing this scheme (the Faddeev-LeVerrier construction), the characteristic polynomial is
(−1)^3 (λ^3 − 44λ^2 + 564λ − 2016)
The eigenvalues are next found by solving
λ^3 − 44λ^2 + 564λ − 2016 = 0, giving λ = 6, 14, 24.
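MATLAB confirms the polynomial and its roots:

% Characteristic polynomial and eigenvalues
A = [12 6 6; 6 16 2; 6 2 16];
poly(A)              % coefficients [1 -44 564 -2016]
eig(A)               % eigenvalues 6, 14, 24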
Proof of (e): A and P^{−1}AP have the same eigenvalues.
Suppose the eigenfunction of P^{−1}AP is y with eigenvalue k. Then
P^{−1}APy = ky ⟹ APy = Pky = kPy
Therefore Py is an eigenvector x of A, and k must equal the corresponding λ. Therefore the eigenvalues of A and P^{−1}AP are identical, and the eigenvector of one is a linear mapping (by P) of the other one.
Let x^{(1)}, ..., x^{(n)} be eigenvectors of A with eigenvalues λ_1, ..., λ_n, and form the matrix P = [x^{(1)}, x^{(2)}, ..., x^{(n)}].
Then AP = [Ax^{(1)}, Ax^{(2)}, ..., Ax^{(n)}] = [λ_1 x^{(1)}, λ_2 x^{(2)}, ..., λ_n x^{(n)}]
= [x^{(1)}, x^{(2)}, ..., x^{(n)}][λ_1 e^{(1)}, λ_2 e^{(2)}, ..., λ_n e^{(n)}] = PD
Therefore, P^{−1}AP = D.
If the eigenvectors u^{(1)}, ..., u^{(n)} are orthonormal (as for a symmetric A), the matrix Q = [u^{(1)}, u^{(2)}, ..., u^{(n)}] would be an orthogonal matrix, i.e. Q^t A Q = D.
Matrix norm.
l_2 norm of A: ||A||_2 = [λ_max(A^t A)]^{1/2}
e.g. A = [1 1 0; 1 2 1; −1 1 2]
Then A^t A = [1 1 −1; 1 2 1; 0 1 2][1 1 0; 1 2 1; −1 1 2] = [3 2 −1; 2 6 4; −1 4 5]
whose eigenvalues are λ_1 = 0, λ_2 = 7 − √7, λ_3 = 7 + √7.
Therefore ||A||_2 = [λ_max(A^t A)]^{1/2} = (7 + √7)^{1/2} ≈ 3.106.
The l_∞ norm is the maximum absolute row sum, ||A||_∞ = max_i Σ_{j=1}^{n} |a_ij|; for a matrix whose absolute row sums are 2, 4 and 6, ||A||_∞ = max(2, 4, 6) = 6.
Example. Is A = [1/2 0; 1/4 1/2] convergent?
A^2 = [1/4 0; 1/4 1/4], A^3 = [1/8 0; 3/16 1/8], A^4 = [1/16 0; 1/8 1/16], ...
It appears that
A^k = [(1/2)^k 0; k/2^{k+1} (1/2)^k]
In the limit k → ∞, (1/2)^k → 0 and k/2^{k+1} → 0. Therefore, A is a convergent matrix.
Note the following equivalent results:
a. A is a convergent matrix
b1. lim_{k→∞} ||A^k||_2 = 0
b2. lim_{k→∞} ||A^k||_∞ = 0
c. ρ(A) < 1
d. lim_{k→∞} A^k x = 0 for every x
Condition number: K(A) = ||A|| · ||A^{−1}||. For a symmetric matrix, in the l_2 norm, this equals
K(A) = max_i |λ_i| / min_i |λ_i|
Power method. Expanding x in the eigenvector basis,
A^m x = a_1 λ_1^m v_1 + a_2 λ_2^m v_2 + a_3 λ_3^m v_3 + ... + a_n λ_n^m v_n
Factoring out the dominant eigenvalue λ_k,
A^m x = λ_k^m [a_1 (λ_1/λ_k)^m v_1 + a_2 (λ_2/λ_k)^m v_2 + ... + a_k v_k + ... + a_n (λ_n/λ_k)^m v_n]
For large m, (λ_j/λ_k)^m → 0 for j ≠ k. Therefore,
lim_{m→∞} A^m x / λ_k^m = a_k v_k   .... (2)
Dotting with a probe vector y,
A^m x · y ≈ λ_k^m a_k v_k · y   ... (3)
and likewise A^{m+1} x · y ≈ λ_k^{m+1} a_k v_k · y   ... (4)
so that
λ_k = lim_{m→∞} (A^{m+1} x · y) / (A^m x · y)   ... (5)
Let's start with a vector x = [1; 0] and let y = [1; 0]. For a sample 2×2 matrix A, equation (5) with m = 5 yields the most dominant eigenvalue as
A^6 x · y / A^5 x · y = 2254 / 562 = 4.0106
The dominant eigenvalue seems to be 4, and the corresponding eigenfunction seems to be [1; 1].
From (4), A(A^m x) ≈ λ A^m x when m → ∞. Therefore, if the iteration converges well, A^m x is roughly proportional to the eigenvector whose eigenvalue is λ. We'll return to this later on.
Rayleigh Quotient
R(A, x) = x^t A x / x^t x
It is a scalar whose magnitude is bounded. If A is symmetric then all its eigenvalues are real, and its RQ (Rayleigh Quotient) is bounded as follows:
λ_min ≤ R(A, x) ≤ λ_max
Taking the power-method estimate
λ_1comp = x^t A x / x^t x with x = A^m x_0,
we can approximate the error bound as
|λ_1comp − λ_1actual| ≤ [ (Ax · Ax) / (x · x) − λ_1comp^2 ]^{1/2}
example. A = [5 −2; −2 8], starting with x_0 = [1; 1], we get
Ax_0 = [3; 6], A^2 x_0 = [3; 42], A^3 x_0 = [−69; 330], A^4 x_0 = [−1005; 2778],
A^5 x_0 = [−10581; 24234], A^6 x_0 = [−101373; 215034]
Using x = A^5 x_0, we get
λ_1comp = (A^6 x_0 · A^5 x_0) / (A^5 x_0 · A^5 x_0) = 8.9865
The error estimate is:
|8.9865 − λ_1actual| ≤ [ (A^6 x_0 · A^6 x_0) / (A^5 x_0 · A^5 x_0) − 8.9865^2 ]^{1/2} = 0.26
(indeed the true dominant eigenvalue is 9, within 0.26 of 8.9865).
In practice we track the relative change
error(n+1) = |λ_1comp(n) − λ_1comp(n+1)| / |λ_1comp(n+1)|
Example. A = [5 −2; −2 8] and again we start at x_0 = [1; 1], scaling by the largest component at each step:
e.g. Ax_0 = [3; 6], scaled: w_1 = [0.5; 1]
Aw_1 = [0.5; 7], scaled: w_2 = [0.07143; 1]
Aw_2 = [−1.64; 7.85], ..., w_10 ≈ [−0.4949; 1]
approaching the eigenvector [−0.5; 1] for λ_1 = 9. Thus,
error(4) = |λ_1comp(3) − λ_1comp(4)| / |λ_1comp(4)| = 0.04858
and similarly we can compute error(5), error(6), ...
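A sketch of this scaled iteration in MATLAB, with the relative-change error tracked as above:

% Power method with scaling
A = [5 -2; -2 8];
w = [1; 1];
lam_old = 0;
for k = 1:10
    v = A*w;
    [~, i] = max(abs(v));          % position of the largest component
    lam = v(i);                    % current eigenvalue estimate
    w = v / lam;                   % rescale so the largest entry is 1
    fprintf('k=%2d  lambda=%9.5f  error=%8.5f\n', ...
            k, lam, abs(lam - lam_old)/abs(lam));
    lam_old = lam;
end
w                                  % approaches the eigenvector (-0.5, 1)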
Deflation. Having found λ_1 with eigenvector v_1, set
u_1 = v_1 / ||v_1|| and Â = A − λ_1 u_1 u_1^t
Example. Again A = [5 −2; −2 8]; let λ_1 = 9 and its eigenvector v_1 = [1; −2].
Therefore u_1 = [1/√5; −2/√5].
We compute
Â = [5 −2; −2 8] − (9/5)[1 −2; −2 4] = (1/5)[16 8; 8 4]
By applying the power method on Â with x_0 = [1; 1], we get the next eigenvalue, λ_2 = 4.
Power method with shift. Replacing A by A − qI shifts every eigenvalue to λ_i − q while keeping the eigenvectors; a well-chosen shift q improves the ratio of the two largest shifted eigenvalues and hence the convergence rate. Any example?
__________________
Stability of numerical eigenvalue problems.
Ex. Consider A = [1 1000; 0 1] and B = [1 1000; 0.001 1]
A has eigenvalues 1, 1 while B has eigenvalues 0, 2: a change of 0.001 in one entry moved the eigenvalues by 1.
Consider this time also C = [1 1000; −0.001 1]
C has no real eigenvalue, since its characteristic polynomial λ^2 − 2λ + 2 has no real root.
For symmetric matrices the picture is much better: small changes in a symmetric matrix produce only small changes in its eigenvalues. Numerical methods are generally successful in situations when we deal with essentially symmetric matrices to compute eigenvalues.
Let ||A||_F = (Σ_{i,j=1}^{n} a_ij^2)^{1/2} (the Frobenius norm), and let λ̂_i be the eigenvalues after a symmetric perturbation E is added to A. Then (Wielandt-Hoffman)
Σ_{i=1}^{n} (λ_i − λ̂_i)^2 ≤ ||E||_F^2
Note that (λ_k − λ̂_k)^2 ≤ Σ_{i=1}^{n} (λ_i − λ̂_i)^2 ≤ ||E||_F^2, which gives the above constraint for each individual eigenvalue.
e.g. for A = [1 1 2; 1 2 7; 2 7 5] and a perturbation E with entries of sizes 0.01, 0.05 (twice), 0.1 (five times) and 0.03,
||E||_F = [(0.01)^2 + 2(0.05)^2 + 5(0.1)^2 + (0.03)^2]^{1/2} = 0.23664
Therefore |λ_k − λ̂_k| ≤ 0.23664 for each k = 1, 2, 3.
Recall a · b = a^t b = [a_1 a_2 ... a_n][b_1; b_2; ...; b_n] = a_1 b_1 + a_2 b_2 + ... + a_n b_n
and a · a = a_1^2 + a_2^2 + ... + a_n^2 = ||a||^2
Gram-Schmidt orthogonalization. Let u_1 = v_1 and e_1 = u_1 / ||u_1||.
Now, let u_2 = v_2 − (e_1 · v_2) e_1 and e_2 = u_2 / ||u_2||.
Therefore u_1 is perpendicular to u_2 (the component of v_2 along e_1 has been removed). Continuing, u_3 = v_3 − (e_1 · v_3)e_1 − (e_2 · v_3)e_2, so u_2 · u_3 = 0, and so on. Inverting these relations,
v_1 = (e_1 · v_1) e_1
v_2 = (e_1 · v_2) e_1 + (e_2 · v_2) e_2
v_3 = (e_1 · v_3) e_1 + (e_2 · v_3) e_2 + (e_3 · v_3) e_3
v_4 = ...
so [v_1 v_2 .. v_n] = QR, where Q = [e_1 e_2 .. e_n] has orthonormal columns and R is upper triangular with R_ij = e_i · v_j.
e.g. v_1 = [2; 1; 3], v_2 = [1; 1; 2] and v_3 = [1; 3; 2].
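A classical Gram-Schmidt sketch for these three vectors, producing Q and R:

% Classical Gram-Schmidt QR
V = [2 1 1; 1 1 3; 3 2 2];             % columns are v1, v2, v3
[n, m] = size(V);
Q = zeros(n, m);  R = zeros(m, m);
for j = 1:m
    u = V(:,j);
    for i = 1:j-1
        R(i,j) = Q(:,i)' * V(:,j);     % component of v_j along e_i
        u = u - R(i,j) * Q(:,i);       % remove that projection
    end
    R(j,j) = norm(u);
    Q(:,j) = u / R(j,j);               % normalize
end
Q'*Q, Q*R            % Q'*Q = I and Q*R reproduces V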
QR iteration: factor and re-multiply,
A^{(m)} = Q^{(m)} R^{(m)}, A^{(m+1)} = R^{(m)} Q^{(m)} for m = 1, 2, ...
Since R^{(m)} = Q^{(m)t} A^{(m)}, we get the recursive definition
A^{(m+1)} = Q^{(m)t} A^{(m)} Q^{(m)}
As we progress, the sequence A^{(m)} will converge to a triangular matrix with the eigenvalues of A on its diagonal (or to a nearly triangular matrix whence we can compute the eigenvalues).
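For a symmetric sample this is a few lines (a sketch; MATLAB's qr does the factorization):

% QR iteration
A = [5 -2; -2 8];        % eigenvalues 9 and 4
for m = 1:20
    [Q, R] = qr(A);
    A = R*Q;             % similarity transform Q'*A*Q
end
A                        % nearly diagonal; diagonal -> 9 and 4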
More on iterative solution of Ax = b. The iterative approaches in our toolkit are Jacobi's method, Gauss-Seidel, and the SOR method. Sample system:
4x_1 + 3x_2 = 24
3x_1 + 4x_2 − x_3 = 30
−x_2 + 4x_3 = −24
For Gauss-Seidel,
x_i^{(k)} = (1/a_ii) (b_i − Σ_{j=1}^{i−1} a_ij x_j^{(k)} − Σ_{j=i+1}^{n} a_ij x_j^{(k−1)})
SOR (successive over-relaxation) blends this update with the previous iterate using a relaxation factor ω:
x_i^{(k)} = (1 − ω) x_i^{(k−1)} + (ω/a_ii) (b_i − Σ_{j=1}^{i−1} a_ij x_j^{(k)} − Σ_{j=i+1}^{n} a_ij x_j^{(k−1)})
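An SOR sketch for the sample system (ω = 1 reduces to Gauss-Seidel; ω = 1.25 here is a sample choice):

% Successive over-relaxation
A = [4 3 0; 3 4 -1; 0 -1 4];
b = [24; 30; -24];
omega = 1.25;
x = ones(3,1);
for k = 1:20
    for i = 1:3
        j = [1:i-1, i+1:3];
        gs = (b(i) - A(i,j)*x(j)) / A(i,i);   % Gauss-Seidel value
        x(i) = (1-omega)*x(i) + omega*gs;     % relaxed update
    end
end
x                    % approaches the solution (3, 4, -5)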
Least squares. To fit y ≈ a_0 + a_1 x, let
S = Σ_{i=1}^{n} e_i^2 = Σ_{i=1}^{n} (a_0 + a_1 x_i − y_i)^2
∂S/∂a_0 = Σ_{i=1}^{n} 2(a_0 + a_1 x_i − y_i) = 0 ... (1) and
∂S/∂a_1 = Σ_{i=1}^{n} 2x_i (a_0 + a_1 x_i − y_i) = 0 ... (2)
Solving the two normal equations,
a_1 = (n Σx_i y_i − Σx_i Σy_i) / (n Σx_i^2 − (Σx_i)^2)
The quality of the fit is measured by the correlation coefficient
ρ(x, y) = cov(x, y) / (σ_x σ_y) = (n Σxy − Σx Σy) / √[(n Σx^2 − (Σx)^2)(n Σy^2 − (Σy)^2)]
The correlation coefficient is bounded: −1 ≤ ρ ≤ 1. A good fit implies the absolute value of ρ is closer to 1.
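The formulas above, applied to the data of the MATLAB session below, reproduce polyfit's and corrcoef's output:

% Least-squares line and correlation coefficient
x = [1 2 3 4 5 6 7 8 9];
y = [2 4 5 6 9 3 12 15 14];
n  = numel(x);
a1 = (n*sum(x.*y) - sum(x)*sum(y)) / (n*sum(x.^2) - sum(x)^2);  % slope
a0 = (sum(y) - a1*sum(x)) / n;                                  % intercept
rho = (n*sum(x.*y) - sum(x)*sum(y)) / ...
      sqrt((n*sum(x.^2) - sum(x)^2) * (n*sum(y.^2) - sum(y)^2));
[a1 a0], rho         % 1.5333, 0.1111 and 0.8582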
Nonlinear models can be fitted with the same machinery after a change of variables:
a. X = x, Y = log y ⇒ fits y = ae^{bx}
b. X = log x, Y = log y ⇒ fits y = ax^b
c. X = log x, Y = y ⇒ fits y = log(ax^b)
d. X = x^2, Y = e^y ⇒ fits e^y = a + bx^2
>> a
a=
1 2
2 4
3 5
4 6
5 9
6 3
7 12
8 15
9 14
>> x=a(:,1)'
x=
1 2 3 4 5 6 7 8 9
>> y=a(:,2)'
y=
2 4 5 6 9 3 12 15 14
>> pcoeff=polyfit(x,y,1)
pcoeff =
1.5333 0.1111
>>
>> pcoeff=polyfit(x,y,2)
pcoeff =
>> r=corrcoef(a)
r=
1.0000 0.8582
0.8582 1.0000
>>
These assignments are suggested for sharpening our Linear Algebra skills. Our next exam would definitely pose similar questions (note the word is "similar", not "same"), and it would be to our advantage to spend time on them now. You may try Matlab to check your answers in some cases, just to assure yourself that you did solve them correctly. Matlab is available in the lab C014. A quick tutorial for Matlab could be procured from http://www.math.siu.edu/matlab/tutorials.html; in particular, tutorial 3 and tutorial 4 show how to use Matlab to solve Linear Algebra problems.
1. In these two questions you're supposed to solve the simultaneous linear equations using the Gaussian elimination procedure, which is usually known as the naïve pivoting scheme.
a. 2x + 3y + z = 2
   x + y + 3z = 1
   x + 2y + z = 2
and b.
   x + y + z + t = 2
   2x − y + z − t = 1
   x + 2y − z + 2t = 6
   x + y − z − t = 2
2. Repeat the exercise for the systems with the following coefficient matrices:
a. [2 4 1; 1 2 2; 1 0 1]  b. [3 5 4 2; 1 0 0 2; 2 1 4 5; 3 6 2 5]
4. Write a program to solve the following sets of equations using Jacobi's iteration scheme outlined in our lecture notes. If you are having difficulty converging, indicate why, and indicate what must be done to converge to a correct set of solutions if one exists.
a. 2x + 3y + z = 2      b. 3x + y + z = 6
   x + y + 3z = 1          x + 4y + z = 1
   x + 2y + z = 2          x + y + 4z = 8
5. Solve the above problems using the Gauss-Seidel approach. Is the method faster than pure Jacobi iteration? Comment on the number of iterations taken in these cases if you start at the same initial positions as in (4).
Assignment 2.
This is just like the previous assignment; it is supposed to provide a practice drill for your upcoming exam.
1. Starting from the identity matrix, obtain elementary matrices for each one of the following operations:
a. R_2 → 8R_2  b. R_3 → R_3 − 2R_1  c. R_1 ↔ R_3
2. What row operations do the following matrices perform?
A = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1], B = [1 0 0 0; 0 1 1 0; 0 0 1 0; 0 0 0 1]
3. Let A = [5 7; 3 4]. Which of the following statements are true?
a. A is singular  b. A is invertible
c. A is non-singular  d. A^{−1} does exist
e. The inverse of A is [4 −7; −3 5]
4. In this question you are given the matrices (a)-(c) below. For each case, obtain a set of elementary matrices E_i such that E_k E_l E_m ... E_2 E_1 operating on the relevant matrix transforms it into an upper triangular matrix U. Given this sequence, obtain next the product matrix L = E_1^{−1} E_2^{−1} E_3^{−1} ... E_k^{−1}, such that each of the matrices could be expressed in A = LU form. Write down their corresponding matrix factors.
a. A = [1 2 1; 2 5 4; 0 1 3]  b. A = [3 1 1; 1 2 3; 0 1 1]  and c. A = [1 2 5; 1 3 6; 1 3 7]
5. Given the above matrices, obtain their inverses from the elementary matrices identified.
6. What would be the inverse of a diagonal matrix like D = [d_1 0 0; 0 d_2 0; 0 0 d_3]?
Assignment 3.
1. Consider the symmetric matrix A = [4 2 2; 2 10 5; 2 5 9]. Show that A can be expressed as A = LL^t. Obtain this decomposition.
2. Find a factorization of the form A = LDL^t for the following symmetric matrices:
(a) A = [2 1 0; 1 2 1; 0 1 2] and (b) B = [4 1 1 1; 1 3 1 1; 1 1 2 0; 1 1 0 2]
3. Using the decomposition in (1), show how you are going to solve the following equations:
4x + 2y + 2z = 0
2x + 10y + 5z = 3
2x + 5y + 9z = 2
(Hint: we first solve (a) Ly = b to get the vector y, and then solve (b) L^t x = y for the vector x.)
4. Find a sequence of elementary matrices E_i such that, when they operate on the matrix A from the left as shown below, A is transformed into the identity matrix I:
E_i E_j E_k ... E_m A = I
Show how, from such an expression, we can get A^{−1}, given that A is the matrix shown in question 1.
5. Using the Gram-Schmidt orthogonalization process, show how to transform the following vectors into an orthogonal set:
v_1 = [2; 1; 2], v_2 = [1; 2; 2] and v_3 = [1; 1; 1]
Convert the orthogonal set into an orthonormal set.
6. Write a program (in a language you prefer) to compute the most dominant and the least dominant eigenvalues of an invertible matrix, given that they are real. Your program should identify the associated eigenvectors as well.