MatricesAndVectors PDF
Basic Matrix Concepts (cont’d)
• Two vectors can be added if they have the same dimension. Addition is carried out elementwise:
$$\mathbf{x} + \mathbf{y} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix} = \begin{bmatrix} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_p + y_p \end{bmatrix}$$
• For example, with
$$\mathbf{x} = \begin{bmatrix} 2 \\ 1 \\ -4 \end{bmatrix} \quad\text{and}\quad \mathbf{x}' = \begin{bmatrix} 2 & 1 & -4 \end{bmatrix},$$
$$6 \times \mathbf{x} = 6 \times \begin{bmatrix} 2 \\ 1 \\ -4 \end{bmatrix} = \begin{bmatrix} 6 \times 2 \\ 6 \times 1 \\ 6 \times (-4) \end{bmatrix} = \begin{bmatrix} 12 \\ 6 \\ -24 \end{bmatrix}$$
$$\mathbf{x} + \mathbf{y} = \begin{bmatrix} 2 \\ 1 \\ -4 \end{bmatrix} + \begin{bmatrix} 5 \\ -2 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 + 5 \\ 1 - 2 \\ -4 + 0 \end{bmatrix} = \begin{bmatrix} 7 \\ -1 \\ -4 \end{bmatrix}$$
Basic Matrix Concepts (cont'd)
• If $c = L_x^{-1}$, where $L_x = \sqrt{\mathbf{x}'\mathbf{x}}$ is the length of $\mathbf{x}$, then $c\mathbf{x}$ is a vector of unit length.
Examples
The length of
$$\mathbf{x} = \begin{bmatrix} 2 \\ 1 \\ -4 \\ -2 \end{bmatrix}$$
is
$$L_x = \sqrt{(2)^2 + (1)^2 + (-4)^2 + (-2)^2} = \sqrt{25} = 5$$
Then
$$\mathbf{z} = \frac{1}{5} \times \begin{bmatrix} 2 \\ 1 \\ -4 \\ -2 \end{bmatrix} = \begin{bmatrix} 0.4 \\ 0.2 \\ -0.8 \\ -0.4 \end{bmatrix}$$
is a vector of unit length.
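The same normalization in NumPy (a sketch; `L_x` and `z` are our names):

```python
import numpy as np

x = np.array([2, 1, -4, -2])
L_x = np.sqrt(x @ x)            # length of x: sqrt(4 + 1 + 16 + 4) = 5
z = x / L_x                     # [ 0.4  0.2 -0.8 -0.4]
print(np.sqrt(z @ z))           # 1.0: z has unit length
```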
Angle Between Vectors
Inner Product
• Then $L_x = \sqrt{\mathbf{x}'\mathbf{x}}$, $L_y = \sqrt{\mathbf{y}'\mathbf{y}}$, and
$$\cos(\theta) = \frac{\mathbf{x}'\mathbf{y}}{\sqrt{\mathbf{x}'\mathbf{x}}\,\sqrt{\mathbf{y}'\mathbf{y}}}$$
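A sketch of this formula in NumPy; the two vectors reuse the earlier addition example (our choice, not from the slides):

```python
import numpy as np

def cos_angle(x, y):
    # cos(theta) = x'y / (sqrt(x'x) * sqrt(y'y))
    return (x @ y) / (np.sqrt(x @ x) * np.sqrt(y @ y))

x = np.array([2, 1, -4])
y = np.array([5, -2, 0])
theta = np.degrees(np.arccos(cos_angle(x, y)))  # angle between x and y, in degrees
print(theta)
```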
• Two vectors $\mathbf{x}$ and $\mathbf{y}$ are linearly dependent if there exist constants $c_1$, $c_2$, not both zero, such that
$$c_1\mathbf{x} + c_2\mathbf{y} = \mathbf{0},$$
i.e., $\mathbf{x} = -(c_2/c_1)\mathbf{y}$ when $c_1 \neq 0$.
Let
$$\mathbf{x}_1 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \quad \mathbf{x}_2 = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}, \quad \mathbf{x}_3 = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}$$
These three vectors are linearly independent, as the numerical check below confirms.
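A sketch of the independence check in NumPy (stacking the vectors as columns and computing the rank):

```python
import numpy as np

x1 = np.array([1, 2, 1])
x2 = np.array([1, 0, -1])
x3 = np.array([1, -2, 1])

M = np.column_stack([x1, x2, x3])
print(np.linalg.matrix_rank(M))  # 3: the only solution of c1*x1 + c2*x2 + c3*x3 = 0
                                 # is c1 = c2 = c3 = 0, so the vectors are independent
```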
Matrices
Matrix Algebra
Matrix Addition
Two matrices $\mathbf{A}_{n\times p} = \{a_{ij}\}$ and $\mathbf{B}_{n\times p} = \{b_{ij}\}$ of the same dimensions can be added element by element. The resulting matrix is $\mathbf{C}_{n\times p} = \{c_{ij}\} = \{a_{ij} + b_{ij}\}$:
$$\mathbf{C} = \mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1p} \\ a_{21} & a_{22} & \cdots & a_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{np} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{np} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1p}+b_{1p} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2p}+b_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}+b_{n1} & a_{n2}+b_{n2} & \cdots & a_{np}+b_{np} \end{bmatrix}$$
Examples
" #0 2 5
2 1 −4
= 1 7
5 7 0
−4 0
" # " #
2 1 −4 12 6 −24
6× =
5 7 0 30 42 0
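Both examples reproduced in NumPy (a sketch):

```python
import numpy as np

A = np.array([[2, 1, -4],
              [5, 7,  0]])

print(A.T)     # the 3 x 2 transpose shown above
print(6 * A)   # [[12  6 -24], [30 42  0]]
```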
Matrix Multiplication
• Multiplication of two matrices $\mathbf{A}_{n\times p}$ and $\mathbf{B}_{m\times q}$ can be carried out only if the matrices are compatible for multiplication: the number of columns of $\mathbf{A}$ must equal the number of rows of $\mathbf{B}$ (that is, $p = m$), and the product is then $n \times q$.
• The element in the $i$-th row and the $j$-th column of $\mathbf{A} \times \mathbf{B}$ is the inner product of the $i$-th row of $\mathbf{A}$ with the $j$-th column of $\mathbf{B}$.
Multiplication Examples
" # 1 4 " #
2 0 1 2 10
× −1 3 =
5 1 3 4 29
0 2
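The same product in NumPy (a sketch):

```python
import numpy as np

A = np.array([[2, 0, 1],
              [5, 1, 3]])     # 2 x 3
B = np.array([[ 1, 4],
              [-1, 3],
              [ 0, 2]])       # 3 x 2: columns of A match rows of B

print(A @ B)                  # [[ 2 10], [ 4 29]]
# e.g. the (2, 2) entry is the inner product of row 2 of A with column 2 of B:
# 5*4 + 1*3 + 3*2 = 29
```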
Identity Matrix
• The identity matrix $\mathbf{I}$ has ones on the diagonal and zeros elsewhere; for any square matrix $\mathbf{A}$ of the same dimension, $\mathbf{A}\mathbf{I} = \mathbf{I}\mathbf{A} = \mathbf{A}$.
Symmetric Matrices
• Examples:
$$\begin{bmatrix} 4 & 2 \\ 2 & 4 \end{bmatrix} \qquad \begin{bmatrix} 5 & 1 & -3 \\ 1 & 12 & -5 \\ -3 & -5 & 9 \end{bmatrix}$$
Inverse Matrix
• If
$$\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A} = \mathbf{I},$$
then $\mathbf{B}$ is the inverse of $\mathbf{A}$, denoted $\mathbf{A}^{-1}$.
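A quick NumPy check (a sketch reusing the symmetric $2 \times 2$ example above, which is nonsingular):

```python
import numpy as np

A = np.array([[4., 2.],
              [2., 4.]])
B = np.linalg.inv(A)   # the inverse A^{-1}

print(A @ B)           # identity matrix, up to floating-point rounding
print(B @ A)           # same in the other order
```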
Orthogonal Matrices
• A square matrix $\mathbf{Q}$ is orthogonal if
$$\mathbf{Q}\mathbf{Q}' = \mathbf{Q}'\mathbf{Q} = \mathbf{I}, \quad\text{or}\quad \mathbf{Q}' = \mathbf{Q}^{-1}.$$
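A minimal NumPy sketch, using a 2-D rotation matrix as a stock example of an orthogonal matrix (the angle `t` is our own choice):

```python
import numpy as np

t = np.pi / 6                                # a hypothetical rotation angle
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])      # rotation matrices are orthogonal

print(np.allclose(Q @ Q.T, np.eye(2)))       # True: QQ' = I
print(np.allclose(Q.T, np.linalg.inv(Q)))    # True: Q' = Q^{-1}
```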
Eigenvalues and Eigenvectors
• A square matrix $\mathbf{A}$ has eigenvalue $\lambda$ with corresponding eigenvector $\mathbf{e} \neq \mathbf{0}$ if $\mathbf{A}\mathbf{e} = \lambda\mathbf{e}$.
Spectral Decomposition
• A $k \times k$ symmetric matrix $\mathbf{A}$ can be expressed in terms of its eigenvalues $\lambda_j$ and normalized eigenvectors $\mathbf{e}_j$ as
$$\mathbf{A} = [\mathbf{e}_1\; \mathbf{e}_2\; \cdots\; \mathbf{e}_k] \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_k \end{bmatrix} [\mathbf{e}_1\; \mathbf{e}_2\; \cdots\; \mathbf{e}_k]' = \mathbf{P}\boldsymbol{\Lambda}\mathbf{P}'$$
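A NumPy sketch of the decomposition, reusing the $3 \times 3$ symmetric example from above; `np.linalg.eigh` returns the eigenvalues and orthonormal eigenvectors of a symmetric matrix:

```python
import numpy as np

A = np.array([[ 5.,  1., -3.],
              [ 1., 12., -5.],
              [-3., -5.,  9.]])

lam, P = np.linalg.eigh(A)          # eigenvalues in lam, eigenvectors as columns of P
A_rebuilt = P @ np.diag(lam) @ P.T  # P Lambda P'

print(np.allclose(A, A_rebuilt))    # True: A = P Lambda P'
```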
Determinant and Trace
• For a $k \times k$ symmetric matrix $\mathbf{A}$ with eigenvalues $\lambda_1, \ldots, \lambda_k$, the trace is the sum of the diagonal elements, $\operatorname{tr}(\mathbf{A}) = \sum_j a_{jj} = \sum_j \lambda_j$, and the determinant is $|\mathbf{A}| = \prod_j \lambda_j$.
Rank of a Matrix
• For a $k \times k$ symmetric matrix $\mathbf{A}$, full rank,
$$\operatorname{rank}(\mathbf{A}) = k,$$
means that there are no zero eigenvalues.
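A small numerical illustration (the rank-deficient matrix `B` is our own example, not from the slides):

```python
import numpy as np

A = np.array([[4., 2.],
              [2., 4.]])          # eigenvalues 2 and 6: no zero eigenvalue
B = np.array([[1., 1.],
              [1., 1.]])          # eigenvalues 0 and 2: one zero eigenvalue

print(np.linalg.matrix_rank(A))   # 2: full rank
print(np.linalg.matrix_rank(B))   # 1: rank equals the number of nonzero eigenvalues
```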
Positive Definite Matrix
• If $\mathbf{x}'\mathbf{A}\mathbf{x} \geq 0$ for every vector $\mathbf{x}$, both $\mathbf{A}$ and the quadratic form are said to be non-negative definite.
• If $\mathbf{x}'\mathbf{A}\mathbf{x} > 0$ for every vector $\mathbf{x} \neq \mathbf{0}$, both $\mathbf{A}$ and the quadratic form are said to be positive definite.
Example 2.11
• Show that the matrix of the quadratic form $3x_1^2 + 2x_2^2 - 2\sqrt{2}\,x_1x_2$ is positive definite.
• For
$$\mathbf{A} = \begin{bmatrix} 3 & -\sqrt{2} \\ -\sqrt{2} & 2 \end{bmatrix},$$
the eigenvalues are $\lambda_1 = 4$, $\lambda_2 = 1$. Then $\mathbf{A} = 4\mathbf{e}_1\mathbf{e}_1' + \mathbf{e}_2\mathbf{e}_2'$, and since both eigenvalues are positive, $\mathbf{A}$ is positive definite.
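Verifying the eigenvalues of Example 2.11 numerically (a sketch):

```python
import numpy as np

A = np.array([[3.,          -np.sqrt(2)],
              [-np.sqrt(2),  2.        ]])

lam = np.linalg.eigvalsh(A)   # ascending order
print(lam)                    # [1. 4.]: both positive, so A (and the
                              # quadratic form) is positive definite
```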
Distance and Quadratic Forms
• For a positive definite matrix $\mathbf{A}$,
$$d^2 = \mathbf{x}'\mathbf{A}\mathbf{x} > 0$$
when $\mathbf{x} \neq \mathbf{0}$. Thus, a positive definite quadratic form can be interpreted as a squared distance of $\mathbf{x}$ from the origin, and vice versa.
Distance and Quadratic Forms (cont’d)
• We can interpret distance in terms of the eigenvalues and eigenvectors of $\mathbf{A}$ as well. Any point $\mathbf{x}$ at constant distance $c$ from the origin satisfies
$$\mathbf{x}'\mathbf{A}\mathbf{x} = \mathbf{x}'\Big(\sum_{j=1}^{p} \lambda_j \mathbf{e}_j\mathbf{e}_j'\Big)\mathbf{x} = \sum_{j=1}^{p} \lambda_j (\mathbf{x}'\mathbf{e}_j)^2 = c^2,$$
the expression for an ellipsoid in $p$ dimensions.
• Note that the point $\mathbf{x} = c\lambda_1^{-1/2}\mathbf{e}_1$ is at a distance $c$ (in the direction of $\mathbf{e}_1$) from the origin because it satisfies $\mathbf{x}'\mathbf{A}\mathbf{x} = c^2$. The same is true for the points $\mathbf{x} = c\lambda_j^{-1/2}\mathbf{e}_j$, $j = 1, \ldots, p$. Thus, all points at distance $c$ lie on an ellipsoid with axes in the directions of the eigenvectors and with lengths proportional to $\lambda_j^{-1/2}$.
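A sketch checking that the points $\mathbf{x} = c\lambda_j^{-1/2}\mathbf{e}_j$ satisfy $\mathbf{x}'\mathbf{A}\mathbf{x} = c^2$, reusing the matrix of Example 2.11 with an arbitrary $c$:

```python
import numpy as np

A = np.array([[3.,          -np.sqrt(2)],
              [-np.sqrt(2),  2.        ]])
lam, P = np.linalg.eigh(A)
c = 1.0                                # an arbitrary constant distance

for j in range(2):
    x = c * lam[j] ** -0.5 * P[:, j]   # x = c * lambda_j^{-1/2} * e_j
    print(x @ A @ x)                   # 1.0 = c^2 along each ellipsoid axis
```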
Square-Root Matrices
• Let $\mathbf{A} = \mathbf{P}\boldsymbol{\Lambda}\mathbf{P}' = \sum_{j=1}^{p} \lambda_j \mathbf{e}_j\mathbf{e}_j'$ be the spectral decomposition of a positive definite matrix $\mathbf{A}$.
• With $\boldsymbol{\Lambda}^{1/2} = \operatorname{diag}\{\lambda_j^{1/2}\}$, a square-root matrix is
$$\mathbf{A}^{1/2} = \mathbf{P}\boldsymbol{\Lambda}^{1/2}\mathbf{P}' = \sum_{j=1}^{p} \sqrt{\lambda_j}\,\mathbf{e}_j\mathbf{e}_j'$$
Square-Root Matrices (cont'd)
The square root of a positive definite matrix $\mathbf{A}$ has the following properties:
1. $(\mathbf{A}^{1/2})' = \mathbf{A}^{1/2}$ (symmetry)
2. $\mathbf{A}^{1/2}\mathbf{A}^{1/2} = \mathbf{A}$
3. $\mathbf{A}^{-1/2} = \sum_{j=1}^{p} \lambda_j^{-1/2}\,\mathbf{e}_j\mathbf{e}_j' = \mathbf{P}\boldsymbol{\Lambda}^{-1/2}\mathbf{P}'$
4. $\mathbf{A}^{1/2}\mathbf{A}^{-1/2} = \mathbf{A}^{-1/2}\mathbf{A}^{1/2} = \mathbf{I}$
5. $\mathbf{A}^{-1/2}\mathbf{A}^{-1/2} = \mathbf{A}^{-1}$
Note that there are other ways of defining the square root of a positive definite matrix: in the Cholesky decomposition $\mathbf{A} = \mathbf{L}\mathbf{L}'$, with $\mathbf{L}$ a lower triangular matrix, $\mathbf{L}$ is also called a square root of $\mathbf{A}$.
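A NumPy sketch of properties 2, 4, and 5, building $\mathbf{A}^{1/2}$ and $\mathbf{A}^{-1/2}$ from the eigendecomposition of the symmetric positive definite example used earlier:

```python
import numpy as np

A = np.array([[ 5.,  1., -3.],
              [ 1., 12., -5.],
              [-3., -5.,  9.]])
lam, P = np.linalg.eigh(A)

A_half = P @ np.diag(np.sqrt(lam)) @ P.T      # A^{1/2} = P Lambda^{1/2} P'
A_neg_half = P @ np.diag(lam ** -0.5) @ P.T   # A^{-1/2} = P Lambda^{-1/2} P'

print(np.allclose(A_half @ A_half, A))                          # property 2
print(np.allclose(A_half @ A_neg_half, np.eye(3)))              # property 4
print(np.allclose(A_neg_half @ A_neg_half, np.linalg.inv(A)))   # property 5
```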
Random Vectors and Matrices
• A random matrix (vector) is a matrix (vector) whose elements are random variables.
Linear Combinations
• The usual rules for expectations apply. If $\mathbf{X}$ and $\mathbf{Y}$ are two random matrices and $\mathbf{A}$ and $\mathbf{B}$ are two constant matrices of the appropriate dimensions, then
$$E(\mathbf{X} + \mathbf{Y}) = E(\mathbf{X}) + E(\mathbf{Y}), \qquad E(\mathbf{A}\mathbf{X}\mathbf{B}) = \mathbf{A}\,E(\mathbf{X})\,\mathbf{B}, \qquad E(c\mathbf{X}) = cE(\mathbf{X}).$$
Mean Vectors and Covariance Matrices
Correlation Matrix
• The population correlation matrix is
$$\boldsymbol{\rho} = (\mathbf{V}^{1/2})^{-1}\boldsymbol{\Sigma}(\mathbf{V}^{1/2})^{-1},$$
where $\mathbf{V}^{1/2}$ is the diagonal matrix of standard deviations.
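A sketch of the formula in NumPy; the covariance matrix `Sigma` below is hypothetical, chosen only for illustration:

```python
import numpy as np

Sigma = np.array([[4.,  1.,  2.],
                  [1.,  9., -3.],
                  [2., -3., 25.]])          # a hypothetical covariance matrix
V_half = np.diag(np.sqrt(np.diag(Sigma)))   # V^{1/2}: diagonal standard deviations

rho = np.linalg.inv(V_half) @ Sigma @ np.linalg.inv(V_half)
print(rho)   # unit diagonal; off-diagonal entries are the correlations
```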
Partitioning Random Vectors
Partitioning Covariance Matrices
Linear Combinations of Random Variables
Cauchy-Schwarz Inequality
$$0 < \mathbf{b}'\mathbf{b} - \frac{(\mathbf{b}'\mathbf{d})^2}{\mathbf{d}'\mathbf{d}} \;\Rightarrow\; (\mathbf{b}'\mathbf{d})^2 < (\mathbf{b}'\mathbf{b})(\mathbf{d}'\mathbf{d})$$
for $\mathbf{b} \neq c\mathbf{d}$ (otherwise, we have equality).
Extended Cauchy-Schwarz Inequality
If $\mathbf{b}$ and $\mathbf{d}$ are any two $p \times 1$ vectors and $\mathbf{B}$ is a $p \times p$ positive definite matrix, then
$$(\mathbf{b}'\mathbf{d})^2 \leq (\mathbf{b}'\mathbf{B}\mathbf{b})(\mathbf{d}'\mathbf{B}^{-1}\mathbf{d})$$
with equality if and only if $\mathbf{b} = c\mathbf{B}^{-1}\mathbf{d}$ or $\mathbf{d} = c\mathbf{B}\mathbf{b}$ for some constant $c$.

Proof: Consider $\mathbf{B}^{1/2} = \sum_{i=1}^{p} \sqrt{\lambda_i}\,\mathbf{e}_i\mathbf{e}_i'$ and $\mathbf{B}^{-1/2} = \sum_{i=1}^{p} \frac{1}{\sqrt{\lambda_i}}\,\mathbf{e}_i\mathbf{e}_i'$. Then we can write
$$\mathbf{b}'\mathbf{d} = \mathbf{b}'\,\mathbf{I}\,\mathbf{d} = \mathbf{b}'\mathbf{B}^{1/2}\mathbf{B}^{-1/2}\mathbf{d} = (\mathbf{B}^{1/2}\mathbf{b})'(\mathbf{B}^{-1/2}\mathbf{d}) = \mathbf{b}^{*\prime}\mathbf{d}^{*}.$$
To complete the proof, simply apply the Cauchy-Schwarz inequality to the vectors $\mathbf{b}^*$ and $\mathbf{d}^*$.
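A numerical spot-check of the inequality (a sketch; the matrix `B` reuses Example 2.11, and the vectors are randomly generated):

```python
import numpy as np

rng = np.random.default_rng(0)
B = np.array([[3.,          -np.sqrt(2)],
              [-np.sqrt(2),  2.        ]])   # positive definite

for _ in range(5):
    b, d = rng.normal(size=2), rng.normal(size=2)
    lhs = (b @ d) ** 2
    rhs = (b @ B @ b) * (d @ np.linalg.inv(B) @ d)
    assert lhs <= rhs + 1e-12   # (b'd)^2 <= (b'Bb)(d'B^{-1}d)
```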
Optimization
• Let $\mathbf{B}$ be positive definite with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_p > 0$ and associated normalized eigenvectors $\mathbf{e}_1, \ldots, \mathbf{e}_p$. Then
$$\max_{\mathbf{x} \neq \mathbf{0}} \frac{\mathbf{x}'\mathbf{B}\mathbf{x}}{\mathbf{x}'\mathbf{x}} = \lambda_1, \text{ attained when } \mathbf{x} = \mathbf{e}_1, \qquad \min_{\mathbf{x} \neq \mathbf{0}} \frac{\mathbf{x}'\mathbf{B}\mathbf{x}}{\mathbf{x}'\mathbf{x}} = \lambda_p, \text{ attained when } \mathbf{x} = \mathbf{e}_p.$$
• Furthermore, for $k = 1, 2, \ldots, p-1$,
$$\max_{\mathbf{x} \perp \mathbf{e}_1, \ldots, \mathbf{e}_k} \frac{\mathbf{x}'\mathbf{B}\mathbf{x}}{\mathbf{x}'\mathbf{x}} = \lambda_{k+1}, \text{ attained when } \mathbf{x} = \mathbf{e}_{k+1}.$$
See the proof at the end of Chapter 2 in the textbook (pages 80-81).
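A sketch verifying the extremes of the ratio $\mathbf{x}'\mathbf{B}\mathbf{x}/\mathbf{x}'\mathbf{x}$ against the eigenvalues, reusing the $3 \times 3$ symmetric example; note that `np.linalg.eigh` returns eigenvalues in ascending order, so the slides' $\lambda_1$ (the largest) is `lam[-1]`:

```python
import numpy as np

B = np.array([[ 5.,  1., -3.],
              [ 1., 12., -5.],
              [-3., -5.,  9.]])
lam, E = np.linalg.eigh(B)          # ascending: lam[0] <= lam[1] <= lam[2]

def rayleigh(x):
    return (x @ B @ x) / (x @ x)

print(rayleigh(E[:, -1]), lam[-1])  # maximum lambda_1, attained at e_1
print(rayleigh(E[:, 0]), lam[0])    # minimum lambda_p, attained at e_p
```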