The Matrix Cookbook
Petersen & Pedersen, Version: February 16, 2006
Errors: Very likely there are errors, typos, and mistakes, for which we apologize and would be grateful to receive corrections at [email protected].
Contents
1 Basics
   1.1 Trace and Determinants
   1.2 The Special Case 2x2
2 Derivatives
   2.1 Derivatives of a Determinant
   2.2 Derivatives of an Inverse
   2.3 Derivatives of Matrices, Vectors and Scalar Forms
   2.4 Derivatives of Traces
   2.5 Derivatives of Structured Matrices
3 Inverses
   3.1 Basic
   3.2 Exact Relations
   3.3 Implication on Inverses
   3.4 Approximations
   3.5 Generalized Inverse
   3.6 Pseudo Inverse
4 Complex Matrices
   4.1 Complex Derivatives
5 Decompositions
   5.1 Eigenvalues and Eigenvectors
   5.2 Singular Value Decomposition
   5.3 Triangular Decomposition
6 Statistics and Probability
7 Gaussians
   7.1 Basics
   7.2 Moments
   7.3 Miscellaneous
   7.4 Mixture of Gaussians
8 Special Matrices
   8.1 Units, Permutation and Shift
   8.2 The Single-entry Matrix
   8.3 Symmetric and Antisymmetric
   8.4 Vandermonde Matrices
   8.5 Toeplitz Matrices
   8.6 The DFT Matrix
9 Functions and Operators
A One-dimensional Results
   A.1 Gaussian
   A.2 One Dimensional Mixture of Gaussians
B Proofs and Details
det(A)    Determinant of A
Tr(A)     Trace of the matrix A
diag(A)   Diagonal matrix of the matrix A, i.e. (diag(A))_{ij} = \delta_{ij} A_{ij}
vec(A)    The vector-version of the matrix A (see Sec. 9.2.2)
||A||     Matrix norm (subscript if any denotes what norm)
A^T       Transposed matrix
A^*       Complex conjugated matrix
A^H       Transposed and complex conjugated matrix (Hermitian)
1 Basics
(AB)^{-1} = B^{-1} A^{-1}
(ABC\ldots)^{-1} = \ldots C^{-1} B^{-1} A^{-1}
(A^T)^{-1} = (A^{-1})^T
(A + B)^T = A^T + B^T
(AB)^T = B^T A^T
(ABC\ldots)^T = \ldots C^T B^T A^T
(A^H)^{-1} = (A^{-1})^H
(A + B)^H = A^H + B^H
(AB)^H = B^H A^H
(ABC\ldots)^H = \ldots C^H B^H A^H
1.1 Trace and Determinants
Tr(A) = \sum_i A_{ii}
Tr(A) = \sum_i \lambda_i,   \lambda_i = eig(A)
Tr(A) = Tr(A^T)
Tr(AB) = Tr(BA)
Tr(A + B) = Tr(A) + Tr(B)
Tr(ABC) = Tr(BCA) = Tr(CAB)
det(A) = \prod_i \lambda_i,   \lambda_i = eig(A)
det(AB) = det(A) det(B)
det(A^{-1}) = 1 / det(A)
det(I + u v^T) = 1 + u^T v
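These identities are easy to verify numerically. The following is a minimal sanity check, not part of the cookbook itself; it assumes Python with NumPy, and the random 3x3 test matrices are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
u = rng.standard_normal((3, 1))
v = rng.standard_normal((3, 1))

# (AB)^{-1} = B^{-1} A^{-1}
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
# Tr(AB) = Tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# det(I + u v^T) = 1 + u^T v  (matrix determinant lemma)
assert np.isclose(np.linalg.det(np.eye(3) + u @ v.T), 1 + (u.T @ v).item())
print("basic identities verified")
```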
1.2 The Special Case 2x2
Consider the 2x2 matrix A with elements A_{11}, A_{12}, A_{21}, A_{22}. The eigenvalues are
\lambda_1 = \frac{Tr(A) + \sqrt{Tr(A)^2 - 4 \det(A)}}{2},   \lambda_2 = \frac{Tr(A) - \sqrt{Tr(A)^2 - 4 \det(A)}}{2}
and they satisfy
\lambda_1 + \lambda_2 = Tr(A),   \lambda_1 \lambda_2 = \det(A)
Eigenvectors:
v_1 \propto \begin{bmatrix} A_{12} \\ \lambda_1 - A_{11} \end{bmatrix},   v_2 \propto \begin{bmatrix} A_{12} \\ \lambda_2 - A_{11} \end{bmatrix}
Inverse:
A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} A_{22} & -A_{12} \\ -A_{21} & A_{11} \end{bmatrix}
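A short numerical check of the 2x2 formulas, not part of the cookbook; it assumes NumPy, and the concrete matrix is an arbitrary example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
tr, det = np.trace(A), np.linalg.det(A)

# Eigenvalues from the closed-form 2x2 expressions
lam1 = (tr + np.sqrt(tr**2 - 4 * det)) / 2
lam2 = (tr - np.sqrt(tr**2 - 4 * det)) / 2
assert np.allclose(sorted([lam1, lam2]), sorted(np.linalg.eigvals(A).real))

# Eigenvector v1 proportional to [A12, lam1 - A11]; check A v1 = lam1 v1
v1 = np.array([A[0, 1], lam1 - A[0, 0]])
assert np.allclose(A @ v1, lam1 * v1)

# Closed-form 2x2 inverse
Ainv = np.array([[ A[1, 1], -A[0, 1]],
                 [-A[1, 0],  A[0, 0]]]) / det
assert np.allclose(Ainv, np.linalg.inv(A))
print("2x2 formulas verified")
```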
2 Derivatives
This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.5 for differentiation of structured matrices. The basic assumption can be written in a formula as
\frac{\partial X_{kl}}{\partial X_{ij}} = \delta_{ik} \delta_{lj}
The following rules are general and very useful when deriving the differential of an expression ([13]):
\partial A = 0   (A is a constant)   (1)
\partial(\alpha X) = \alpha\, \partial X   (2)
\partial(X + Y) = \partial X + \partial Y   (3)
\partial(Tr(X)) = Tr(\partial X)   (4)
\partial(XY) = (\partial X) Y + X (\partial Y)   (5)
\partial(X \circ Y) = (\partial X) \circ Y + X \circ (\partial Y)   (6)
\partial(X \otimes Y) = (\partial X) \otimes Y + X \otimes (\partial Y)   (7)
\partial(X^{-1}) = -X^{-1} (\partial X) X^{-1}   (8)
\partial(\det(X)) = \det(X)\, Tr(X^{-1} \partial X)   (9)
\partial(\ln(\det(X))) = Tr(X^{-1} \partial X)   (10)
\partial X^T = (\partial X)^T   (11)
\partial X^H = (\partial X)^H   (12)
2.1 Derivatives of a Determinant
\frac{\partial \det(X)}{\partial X} = \det(X) (X^{-1})^T
\frac{\partial \det(AXB)}{\partial X} = \det(AXB) (X^{-1})^T = \det(AXB) (X^T)^{-1}
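The determinant derivative can be checked against finite differences. A minimal sketch, not from the cookbook, assuming NumPy; the helper num_grad and the test matrix are illustrative:

```python
import numpy as np

def num_grad(f, X, eps=1e-6):
    """Central finite-difference gradient of a scalar function f with respect to X."""
    G = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            E = np.zeros_like(X); E[i, j] = eps
            G[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # keep X well conditioned

analytic = np.linalg.det(X) * np.linalg.inv(X).T  # det(X) (X^{-1})^T
numeric = num_grad(np.linalg.det, X)
assert np.allclose(analytic, numeric, rtol=1e-4)
print("d det(X)/dX check passed")
```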
\frac{\partial \det(X^T A X)}{\partial X} = 2 \det(X^T A X) X^{-T}
If X is not square but A is symmetric, then
\frac{\partial \det(X^T A X)}{\partial X} = 2 \det(X^T A X)\, A X (X^T A X)^{-1}
If X is not square and A is not symmetric, then
\frac{\partial \det(X^T A X)}{\partial X} = \det(X^T A X) \left( A X (X^T A X)^{-1} + A^T X (X^T A^T X)^{-1} \right)   (13)
\frac{\partial \ln \det(X^T X)}{\partial X} = 2 (X^+)^T
\frac{\partial \ln \det(X^T X)}{\partial X^+} = -2 X^T
\frac{\partial \ln |\det(X)|}{\partial X} = (X^{-1})^T = (X^T)^{-1}
\frac{\partial \det(X^k)}{\partial X} = k \det(X^k) X^{-T}
2.2 Derivatives of an Inverse
\frac{\partial Y^{-1}}{\partial x} = -Y^{-1} \frac{\partial Y}{\partial x} Y^{-1}
from which it follows
\frac{\partial (X^{-1})_{kl}}{\partial X_{ij}} = -(X^{-1})_{ki} (X^{-1})_{jl}
\frac{\partial a^T X^{-1} b}{\partial X} = -X^{-T} a b^T X^{-T}
\frac{\partial \det(X^{-1})}{\partial X} = -\det(X^{-1}) (X^{-1})^T
\frac{\partial Tr(A X^{-1} B)}{\partial X} = -(X^{-1} B A X^{-1})^T
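As an illustration (not from the cookbook), the last identity can be verified by finite differences, assuming NumPy; the random matrices are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
X = rng.standard_normal((n, n)) + 4 * np.eye(n)   # well conditioned

f = lambda X: np.trace(A @ np.linalg.inv(X) @ B)
analytic = -(np.linalg.inv(X) @ B @ A @ np.linalg.inv(X)).T

# central finite differences, element by element
numeric = np.zeros((n, n))
eps = 1e-6
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        numeric[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)
print("d Tr(A X^{-1} B)/dX check passed")
```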
2.3 Derivatives of Matrices, Vectors and Scalar Forms
\frac{\partial x^T a}{\partial x} = \frac{\partial a^T x}{\partial x} = a
\frac{\partial a^T X b}{\partial X} = a b^T
\frac{\partial a^T X^T b}{\partial X} = b a^T
\frac{\partial a^T X a}{\partial X} = \frac{\partial a^T X^T a}{\partial X} = a a^T
\frac{\partial X}{\partial X_{ij}} = J^{ij}
\frac{\partial (XA)_{ij}}{\partial X_{mn}} = \delta_{im} (A)_{nj} = (J^{mn} A)_{ij}
\frac{\partial (X^T A)_{ij}}{\partial X_{mn}} = \delta_{in} (A)_{mj} = (J^{nm} A)_{ij}
\frac{\partial}{\partial X_{ij}} \sum_{klmn} X_{kl} X_{mn} = 2 \sum_{kl} X_{kl}
\frac{\partial b^T X^T X c}{\partial X} = X (b c^T + c b^T)
\frac{\partial (Bx + b)^T C (Dx + d)}{\partial x} = B^T C (Dx + d) + D^T C^T (Bx + b)
\frac{\partial (X^T B X)_{kl}}{\partial X_{ij}} = \delta_{lj} (X^T B)_{ki} + \delta_{kj} (B X)_{il}
\frac{\partial (X^T B X)}{\partial X_{ij}} = X^T B J^{ij} + J^{ji} B X,   where   (J^{ij})_{kl} = \delta_{ik} \delta_{jl}
See Sec. 8.2 for useful properties of the single-entry matrix J^{ij}.
\frac{\partial x^T B x}{\partial x} = (B + B^T) x
\frac{\partial b^T X^T D X c}{\partial X} = D^T X b c^T + D X c b^T
\frac{\partial}{\partial X} (Xb + c)^T D (Xb + c) = (D + D^T)(Xb + c) b^T
\frac{\partial a^T X^n b}{\partial X} = \sum_{r=0}^{n-1} (X^r)^T a b^T (X^{n-1-r})^T   (14)
\frac{\partial a^T (X^n)^T X^n b}{\partial X} = \sum_{r=0}^{n-1} \left[ X^{n-1-r} a b^T (X^n)^T X^r + (X^r)^T X^n a b^T (X^{n-1-r})^T \right]   (15)
For the gradient and the Hessian of f = x^T A x + b^T x it holds that
\nabla_x f = \frac{\partial f}{\partial x} = (A + A^T) x + b
\frac{\partial^2 f}{\partial x \partial x^T} = A + A^T
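A quick numerical check of the gradient (not part of the cookbook), assuming NumPy; the Hessian A + A^T follows because the gradient is affine in x:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = rng.standard_normal(n)

f = lambda x: x @ A @ x + b @ x
grad = (A + A.T) @ x + b          # analytic gradient; Hessian is A + A^T

# finite-difference gradient
eps = 1e-6
num_grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(n)])
assert np.allclose(grad, num_grad, atol=1e-5)
print("gradient of x^T A x + b^T x verified")
```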
2.4 Derivatives of Traces
\frac{\partial}{\partial X} Tr(X) = I
\frac{\partial}{\partial X} Tr(XA) = A^T   (16)
\frac{\partial}{\partial X} Tr(AXB) = A^T B^T
\frac{\partial}{\partial X} Tr(A X^T B) = B A
\frac{\partial}{\partial X} Tr(X^T A) = A
\frac{\partial}{\partial X} Tr(A X^T) = A
\frac{\partial}{\partial X} Tr(X^2) = 2 X^T
\frac{\partial}{\partial X} Tr(X^2 B) = (XB + BX)^T
\frac{\partial}{\partial X} Tr(X^T B X) = B X + B^T X
\frac{\partial}{\partial X} Tr(X B X^T) = X B^T + X B
\frac{\partial}{\partial X} Tr(A X B X) = A^T X^T B^T + B^T X^T A^T
\frac{\partial}{\partial X} Tr(X^T X) = 2 X
\frac{\partial}{\partial X} Tr(B X X^T) = (B + B^T) X
\frac{\partial}{\partial X} Tr(B^T X^T C X B) = C^T X B B^T + C X B B^T
\frac{\partial}{\partial X} Tr(X^T B X C) = B X C + B^T X C^T
\frac{\partial}{\partial X} Tr(A X B X^T C) = A^T C^T X B^T + C A X B
\frac{\partial}{\partial X} Tr\left[ (A X b + c)(A X b + c)^T \right] = 2 A^T (A X b + c) b^T
See [7].
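These trace derivatives can all be verified numerically in the same way. A minimal sketch (not from the cookbook), assuming NumPy and using one representative identity:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
X = rng.standard_normal((n, n))

f = lambda X: np.trace(A @ X @ B @ X.T @ C)
analytic = A.T @ C.T @ X @ B.T + C @ A @ X @ B   # d Tr(A X B X^T C)/dX

eps = 1e-6
numeric = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = eps
        numeric[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-5)
print("d Tr(A X B X^T C)/dX check passed")
```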
\frac{\partial}{\partial X} Tr(X^k) = k (X^{k-1})^T
\frac{\partial}{\partial X} Tr(A X^k) = \sum_{r=0}^{k-1} (X^r A X^{k-r-1})^T
\frac{\partial}{\partial X} Tr\left[ B^T X^T C X X^T C X B \right] = C X X^T C X B B^T + C^T X B B^T X^T C^T X + C X B B^T X^T C X + C^T X X^T C^T X B B^T
2.4.4 Other
\frac{\partial}{\partial X} Tr(A X^{-1} B) = -(X^{-1} B A X^{-1})^T = -X^{-T} A^T B^T X^{-T}
Assume B and C to be symmetric, then
\frac{\partial}{\partial X} Tr\left[ (X^T C X)^{-1} A \right] = -\left( C X (X^T C X)^{-1} \right)(A + A^T)(X^T C X)^{-1}
\frac{\partial}{\partial X} Tr\left[ (X^T C X)^{-1} (X^T B X) \right] = -2 C X (X^T C X)^{-1} X^T B X (X^T C X)^{-1} + 2 B X (X^T C X)^{-1}
See [7].
2.5 Derivatives of Structured Matrices
If A has no special structure we have simply S^{ij} = J^{ij}, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices, see Sec. 8.2.6 for more examples of structure matrices.
\frac{\partial g(U)}{\partial X_{ij}} = Tr\left[ \left( \frac{\partial g(U)}{\partial U} \right)^T \frac{\partial U}{\partial X_{ij}} \right]   (19)
2.5.2 Symmetric
If A is symmetric, then S^{ij} = J^{ij} + J^{ji} - J^{ij} J^{ij} and therefore
\frac{df}{dA} = \left[ \frac{\partial f}{\partial A} \right] + \left[ \frac{\partial f}{\partial A} \right]^T - \mathrm{diag}\left[ \frac{\partial f}{\partial A} \right]
\frac{\partial Tr(AX)}{\partial X} = A + A^T - (A \circ I),   see (23)   (20)
\frac{\partial \det(X)}{\partial X} = \det(X) \left( 2 X^{-1} - (X^{-1} \circ I) \right)   (21)
\frac{\partial \ln \det(X)}{\partial X} = 2 X^{-1} - (X^{-1} \circ I)   (22)
2.5.3 Diagonal
If X is diagonal, then ([13]):
\frac{\partial Tr(AX)}{\partial X} = A \circ I   (23)
2.5.4 Toeplitz
Like symmetric and diagonal matrices, Toeplitz matrices have a special structure which should be taken into account when computing the derivative with respect to a matrix with Toeplitz structure.
\frac{\partial Tr(AT)}{\partial T} = \frac{\partial Tr(TA)}{\partial T} =
\begin{bmatrix}
Tr(A) & Tr([A^T]_{n1}) & Tr([[A^T]_{1n}]_{n-1,2}) & \cdots & A_{n1} \\
Tr([A^T]_{1n}) & Tr(A) & \ddots & & \vdots \\
Tr([[A^T]_{1n}]_{2,n-1}) & \ddots & \ddots & \ddots & Tr([[A^T]_{1n}]_{n-1,2}) \\
\vdots & \ddots & \ddots & \ddots & Tr([A^T]_{n1}) \\
A_{1n} & \cdots & Tr([[A^T]_{1n}]_{2,n-1}) & Tr([A^T]_{1n}) & Tr(A)
\end{bmatrix}
\equiv \alpha(A)   (24)
As can be seen, the derivative \alpha(A) also has a Toeplitz structure. Each value on the diagonal is the sum of all the diagonal values in A, and the values in the diagonals next to the main diagonal equal the sum of the diagonal next to the main diagonal in A^T. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix also is symmetric, the same derivative yields
\frac{\partial Tr(AT)}{\partial T} = \frac{\partial Tr(TA)}{\partial T} = \alpha(A) + \alpha(A)^T - \alpha(A) \circ I   (25)
3 Inverses
3.1 Basic
3.1.1 Definition
The inverse A^{-1} of a matrix A \in C^{n \times n} is defined such that
A A^{-1} = A^{-1} A = I,   (26)
where I is the n \times n identity matrix.
3.1.3 Determinant
The determinant of a matrix A \in C^{n \times n} is defined as (see [9])
\det(A) = \sum_{j=1}^{n} (-1)^{j+1} A_{1j} \det([A]_{1j}) = \sum_{j=1}^{n} A_{1j}\, \mathrm{cof}(A, 1, j),   (30)
where [A]_{1j} denotes the submatrix of A obtained by deleting row 1 and column j.
3.1.4 Construction
The inverse matrix can be constructed, using the adjoint matrix, by
A^{-1} = \frac{1}{\det(A)} \mathrm{adj}(A)   (31)
3.1.5 Condition Number
The condition number of a matrix c(A) is the ratio between the largest and the smallest singular value,
c(A) = \frac{d_+}{d_-}
The condition number can be used to measure how singular a matrix is. If the condition number is large, it indicates that the matrix is nearly singular. The condition number can also be estimated from the matrix norms. Here
c(A) = \|A\| \cdot \|A^{-1}\|,
where \|\cdot\| is a norm such as e.g. the 1-norm, the 2-norm, the \infty-norm or the Frobenius norm (see Sec 9.4 for more on matrix norms).

3.2 Exact Relations
(I + A^{-1})^{-1} = A (A + I)^{-1}
3.3 Implication on Inverses
See [22].
3.4 Approximations
(I + A)^{-1} = I - A + A^2 - A^3 + \ldots
A - A (I + A)^{-1} A \approx I - A^{-1}   if A large and symmetric
If \sigma^2 is small, then
(Q + \sigma^2 M)^{-1} \approx Q^{-1} - \sigma^2 Q^{-1} M Q^{-1}
3.6 Pseudo Inverse
3.6.1 Definition
The pseudo inverse (Moore-Penrose inverse) A^+ of a matrix A is the matrix that fulfils
I.    A A^+ A = A
II.   A^+ A A^+ = A^+
III.  A A^+ is symmetric
IV.   A^+ A is symmetric
3.6.2 Properties
Assume A^+ to be the pseudo-inverse of A, then (see [3])
(A^+)^+ = A
(A^T)^+ = (A^+)^T
(cA)^+ = (1/c) A^+
(A^T A)^+ = A^+ (A^T)^+
(A A^T)^+ = (A^T)^+ A^+
3.6.3 Construction
Assume that A has full rank, then
A n x n, square, rank(A) = n:   A^+ = A^{-1}
A n x m, broad (n < m), rank(A) = n:   A^+ = A^T (A A^T)^{-1}
A n x m, tall (n > m), rank(A) = m:   A^+ = (A^T A)^{-1} A^T
Assume A does not have full rank, i.e. A is n x m and rank(A) = r < min(n, m). The pseudo inverse A^+ can be constructed from the singular value decomposition A = U D V^T, by
A^+ = V D^+ U^T
A different way is this: there always exist two matrices C (n x r) and D (r x m) of rank r such that A = C D. Using these matrices it holds that
A^+ = D^T (D D^T)^{-1} (C^T C)^{-1} C^T
See [3].
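A small numerical illustration of the constructions (not from the cookbook), assuming NumPy; the test matrices and the 1e-10 cut-off for "non-zero" singular values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))                  # tall, full column rank (almost surely)

# Full-rank tall case: A+ = (A^T A)^{-1} A^T
Ap = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(Ap, np.linalg.pinv(A))

# Rank-deficient case: build A+ from the SVD, A+ = V D+ U^T
A2 = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank 2
U, d, Vt = np.linalg.svd(A2, full_matrices=False)
Dp = np.diag([1 / s if s > 1e-10 else 0.0 for s in d])           # invert non-zero singular values
A2p = Vt.T @ Dp @ U.T
assert np.allclose(A2p, np.linalg.pinv(A2))

# The four Moore-Penrose conditions
assert np.allclose(A2 @ A2p @ A2, A2) and np.allclose(A2p @ A2 @ A2p, A2p)
assert np.allclose((A2 @ A2p).T, A2 @ A2p) and np.allclose((A2p @ A2).T, A2p @ A2)
print("pseudo-inverse checks passed")
```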
4 Complex Matrices
4.1 Complex Derivatives
In order to differentiate an expression f (z) with respect to a complex z, the
Cauchy-Riemann equations have to be satisfied ([7]):
\frac{\partial f(z)}{\partial \Im z} = i\, \frac{\partial f(z)}{\partial \Re z}.   (35)
A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used ([16], [6]):
Generalized Complex Derivative:
\frac{d f(z)}{d z} = \frac{1}{2} \left( \frac{\partial f(z)}{\partial \Re z} - i \frac{\partial f(z)}{\partial \Im z} \right)   (36)
Conjugate Complex Derivative:
\frac{d f(z)}{d z^*} = \frac{1}{2} \left( \frac{\partial f(z)}{\partial \Re z} + i \frac{\partial f(z)}{\partial \Im z} \right)   (37)
The Complex Gradient Vector: if f is a real function of a complex vector z, then the complex gradient vector is given by
\nabla f(z) = 2 \frac{d f(z)}{d z^*} = \frac{\partial f(z)}{\partial \Re z} + i \frac{\partial f(z)}{\partial \Im z}.   (39)
Similarly, if f is a real function of a complex matrix Z, the complex gradient matrix is given by
\nabla f(Z) = 2 \frac{d f(Z)}{d Z^*} = \frac{\partial f(Z)}{\partial \Re Z} + i \frac{\partial f(Z)}{\partial \Im Z}.   (40)
These expressions can be used for gradient descent algorithms.
4.1.1 The Chain Rule for complex numbers
\frac{\partial g(u)}{\partial x} = \frac{\partial g}{\partial u} \frac{\partial u}{\partial x} + \frac{\partial g}{\partial u^*} \frac{\partial u^*}{\partial x}   (41)
Notice, if the function is analytic, the second term reduces to zero, and the expression is reduced to the normal well-known chain rule. For the matrix derivative of a scalar function g(U), the chain rule can be written the following way:
\frac{\partial g(U)}{\partial X_{ij}} = Tr\left[ \left( \frac{\partial g(U)}{\partial U} \right)^T \frac{\partial U}{\partial X_{ij}} \right] + Tr\left[ \left( \frac{\partial g(U)}{\partial U^*} \right)^T \frac{\partial U^*}{\partial X_{ij}} \right]   (42)
\frac{\partial Tr(X^*)}{\partial \Re X} = \frac{\partial Tr(X^H)}{\partial \Re X} = I   (43)
i\, \frac{\partial Tr(X^*)}{\partial \Im X} = i\, \frac{\partial Tr(X^H)}{\partial \Im X} = I   (44)
Since the two results have the same sign, the conjugate complex derivative (37) should be used.
\frac{\partial Tr(X)}{\partial \Re X} = \frac{\partial Tr(X^T)}{\partial \Re X} = I   (45)
i\, \frac{\partial Tr(X)}{\partial \Im X} = i\, \frac{\partial Tr(X^T)}{\partial \Im X} = -I   (46)
Here, the two results have different signs, and the generalized complex derivative (36) should be used. Hereby, it can be seen that (16) holds even if X is a
complex number.
\frac{\partial Tr(A X^H)}{\partial \Re X} = A   (47)
i\, \frac{\partial Tr(A X^H)}{\partial \Im X} = A   (48)
\frac{\partial Tr(A X^*)}{\partial \Re X} = A^T   (49)
i\, \frac{\partial Tr(A X^*)}{\partial \Im X} = A^T   (50)
\frac{\partial Tr(X X^H)}{\partial \Re X} = \frac{\partial Tr(X^H X)}{\partial \Re X} = 2 \Re X   (51)
i\, \frac{\partial Tr(X X^H)}{\partial \Im X} = i\, \frac{\partial Tr(X^H X)}{\partial \Im X} = i 2 \Im X   (52)
By inserting (51) and (52) in (36) and (37), it can be seen that
\frac{\partial Tr(X X^H)}{\partial X} = X^*   (53)
\frac{\partial Tr(X X^H)}{\partial X^*} = X   (54)
Since the function Tr(X X^H) is a real function of the complex matrix X, the complex gradient matrix (40) is given by
\nabla Tr(X X^H) = 2 \frac{\partial Tr(X X^H)}{\partial X^*} = 2 X   (55)
5 Decompositions
5.1 Eigenvalues and Eigenvectors
5.1.1 Definition
The eigenvectors v_i and eigenvalues \lambda_i are the ones satisfying
A v_i = \lambda_i v_i
A V = V D,   (D)_{ij} = \delta_{ij} \lambda_i,
where the columns of V are the vectors v_i.
5.1.2 General Properties
eig(AB) = eig(BA)
A is n x m   =>   at most min(n, m) distinct \lambda_i
rank(A) = r   =>   at most r non-zero \lambda_i
5.1.3 Symmetric
Assume A is symmetric, then the eigenvalues \lambda_i are real and the eigenvectors can be chosen orthonormal, i.e. V^T V = I.
5.2 Singular Value Decomposition
Any n x m matrix A can be written as
A = U D V^T,
where
U = eigenvectors of A A^T   (n x n)
D = \sqrt{\mathrm{diag}(\mathrm{eig}(A A^T))}   (n x m)
V = eigenvectors of A^T A   (m x m)
Square decomposed into squares: assume A \in R^{n \times n}. Then
A = V D U^T,
where D is diagonal with the square roots of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.
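The relations between the SVD factors and the eigen-decompositions of A A^T and A^T A can be checked numerically. A minimal sketch (not from the cookbook), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3))
U, d, Vt = np.linalg.svd(A, full_matrices=False)

# Columns of U are eigenvectors of A A^T, columns of V are eigenvectors of A^T A,
# and the singular values are the square roots of the corresponding eigenvalues.
assert np.allclose(A @ A.T @ U, U @ np.diag(d**2))
assert np.allclose(A.T @ A @ Vt.T, Vt.T @ np.diag(d**2))
assert np.allclose(U @ np.diag(d) @ Vt, A)
print("SVD relations verified")
```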
5.3 Triangular Decomposition
Cholesky decomposition: assume A is symmetric positive definite. Then
A = B^T B,
where B is a unique upper triangular matrix.
6 Statistics and Probability
6.1 Definition of Moments
6.1.1 Mean
The vector of means, m, is defined by
(m)_i = \langle x_i \rangle
6.1.2 Covariance
The matrix of covariance M is defined by
(M)_{ij} = \langle (x_i - \langle x_i \rangle)(x_j - \langle x_j \rangle) \rangle
or alternatively as
M = \langle (x - m)(x - m)^T \rangle
The matrix of third centralized moments is defined using the notation
m^{(3)}_{ijk} = \langle (x_i - \langle x_i \rangle)(x_j - \langle x_j \rangle)(x_k - \langle x_k \rangle) \rangle
as
M_3 = \left[ m^{(3)}_{::1}\ m^{(3)}_{::2}\ \ldots\ m^{(3)}_{::n} \right],
where ':' denotes all elements within the given index. M_3 can alternatively be expressed as
M_3 = \langle (x - m)(x - m)^T \otimes (x - m)^T \rangle
The matrix of fourth centralized moments is defined analogously as
M_4 = \left[ m^{(4)}_{::11}\ m^{(4)}_{::21}\ \ldots\ m^{(4)}_{::n1}\ \big|\ m^{(4)}_{::12}\ m^{(4)}_{::22}\ \ldots\ m^{(4)}_{::n2}\ \big|\ \ldots\ \big|\ m^{(4)}_{::1n}\ m^{(4)}_{::2n}\ \ldots\ m^{(4)}_{::nn} \right]
or alternatively as
M_4 = \langle (x - m)(x - m)^T \otimes (x - m)^T \otimes (x - m)^T \rangle
6.2 Expectation of Linear Combinations
E[Ax + b] = Am + b
E[Ax] = Am
E[x + b] = m+b
See [7].
Assume x to be a stochastic vector with mean m, covariance M, and vanishing third central moments (e.g. x Gaussian), then (see [7])
E\left[ (Ax + a) b^T (Cx + c)(Dx + d)^T \right] = (Am + a) b^T \left( C M D^T + (Cm + c)(Dm + d)^T \right) + \left( A M C^T + (Am + a)(Cm + c)^T \right) b (Dm + d)^T + b^T (Cm + c) \left( A M D^T - (Am + a)(Dm + d)^T \right)
6.3 Weighted Scalar Variable
Assume x to be a stochastic vector with mean m, covariance M_2 = M and central moments M_3, M_4. Let y = w^T x, then
\langle y \rangle = w^T m
\langle (y - \langle y \rangle)^2 \rangle = w^T M_2 w
\langle (y - \langle y \rangle)^3 \rangle = w^T M_3 (w \otimes w)
\langle (y - \langle y \rangle)^4 \rangle = w^T M_4 (w \otimes w \otimes w)
7 Gaussians
7.1 Basics
7.1.1 Density and normalization
The density of x \sim N(m, \Sigma) is
p(x) = \frac{1}{\sqrt{\det(2\pi\Sigma)}} \exp\left( -\frac{1}{2} (x - m)^T \Sigma^{-1} (x - m) \right)
\frac{\partial p(x)}{\partial x} = -p(x)\, \Sigma^{-1} (x - m)
\frac{\partial^2 p}{\partial x \partial x^T} = p(x) \left( \Sigma^{-1} (x - m)(x - m)^T \Sigma^{-1} - \Sigma^{-1} \right)
Marginal distribution: assume x = [x_a^T\ x_b^T]^T is Gaussian with mean m = [m_a^T\ m_b^T]^T and covariance \Sigma = \begin{bmatrix} \Sigma_a & \Sigma_c \\ \Sigma_c^T & \Sigma_b \end{bmatrix}, then
p(x_a) = N_{x_a}(m_a, \Sigma_a)
p(x_b) = N_{x_b}(m_b, \Sigma_b)
Conditional distribution: with the same partitioning as above,
p(x_a | x_b) = N_{x_a}(\hat{m}_a, \hat{\Sigma}_a),   \hat{m}_a = m_a + \Sigma_c \Sigma_b^{-1} (x_b - m_b),   \hat{\Sigma}_a = \Sigma_a - \Sigma_c \Sigma_b^{-1} \Sigma_c^T
p(x_b | x_a) = N_{x_b}(\hat{m}_b, \hat{\Sigma}_b),   \hat{m}_b = m_b + \Sigma_c^T \Sigma_a^{-1} (x_a - m_a),   \hat{\Sigma}_b = \Sigma_b - \Sigma_c^T \Sigma_a^{-1} \Sigma_c
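A small numerical illustration of the conditional formulas, not part of the cookbook; it assumes NumPy, the partition sizes are arbitrary, and the check uses the standard fact that the conditional covariance equals the inverse of the corresponding block of the precision matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
# Build a random SPD covariance and partition it into blocks a (2 dims) and b (3 dims)
L = rng.standard_normal((5, 5))
S = L @ L.T + 5 * np.eye(5)
m = rng.standard_normal(5)
ia, ib = slice(0, 2), slice(2, 5)
Sa, Sb, Sc = S[ia, ia], S[ib, ib], S[ia, ib]
ma, mb = m[ia], m[ib]

xb = rng.standard_normal(3)                       # an observed x_b
mu_cond = ma + Sc @ np.linalg.solve(Sb, xb - mb)  # m_a + Sigma_c Sigma_b^{-1} (x_b - m_b)
S_cond = Sa - Sc @ np.linalg.solve(Sb, Sc.T)      # Sigma_a - Sigma_c Sigma_b^{-1} Sigma_c^T

# Independent check: conditional covariance = inverse of the (a,a)-block of the precision
Lam = np.linalg.inv(S)
assert np.allclose(S_cond, np.linalg.inv(Lam[ia, ia]))
print("conditional mean:", mu_cond)
```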
Linear combination: assume x \sim N(m_x, \Sigma_x) and y \sim N(m_y, \Sigma_y) are independent, then
A x + B y + c \sim N(A m_x + B m_y + c,\ A \Sigma_x A^T + B \Sigma_y B^T)
Rearranging means:
N_{Ax}[m, \Sigma] = \frac{\sqrt{\det\left( 2\pi (A^T \Sigma^{-1} A)^{-1} \right)}}{\sqrt{\det(2\pi\Sigma)}}\, N_x\left[ A^{-1} m, (A^T \Sigma^{-1} A)^{-1} \right]
Sum of two squared forms (in vectors, assuming \Sigma_1 and \Sigma_2 symmetric):
-\frac{1}{2}(x - m_1)^T \Sigma_1^{-1} (x - m_1) - \frac{1}{2}(x - m_2)^T \Sigma_2^{-1} (x - m_2) = -\frac{1}{2}(x - m_c)^T \Sigma_c^{-1} (x - m_c) + C
with
\Sigma_c^{-1} = \Sigma_1^{-1} + \Sigma_2^{-1}
m_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1} (\Sigma_1^{-1} m_1 + \Sigma_2^{-1} m_2)
C = \frac{1}{2} (m_1^T \Sigma_1^{-1} + m_2^T \Sigma_2^{-1}) (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1} (\Sigma_1^{-1} m_1 + \Sigma_2^{-1} m_2) - \frac{1}{2} \left( m_1^T \Sigma_1^{-1} m_1 + m_2^T \Sigma_2^{-1} m_2 \right)
In a trace formulation (assuming \Sigma_1, \Sigma_2 symmetric):
-\frac{1}{2} Tr\left[ (X - M_1)^T \Sigma_1^{-1} (X - M_1) \right] - \frac{1}{2} Tr\left[ (X - M_2)^T \Sigma_2^{-1} (X - M_2) \right] = -\frac{1}{2} Tr\left[ (X - M_c)^T \Sigma_c^{-1} (X - M_c) \right] + C
with
\Sigma_c^{-1} = \Sigma_1^{-1} + \Sigma_2^{-1}
M_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1} (\Sigma_1^{-1} M_1 + \Sigma_2^{-1} M_2)
C = \frac{1}{2} Tr\left[ (\Sigma_1^{-1} M_1 + \Sigma_2^{-1} M_2)^T (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1} (\Sigma_1^{-1} M_1 + \Sigma_2^{-1} M_2) \right] - \frac{1}{2} Tr\left( M_1^T \Sigma_1^{-1} M_1 + M_2^T \Sigma_2^{-1} M_2 \right)
Product of Gaussian densities: let N_x(m, \Sigma) denote a density of x, then
N_x(m_1, \Sigma_1) \cdot N_x(m_2, \Sigma_2) = c_c\, N_x(m_c, \Sigma_c),
where
c_c = N_{m_1}(m_2, \Sigma_1 + \Sigma_2) = \frac{1}{\sqrt{\det(2\pi(\Sigma_1 + \Sigma_2))}} \exp\left( -\frac{1}{2} (m_1 - m_2)^T (\Sigma_1 + \Sigma_2)^{-1} (m_1 - m_2) \right)
m_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1} (\Sigma_1^{-1} m_1 + \Sigma_2^{-1} m_2)
\Sigma_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}
but note that the product is not normalized as a density of x.
7.2 Moments
7.2.1 Mean and covariance of linear forms
First and second moments. Assume x \sim N(m, \Sigma), then
E(x) = m
E(x x^T) = \Sigma + m m^T
E[x^T A x] = Tr(A \Sigma) + m^T A m
Var(x^T A x) = 2 \sigma^4 Tr(A^2) + 4 \sigma^2 m^T A^2 m   (assuming \Sigma = \sigma^2 I and A symmetric)
E[(x - m')^T A (x - m')] = (m - m')^T A (m - m') + Tr(A \Sigma)
See [7].
7.2.5 Moments
For the mixture density p(x) = \sum_k \rho_k N_x(m_k, \Sigma_k), the first and second moments are
E[x] = \sum_k \rho_k m_k
Cov(x) = \sum_k \sum_{k'} \rho_k \rho_{k'} \left( \Sigma_k + m_k m_k^T - m_k m_{k'}^T \right)
7.3 Miscellaneous
7.3.1 Whitening
Assume x \sim N(m, \Sigma), then
z = \Sigma^{-1/2} (x - m) \sim N(0, I)
x = \Sigma^{1/2} z + m \sim N(m, \Sigma)
Note that \Sigma^{1/2} means the matrix which fulfils \Sigma^{1/2} \Sigma^{1/2} = \Sigma, and that it exists and is unique since \Sigma is positive definite.
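A short illustration of whitening, not part of the cookbook; it assumes NumPy, computes the symmetric square root via an eigendecomposition, and uses an arbitrary covariance and sample size:

```python
import numpy as np

rng = np.random.default_rng(8)
L = rng.standard_normal((3, 3))
Sigma = L @ L.T + np.eye(3)            # a positive definite covariance
m = np.array([1.0, -2.0, 0.5])

w, V = np.linalg.eigh(Sigma)
Sigma_half = V @ np.diag(np.sqrt(w)) @ V.T          # Sigma^{1/2} Sigma^{1/2} = Sigma
Sigma_neg_half = V @ np.diag(1 / np.sqrt(w)) @ V.T  # its inverse
assert np.allclose(Sigma_half @ Sigma_half, Sigma)

# Whiten a large sample of x ~ N(m, Sigma) and inspect the empirical covariance of z
x = rng.multivariate_normal(m, Sigma, size=200_000)
z = (x - m) @ Sigma_neg_half.T
print("empirical cov of z (should be close to I):")
print(np.round(np.cov(z.T), 2))
```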
7.3.2 The Chi-Square connection
Assume x \sim N(m, \Sigma) and x to be n-dimensional, then
z = (x - m)^T \Sigma^{-1} (x - m) \sim \chi^2_n
7.3.3 Entropy
Entropy of a D-dimensional gaussian
H(x) = -\int N(m, \Sigma) \ln N(m, \Sigma)\, dx = \ln \sqrt{\det(2\pi\Sigma)} + \frac{D}{2}
7.4 Mixture of Gaussians
7.4.2 Derivatives
Defining p(s) = \sum_k \rho_k N_s(\mu_k, \Sigma_k), one gets
\frac{\partial \ln p(s)}{\partial \rho_j} = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)} \frac{\partial}{\partial \rho_j} \ln\left[ \rho_j N_s(\mu_j, \Sigma_j) \right] = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)} \frac{1}{\rho_j}
\frac{\partial \ln p(s)}{\partial \mu_j} = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)} \frac{\partial}{\partial \mu_j} \ln\left[ \rho_j N_s(\mu_j, \Sigma_j) \right] = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)} \Sigma_j^{-1} (s - \mu_j)
\frac{\partial \ln p(s)}{\partial \Sigma_j} = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)} \frac{\partial}{\partial \Sigma_j} \ln\left[ \rho_j N_s(\mu_j, \Sigma_j) \right] = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)} \frac{1}{2} \left( -\Sigma_j^{-T} + \Sigma_j^{-T} (s - \mu_j)(s - \mu_j)^T \Sigma_j^{-T} \right)
8 Special Matrices
8.1 Units, Permutation and Shift
8.1.1 Unit vector
Let e_i \in R^{n \times 1} be the i-th unit vector, i.e. the vector which is zero in all entries except the i-th, at which it is 1.
8.1.3 Permutations
Let P be some permutation matrix, e.g.
P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} e_2 & e_1 & e_3 \end{bmatrix} = \begin{bmatrix} e_2^T \\ e_1^T \\ e_3^T \end{bmatrix}
For permutation matrices it holds that
PPT = I
and that
A P = \begin{bmatrix} A e_2 & A e_1 & A e_3 \end{bmatrix},   P A = \begin{bmatrix} e_2^T A \\ e_1^T A \\ e_3^T A \end{bmatrix}
That is, the first is a matrix which has the columns of A but in permuted sequence, and the second is a matrix which has the rows of A but in the permuted sequence.
A related but slightly different matrix is the 'recurrent shifted' operator, defined on a 4x4 example by
\hat{L} = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
i.e. a matrix defined by (\hat{L})_{ij} = \delta_{i,j+1} + \delta_{i,1} \delta_{j,\dim(L)}. On a signal x it has the effect
(\hat{L}^n x)_t = x_{t'},   t' = [(t - n) \bmod N] + 1
That is, \hat{L} is like the shift operator L except that it 'wraps' the signal as if it was periodic and shifted (substituting the zeros with the rear end of the signal). Note that \hat{L} is invertible and orthogonal, i.e.
\hat{L}^{-1} = \hat{L}^T
8.2 The Single-entry Matrix
8.2.1 Definition
The single-entry matrix J^{ij} \in R^{n \times n} is defined as the matrix which is zero everywhere except in the entry (i, j), in which it is 1. In a 4x4 example one might have
J^{23} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
The single-entry matrix is very useful when working with derivatives of expressions involving matrices.
8.2.2 Swap and Zeros
Assume A to be n x m and J^{ij} to be m x p, then
A J^{ij} = \begin{bmatrix} 0 & 0 & \ldots & 0 & A_i & 0 & \ldots & 0 \end{bmatrix}
i.e. an n x p matrix of zeros with the i-th column of A in place of the j-th column. Assume A to be n x m and J^{ij} to be p x n, then
J^{ij} A = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ A_j \\ 0 \\ \vdots \\ 0 \end{bmatrix}
i.e. a p x m matrix of zeros with the j-th row of A in the place of the i-th row.
Tr(A J^{ji} B) = (B A)_{ij}
Tr(A J^{ij} J^{ij} B) = \mathrm{diag}(A^T B^T)_{ij}
Assume A is n x n, J^{ij} is n x m and B is m x n, then
x^T A J^{ij} B x = (A^T x x^T B^T)_{ij}
8.3 Symmetric and Antisymmetric
8.3.2 Antisymmetric
The antisymmetric matrix is also known as the skew symmetric matrix. It has
the following property from which it is defined
A = -A^T
Hereby, it can be seen that the antisymmetric matrices always have a zero diagonal. The n x n antisymmetric matrices also have the following properties:
\det(A^T) = \det(-A) = (-1)^n \det(A)
-\det(A) = \det(-A) = 0,   if n is odd
8.6 The DFT Matrix
The DFT of the vector x = [x(0), x(1), \ldots, x(N-1)]^T can be written in matrix form as
X = W_N x,   (69)
where X = [X(0), X(1), \ldots, X(N-1)]^T. The IDFT is similarly given as
x = W_N^{-1} X.   (70)
Some properties of W_N exist:
W_N^{-1} = \frac{1}{N} W_N^*   (71)
W_N W_N^* = N I   (72)
W_N^* = W_N^H   (73)
If W_N = e^{-j 2\pi / N}, then [15]
W_N^{m + N/2} = -W_N^m   (74)
Notice, the DFT matrix is a Vandermonde Matrix.
The following important relation between the circulant matrix and the discrete Fourier transform (DFT) exists:
T_C = W_N^{-1} (I \circ (W_N t)) W_N,   (75)
where t = [t_0, t_1, \ldots, t_{n-1}]^T is the first row of T_C.
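As a numerical illustration (not from the cookbook, and note that conventions differ between sources, e.g. first row versus first column of the circulant matrix and the sign of the exponent), the DFT matrix diagonalizes a circulant matrix; this sketch assumes NumPy and builds the circulant matrix from its first column:

```python
import numpy as np

N = 8
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix, W[m, k] = exp(-j 2 pi m k / N)

# Basic properties: W^{-1} = (1/N) W^*  and  W W^* = N I
assert np.allclose(np.linalg.inv(W), W.conj() / N)
assert np.allclose(W @ W.conj(), N * np.eye(N))

# The DFT matrix diagonalizes circulant matrices: W C W^{-1} = diag(fft(c)),
# where c is the first column of the circulant matrix C.
c = np.arange(1.0, N + 1)
C = np.array([np.roll(c, k) for k in range(N)]).T   # circulant with first column c
D = W @ C @ np.linalg.inv(W)
assert np.allclose(D, np.diag(np.fft.fft(c)), atol=1e-9)
print("DFT / circulant relations verified")
```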
8.7 Positive Definite and Semi-definite Matrices
8.7.1 Definitions
A matrix A is positive definite if and only if
x^T A x > 0,   \forall x \neq 0
A matrix A is positive semi-definite if and only if
x^T A x \geq 0,   \forall x
8.7.2 Eigenvalues
The following holds with respect to the eigenvalues: for symmetric A, all eigenvalues of a positive (semi-)definite matrix are positive (non-negative).
8.7.3 Trace
The following holds with respect to the trace: if A is positive (semi-)definite, then Tr(A) > 0 (Tr(A) >= 0).
8.7.4 Inverse
If A is positive definite, then A is invertible and A1 is also positive definite.
8.7.5 Diagonal
If A is positive definite, then A_{ii} > 0 for all i.
8.7.6 Decomposition I
The matrix A is positive semi-definite of rank r if and only if there exists a matrix B of rank r such that A = B B^T.
8.7.7 Decomposition II
Assume A is an n n positive semi-definite, then there exists an n r matrix
B of rank r such that BT AB = I.
8.8 Block matrices
8.8.1 Multiplication
Assuming the dimensions of the blocks match, we have
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \end{bmatrix}
8.8.2 The Determinant
The determinant can be expressed by use of the Schur complements
C_1 = A_{11} - A_{12} A_{22}^{-1} A_{21}
C_2 = A_{22} - A_{21} A_{11}^{-1} A_{12}
as
\det\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \det(A_{22}) \det(C_1) = \det(A_{11}) \det(C_2)
8.8.3 The Inverse
The inverse can be expressed by use of
C_1 = A_{11} - A_{12} A_{22}^{-1} A_{21}
C_2 = A_{22} - A_{21} A_{11}^{-1} A_{12}
as
\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} C_1^{-1} & -A_{11}^{-1} A_{12} C_2^{-1} \\ -C_2^{-1} A_{21} A_{11}^{-1} & C_2^{-1} \end{bmatrix}
= \begin{bmatrix} A_{11}^{-1} + A_{11}^{-1} A_{12} C_2^{-1} A_{21} A_{11}^{-1} & -C_1^{-1} A_{12} A_{22}^{-1} \\ -A_{22}^{-1} A_{21} C_1^{-1} & A_{22}^{-1} + A_{22}^{-1} A_{21} C_1^{-1} A_{12} A_{22}^{-1} \end{bmatrix}
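A numerical check of the block inverse and determinant identities (not from the cookbook), assuming NumPy; the block sizes and the diagonal shift used to keep the blocks invertible are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(9)
n1, n2 = 3, 2
M = rng.standard_normal((n1 + n2, n1 + n2)) + 5 * np.eye(n1 + n2)
A11, A12 = M[:n1, :n1], M[:n1, n1:]
A21, A22 = M[n1:, :n1], M[n1:, n1:]

C1 = A11 - A12 @ np.linalg.solve(A22, A21)   # Schur complement of A22
C2 = A22 - A21 @ np.linalg.solve(A11, A12)   # Schur complement of A11

iC1, iC2 = np.linalg.inv(C1), np.linalg.inv(C2)
iA11 = np.linalg.inv(A11)
block_inv = np.block([[iC1,                -iA11 @ A12 @ iC2],
                      [-iC2 @ A21 @ iA11,   iC2            ]])
assert np.allclose(block_inv, np.linalg.inv(M))

# Determinant via the Schur complement
assert np.isclose(np.linalg.det(M), np.linalg.det(A22) * np.linalg.det(C1))
print("block inverse and determinant identities verified")
```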
9 Functions and Operators
\sin(A) = \sum_{n=0}^{\infty} \frac{(-1)^n A^{2n+1}}{(2n+1)!} = A - \frac{1}{3!} A^3 + \frac{1}{5!} A^5 - \ldots
\cos(A) = \sum_{n=0}^{\infty} \frac{(-1)^n A^{2n}}{(2n)!} = I - \frac{1}{2!} A^2 + \frac{1}{4!} A^4 - \ldots
9.2 Kronecker and Vec Operator
9.2.1 The Kronecker Product
For the Kronecker product \otimes the following holds:
A \otimes (B + C) = A \otimes B + A \otimes C
A \otimes B \neq B \otimes A
A \otimes (B \otimes C) = (A \otimes B) \otimes C
(\alpha_A A \otimes \alpha_B B) = \alpha_A \alpha_B (A \otimes B)
(A \otimes B)^T = A^T \otimes B^T
(A \otimes B)(C \otimes D) = A C \otimes B D
(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}
rank(A \otimes B) = rank(A)\, rank(B)
Tr(A \otimes B) = Tr(A)\, Tr(B)
\det(A \otimes B) = \det(A)^{rank(B)} \det(B)^{rank(A)}
9.3 Solutions to Systems of Equations
In the following, consider the linear system
A x = b
If A is square and invertible, then
A x = b   =>   x = A^{-1} b
If A is tall (over-determined) with full column rank, the least squares solution is
A x = b   =>   x = (A^T A)^{-1} A^T b = A^+ b
If A is broad (under-determined) with full row rank, then
A x = b   =>   x_{min} = A^T (A A^T)^{-1} b
The equation has many solutions x, but x_{min} is the solution which minimizes ||Ax - b||_2 and also the solution with the smallest norm ||x||_2. The same holds for a matrix version: assume A is n x m, X is m x n and B is n x n, then
A X = B   =>   X_{min} = A^+ B
The equation has many solutions X, but X_{min} is the solution which minimizes ||AX - B||_2 and also the solution with the smallest norm ||X||_2. See [3].
Similar but different: assume A is square n x n and the matrices B_0, B_1 are n x N, where N > n. Then, if B_0 has maximal rank,
A_{min} = B_1 B_0^T (B_0 B_0^T)^{-1},
where A_{min} denotes the matrix which is optimal in a least squares sense. An interpretation is that A is the linear approximation which maps the column vectors of B_0 into the column vectors of B_1.
A x = 0,\ \forall x   =>   A = 0
x^T A x = 0\ \forall x,\ A symmetric   =>   A = 0
A X + X B = C   =>   vec(X) = (I \otimes A + B^T \otimes I)^{-1} vec(C)
See Sec 9.2.1 and 9.2.2 for details on the Kronecker product and the vec operator.
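A minimal example of solving this Sylvester-type equation via the Kronecker/vec identity (not from the cookbook), assuming NumPy and a column-stacking (Fortran-order) vec:

```python
import numpy as np

rng = np.random.default_rng(10)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# AX + XB = C  =>  vec(X) = (I kron A + B^T kron I)^{-1} vec(C), with column-major vec
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
vecX = np.linalg.solve(K, C.reshape(-1, order="F"))
X = vecX.reshape(n, m, order="F")

assert np.allclose(A @ X + X @ B, C)
print("Sylvester equation solved via the Kronecker/vec identity")
```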
\sum_n A_n X B_n = C   =>   vec(X) = \left( \sum_n B_n^T \otimes A_n \right)^{-1} vec(C)
See Sec 9.2.1 and 9.2.2 for details on the Kronecker product and the vec operator.
9.4 Matrix Norms
9.4.1 Definitions
A matrix norm is a mapping which fulfils
||A|| \geq 0,   ||A|| = 0 \Leftrightarrow A = 0
||cA|| = |c|\, ||A||,   c \in R
||A + B|| \leq ||A|| + ||B||
9.4.2 Examples
||A||_1 = \max_j \sum_i |A_{ij}|
||A||_2 = \sqrt{\max\ \mathrm{eig}(A^T A)}
||A||_p = \max_{||x||_p = 1} ||A x||_p
||A||_\infty = \max_i \sum_j |A_{ij}|
||A||_F = \sqrt{\sum_{ij} |A_{ij}|^2} = \sqrt{Tr(A A^H)}   (Frobenius)
||A||_{max} = \max_{ij} |A_{ij}|
||A||_{KF} = ||\mathrm{sing}(A)||_1   (Ky Fan)
where sing(A) is the vector of singular values of the matrix A.
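These norms are all available directly or easily computed with NumPy. A short illustration (not part of the cookbook; the test matrix is arbitrary), which also checks one of the inequalities tabulated below:

```python
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((4, 6))

norm_1   = np.abs(A).sum(axis=0).max()                 # max column sum
norm_inf = np.abs(A).sum(axis=1).max()                 # max row sum
norm_2   = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # largest singular value
norm_F   = np.sqrt((np.abs(A) ** 2).sum())
norm_max = np.abs(A).max()
norm_KF  = np.linalg.svd(A, compute_uv=False).sum()    # Ky Fan / nuclear norm

assert np.isclose(norm_1,   np.linalg.norm(A, 1))
assert np.isclose(norm_inf, np.linalg.norm(A, np.inf))
assert np.isclose(norm_2,   np.linalg.norm(A, 2))
assert np.isclose(norm_F,   np.linalg.norm(A, 'fro'))
assert np.isclose(norm_KF,  np.linalg.norm(A, 'nuc'))

# One tabulated inequality: ||A||_2 <= sqrt(m) ||A||_inf for an m x n matrix
m, n = A.shape
assert norm_2 <= np.sqrt(m) * norm_inf + 1e-12
print("matrix norm examples verified")
```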
9.4.3 Inequalities
E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in a table as below, assuming A is an m x n matrix and d = min{m, n}:

              ||A||_max   ||A||_1     ||A||_inf   ||A||_2     ||A||_F    ||A||_KF
||A||_max                 1           1           1           1          1
||A||_1       m                       m           sqrt(m)     sqrt(m)    sqrt(m)
||A||_inf     n           n                       sqrt(n)     sqrt(n)    sqrt(n)
||A||_2       sqrt(mn)    sqrt(n)     sqrt(m)                 1          1
||A||_F       sqrt(mn)    sqrt(n)     sqrt(m)     sqrt(d)                1
||A||_KF      sqrt(mnd)   sqrt(nd)    sqrt(md)    d           sqrt(d)

which are to be read as, e.g.,
||A||_2 \leq \sqrt{m}\, ||A||_\infty
9.5 Rank
9.5.1 Sylvester's Inequality
If A is m x n and B is n x r, then
rank(A) + rank(B) - n \leq rank(AB) \leq \min\{rank(A), rank(B)\}
See [8].
9.7 Miscellaneous
For any A it holds that
rank(A) = rank(A^T) = rank(A A^T) = rank(A^T A)
It holds that: A is positive definite if and only if there exists an invertible matrix B such that A = B B^T.
A One-dimensional Results
A.1 Gaussian
A.1.1 Density
p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
A.1.2 Normalization
\int e^{-\frac{(s - \mu)^2}{2\sigma^2}}\, ds = \sqrt{2\pi\sigma^2}
\int e^{-(a x^2 + b x + c)}\, dx = \sqrt{\frac{\pi}{a}} \exp\left( \frac{b^2 - 4 a c}{4 a} \right)
\int e^{c_2 x^2 + c_1 x + c_0}\, dx = \sqrt{\frac{\pi}{-c_2}} \exp\left( \frac{c_1^2 - 4 c_2 c_0}{-4 c_2} \right)
A.1.3 Derivatives
\frac{\partial p(x)}{\partial \mu} = p(x) \frac{(x - \mu)}{\sigma^2}
\frac{\partial \ln p(x)}{\partial \mu} = \frac{(x - \mu)}{\sigma^2}
\frac{\partial p(x)}{\partial \sigma} = p(x) \frac{1}{\sigma} \left[ \frac{(x - \mu)^2}{\sigma^2} - 1 \right]
\frac{\partial \ln p(x)}{\partial \sigma} = \frac{1}{\sigma} \left[ \frac{(x - \mu)^2}{\sigma^2} - 1 \right]
A.1.4 Completing the Squares
c_2 x^2 + c_1 x + c_0 = -a(x - b)^2 + w
with
a = -c_2,   b = -\frac{1}{2}\frac{c_1}{c_2},   w = -\frac{1}{4}\frac{c_1^2}{c_2} + c_0
or
c_2 x^2 + c_1 x + c_0 = -\frac{1}{2\sigma^2}(x - \mu)^2 + d
with
\mu = \frac{-c_1}{2 c_2},   \sigma^2 = \frac{-1}{2 c_2},   d = c_0 - \frac{c_1^2}{4 c_2}
A.1.5 Moments
If the density is expressed by
p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(s - \mu)^2}{2\sigma^2} \right)   or   p(x) = C \exp(c_2 x^2 + c_1 x),
then from the un-centralized moments one can derive other entities like
\langle x^2 \rangle - \langle x \rangle^2 = \sigma^2 = \frac{-1}{2 c_2}
\langle x^3 \rangle - \langle x^2 \rangle \langle x \rangle = 2 \sigma^2 \mu = \frac{2 c_1}{(2 c_2)^2}
\langle x^4 \rangle - \langle x^2 \rangle^2 = 2 \sigma^4 + 4 \mu^2 \sigma^2 = \frac{2}{(2 c_2)^2} \left[ 1 - \frac{c_1^2}{c_2} \right]
A.2 One Dimensional Mixture of Gaussians
A.2.1 Density
The density of a one-dimensional mixture of K Gaussians is
p(s) = \sum_k^K \frac{\rho_k}{\sqrt{2\pi\sigma_k^2}} \exp\left( -\frac{(s - \mu_k)^2}{2 \sigma_k^2} \right)
A.2.2 Moments
A useful fact of MoG is that
\langle x^n \rangle = \sum_k \rho_k \langle x^n \rangle_k
where \langle \cdot \rangle_k denotes average with respect to the k-th component. We can calculate the first four moments from the densities
p(x) = \sum_k \rho_k \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\left( -\frac{(x - \mu_k)^2}{2\sigma_k^2} \right)
p(x) = \sum_k \rho_k C_k \exp\left( c_{k2} x^2 + c_{k1} x \right)
as
\langle x \rangle = \sum_k \rho_k \mu_k = \sum_k \rho_k \left[ \frac{-c_{k1}}{2 c_{k2}} \right]
\langle x^2 \rangle = \sum_k \rho_k (\sigma_k^2 + \mu_k^2) = \sum_k \rho_k \left[ \frac{-1}{2 c_{k2}} + \left( \frac{-c_{k1}}{2 c_{k2}} \right)^2 \right]
\langle x^3 \rangle = \sum_k \rho_k (3 \sigma_k^2 \mu_k + \mu_k^3) = \sum_k \rho_k \left[ \frac{c_{k1}}{(2 c_{k2})^2} \left( 3 - \frac{c_{k1}^2}{2 c_{k2}} \right) \right]
\langle x^4 \rangle = \sum_k \rho_k (\mu_k^4 + 6 \mu_k^2 \sigma_k^2 + 3 \sigma_k^4) = \sum_k \rho_k \left[ \left( \frac{1}{2 c_{k2}} \right)^2 \left( \left( \frac{c_{k1}^2}{2 c_{k2}} \right)^2 - 6 \frac{c_{k1}^2}{2 c_{k2}} + 3 \right) \right]
From the un-centralized moments one can derive other entities like
\langle x^2 \rangle - \langle x \rangle^2 = \sum_{k,k'} \rho_k \rho_{k'} \left[ \sigma_k^2 + \mu_k^2 - \mu_k \mu_{k'} \right]
\langle x^3 \rangle - \langle x^2 \rangle \langle x \rangle = \sum_{k,k'} \rho_k \rho_{k'} \left[ 3 \sigma_k^2 \mu_k + \mu_k^3 - (\sigma_k^2 + \mu_k^2) \mu_{k'} \right]
\langle x^4 \rangle - \langle x^2 \rangle^2 = \sum_{k,k'} \rho_k \rho_{k'} \left[ \mu_k^4 + 6 \mu_k^2 \sigma_k^2 + 3 \sigma_k^4 - (\sigma_k^2 + \mu_k^2)(\sigma_{k'}^2 + \mu_{k'}^2) \right]
A.2.3 Derivatives
Defining p(s) = \sum_k \rho_k N_s(\mu_k, \sigma_k^2), we get for a parameter \theta_j of the j-th component
\frac{\partial \ln p(s)}{\partial \theta_j} = \frac{\rho_j N_s(\mu_j, \sigma_j^2)}{\sum_k \rho_k N_s(\mu_k, \sigma_k^2)} \frac{\partial \ln(\rho_j N_s(\mu_j, \sigma_j^2))}{\partial \theta_j}
that is,
\frac{\partial \ln p(s)}{\partial \rho_j} = \frac{\rho_j N_s(\mu_j, \sigma_j^2)}{\sum_k \rho_k N_s(\mu_k, \sigma_k^2)} \frac{1}{\rho_j}
\frac{\partial \ln p(s)}{\partial \mu_j} = \frac{\rho_j N_s(\mu_j, \sigma_j^2)}{\sum_k \rho_k N_s(\mu_k, \sigma_k^2)} \frac{(s - \mu_j)}{\sigma_j^2}
\frac{\partial \ln p(s)}{\partial \sigma_j} = \frac{\rho_j N_s(\mu_j, \sigma_j^2)}{\sum_k \rho_k N_s(\mu_k, \sigma_k^2)} \frac{1}{\sigma_j} \left[ \frac{(s - \mu_j)^2}{\sigma_j^2} - 1 \right]
If the mixture weights are parametrized as \rho_j = e^{r_j} / \sum_k e^{r_k} (so that they stay normalized), the chain rule gives
\frac{\partial \ln p(s)}{\partial r_j} = \sum_l \frac{\partial \ln p(s)}{\partial \rho_l} \frac{\partial \rho_l}{\partial r_j},   where   \frac{\partial \rho_l}{\partial r_j} = \rho_l (\delta_{lj} - \rho_j)

B Proofs and Details
B.1 Misc Proofs
For the derivative of a matrix power we have
\frac{\partial (X^n)_{kl}}{\partial X_{ij}} = \frac{\partial}{\partial X_{ij}} \sum_{u_1, \ldots, u_{n-1}} X_{k, u_1} X_{u_1, u_2} \cdots X_{u_{n-1}, l}
= \sum_{u_1, \ldots, u_{n-1}} \left( \delta_{k,i} \delta_{u_1,j} X_{u_1, u_2} \cdots X_{u_{n-1}, l} + X_{k, u_1} \delta_{u_1,i} \delta_{u_2,j} \cdots X_{u_{n-1}, l} + \ldots + X_{k, u_1} X_{u_1, u_2} \cdots \delta_{u_{n-1},i} \delta_{l,j} \right)
= \sum_{r=0}^{n-1} (X^r)_{ki} (X^{n-1-r})_{jl}
= \sum_{r=0}^{n-1} (X^r J^{ij} X^{n-1-r})_{kl}
Using the properties of the single-entry matrix found in Sec. 8.2.4, the result follows easily.
Through the calculations, (16) and (47) were used. In addition, by use of (48), the derivative is found with respect to the imaginary part of X. Notice, for real X, A, the sum of (56) and (57) is reduced to (13).
References
[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialekvationer. Studentlitteratur, 1992.
[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks, 16(9):1311-1323, November 2003.
[3] S. Barnett. Matrices. Methods and Applications. Oxford Applied Mathematics and Computing Science Series. Clarendon Press, 1990.
[4] Christopher Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[5] Robert J. Boik. Lecture notes: Statistics 550. Online, April 22, 2002. Notes.
[6] D. H. Brandwood. A complex gradient operator and its application in adaptive array theory. IEE Proceedings, 130(1):11-16, February 1983. Parts F and H.
[7] M. Brookes. Matrix Reference Manual, 2004. Website May 20, 2004.
[8] Mads Dyrholm. Some matrix results, 2004. Website August 23, 2004.
[9] Gene H. Golub and Charles F. van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 3rd edition, 1996.
[10] Robert M. Gray. Toeplitz and circulant matrices: A review. Technical report, Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, California 94305, August 2002.
[11] Simon Haykin. Adaptive Filter Theory. Prentice Hall, Upper Saddle River, NJ, 4th edition, 2002.
[12] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[13] Thomas P. Minka. Old and new matrix algebra useful for statistics, December 2000. Notes.
[14] L. Parra and C. Spence. Convolutive blind separation of non-stationary sources. In IEEE Transactions Speech and Audio Processing, pages 320-327, May 2000.
[15] John G. Proakis and Dimitris G. Manolakis. Digital Signal Processing. Prentice-Hall, 1996.
[16] Laurent Schwartz. Cours d'Analyse, volume II. Hermann, Paris, 1967. As referenced in [11].
[17] Shayle R. Searle. Matrix Algebra Useful for Statistics. John Wiley and Sons, 1982.
[18] G. Seber and A. Lee. Linear Regression Analysis. John Wiley and Sons, 2002.
[19] S. M. Selby. Standard Mathematical Tables. CRC Press, 1974.