The Matrix Cookbook
[ http://matrixcookbook.com ]
Introduction
What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at [email protected].
Contents

1 Basics
    1.1 Trace
    1.2 Determinant
    1.3 The Special Case 2x2
2 Derivatives
    2.1 Derivatives of a Determinant
    2.2 Derivatives of an Inverse
    2.3 Derivatives of Eigenvalues
    2.4 Derivatives of Matrices, Vectors and Scalar Forms
    2.5 Derivatives of Traces
    2.6 Derivatives of vector norms
    2.7 Derivatives of matrix norms
    2.8 Derivatives of Structured Matrices
3 Inverses
    3.1 Basic
    3.2 Exact Relations
    3.3 Implication on Inverses
    3.4 Approximations
    3.5 Generalized Inverse
    3.6 Pseudo Inverse
4 Complex Matrices
    4.1 Complex Derivatives
    4.2 Higher order and non-linear derivatives
    4.3 Inverse of complex sum
7 Multivariate Distributions
    7.1 Cauchy
    7.2 Dirichlet
    7.3 Normal
    7.4 Normal-Inverse Gamma
    7.5 Gaussian
    7.6 Multinomial
    7.7 Student's t
    7.8 Wishart
    7.9 Wishart, Inverse
8 Gaussians
    8.1 Basics
    8.2 Moments
    8.3 Miscellaneous
    8.4 Mixture of Gaussians
9 Special Matrices
    9.1 Block matrices
    9.2 Discrete Fourier Transform Matrix, The
    9.3 Hermitian Matrices and skew-Hermitian
    9.4 Idempotent Matrices
    9.5 Orthogonal matrices
    9.6 Positive Definite and Semi-definite Matrices
    9.7 Singleentry Matrix, The
    9.8 Symmetric, Skew-symmetric/Antisymmetric
    9.9 Toeplitz Matrices
    9.10 Transition matrices
    9.11 Units, Permutation and Shift
    9.12 Vandermonde Matrices
A One-dimensional Results
    A.1 Gaussian
    A.2 One Dimensional Mixture of Gaussians
Notation and Nomenclature

A           Matrix
A_ij        Matrix indexed for some purpose
A_i         Matrix indexed for some purpose
A^ij        Matrix indexed for some purpose
A^n         Matrix indexed for some purpose or the n.th power of a square matrix
A^{-1}      The inverse matrix of the matrix A
A^+         The pseudo inverse matrix of the matrix A (see Sec. 3.6)
A^{1/2}     The square root of a matrix (if unique), not elementwise
(A)_ij      The (i, j).th entry of the matrix A
A_ij        The (i, j).th entry of the matrix A
[A]_ij      The ij-submatrix, i.e. A with the i.th row and j.th column deleted
a           Vector (column-vector)
a_i         Vector indexed for some purpose
a_i         The i.th element of the vector a
a           Scalar
det(A)      Determinant of A
Tr(A)       Trace of the matrix A
diag(A)     Diagonal matrix of the matrix A, i.e. (diag(A))_ij = δ_ij A_ij
eig(A)      Eigenvalues of the matrix A
vec(A)      The vector-version of the matrix A (see Sec. 10.2.2)
sup         Supremum of a set
||A||       Matrix norm (subscript if any denotes what norm)
A^T         Transposed matrix
A^{-T}      The inverse of the transposed and vice versa, A^{-T} = (A^{-1})^T = (A^T)^{-1}
A^*         Complex conjugated matrix
A^H         Transposed and complex conjugated matrix (Hermitian)
1 Basics
    (AB)^{-1} = B^{-1} A^{-1}                        (1)
    (ABC...)^{-1} = ...C^{-1} B^{-1} A^{-1}          (2)
    (A^T)^{-1} = (A^{-1})^T                          (3)
    (A + B)^T = A^T + B^T                            (4)
    (AB)^T = B^T A^T                                 (5)
    (ABC...)^T = ...C^T B^T A^T                      (6)
    (A^H)^{-1} = (A^{-1})^H                          (7)
    (A + B)^H = A^H + B^H                            (8)
    (AB)^H = B^H A^H                                 (9)
    (ABC...)^H = ...C^H B^H A^H                      (10)
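As a quick numerical sanity check of (1) and (5), a small NumPy sketch (random test matrices, sizes chosen arbitrarily) could be:

import numpy as np
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
# (1): the inverse of a product reverses the order of the factors
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
# (5): the transpose of a product reverses the order of the factors
assert np.allclose((A @ B).T, B.T @ A.T)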
1.1 Trace
    Tr(A) = Σ_i A_ii                                 (11)
    Tr(A) = Σ_i λ_i,    λ_i = eig(A)                 (12)
    Tr(A) = Tr(A^T)                                  (13)
    Tr(AB) = Tr(BA)                                  (14)
    Tr(A + B) = Tr(A) + Tr(B)                        (15)
    Tr(ABC) = Tr(BCA) = Tr(CAB)                      (16)
    a^T a = Tr(a a^T)                                (17)
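The trace identities (12), (14) and (16) can be checked numerically with a short NumPy sketch (random matrices, sizes arbitrary):

import numpy as np
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
C = rng.standard_normal((5, 5))
# (12): the trace equals the sum of the eigenvalues
assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)
# (14), (16): the trace is invariant under cyclic permutations of the factors
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))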
1.2 Determinant
Let A be an n × n matrix.

    det(A) = Π_i λ_i,    λ_i = eig(A)                (18)
    det(cA) = c^n det(A),    if A ∈ R^{n×n}          (19)
    det(A^T) = det(A)                                (20)
    det(AB) = det(A) det(B)                          (21)
    det(A^{-1}) = 1 / det(A)                         (22)
    det(A^n) = det(A)^n                              (23)
    det(I + u v^T) = 1 + u^T v                       (24)

For n = 2:
    det(I + A) = 1 + det(A) + Tr(A)                  (25)
For n = 3:
    det(I + A) = 1 + det(A) + Tr(A) + (1/2)Tr(A)^2 − (1/2)Tr(A^2)    (26)
For n = 4:
    det(I + A) = 1 + det(A) + Tr(A) + (1/2)Tr(A)^2 − (1/2)Tr(A^2)
                 + (1/6)Tr(A)^3 − (1/2)Tr(A)Tr(A^2) + (1/3)Tr(A^3)    (27)

For small ε, the following approximation holds
    det(I + εA) ≈ 1 + det(A) + εTr(A) + (1/2)ε^2 Tr(A)^2 − (1/2)ε^2 Tr(A^2)    (28)
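The exact n = 3 expansion (26) is easy to verify numerically; a small NumPy sketch (random 3 × 3 test matrix) could be:

import numpy as np
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
lhs = np.linalg.det(np.eye(3) + A)
rhs = 1 + np.linalg.det(A) + np.trace(A) + 0.5 * np.trace(A) ** 2 - 0.5 * np.trace(A @ A)
assert np.isclose(lhs, rhs)   # (26) holds exactly for 3x3 matrices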
2 Derivatives
This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See Sec. 2.8 for differentiation of structured matrices. The basic assumptions can be written in a formula as

    ∂X_kl / ∂X_ij = δ_ik δ_lj                        (32)
The following rules are general and very useful when deriving the differential of an expression ([19]):

    ∂A = 0                    (A is a constant)      (33)
    ∂(αX) = α ∂X                                     (34)
    ∂(X + Y) = ∂X + ∂Y                               (35)
    ∂(Tr(X)) = Tr(∂X)                                (36)
    ∂(XY) = (∂X)Y + X(∂Y)                            (37)
    ∂(X ∘ Y) = (∂X) ∘ Y + X ∘ (∂Y)                   (38)
    ∂(X ⊗ Y) = (∂X) ⊗ Y + X ⊗ (∂Y)                   (39)
    ∂(X^{-1}) = −X^{-1}(∂X)X^{-1}                    (40)
    ∂(det(X)) = Tr(adj(X) ∂X)                        (41)
    ∂(det(X)) = det(X) Tr(X^{-1} ∂X)                 (42)
    ∂(ln(det(X))) = Tr(X^{-1} ∂X)                    (43)
    ∂X^T = (∂X)^T                                    (44)
    ∂X^H = (∂X)^H                                    (45)
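Rule (42)-(43) can be checked with a finite perturbation; a small NumPy sketch (the test matrix is shifted by a multiple of the identity only to keep it well conditioned) could be:

import numpy as np
rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4)) + 5 * np.eye(4)     # a well-conditioned test matrix
dX = 1e-6 * rng.standard_normal((4, 4))             # a small perturbation
# (43): d ln|det(X)| = Tr(X^{-1} dX) to first order
lhs = np.linalg.slogdet(X + dX)[1] - np.linalg.slogdet(X)[1]
rhs = np.trace(np.linalg.inv(X) @ dX)
assert np.isclose(lhs, rhs, rtol=1e-4)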
2.1 Derivatives of a Determinant

    ∂det(X)/∂X = det(X)(X^{-1})^T                                     (49)
    Σ_k [∂det(X)/∂X_ik] X_jk = δ_ij det(X)                            (50)
    ∂det(AXB)/∂X = det(AXB)(X^{-1})^T = det(AXB)(X^T)^{-1}            (51)
    ∂det(X^T AX)/∂X = 2 det(X^T AX) X^{-T}                            (52)

If X is not square but A is symmetric, then
    ∂det(X^T AX)/∂X = 2 det(X^T AX) AX(X^T AX)^{-1}                   (53)

If X is not square and A is not symmetric, then
    ∂det(X^T AX)/∂X = det(X^T AX)(AX(X^T AX)^{-1} + A^T X(X^T A^T X)^{-1})    (54)

    ∂ln|det(X^T X)|/∂X = 2(X^+)^T                                     (55)
    ∂ln|det(X^T X)|/∂X^+ = −2X^T                                      (56)
    ∂ln|det(X)|/∂X = (X^{-1})^T = (X^T)^{-1}                          (57)
    ∂det(X^k)/∂X = k det(X^k) X^{-T}                                  (58)

2.2 Derivatives of an Inverse

    ∂Y^{-1}/∂x = −Y^{-1} (∂Y/∂x) Y^{-1}                               (59)
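Equation (59) lends itself to a finite-difference check; a small NumPy sketch (Y depends on a single scalar x, with arbitrary test matrices) could be:

import numpy as np
rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)     # invertible base matrix
B = rng.standard_normal((3, 3))
Y = lambda x: A + x * B                             # Y as a function of the scalar x
h = 1e-6
numeric = (np.linalg.inv(Y(h)) - np.linalg.inv(Y(-h))) / (2 * h)   # central difference at x = 0
analytic = -np.linalg.inv(Y(0.0)) @ B @ np.linalg.inv(Y(0.0))      # right-hand side of (59)
assert np.allclose(numeric, analytic, atol=1e-5)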
2.4 Derivatives of Matrices, Vectors and Scalar Forms

    ∂x^T a/∂x = ∂a^T x/∂x = a                        (69)
    ∂a^T Xb/∂X = ab^T                                (70)
    ∂a^T X^T b/∂X = ba^T                             (71)
    ∂a^T Xa/∂X = ∂a^T X^T a/∂X = aa^T                (72)
    ∂X/∂X_ij = J^{ij}                                (73)
    ∂(XA)_ij/∂X_mn = δ_im (A)_nj = (J^{mn} A)_ij     (74)
    ∂(X^T A)_ij/∂X_mn = δ_in (A)_mj = (J^{nm} A)_ij  (75)
    ∂/∂X_ij Σ_klmn X_kl X_mn = 2 Σ_kl X_kl                            (76)
    ∂b^T X^T Xc/∂X = X(bc^T + cb^T)                                   (77)
    ∂(Bx + b)^T C(Dx + d)/∂x = B^T C(Dx + d) + D^T C^T (Bx + b)       (78)
    ∂(X^T BX)_kl/∂X_ij = δ_lj (X^T B)_ki + δ_kj (BX)_il               (79)
    ∂(X^T BX)/∂X_ij = X^T B J^{ij} + J^{ji} BX,    (J^{ij})_kl = δ_ik δ_jl    (80)

See Sec 9.7 for useful properties of the single-entry matrix J^{ij}.

    ∂x^T Bx/∂x = (B + B^T)x                                           (81)
    ∂b^T X^T DXc/∂X = D^T Xbc^T + DXcb^T                              (82)
    ∂/∂X (Xb + c)^T D(Xb + c) = (D + D^T)(Xb + c)b^T                  (83)

Assume W is symmetric, then
    ∂/∂s (x − As)^T W(x − As) = −2A^T W(x − As)                       (84)
    ∂/∂x (x − s)^T W(x − s) = 2W(x − s)                               (85)
    ∂/∂s (x − s)^T W(x − s) = −2W(x − s)                              (86)
    ∂/∂x (x − As)^T W(x − As) = 2W(x − As)                            (87)
    ∂/∂A (x − As)^T W(x − As) = −2W(x − As)s^T                        (88)

As a case with complex values the following holds
    ∂(a − x^H b)^2/∂x = −2b(a − x^H b)^*                              (89)

This formula is also known from the LMS algorithm [14].
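The quadratic-form gradient (81) is easily verified by central differences; a small NumPy sketch (random B and x) could be:

import numpy as np
rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
f = lambda v: v @ B @ v                             # scalar form x^T B x
h = 1e-6
numeric = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(4)])
analytic = (B + B.T) @ x                            # right-hand side of (81)
assert np.allclose(numeric, analytic, atol=1e-6)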
    ∂/∂X a^T (X^n)^T X^n b = Σ_{r=0}^{n−1} [ X^{n−1−r} a b^T (X^n)^T X^r
                                             + (X^r)^T X^n a b^T (X^{n−1−r})^T ]    (92)

    ∂/∂x [ (Ax)^T (Ax) / ((Bx)^T (Bx)) ] = ∂/∂x [ x^T A^T Ax / (x^T B^T Bx) ]       (94)
        = 2 A^T Ax / (x^T B^T Bx) − 2 (x^T A^T Ax) B^T Bx / (x^T B^T Bx)^2          (95)
2.5 Derivatives of Traces

    ∂Tr(X)/∂X = I                                    (99)
    ∂Tr(XA)/∂X = A^T                                 (100)
    ∂Tr(AXB)/∂X = A^T B^T                            (101)
    ∂Tr(AX^T B)/∂X = BA                              (102)
    ∂Tr(X^T A)/∂X = A                                (103)
    ∂Tr(AX^T)/∂X = A                                 (104)
    ∂Tr(A ⊗ X)/∂X = Tr(A) I                          (105)
    ∂Tr(X^2)/∂X = 2X^T                               (106)
    ∂Tr(X^2 B)/∂X = (XB + BX)^T                      (107)
    ∂Tr(X^T BX)/∂X = BX + B^T X                      (108)
    ∂Tr(BXX^T)/∂X = BX + B^T X                       (109)
    ∂Tr(XX^T B)/∂X = BX + B^T X                      (110)
    ∂Tr(XBX^T)/∂X = XB^T + XB                        (111)
    ∂Tr(BX^T X)/∂X = XB^T + XB                       (112)
    ∂Tr(X^T XB)/∂X = XB^T + XB                       (113)
    ∂Tr(AXBX)/∂X = A^T X^T B^T + B^T X^T A^T         (114)
    ∂Tr(X^T X)/∂X = ∂Tr(XX^T)/∂X = 2X                (115)
    ∂Tr(B^T X^T CXB)/∂X = C^T XBB^T + CXBB^T         (116)
    ∂Tr(X^T BXC)/∂X = BXC + B^T XC^T                 (117)
    ∂Tr(AXBX^T C)/∂X = A^T C^T XB^T + CAXB           (118)
    ∂Tr[(AXB + C)(AXB + C)^T]/∂X = 2A^T (AXB + C)B^T (119)
    ∂Tr(X ⊗ X)/∂X = ∂[Tr(X)Tr(X)]/∂X = 2Tr(X) I      (120)
See [7].
    ∂Tr(X^k)/∂X = k(X^{k−1})^T                                        (121)
    ∂Tr(AX^k)/∂X = Σ_{r=0}^{k−1} (X^r A X^{k−r−1})^T                  (122)
    ∂Tr(B^T X^T CXX^T CXB)/∂X = CXX^T CXBB^T + C^T XBB^T X^T C^T X
                                + CXBB^T X^T CX + C^T XX^T C^T XBB^T  (123)
2.5.4 Other

    ∂Tr(AX^{-1}B)/∂X = −(X^{-1}BAX^{-1})^T = −X^{-T}A^T B^T X^{-T}    (124)

Assume B and C to be symmetric, then
    ∂Tr[(X^T CX)^{-1} A]/∂X = −(CX(X^T CX)^{-1})(A + A^T)(X^T CX)^{-1}              (125)
    ∂Tr[(X^T CX)^{-1}(X^T BX)]/∂X = −2CX(X^T CX)^{-1} X^T BX(X^T CX)^{-1}
                                    + 2BX(X^T CX)^{-1}                              (126)
    ∂Tr[(A + X^T CX)^{-1}(X^T BX)]/∂X = −2CX(A + X^T CX)^{-1} X^T BX(A + X^T CX)^{-1}
                                        + 2BX(A + X^T CX)^{-1}                      (127)

See [7].

    ∂Tr(sin(X))/∂X = cos(X)^T                                         (128)
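The first-order trace derivative (100) can be checked entry by entry with finite differences; a small NumPy sketch (random test matrices) could be:

import numpy as np
rng = np.random.default_rng(6)
X = rng.standard_normal((3, 3))
A = rng.standard_normal((3, 3))
h = 1e-6
grad = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        E = np.zeros((3, 3)); E[i, j] = 1.0          # perturb a single entry of X
        grad[i, j] = (np.trace((X + h * E) @ A) - np.trace((X - h * E) @ A)) / (2 * h)
assert np.allclose(grad, A.T, atol=1e-6)             # (100): dTr(XA)/dX = A^T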
2.8 Derivatives of Structured Matrices

If A has no special structure we have simply S^{ij} = J^{ij}, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in singleentry matrices, see Sec. 9.7.6 for more examples of structure matrices.

    ∂g(U)/∂X_ij = Tr[ (∂g(U)/∂U)^T ∂U/∂X_ij ].       (137)
2.8.2 Symmetric
If A is symmetric, then S^{ij} = J^{ij} + J^{ji} − J^{ij}J^{ij} and therefore

    df/dA = [∂f/∂A] + [∂f/∂A]^T − diag[∂f/∂A]        (138)

    ∂Tr(AX)/∂X = A + A^T − (A ∘ I),  see (142)       (139)
    ∂det(X)/∂X = det(X)(2X^{-1} − (X^{-1} ∘ I))      (140)
    ∂ln det(X)/∂X = 2X^{-1} − (X^{-1} ∘ I)           (141)
2.8.3 Diagonal
If X is diagonal, then ([19]):

    ∂Tr(AX)/∂X = A ∘ I                               (142)
2.8.4 Toeplitz
Like symmetric and diagonal matrices, Toeplitz matrices also have a special structure which should be taken into account when computing the derivative with respect to a matrix with Toeplitz structure.

    ∂Tr(AT)/∂T = ∂Tr(TA)/∂T =

    [ Tr(A)                      Tr([A^T]_{n1})             Tr([[A^T]_{1n}]_{n−1,2})   ...            A_{n1}                   ]
    [ Tr([A^T]_{1n})             Tr(A)                      Tr([A^T]_{n1})             ...            ...                      ]
    [ Tr([[A^T]_{1n}]_{2,n−1})   Tr([A^T]_{1n})             ...                        ...            Tr([[A^T]_{1n}]_{n−1,2}) ]
    [ ...                        ...                        ...                        ...            Tr([A^T]_{n1})           ]
    [ A_{1n}                     Tr([[A^T]_{1n}]_{2,n−1})   ...            Tr([A^T]_{1n})             Tr(A)                    ]

    ≡ α(A)                                           (143)

As can be seen, the derivative α(A) also has a Toeplitz structure. Each value on the diagonal is the sum of all the diagonal values in A; the values on the diagonals next to the main diagonal equal the sum of the diagonal next to the main diagonal in A^T. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix also is symmetric, the same derivative yields

    ∂Tr(AT)/∂T = ∂Tr(TA)/∂T = α(A) + α(A)^T − α(A) ∘ I    (144)
3 Inverses
3.1 Basic
3.1.1 Definition
The inverse A^{-1} of a matrix A ∈ C^{n×n} is defined such that

    AA^{-1} = A^{-1}A = I,                           (145)
3.1.3 Determinant
The determinant of a matrix A ∈ C^{n×n} is defined as (see [12])

    det(A) = Σ_{j=1}^{n} (−1)^{j+1} A_{1j} det([A]_{1j})    (149)
           = Σ_{j=1}^{n} A_{1j} cof(A, 1, j).               (150)
3.1.4 Construction
The inverse matrix can be constructed, using the adjoint matrix, by

    A^{-1} = (1/det(A)) adj(A)                       (151)
3.1.5 Condition number

    c(A) = d_+ / d_−                                 (152)

The condition number can be used to measure how singular a matrix is. If the condition number is large, it indicates that the matrix is nearly singular. The condition number can also be estimated from the matrix norms,

    c(A) = ||A|| · ||A^{-1}||                        (153)

where || · || is a norm such as e.g. the 1-norm, the 2-norm, the ∞-norm or the Frobenius norm (see Sec 10.4 for more on matrix norms).

The 2-norm of A equals sqrt(max(eig(A^H A))) [12, p.57]. For a symmetric matrix, this reduces to ||A||_2 = max(|eig(A)|) [12, p.394]. If the matrix is symmetric and positive definite, ||A||_2 = max(eig(A)). The condition number based on the 2-norm thus reduces to

    ||A||_2 ||A^{-1}||_2 = max(eig(A)) max(eig(A^{-1})) = max(eig(A)) / min(eig(A)).    (154)
3.2.4 Sherman-Morrison
    (A + bc^T)^{-1} = A^{-1} − (A^{-1} bc^T A^{-1}) / (1 + c^T A^{-1} b)    (160)
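The Sherman-Morrison identity (160) can be verified numerically; a small NumPy sketch (random rank-one update of a well-conditioned matrix) could be:

import numpy as np
rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
b = rng.standard_normal((4, 1))
c = rng.standard_normal((4, 1))
Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + b @ c.T)
rhs = Ainv - (Ainv @ b @ c.T @ Ainv) / (1 + float(c.T @ Ainv @ b))
assert np.allclose(lhs, rhs)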
Case 1 of 6: If ||w|| ≠ 0 and ||m|| ≠ 0. Then

    G = −vw^+ − (m^+)^T n^T + β(m^+)^T w^+                            (174)
      = −(1/||w||^2) vw^T − (1/||m||^2) mn^T + (β/(||m||^2 ||w||^2)) mw^T    (175)

Case 2 of 6: If ||w|| = 0 and ||m|| ≠ 0 and β = 0. Then

    G = −vv^+ A^+ − (m^+)^T n^T                                       (176)
      = −(1/||v||^2) vv^T A^+ − (1/||m||^2) mn^T                      (177)
3.3 Implication on Inverses

    If  (A + B)^{-1} = A^{-1} + B^{-1}  then  AB^{-1}A = BA^{-1}B    (185)

See [30].
3.4 Approximations
The following identity is known as the Neumann series of a matrix, which holds when |λ_i| < 1 for all eigenvalues λ_i

    (I − A)^{-1} = Σ_{n=0}^{∞} A^n                   (186)

which is equivalent to

    (I + A)^{-1} = Σ_{n=0}^{∞} (−1)^n A^n            (187)

When |λ_i| < 1 for all eigenvalues λ_i, it holds that A^n → 0 for n → ∞, and the following approximations hold

    (I − A)^{-1} ≈ I + A + A^2                       (188)
    (I + A)^{-1} ≈ I − A + A^2                       (189)
The following approximation is from [22] and holds when A is large and symmetric

    A − A(I + A)^{-1}A ≈ I − A^{-1}                  (190)

If σ^2 is small compared to Q and M then

    (Q + σ^2 M)^{-1} ≈ Q^{-1} − σ^2 Q^{-1} M Q^{-1}  (191)

Proof:
    (Q + σ^2 M)^{-1} =                               (192)
    (QQ^{-1}Q + σ^2 MQ^{-1}Q)^{-1} =                 (193)
    ((I + σ^2 MQ^{-1})Q)^{-1} =                      (194)
    Q^{-1}(I + σ^2 MQ^{-1})^{-1}                     (195)

This can be rewritten using the Taylor expansion of the inverse:
    Q^{-1}(I + σ^2 MQ^{-1})^{-1} =                   (196)
    Q^{-1}(I − σ^2 MQ^{-1} + (σ^2 MQ^{-1})^2 − ...) ≈ Q^{-1} − σ^2 Q^{-1}MQ^{-1}    (197)
3.6 Pseudo Inverse

3.6.1 Definition
The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix A^+ that fulfils

    I    AA^+ A = A
    II   A^+ AA^+ = A^+
    III  AA^+ symmetric
    IV   A^+ A symmetric

The matrix A^+ is unique and does always exist. Note that in case of complex matrices, the symmetric condition is substituted by a condition of being Hermitian.
3.6.2 Properties
Assume A^+ to be the pseudo-inverse of A, then (see [3] for some of them)

    (A^+)^+ = A                                      (199)
    (A^T)^+ = (A^+)^T                                (200)
    (A^H)^+ = (A^+)^H                                (201)
    (A^*)^+ = (A^+)^*                                (202)
    (A^+ A)A^H = A^H                                 (203)
    (A^+ A)A^T ≠ A^T                                 (204)
    (cA)^+ = (1/c)A^+                                (205)
    A^+ = (A^T A)^+ A^T                              (206)
    A^+ = A^T (AA^T)^+                               (207)
    (A^T A)^+ = A^+ (A^T)^+                          (208)
    (AA^T)^+ = (A^T)^+ A^+                           (209)
    A^+ = (A^H A)^+ A^H                              (210)
    A^+ = A^H (AA^H)^+                               (211)
    (A^H A)^+ = A^+ (A^H)^+                          (212)
    (AA^H)^+ = (A^H)^+ A^+                           (213)
    (AB)^+ = (A^+ AB)^+ (ABB^+)^+                    (214)
    f(A^H A) − f(0)I = A^+ [f(AA^H) − f(0)I] A       (215)
    f(AA^H) − f(0)I = A [f(A^H A) − f(0)I] A^+       (216)

where A ∈ C^{n×m}.
Assume A to have full rank, then

    (AA^+)(AA^+) = AA^+                              (217)
    (A^+ A)(A^+ A) = A^+ A                           (218)
    Tr(AA^+) = rank(AA^+)                            (219)
    Tr(A^+ A) = rank(A^+ A)                          (220)
3.6.3 Construction
Assume that A has full rank, then
    A  n × n   Square   rank(A) = n   =>   A^+ = A^{-1}
    A  n × m   Broad    rank(A) = n   =>   A^+ = A^T (AA^T)^{-1}
    A  n × m   Tall     rank(A) = m   =>   A^+ = (A^T A)^{-1} A^T

The so-called broad version is also known as the right inverse and the tall version as the left inverse.
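The tall and broad constructions above agree with the general pseudo inverse; a small NumPy sketch (random full-rank test matrices) could be:

import numpy as np
rng = np.random.default_rng(9)
A = rng.standard_normal((6, 3))                     # tall, full column rank (almost surely)
left = np.linalg.inv(A.T @ A) @ A.T                 # (A^T A)^{-1} A^T
assert np.allclose(left, np.linalg.pinv(A))         # matches the Moore-Penrose inverse
assert np.allclose(left @ A, np.eye(3))             # acts as a left inverse
B = A.T                                             # broad, full row rank
right = B.T @ np.linalg.inv(B @ B.T)                # B^T (BB^T)^{-1}
assert np.allclose(B @ right, np.eye(3))            # acts as a right inverse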
Assume A does not have full rank, i.e. A is n × m and rank(A) = r < min(n, m). The pseudo inverse A^+ can be constructed from the singular value decomposition A = UDV^T, by

    A^+ = V_r D_r^{-1} U_r^T                         (223)

where U_r, D_r, and V_r are the matrices with the degenerated rows and columns deleted. A different way is this: There do always exist two matrices C (n × r) and D (r × m) of rank r, such that A = CD. Using these matrices it holds that

    A^+ = D^T (DD^T)^{-1} (C^T C)^{-1} C^T           (224)

See [3].
4 Complex Matrices
The complex scalar product r = pq can be written as

    [ ℜr ]   [ ℜp   −ℑp ] [ ℜq ]
    [ ℑr ] = [ ℑp    ℜp ] [ ℑq ]                     (225)
4.1 Complex Derivatives

    ∂Tr(AX^*)/∂ℜX = A^T                              (242)
    i ∂Tr(AX^*)/∂ℑX = A^T                            (243)
    ∂Tr(XX^H)/∂ℜX = ∂Tr(X^H X)/∂ℜX = 2ℜX             (244)
    i ∂Tr(XX^H)/∂ℑX = i ∂Tr(X^H X)/∂ℑX = i2ℑX        (245)

By inserting (244) and (245) in (229) and (230), it can be seen that

    ∂Tr(XX^H)/∂X = X^*                               (246)
    ∂Tr(XX^H)/∂X^* = X                               (247)

Since the function Tr(XX^H) is a real function of the complex matrix X, the complex gradient matrix (233) is given by

    ∇Tr(XX^H) = 2 ∂Tr(XX^H)/∂X^* = 2X                (248)
4.2 Higher order and non-linear derivatives

    ∂/∂x [ (Ax)^H (Ax) / ((Bx)^H (Bx)) ] = ∂/∂x [ x^H A^H Ax / (x^H B^H Bx) ]    (251)
        = 2 A^H Ax / (x^H B^H Bx) − 2 (x^H A^H Ax) B^H Bx / (x^H B^H Bx)^2       (252)
4.3 Inverse of complex sum

    E = A + tB                                       (253)
    F = B − tA,                                      (254)
5 Solutions and Decompositions

and the solution can be written as

    [ a ]   [ R_xx   R_x1 ]^{-1} [ R_x,y ]
    [ b ] = [ R_x1   R_11 ]      [ R_y1  ]           (260)

    Ax = b                                           (261)
    Ax = b   =>   x = A^{-1} b                       (262)
    AX + XB = C                                      (272)
    vec(X) = (I ⊗ A + B^T ⊗ I)^{-1} vec(C)           (273)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.

    Σ_n A_n X B_n = C                                (274)
    vec(X) = ( Σ_n B_n^T ⊗ A_n )^{-1} vec(C)         (275)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
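The Kronecker/vec recipe (272)-(273) is straightforward to try out; a small NumPy sketch (column-stacking vec, random coefficient matrices) could be:

import numpy as np
rng = np.random.default_rng(10)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
vec = lambda M: M.flatten(order="F")                # column-stacking vec operator (Sec 10.2.2)
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n)) # I (x) A + B^T (x) I
X = np.linalg.solve(K, vec(C)).reshape((n, n), order="F")
assert np.allclose(A @ X + X @ B, C)                # X solves the Sylvester equation (272)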
5.2 Eigenvalues and Eigenvectors

5.2.1 Definition
The eigenvectors v_i and eigenvalues λ_i are the ones satisfying

    A v_i = λ_i v_i                                  (276)

5.2.2 Decompositions
For matrices A with as many distinct eigenvalues as dimensions, the following holds, where the columns of V are the eigenvectors and (D)_ij = δ_ij λ_i,

    AV = VD                                          (277)

For defective matrices A, which are matrices with fewer distinct eigenvalues than dimensions, the following decomposition, called the Jordan canonical form, holds

    AV = VJ                                          (278)

where J is a block diagonal matrix with the blocks J_i = λ_i I + N. The matrices J_i have dimensionality equal to the number of identical eigenvalues λ_i, and N is a square matrix of the same size with 1 on the super diagonal and zero elsewhere.

It also holds that for all matrices A there exist matrices V and R such that

    AV = VR                                          (279)
5.2.4 Symmetric
Assume A is symmetric, then

    VV^T = I          (i.e. V is orthogonal)         (282)
    λ_i ∈ R           (i.e. λ_i is real)             (283)
    Tr(A^p) = Σ_i λ_i^p                              (284)
    eig(I + cA) = 1 + cλ_i                           (285)
    eig(A − cI) = λ_i − c                            (286)
    eig(A^{-1}) = λ_i^{-1}                           (287)

For a symmetric, positive matrix A,

    eig(A^T A) = eig(AA^T) = eig(A) ∘ eig(A)         (288)
5.3 Singular Value Decomposition

    A = V D U^T,                                     (296)

where D is diagonal with the square root of the eigenvalues of AA^T, V is the eigenvectors of AA^T and U^T is the eigenvectors of A^T A.
A = LU (299)
5.5.1 Cholesky-decomposition
Assume A is a symmetric positive definite square matrix, then
    A = U^T U = LL^T,                                (300)
5.6 LDM decomposition

    A = LDM^T                                        (301)
where L, M are unique unit lower triangular matrices and D is a unique diagonal
matrix.
    A = LDL^T = L^T DL                               (302)
(Footnote: If the submatrix corresponding to a principal minor is a quadratic upper-left part of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k), then the principal minor is called a leading principal minor. For an n × n square matrix, there are n leading principal minors. [31])
6 Statistics and Probability

6.1.1 Mean
The vector of means, m, is defined by

    (m)_i = ⟨x_i⟩                                    (303)

6.1.2 Covariance
The matrix of covariance M is defined by

    (M)_ij = ⟨(x_i − ⟨x_i⟩)(x_j − ⟨x_j⟩)⟩            (304)

or alternatively as

    M = ⟨(x − m)(x − m)^T⟩                           (305)

The matrix of third centralized moments (co-skewness) is expressed as

    M_3 = [ m^(3)_{::1}  m^(3)_{::2}  ...  m^(3)_{::n} ]              (307)

where ':' denotes all elements within the given index. M_3 can alternatively be expressed as

    M_3 = ⟨(x − m)(x − m)^T ⊗ (x − m)^T⟩             (308)

The matrix of fourth centralized moments (co-kurtosis) is expressed as

    M_4 = [ m^(4)_{::11} m^(4)_{::21} ... m^(4)_{::n1} | m^(4)_{::12} m^(4)_{::22} ... m^(4)_{::n2} | ... | m^(4)_{::1n} m^(4)_{::2n} ... m^(4)_{::nn} ]    (310)

or alternatively as

    M_4 = ⟨(x − m)(x − m)^T ⊗ (x − m)^T ⊗ (x − m)^T⟩                  (311)
6.2 Expectation of Linear Combinations

    E[Ax + b] = Am + b                               (315)
    E[Ax] = Am                                       (316)
    E[x + b] = m + b                                 (317)
See [7].
6.3 Weighted Scalar Variable

    ⟨y⟩ = w^T m                                      (331)
    ⟨(y − ⟨y⟩)^2⟩ = w^T M_2 w                        (332)
    ⟨(y − ⟨y⟩)^3⟩ = w^T M_3 (w ⊗ w)                  (333)
    ⟨(y − ⟨y⟩)^4⟩ = w^T M_4 (w ⊗ w ⊗ w)              (334)
7 Multivariate Distributions
7.1 Cauchy
The density function for a Cauchy distributed vector t ∈ R^{P×1} is given by

    p(t|μ, Σ) = π^{-P/2} [Γ((1+P)/2) / Γ(1/2)] · det(Σ)^{-1/2} / [1 + (t − μ)^T Σ^{-1}(t − μ)]^{(1+P)/2}    (335)

where μ is the location, Σ is positive definite, and Γ denotes the gamma function. The Cauchy distribution is a special case of the Student-t distribution.
7.2 Dirichlet
The Dirichlet distribution is a kind of "inverse" distribution compared to the multinomial distribution on the bounded continuous variate x = [x_1, ..., x_P] [16, p. 44]

    p(x|α) = [ Γ(Σ_p^P α_p) / Π_p^P Γ(α_p) ] Π_p^P x_p^{α_p − 1}
7.3 Normal
The normal distribution is also known as a Gaussian distribution. See sec. 8.
7.6 Multinomial
If the vector n contains counts, i.e. (n)_i ∈ {0, 1, 2, ...}, then the discrete multinomial distribution for n is given by

    P(n|a, n) = [ n! / (n_1! ... n_d!) ] Π_i^d a_i^{n_i},    Σ_i^d n_i = n    (336)

where a_i are probabilities, i.e. 0 ≤ a_i ≤ 1 and Σ_i a_i = 1.
7.7 Students t
The density of a Student-t distributed vector t ∈ R^{P×1} is given by

    p(t|μ, Σ, ν) = (πν)^{-P/2} [Γ((ν+P)/2) / Γ(ν/2)] · det(Σ)^{-1/2} / [1 + ν^{-1}(t − μ)^T Σ^{-1}(t − μ)]^{(ν+P)/2}    (337)
7.7.1 Mean
    E(t) = μ,    ν > 1                               (338)

7.7.2 Variance
    cov(t) = [ν/(ν − 2)] Σ,    ν > 2                 (339)

7.7.3 Mode
The notion mode meaning the position of the most probable value,
    mode(t) = μ                                      (340)
    p(T) ∝ det(Ω)^{ν/2} det(Σ)^{N/2} det[ Ω^{-1} + (T − M)Σ^{-1}(T − M)^T ]^{-(ν+P)/2}    (341)
7.8 Wishart
The central Wishart distribution for M ∈ R^{P×P}, M positive definite, where m can be regarded as a degree of freedom parameter [16, equation 3.8.1] [8, section 2.5], [11]

    p(M|Σ, m) = 1 / [ 2^{mP/2} π^{P(P−1)/4} Π_p^P Γ(½(m + 1 − p)) ]
                · det(Σ)^{-m/2} det(M)^{(m−P−1)/2} exp[ −½ Tr(Σ^{-1}M) ]    (342)

7.8.1 Mean
    E(M) = mΣ                                        (343)
7.9 Wishart, Inverse

7.9.1 Mean
    E(M) = Σ / (m − P − 1)                           (345)
8 Gaussians
8.1 Basics
8.1.1 Density and normalization
The density of x ∼ N(m, Σ) is

    p(x) = [1 / sqrt(det(2πΣ))] exp[ −½ (x − m)^T Σ^{-1} (x − m) ]    (346)

    ∂p(x)/∂x = −p(x) Σ^{-1}(x − m)                                    (347)
    ∂^2 p/∂x∂x^T = p(x) ( Σ^{-1}(x − m)(x − m)^T Σ^{-1} − Σ^{-1} )    (348)
Assume x = [x_a; x_b] is Gaussian with mean μ = [μ_a; μ_b] and covariance Σ = [Σ_a  Σ_c; Σ_c^T  Σ_b]. Then the conditional densities are

    p(x_a|x_b) = N_{x_a}(μ̂_a, Σ̂_a),    μ̂_a = μ_a + Σ_c Σ_b^{-1}(x_b − μ_b),    Σ̂_a = Σ_a − Σ_c Σ_b^{-1} Σ_c^T    (353)

    p(x_b|x_a) = N_{x_b}(μ̂_b, Σ̂_b),    μ̂_b = μ_b + Σ_c^T Σ_a^{-1}(x_a − μ_a),    Σ̂_b = Σ_b − Σ_c^T Σ_a^{-1} Σ_c    (354)

Note, that the covariance matrices are the Schur complement of the block matrix, see 9.1.5 for details.
    Σ_c^{-1} = Σ_1^{-1} + Σ_2^{-1}                                    (361)
    m_c = (Σ_1^{-1} + Σ_2^{-1})^{-1} (Σ_1^{-1} m_1 + Σ_2^{-1} m_2)    (362)
    C = ½ (m_1^T Σ_1^{-1} + m_2^T Σ_2^{-1})(Σ_1^{-1} + Σ_2^{-1})^{-1}(Σ_1^{-1} m_1 + Σ_2^{-1} m_2)    (363)
        − ½ ( m_1^T Σ_1^{-1} m_1 + m_2^T Σ_2^{-1} m_2 )               (364)
    Σ_c^{-1} = Σ_1^{-1} + Σ_2^{-1}                                    (368)
    M_c = (Σ_1^{-1} + Σ_2^{-1})^{-1} (Σ_1^{-1} M_1 + Σ_2^{-1} M_2)    (369)
    C = ½ Tr[ (Σ_1^{-1} M_1 + Σ_2^{-1} M_2)^T (Σ_1^{-1} + Σ_2^{-1})^{-1} (Σ_1^{-1} M_1 + Σ_2^{-1} M_2) ]
        − ½ Tr( M_1^T Σ_1^{-1} M_1 + M_2^T Σ_2^{-1} M_2 )             (370)

For the product of two Gaussian densities, N_x(m_1, Σ_1) · N_x(m_2, Σ_2) = c_c N_x(m_c, Σ_c), with

    c_c = N_{m_1}(m_2, (Σ_1 + Σ_2))
        = [1 / sqrt(det(2π(Σ_1 + Σ_2)))] exp[ −½ (m_1 − m_2)^T (Σ_1 + Σ_2)^{-1} (m_1 − m_2) ]
    m_c = (Σ_1^{-1} + Σ_2^{-1})^{-1} (Σ_1^{-1} m_1 + Σ_2^{-1} m_2)
    Σ_c = (Σ_1^{-1} + Σ_2^{-1})^{-1}
8.2 Moments
8.2.1 Mean and covariance of linear forms
First and second moments. Assume x ∼ N(m, Σ)
E(x) = m (372)
8.2.5 Moments
    E[x] = Σ_k ρ_k m_k                               (384)
    Cov(x) = Σ_k Σ_{k'} ρ_k ρ_{k'} ( Σ_k + m_k m_k^T − m_k m_{k'}^T )    (385)
8.3 Miscellaneous
8.3.1 Whitening
Assume x ∼ N(m, Σ), then

    z = Σ^{-1/2}(x − m) ∼ N(0, I)                    (386)

Conversely having z ∼ N(0, I) one can generate data x ∼ N(m, Σ) by setting

    x = Σ^{1/2} z + m ∼ N(m, Σ)                      (387)

Note that Σ^{1/2} means the matrix which fulfils Σ^{1/2}Σ^{1/2} = Σ, and that it exists and is unique since Σ is positive definite.
8.3.3 Entropy
Entropy of a D-dimensional gaussian
    H(x) = −∫ N(m, Σ) ln N(m, Σ) dx = ln sqrt(det(2πΣ)) + D/2    (389)
8.4 Mixture of Gaussians

8.4.2 Derivatives
Defining p(s) = Σ_k ρ_k N_s(μ_k, Σ_k) one gets

    ∂ln p(s)/∂ρ_j = [ ρ_j N_s(μ_j, Σ_j) / Σ_k ρ_k N_s(μ_k, Σ_k) ] · ∂ln[ρ_j N_s(μ_j, Σ_j)]/∂ρ_j    (391)
                  = [ ρ_j N_s(μ_j, Σ_j) / Σ_k ρ_k N_s(μ_k, Σ_k) ] · 1/ρ_j                          (392)
    ∂ln p(s)/∂μ_j = [ ρ_j N_s(μ_j, Σ_j) / Σ_k ρ_k N_s(μ_k, Σ_k) ] · ∂ln[ρ_j N_s(μ_j, Σ_j)]/∂μ_j    (393)
                  = [ ρ_j N_s(μ_j, Σ_j) / Σ_k ρ_k N_s(μ_k, Σ_k) ] · Σ_j^{-1}(s − μ_j)              (394)
    ∂ln p(s)/∂Σ_j = [ ρ_j N_s(μ_j, Σ_j) / Σ_k ρ_k N_s(μ_k, Σ_k) ] · ∂ln[ρ_j N_s(μ_j, Σ_j)]/∂Σ_j    (395)
                  = [ ρ_j N_s(μ_j, Σ_j) / Σ_k ρ_k N_s(μ_k, Σ_k) ] · ½[ −Σ_j^{-1} + Σ_j^{-1}(s − μ_j)(s − μ_j)^T Σ_j^{-1} ]    (396)
9 Special Matrices
9.1 Block matrices
Let A_ij denote the ij-th block of A.

9.1.1 Multiplication
Assuming the dimensions of the blocks match we have

    [ A_11  A_12 ] [ B_11  B_12 ]   [ A_11 B_11 + A_12 B_21    A_11 B_12 + A_12 B_22 ]
    [ A_21  A_22 ] [ B_21  B_22 ] = [ A_21 B_11 + A_22 B_21    A_21 B_12 + A_22 B_22 ]

9.1.2 The Determinant
The determinant can be expressed by use of

    C_1 = A_11 − A_12 A_22^{-1} A_21                 (397)
    C_2 = A_22 − A_21 A_11^{-1} A_12                 (398)

as

    det( [ A_11  A_12 ; A_21  A_22 ] ) = det(A_22) det(C_1) = det(A_11) det(C_2)

9.1.3 The Inverse
The inverse can be expressed by use of

    C_1 = A_11 − A_12 A_22^{-1} A_21                 (399)
    C_2 = A_22 − A_21 A_11^{-1} A_12                 (400)

as

    [ A_11  A_12 ]^{-1}   [ C_1^{-1}                     −A_11^{-1} A_12 C_2^{-1} ]
    [ A_21  A_22 ]      = [ −C_2^{-1} A_21 A_11^{-1}      C_2^{-1}                ]

                          [ A_11^{-1} + A_11^{-1} A_12 C_2^{-1} A_21 A_11^{-1}    −C_1^{-1} A_12 A_22^{-1}                              ]
                        = [ −A_22^{-1} A_21 C_1^{-1}                               A_22^{-1} + A_22^{-1} A_21 C_1^{-1} A_12 A_22^{-1}   ]
The Schur complement of the block A_22 of the matrix above is the matrix (denoted C_1 in the text above)

    A_11 − A_12 A_22^{-1} A_21

Using the Schur complement, one can rewrite the inverse of a block matrix

    [ A_11  A_12 ]^{-1}   [ I                  0 ] [ (A_11 − A_12 A_22^{-1} A_21)^{-1}   0         ] [ I   −A_12 A_22^{-1} ]
    [ A_21  A_22 ]      = [ −A_22^{-1} A_21   I ] [ 0                                    A_22^{-1} ] [ 0    I              ]

The Schur complement is useful when solving linear systems of the form

    [ A_11  A_12 ] [ x_1 ]   [ b_1 ]
    [ A_21  A_22 ] [ x_2 ] = [ b_2 ]

which has the following equation for x_1

    (A_11 − A_12 A_22^{-1} A_21) x_1 = b_1 − A_12 A_22^{-1} b_2

When the appropriate inverses exist, this can be solved for x_1, which can then be inserted in the equation for x_2 to solve for x_2.
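The Schur-complement elimination described above can be written out directly; a small NumPy sketch (random, well-conditioned test blocks) could be:

import numpy as np
rng = np.random.default_rng(12)
n1, n2 = 3, 2
A11 = rng.standard_normal((n1, n1)) + 3 * np.eye(n1)
A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1))
A22 = rng.standard_normal((n2, n2)) + 3 * np.eye(n2)
b1 = rng.standard_normal(n1)
b2 = rng.standard_normal(n2)
# eliminate x2 via the Schur complement of A22, then back-substitute
S = A11 - A12 @ np.linalg.solve(A22, A21)
x1 = np.linalg.solve(S, b1 - A12 @ np.linalg.solve(A22, b2))
x2 = np.linalg.solve(A22, b2 - A21 @ x1)
full = np.block([[A11, A12], [A21, A22]])
assert np.allclose(full @ np.concatenate([x1, x2]), np.concatenate([b1, b2]))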
9.2 The Discrete Fourier Transform Matrix
The DFT of the vector x = [x(0), x(1), ..., x(N − 1)]^T can be written in matrix form as

    X = W_N x,                                       (406)

and the original vector recovered by the inverse DFT as

    x = W_N^{-1} X.                                  (407)

A circulant Toeplitz matrix T_C built from the vector t is diagonalized by the DFT matrix,

    T_C = W_N^{-1} (I ∘ (W_N t)) W_N,                (412)
9.3 Hermitian Matrices and skew-Hermitian
A matrix A is called Hermitian if

    A^H = A

For real valued matrices, Hermitian and symmetric matrices are equivalent. Note that

    A = B + iC

where B, C are Hermitian, then

    B = (A + A^H)/2,        C = (A − A^H)/(2i)

9.3.1 Skew-Hermitian
A matrix A is called skew-hermitian if

    A = −A^H
9.4 Idempotent Matrices
9.4.1 Nilpotent
A matrix A is nilpotent if
    A^2 = 0
A nilpotent matrix has the following property:
9.4.2 Unipotent
A matrix A is unipotent if
AA = I
A unipotent matrix has the following property:
9.5 Orthogonal matrices
A square matrix Q is orthogonal if and only if

    Q^T Q = QQ^T = I
9.5.1 Ortho-Sym
A matrix Q_+ which simultaneously is orthogonal and symmetric is called an ortho-sym matrix [20]. Hereby

    Q_+^T Q_+ = I                                    (430)
    Q_+ = Q_+^T                                      (431)

The powers of an ortho-sym matrix are given by the following rule

    Q_+^k = [(1 + (−1)^k)/2] I + [(1 + (−1)^{k+1})/2] Q_+             (432)
          = [(1 + cos(kπ))/2] I + [(1 − cos(kπ))/2] Q_+               (433)

9.5.2 Ortho-Skew
A matrix which simultaneously is orthogonal and antisymmetric is called an ortho-skew matrix [20]. Hereby

    Q_−^H Q_− = I                                    (434)
    Q_− = −Q_−^H                                     (435)

The powers of an ortho-skew matrix are given by the following rule

    Q_−^k = [(i^k + (−i)^k)/2] I − i[(i^k − (−i)^k)/2] Q_−            (436)
          = cos(kπ/2) I + sin(kπ/2) Q_−                               (437)

9.5.3 Decomposition
A square matrix A can always be written as a sum of a symmetric A_+ and an antisymmetric matrix A_−

    A = A_+ + A_−                                    (438)
9.6 Positive Definite and Semi-definite Matrices

9.6.2 Eigenvalues
The following holds with respect to the eigenvalues:

    A pos. def.        <=>   eig((A + A^H)/2) > 0
    A pos. semi-def.   <=>   eig((A + A^H)/2) ≥ 0    (441)

9.6.3 Trace
The following holds with respect to the trace:

    A pos. def.        =>   Tr(A) > 0
    A pos. semi-def.   =>   Tr(A) ≥ 0                (442)

9.6.4 Inverse
If A is positive definite, then A is invertible and A^{-1} is also positive definite.

9.6.5 Diagonal
If A is positive definite, then A_ii > 0 for all i.

9.6.6 Decomposition I
The matrix A is positive semi-definite of rank r  <=>  there exists a matrix B of rank r such that A = BB^T.

9.6.7 Decomposition II
Assume A is an n × n positive semi-definite matrix, then there exists an n × r matrix B of rank r such that B^T AB = I.
9.7 The Singleentry Matrix

i.e. a p × m matrix of zeros with the j.th row of A in the place of the i.th row.
If A is symmetric then

    S^{ij} = J^{ij} + J^{ji} − J^{ij} J^{ij}         (458)
9.8 Symmetric, Skew-symmetric/Antisymmetric

9.8.1 Symmetric
The matrix A is said to be symmetric if

    A = A^T                                          (459)

Symmetric matrices have many important properties, e.g. that their eigenvalues are real and their eigenvectors orthogonal.
9.8.2 Skew-symmetric/Antisymmetric
The antisymmetric matrix is also known as the skew symmetric matrix. It has the following property from which it is defined

    A = −A^T                                         (460)

Hereby, it can be seen that antisymmetric matrices always have a zero diagonal. The n × n antisymmetric matrices also have the following properties.

9.8.3 Decomposition
A square matrix A can always be written as a sum of a symmetric A_+ and an antisymmetric matrix A_−

    A = A_+ + A_−                                    (463)

Such a decomposition could e.g. be

    A = (A + A^T)/2 + (A − A^T)/2 = A_+ + A_−        (464)
9.9 Toeplitz Matrices
Several structured matrices are special cases of Toeplitz matrices. The symmetric Toeplitz matrix is given by:

    T = [ t_0       t_1     ...     t_{n−1} ]
        [ t_1       t_0     ...     ...     ]
        [ ...       ...     ...     t_1     ]
        [ t_{n−1}   ...     t_1     t_0     ]        (466)

The circular Toeplitz matrix:

    T_C = [ t_0       t_1     ...       t_{n−1} ]
          [ t_{n−1}   t_0     ...       ...     ]
          [ ...       ...     ...       t_1     ]
          [ t_1       ...     t_{n−1}   t_0     ]    (467)
9.10 Transition matrices
The transition matrix usually describes the probability of moving from state i to j in one step and is closely related to Markov processes.
9.11 Units, Permutation and Shift

9.11.3 Permutations
Let P be some permutation matrix, e.g.

        [ 0  1  0 ]                         [ e_2^T ]
    P = [ 1  0  0 ] = [ e_2  e_1  e_3 ]   = [ e_1^T ]                 (477)
        [ 0  0  1 ]                         [ e_3^T ]

For permutation matrices it holds that

    P P^T = I                                        (478)

and that

                                      [ e_2^T A ]
    AP = [ Ae_2  Ae_1  Ae_3 ],   PA = [ e_1^T A ]    (479)
                                      [ e_3^T A ]

That is, the first is a matrix which has the columns of A but in permuted sequence and the second is a matrix which has the rows of A but in the permuted sequence.
i.e. a matrix of zeros with one on the sub-diagonal, (L)_ij = δ_{i,j+1}. With some signal x_t for t = 1, ..., N, the n.th power of the lag operator shifts the indices, i.e.

    (L^n x)_t = { 0         for t = 1, ..., n
                { x_{t−n}   for t = n + 1, ..., N    (481)

A related but slightly different matrix is the "recurrent shifted" operator defined on a 4x4 example by

    L̂ = [ 0  0  0  1 ]
        [ 1  0  0  0 ]
        [ 0  1  0  0 ]
        [ 0  0  1  0 ]                               (482)

i.e. a matrix defined by (L̂)_ij = δ_{i,j+1} + δ_{i,1} δ_{j,dim(L)}. On a signal x it has the effect

    (L̂^n x)_t = x_{t'},    t' = [(t − n) mod N] + 1  (483)

That is, L̂ is like the shift operator L except that it wraps the signal as if it was periodic and shifted (substituting the zeros with the rear end of the signal). Note that L̂ is invertible and orthogonal, i.e.

    L̂^{-1} = L̂^T                                     (484)
9.12 Vandermonde Matrices
A Vandermonde matrix has the form

    V = [ 1   v_1   v_1^2   ...   v_1^{n−1} ]
        [ 1   v_2   v_2^2   ...   v_2^{n−1} ]
        [ ...                       ...     ]
        [ 1   v_n   v_n^2   ...   v_n^{n−1} ]        (485)
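NumPy can build the matrix in (485) directly; a small sketch (the sample nodes v are arbitrary) could be:

import numpy as np
v = np.array([1.0, 2.0, 3.0, 4.0])
V = np.vander(v, increasing=True)                   # V[i, j] = v_i ** j, matching the layout of (485)
print(V)
# the Vandermonde determinant equals the product of the pairwise differences (v_j - v_i), i < j
print(np.linalg.det(V),
      np.prod([v[j] - v[i] for i in range(4) for j in range(i + 1, 4)]))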
10 Functions and Operators
assuming the limit exists and is finite. If the coefficients c_n fulfil Σ_n c_n x^n < ∞, then one can prove that the above series exists and is finite, see [1]. Thus for any analytical function f(x) there exists a corresponding matrix function f(A) constructed by the Taylor expansion. Using this one can prove the following results:

1) A matrix A is a zero of its own characteristic polynomial [1]:

    p(λ) = det(Iλ − A) = Σ_n c_n λ^n    =>    p(A) = 0    (490)
    e^A e^B = e^{A+B}    if AB = BA                  (498)
    (e^A)^{-1} = e^{-A}                              (499)
    d/dt e^{tA} = A e^{tA} = e^{tA} A,    t ∈ R      (500)
    d/dt Tr(e^{tA}) = Tr(A e^{tA})                   (501)
    det(e^A) = e^{Tr(A)}                             (502)

    sin(A) ≡ Σ_{n=0}^{∞} (−1)^n A^{2n+1}/(2n+1)! = A − A^3/3! + A^5/5! − ...    (503)
    cos(A) ≡ Σ_{n=0}^{∞} (−1)^n A^{2n}/(2n)! = I − A^2/2! + A^4/4! − ...        (504)
10.2 Kronecker and Vec Operator

    A ⊗ (B + C) = A ⊗ B + A ⊗ C                      (506)
    A ⊗ B ≠ B ⊗ A    in general                      (507)
    A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C                        (508)
    (α_A A ⊗ α_B B) = α_A α_B (A ⊗ B)                (509)
    (A ⊗ B)^T = A^T ⊗ B^T                            (510)
    (A ⊗ B)(C ⊗ D) = AC ⊗ BD                         (511)
    (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}                   (512)
    (A ⊗ B)^+ = A^+ ⊗ B^+                            (513)
    rank(A ⊗ B) = rank(A) rank(B)                    (514)
    Tr(A ⊗ B) = Tr(A)Tr(B) = Tr(Λ_A ⊗ Λ_B)           (515)
    det(A ⊗ B) = det(A)^{rank(B)} det(B)^{rank(A)}   (516)
    {eig(A ⊗ B)} = {eig(B ⊗ A)}    if A, B are square            (517)
    {eig(A ⊗ B)} = {eig(A) eig(B)^T}    if A, B are symmetric and square    (518)
    eig(A ⊗ B) = eig(A) ⊗ eig(B)                     (519)

Where {λ_i} denotes the set of values λ_i, that is, the values in no particular order or structure, and Λ_A denotes the diagonal matrix with the eigenvalues of A.
10.3 Vector Norms

    ||x||_1 = Σ_i |x_i|                              (525)
    ||x||_2^2 = x^H x                                (526)
    ||x||_p = [ Σ_i |x_i|^p ]^{1/p}                  (527)
    ||x||_∞ = max_i |x_i|                            (528)
10.4 Matrix Norms

10.4.1 Definition
A matrix norm is a mapping which fulfils

    ||A|| ≥ 0                                        (529)
    ||A|| = 0  <=>  A = 0                            (530)
    ||cA|| = |c| ||A||,    c ∈ R                     (531)
    ||A + B|| ≤ ||A|| + ||B||                        (532)

10.4.2 Induced Norm or Operator Norm
An induced norm is a matrix norm induced by a vector norm by

    ||A|| = sup{ ||Ax|| : ||x|| = 1 }                (533)

where || · || on the left side is the induced matrix norm, while || · || on the right side denotes the vector norm. For induced norms it holds that

    ||I|| = 1                                        (534)
    ||Ax|| ≤ ||A|| ||x||,    for all A, x            (535)
    ||AB|| ≤ ||A|| ||B||,    for all A, B            (536)
10.4.3 Examples

    ||A||_1 = max_j Σ_i |A_ij|                       (537)
    ||A||_2 = sqrt( max eig(A^H A) )                 (538)
    ||A||_p = max_{||x||_p = 1} ||Ax||_p             (539)
    ||A||_∞ = max_i Σ_j |A_ij|                       (540)
    ||A||_F = sqrt( Σ_ij |A_ij|^2 ) = sqrt( Tr(AA^H) )    (Frobenius)    (541)
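The norm examples (537)-(541) map onto library routines; a small NumPy sketch (a random complex test matrix) could be:

import numpy as np
rng = np.random.default_rng(13)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
# (537), (540): maximum absolute column sum and maximum absolute row sum
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())
# (538): the 2-norm is the largest singular value, i.e. sqrt(max eig(A^H A))
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(np.linalg.eigvalsh(A.conj().T @ A).max()))
# (541): Frobenius norm
assert np.isclose(np.linalg.norm(A, "fro"), np.sqrt(np.trace(A @ A.conj().T).real))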
10.4.4 Inequalities
E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in a table as below, assuming A is an m × n matrix and d = rank(A).

                 ||A||_max   ||A||_1    ||A||_∞    ||A||_2    ||A||_F    ||A||_KF
    ||A||_max        1           1          1          1          1          1
    ||A||_1          m           1          m         √m         √m         √m
    ||A||_∞          n           n          1         √n         √n         √n
    ||A||_2        √(mn)        √n         √m          1          1          1
    ||A||_F        √(mn)        √n         √m         √d          1          1
    ||A||_KF       √(mnd)      √(nd)      √(md)        d         √d          1

which are to be read as, e.g.

    ||A||_2 ≤ √m · ||A||_∞                           (544)
10.5 Rank
10.5.1 Sylvester's Inequality
If A is m × n and B is n × r, then

    rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)}    (545)
See [9].
10.7 Miscellaneous
For any A it holds that

    rank(A) = rank(A^T) = rank(AA^T) = rank(A^T A)   (548)

It holds that

    A is positive definite  <=>  there exists an invertible matrix B such that A = BB^H    (549)
A One-dimensional Results
A.1 Gaussian
A.1.1 Density
    p(x) = [1 / sqrt(2πσ^2)] exp[ −(x − μ)^2 / (2σ^2) ]               (551)

A.1.2 Normalization

    ∫ exp[ −(s − μ)^2/(2σ^2) ] ds = sqrt(2πσ^2)                       (552)
    ∫ exp[ −(ax^2 + bx + c) ] dx = sqrt(π/a) exp[ (b^2 − 4ac)/(4a) ]  (553)
    ∫ exp[ c_2 x^2 + c_1 x + c_0 ] dx = sqrt(π/(−c_2)) exp[ (c_1^2 − 4c_2 c_0)/(−4c_2) ]    (554)
A.1.3 Derivatives
    ∂p(x)/∂μ = p(x) (x − μ)/σ^2                      (555)
    ∂ln p(x)/∂μ = (x − μ)/σ^2                        (556)
    ∂p(x)/∂σ = p(x) (1/σ)[ (x − μ)^2/σ^2 − 1 ]       (557)
    ∂ln p(x)/∂σ = (1/σ)[ (x − μ)^2/σ^2 − 1 ]         (558)
A.1.5 Moments
If the density is expressed by

    p(x) = [1 / sqrt(2πσ^2)] exp[ −(s − μ)^2/(2σ^2) ]    or    p(x) = C exp(c_2 x^2 + c_1 x)    (559)
then the first few basic moments are

    ⟨x⟩   = μ                     = −c_1/(2c_2)
    ⟨x^2⟩ = σ^2 + μ^2             = −1/(2c_2) + ( c_1/(2c_2) )^2
    ⟨x^3⟩ = 3σ^2 μ + μ^3          = [ c_1/(2c_2)^2 ] [ 3 − c_1^2/(2c_2) ]
    ⟨x^4⟩ = μ^4 + 6μ^2σ^2 + 3σ^4  = [ 1/(2c_2) ]^2 [ ( c_1^2/(2c_2) )^2 − 6 c_1^2/(2c_2) + 3 ]

From the un-centralized moments one can derive other entities like

    ⟨x^2⟩ − ⟨x⟩^2      = σ^2             = −1/(2c_2)
    ⟨x^3⟩ − ⟨x^2⟩⟨x⟩   = 2σ^2 μ          = 2c_1/(2c_2)^2
    ⟨x^4⟩ − ⟨x^2⟩^2    = 2σ^4 + 4μ^2σ^2  = [ 2/(2c_2)^2 ] [ 1 − c_1^2/c_2 ]
A.2.2 Moments
A useful fact of MoG is that

    ⟨x^n⟩ = Σ_k ρ_k ⟨x^n⟩_k                          (562)

where ⟨·⟩_k denotes average with respect to the k.th component. We can calculate the first four moments from the densities

    p(x) = Σ_k ρ_k [1 / sqrt(2πσ_k^2)] exp[ −(x − μ_k)^2/(2σ_k^2) ]   (563)
    p(x) = Σ_k ρ_k C_k exp( c_{k2} x^2 + c_{k1} x )                   (564)

as

    ⟨x⟩   = Σ_k ρ_k μ_k                          = Σ_k ρ_k [ −c_{k1}/(2c_{k2}) ]
    ⟨x^2⟩ = Σ_k ρ_k (σ_k^2 + μ_k^2)              = Σ_k ρ_k [ −1/(2c_{k2}) + ( c_{k1}/(2c_{k2}) )^2 ]
    ⟨x^3⟩ = Σ_k ρ_k (3σ_k^2 μ_k + μ_k^3)         = Σ_k ρ_k [ c_{k1}/(2c_{k2})^2 ( 3 − c_{k1}^2/(2c_{k2}) ) ]
    ⟨x^4⟩ = Σ_k ρ_k (μ_k^4 + 6μ_k^2σ_k^2 + 3σ_k^4) = Σ_k ρ_k [ ( 1/(2c_{k2}) )^2 ( ( c_{k1}^2/(2c_{k2}) )^2 − 6 c_{k1}^2/(2c_{k2}) + 3 ) ]

From the un-centralized moments one can derive other entities like

    ⟨x^2⟩ − ⟨x⟩^2     = Σ_{k,k'} ρ_k ρ_{k'} [ μ_k^2 + σ_k^2 − μ_k μ_{k'} ]
    ⟨x^3⟩ − ⟨x^2⟩⟨x⟩  = Σ_{k,k'} ρ_k ρ_{k'} [ 3σ_k^2 μ_k + μ_k^3 − (σ_k^2 + μ_k^2) μ_{k'} ]
    ⟨x^4⟩ − ⟨x^2⟩^2   = Σ_{k,k'} ρ_k ρ_{k'} [ μ_k^4 + 6μ_k^2σ_k^2 + 3σ_k^4 − (σ_k^2 + μ_k^2)(σ_{k'}^2 + μ_{k'}^2) ]
A.2.3 Derivatives
Defining p(s) = Σ_k ρ_k N_s(μ_k, σ_k^2) we get for a parameter θ_j of the j.th component

    ∂ln p(s)/∂θ_j = [ ρ_j N_s(μ_j, σ_j^2) / Σ_k ρ_k N_s(μ_k, σ_k^2) ] · ∂ln(ρ_j N_s(μ_j, σ_j^2))/∂θ_j    (565)

that is,

    ∂ln p(s)/∂ρ_j = [ ρ_j N_s(μ_j, σ_j^2) / Σ_k ρ_k N_s(μ_k, σ_k^2) ] · 1/ρ_j                            (566)
    ∂ln p(s)/∂μ_j = [ ρ_j N_s(μ_j, σ_j^2) / Σ_k ρ_k N_s(μ_k, σ_k^2) ] · (s − μ_j)/σ_j^2                  (567)
    ∂ln p(s)/∂σ_j = [ ρ_j N_s(μ_j, σ_j^2) / Σ_k ρ_k N_s(μ_k, σ_k^2) ] · (1/σ_j)[ (s − μ_j)^2/σ_j^2 − 1 ]  (568)
B Proofs and Details

B.1 Misc Proofs
a^T XBX^T c = yz = z^T y^T

where conj means complex conjugated. Applying the vec rule for linear forms, Eq (520), we get

where we have also used the rule for the transpose of Kronecker products. For y^T this yields (B^T ⊗ a^H) vec(X). Similarly we can rewrite z, which is the same as vec(z^T) = vec(c^T conj(X)). Applying again Eq (520), we get

    z = (I ⊗ c^T) vec(conj(X))

where I is the identity matrix. For z^T we obtain vec(conj(X))^T (I ⊗ c). Finally, the original expression is z^T y^T, which now takes the form

The final step is to apply the rule for products of Kronecker products and by that combine the Kronecker products. This gives
    ∂(X^n)_kl/∂X_ij = ∂/∂X_ij Σ_{u_1,...,u_{n−1}} X_{k,u_1} X_{u_1,u_2} ... X_{u_{n−1},l}
        = δ_{k,i} δ_{u_1,j} X_{u_1,u_2} ... X_{u_{n−1},l}
          + X_{k,u_1} δ_{u_1,i} δ_{u_2,j} ... X_{u_{n−1},l}
          + ...
          + X_{k,u_1} X_{u_1,u_2} ... δ_{u_{n−1},i} δ_{l,j}
        = Σ_{r=0}^{n−1} (X^r)_{ki} (X^{n−1−r})_{jl}
        = Σ_{r=0}^{n−1} (X^r J^{ij} X^{n−1−r})_{kl}

Using the properties of the single entry matrix found in Sec. 9.7.4, the result follows easily.
Through the calculations, (100) and (240) were used. In addition, by use of (241), the derivative is found with respect to the imaginary part of X

    i ∂det(X^H AX)/∂ℑX
        = i det(X^H AX) [ ∂Tr[AX(X^H AX)^{-1}(∂X^H)]/∂ℑX + ∂Tr[(X^H AX)^{-1} X^H A(∂X)]/∂ℑX ]
        = det(X^H AX) [ AX(X^H AX)^{-1} − ((X^H AX)^{-1} X^H A)^T ]
Notice, for real X, A, the sum of (249) and (250) is reduced to (54).
Similar calculations yield
and
References
[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinaera differentialekvationer. Studenterlitteratur, 1992.
[2] Jorn Anemuller, Terrence J. Sejnowski, and Scott Makeig. Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks, 16(9):1311-1323, November 2003.
[3] S. Barnet. Matrices. Methods and Applications. Oxford Applied Mathematics and Computing Science Series. Clarendon Press, 1990.
[4] Christopher Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[5] Robert J. Boik. Lecture notes: Statistics 550. Online, April 22, 2002. Notes.
[6] D. H. Brandwood. A complex gradient operator and its application in adaptive array theory. IEE Proceedings, 130(1):11-16, February 1983. Pts. F and H.
[7] M. Brookes. Matrix Reference Manual, 2004. Website May 20, 2004.
[8] K. Conradsen. En introduktion til statistik. IMM lecture notes, 1984.
[9] Mads Dyrholm. Some matrix results, 2004. Website August 23, 2004.
[10] F. A. Nielsen. Formula. Neuro Research Unit and Technical University of Denmark, 2002.
[11] A. Gelman, J. S. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman and Hall / CRC, 1995.
[12] Gene H. Golub and Charles F. van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 3rd edition, 1996.
[13] Robert M. Gray. Toeplitz and circulant matrices: A review. Technical report, Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, California 94305, August 2002.
[14] Simon Haykin. Adaptive Filter Theory. Prentice Hall, Upper Saddle River, NJ, 4th edition, 2002.
[15] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[16] K. V. Mardia, J. T. Kent, and J. M. Bibby. Multivariate Analysis. Academic Press Ltd., 1979.
[17] Mathpages on Eigenvalue Problems and Matrix Invariants, http://www.mathpages.com/home/kmath128.htm
[19] Thomas P. Minka. Old and new matrix algebra useful for statistics, December 2000. Notes.
[31] Wikipedia on minors: http://en.wikipedia.org/wiki/Minor_(linear_algebra)
[32] Zhaoshui He, Shengli Xie, et al. Convolutive blind source separation in frequency domain based on sparse representation. IEEE Transactions on Audio, Speech and Language Processing, vol. 15(5):1551-1563, July 2007.
[33] Karim T. Abou-Moustafa. On Derivatives of Eigenvalues and Eigenvectors of the Generalized Eigenvalue Problem. McGill Technical Report, October 2010.
[34] Mohammad Emtiyaz Khan. Updating Inverse of a Matrix When a Column is Added/Removed. Emt CS, UBC, February 27, 2008.
Index
Anti-symmetric, 54
Block matrix, 46
Chain rule, 15
Cholesky-decomposition, 32
Co-kurtosis, 34
Co-skewness, 34
Condition number, 62
Cramer's Rule, 29
Derivative of a complex matrix, 24
Derivative of a determinant, 8
Derivative of a trace, 12
Derivative of an inverse, 9
Derivative of symmetric matrix, 15
Derivatives of Toeplitz matrix, 16
Dirichlet distribution, 37
Eigenvalues, 30
Eigenvectors, 30
Exponential Matrix Function, 59
Gaussian, conditional, 40
Gaussian, entropy, 44
Gaussian, linear combination, 41
Gaussian, marginal, 40
Gaussian, product of densities, 42
Generalized inverse, 21
Idempotent, 49
Kronecker product, 59
LDL decomposition, 33
LDM-decomposition, 33
Linear regression, 28
LU decomposition, 32
Lyapunov Equation, 30
Moore-Penrose inverse, 21
Multinomial distribution, 37
Nilpotent, 49
Norm of a matrix, 61
Norm of a vector, 61
Normal-Inverse Gamma distribution, 37
Normal-Inverse Wishart distribution, 39
Orthogonal, 49
Power series of matrices, 58
Probability matrix, 55
Pseudo-inverse, 21
Schur complement, 41, 47
Single entry matrix, 52
Singular Value Decomposition (SVD), 31
Skew-Hermitian, 48
Skew-symmetric, 54
Stochastic matrix, 55
Student-t, 37
Sylvester's Inequality, 62
Symmetric, 54
Taylor expansion, 58
Toeplitz matrix, 54
Transition matrix, 55
Trigonometric functions, 59
Unipotent, 49
Vandermonde matrix, 57
Vec operator, 59, 60