Matrix Analysis Review
Tiep Vu
November 5, 2017

• Each A ∈ Mn(C) can be written in exactly one way as A = H(A) + iK(A), in which H(A) and K(A) are Hermitian.

• tr AA^H = tr A^H A = Σ_{i,j} |aij|^2, A ∈ Mn(C)

• range A + range B = range [A B]

• nullspace A ∩ nullspace B = nullspace [A; B] (A stacked above B)

0.4.2 Rank equalities

• If A ∈ Mm,n(C), then: rank A^H = rank A^T = rank Ā = rank A

• If A ∈ Mm(F) and C ∈ Mn(F) are nonsingular and B ∈ Mm,n(F), then:

  rank AB = rank B = rank BC = rank ABC

• If A, B ∈ Mm,n(F), then rank A = rank B ⇔ there exist a nonsingular X ∈ Mm(F) and a nonsingular Y ∈ Mn(F) such that B = XAY

• If A ∈ Mm,n(C), then: rank A^H A = rank A
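A quick numerical sketch (not part of the original notes; the example matrix is arbitrary) of the unique decomposition A = H(A) + iK(A), with H(A) = (A + A^H)/2 and K(A) = (A − A^H)/(2i) both Hermitian:

```python
def conj_transpose(A):
    """Conjugate transpose of a square matrix given as a list of lists."""
    n = len(A)
    return [[A[i][j].conjugate() for i in range(n)] for j in range(n)]

def mat_scale(c, A):
    return [[c * x for x in row] for row in A]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def is_hermitian(A, tol=1e-12):
    Ah = conj_transpose(A)
    return all(abs(A[i][j] - Ah[i][j]) < tol
               for i in range(len(A)) for j in range(len(A)))

A = [[1 + 2j, 3 - 1j],
     [0 + 4j, 5 + 0j]]
Ah = conj_transpose(A)
H = mat_scale(0.5, mat_add(A, Ah))                    # H(A) = (A + A^H)/2
K = mat_scale(1 / 2j, mat_add(A, mat_scale(-1, Ah)))  # K(A) = (A - A^H)/(2i)

assert is_hermitian(H) and is_hermitian(K)
recovered = mat_add(H, mat_scale(1j, K))              # A = H(A) + iK(A)
assert all(abs(A[i][j] - recovered[i][j]) < 1e-12 for i in range(2) for j in range(2))
```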
• If A ∈ Mm,n(F), then rank A = k ⇔ A = XY^T for some X ∈ Mm,k(F) and Y ∈ Mn,k(F) that each have independent columns

• If A ∈ Mm,n(F), then rank A = k ⇔ there exist nonsingular matrices S ∈ Mm(F) and T ∈ Mn(F) such that

  A = S [ I_k  0 ]
        [ 0    0 ] T

• Let A ∈ Mm,n(F). If X ∈ Mn,k(F) and Y ∈ Mm,k(F) and if W = Y^T AX is nonsingular, then:

  rank(A − AXW^{-1}Y^T A) = rank A − rank AXW^{-1}Y^T A

• When k = 1 (Wedderburn's rank-one reduction formula): If x ∈ F^n and y ∈ F^m, and if ω = y^T Ax ≠ 0, then:

  rank(A − ω^{-1} Axy^T A) = rank A − 1

  Conversely, if σ ∈ F, u ∈ F^m, v ∈ F^n, and rank(A − σuv^T) < rank A, then: rank(A − σuv^T) = rank A − 1 and there are x ∈ F^n, y ∈ F^m such that u = Ax, v = A^T y, y^T Ax ≠ 0, and σ = (y^T Ax)^{-1}

0.7.2 The Sherman–Morrison–Woodbury formula

Let A ∈ Mn(F) be nonsingular and B = A + XRY, X ∈ Mn,r(F), Y ∈ Mr,n(F), R ∈ Mr,r(F). If B and R^{-1} + YA^{-1}X are nonsingular, then:

  B^{-1} = A^{-1} − A^{-1}X(R^{-1} + YA^{-1}X)^{-1}YA^{-1}

If r ≪ n, then R and R^{-1} + YA^{-1}X may be much easier to invert than B. If x, y ∈ F^n are nonzero vectors, X = x, Y = y^T, 1 + y^T A^{-1}x ≠ 0, and R = [1], then:

  (A + xy^T)^{-1} = A^{-1} − (1 + y^T A^{-1}x)^{-1} A^{-1}xy^T A^{-1}

In particular, if B = I + xy^T for x, y ∈ F^n and y^T x ≠ −1, then

  B^{-1} = I − (1 + y^T x)^{-1} xy^T

0.7.3 Complementary nullities

Let A ∈ Mn(F) be nonsingular. The law of complementary nullities is:

  nullity(A[α, β]) = nullity(A^{-1}[β^c, α^c])

which is equivalent to the rank identity:

  rank(A[α, β]) = rank(A^{-1}[β^c, α^c]) + r + s − n,   r = |α|, s = |β|
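The rank-one (R = [1]) case of the Sherman–Morrison–Woodbury formula can be spot-checked exactly with rational arithmetic; this sketch is not from the notes, and the 2-by-2 matrix and vectors are arbitrary examples:

```python
from fractions import Fraction as F

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[F(2), F(1)], [F(1), F(3)]]
x = [F(1), F(2)]
y = [F(3), F(1)]

Ainv = inv2(A)
# y^T A^{-1} x
yAx = sum(y[i] * sum(Ainv[i][j] * x[j] for j in range(2)) for i in range(2))
assert 1 + yAx != 0                       # Sherman-Morrison hypothesis

# Right-hand side: A^{-1} - (1 + y^T A^{-1} x)^{-1} A^{-1} x y^T A^{-1}
Ax = [sum(Ainv[i][j] * x[j] for j in range(2)) for i in range(2)]
yA = [sum(y[i] * Ainv[i][j] for i in range(2)) for j in range(2)]
rhs = [[Ainv[i][j] - Ax[i] * yA[j] / (1 + yAx) for j in range(2)] for i in range(2)]

# Left-hand side: (A + x y^T)^{-1}, computed directly
B = [[A[i][j] + x[i] * y[j] for j in range(2)] for i in range(2)]
assert inv2(B) == rhs
```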
• adj(AB) = (adj A)(adj B) for all A, B ∈ Mn

• A product of square triangular matrices of the same size and type is a triangular matrix of the same type; each i, i diagonal entry of such a matrix product is the product of the i, i entries of the factors.

0.8.4 Determinantal identities of Sylvester and Kronecker

For  ∈ Mn, x, y ∈ F^n, and a scalar a:

• If a ≠ 0, then a det(Â − a^{-1}xy^T) = a det  − y^T(adj Â)x

• Taking a = −1 gives det(Â + xy^T) = det  + y^T(adj Â)x
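The a = −1 adjugate identity det(A + xy^T) = det A + y^T(adj A)x is easy to verify in the 2-by-2 case, where adj([[a,b],[c,d]]) = [[d,−b],[−c,a]]. A small check, not from the notes, with arbitrary integer data:

```python
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def adj2(A):
    """Adjugate of a 2x2 matrix."""
    (a, b), (c, d) = A
    return [[d, -b], [-c, a]]

A = [[4, 1], [2, 3]]
x = [1, -2]
y = [5, 1]

B = [[A[i][j] + x[i] * y[j] for j in range(2)] for i in range(2)]
# y^T (adj A) x
yadjx = sum(y[i] * sum(adj2(A)[i][j] * x[j] for j in range(2)) for i in range(2))
assert det2(B) == det2(A) + yadjx
```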
0.9.3 Permutation matrices

• A square matrix P is a permutation matrix if exactly one entry in each row and column is equal to 1 and all other entries are 0.

• P^T = P^{-1} and det P = ±1

• The product of two permutation matrices is again a permutation matrix.

• A matrix A ∈ Mn such that PAP^T is triangular for some permutation matrix P is called essentially triangular.

0.9.7 Hessenberg matrices

• A matrix A = [aij] ∈ Mn(F) is said to be in upper Hessenberg form, or to be an upper Hessenberg matrix, if aij = 0 for all i > j + 1:

      [ a11  *    ...       *   ]
  A = [ a21  a22  ...       *   ]
      [      a32  ...           ]
      [  0   ...  an,n−1   ann  ]
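The permutation-matrix facts above (P^T = P^{-1}, and products of permutation matrices are permutation matrices) can be checked exhaustively for n = 3; this brute-force sketch is not from the notes:

```python
from itertools import permutations

def perm_matrix(p):
    """Permutation matrix P with P[i][p[i]] = 1."""
    n = len(p)
    return [[1 if j == p[i] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(r) for r in zip(*A)]

I3 = perm_matrix((0, 1, 2))
for p in permutations(range(3)):
    P = perm_matrix(p)
    # P^T P = I, i.e. P^T = P^{-1}
    assert matmul(transpose(P), P) == I3
    for q in permutations(range(3)):
        PQ = matmul(P, perm_matrix(q))
        # product of two permutation matrices is again a permutation matrix
        assert all(sum(row) == 1 for row in PQ)
        assert all(sum(col) == 1 for col in zip(*PQ))
```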
0.9.9 Vandermonde matrices and Lagrange interpolation

• A Vandermonde matrix A ∈ Mn(F) has the form

      [ 1  x1  x1^2  ...  x1^{n−1} ]
  A = [ 1  x2  x2^2  ...  x2^{n−1} ]
      [ .   .    .    .      .     ]
      [ 1  xn  xn^2  ...  xn^{n−1} ]

• det A = ∏_{1≤i<j≤n} (xj − xi)

0.9.10 Cauchy matrices

• A Cauchy matrix A ∈ Mn(F) is a matrix of the form A = [(ai + bj)^{-1}]_{i,j=1}^n, in which a1, ..., an, b1, ..., bn are scalars such that ai + bj ≠ 0 for all i, j = 1, ..., n.

• det A = ∏_{1≤i<j≤n} (aj − ai)(bj − bi) / ∏_{i,j=1}^n (ai + bj)

• A Hilbert matrix Hn = [(i + j − 1)^{-1}]_{i,j=1}^n is a Cauchy matrix that is also a Hankel matrix.

  det Hn = (1!2!···(n−1)!)^4 / (1!2!···(2n−1)!)

• So a Hilbert matrix is always nonsingular. The entries of its inverse Hn^{-1} = [hij]_{i,j=1}^n are:

  hij = (−1)^{i+j} (n+i−1)!(n+j−1)! / [((i−1)!(j−1)!)^2 (n−i)!(n−j)!(i+j−1)]

0.10 Change of basis

0.11 Equivalence relations

  Equivalence relation ∼     A ∼ B
  congruence                 A = SBS^T
  unitary congruence         A = UBU^T
  *congruence                A = SBS^H
  consimilarity              A = SBS̄^{-1}
  equivalence                A = SBT
  unitary equivalence        A = UBV
  diagonal equivalence       A = D1BD2
  similarity                 A = SBS^{-1}
  unitary similarity         A = UBU^H
  triangular equivalence     A = LBR

in which:

• D1, D2, S, T, L and R are square and nonsingular.

• U and V are unitary

• L is lower triangular

• R is upper triangular

• D1 and D2 are diagonal

• A and B need not be square for equivalence, unitary equivalence, triangular equivalence, or diagonal equivalence.
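The Vandermonde and Hilbert determinant formulas in 0.9.9–0.9.10, and the Hilbert inverse-entry formula, can be spot-checked exactly with rational arithmetic. This sketch is not from the notes; the nodes 1, 2, 5, 7 and n = 4 are arbitrary choices:

```python
from fractions import Fraction as F
from math import factorial

def det(M):
    """Determinant by cofactor expansion (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

# Vandermonde: det = prod_{i<j} (x_j - x_i)
xs = [F(1), F(2), F(5), F(7)]
n = len(xs)
V = [[x ** k for k in range(n)] for x in xs]
prod = F(1)
for i in range(n):
    for j in range(i + 1, n):
        prod *= xs[j] - xs[i]
assert det(V) == prod

# Hilbert: det H_n = (1! 2! ... (n-1)!)^4 / (1! 2! ... (2n-1)!)
H = [[F(1, i + j - 1) for j in range(1, n + 1)] for i in range(1, n + 1)]
num = 1
for k in range(1, n):
    num *= factorial(k)
den = 1
for k in range(1, 2 * n):
    den *= factorial(k)
assert det(H) == F(num ** 4, den)

# Inverse entries, with (i + j - 1) in the denominator
detH = det(H)
for i in range(1, n + 1):
    for j in range(1, n + 1):
        # cofactor formula for the (i, j) entry of H^{-1}
        minor = [row[:i - 1] + row[i:] for k, row in enumerate(H) if k != j - 1]
        hinv = (-1) ** (i + j) * det(minor) / detH
        formula = F((-1) ** (i + j) * factorial(n + i - 1) * factorial(n + j - 1),
                    (factorial(i - 1) * factorial(j - 1)) ** 2
                    * factorial(n - i) * factorial(n - j) * (i + j - 1))
        assert hinv == formula
```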
0.9.11 Involution, nilpotent, projection, coninvolution

A matrix A ∈ Mn(F) is

• an involution if A^2 = I

• nilpotent if A^k = 0 for some k ∈ N*; the least such k is the index of nilpotence of A

• a projection if A^2 = A

• a coninvolution if AĀ = I

1 Eigenvalues, Eigenvectors and Similarity

1.1 The eigenvalue-eigenvector equation

• A ∈ Mn, p(t) a given polynomial. If Ax = λx, then p(A)x = p(λ)x

• Theorem 1.1.6 Let A ∈ Mn and let p(t) be a given polynomial of degree k. If Ax = λx, then p(A)x = p(λ)x. Conversely, if k ≥ 1 and µ is an eigenvalue of p(A), then there is a λ ∈ σ(A) such that µ = p(λ)
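A tiny check of the forward direction of Theorem 1.1.6 (not from the notes; the triangular matrix, eigenpair, and polynomial p(t) = t^2 − 3t + 2 are arbitrary):

```python
# A is upper triangular, so (lam, x) = (2, e1) is an eigenpair we can read off
A = [[2.0, 1.0],
     [0.0, 5.0]]
lam = 2.0
x = [1.0, 0.0]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

# p(A)x = A(Ax) - 3 Ax + 2 x, for p(t) = t^2 - 3t + 2
Ax = matvec(A, x)
AAx = matvec(A, Ax)
pAx = [AAx[i] - 3 * Ax[i] + 2 * x[i] for i in range(2)]
p_lam = lam ** 2 - 3 * lam + 2
assert pAx == [p_lam * x[0], p_lam * x[1]]
```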
1.2 The characteristic polynomial and algebraic multiplicity

• Theorem 1.2.8 (Brauer's theorem). Let x, y ∈ C^n, x ≠ 0, and A ∈ Mn. Suppose that Ax = λx and let the eigenvalues of A be λ, λ2, ..., λn. Then the eigenvalues of A + xy^H are λ + y^H x, λ2, ..., λn. In other words,

  (t − λ) p_{A+xy^H}(t) = (t − (λ + y^H x)) p_A(t)

• Let a_k be the coefficient of t^k in p_A(t), and let E_k(A) be the sum of all k-by-k principal minors of A. Then

  a_k = (1/k!) p_A^{(k)}(0) = (−1)^{n−k} E_{n−k}(A)

• E_k(A) = S_k(λ1, ..., λn) = Σ_{1≤i1<···<ik≤n} ∏_{j=1}^k λ_{ij},  λi ∈ σ(A)

  p_A(t) = t^n − E1(A)t^{n−1} + · · · + (−1)^{n−1} E_{n−1}(A)t + (−1)^n E_n(A)

• Theorem 1.2.17 Let A ∈ Mn. There is some δ > 0 such that A + εI is nonsingular whenever ε ∈ C and 0 < |ε| < δ (see Observation 1.1.8)

• α is a zero of p(t) of multiplicity k iff p(α) = p′(α) = · · · = p^{(k−1)}(α) = 0 and p^{(k)}(α) ≠ 0

• Theorem 1.2.18 Let A ∈ Mn and suppose that λ ∈ σ(A) has algebraic multiplicity k. Then rank(A − λI) ≥ n − k, with equality for k = 1.

• Let A ∈ Mn and x, y ∈ C^n be given. Let f(t) = det(A + txy^T). Then, for any t1 ≠ t2,

  det A = (t2 f(t1) − t1 f(t2)) / (t2 − t1)

1.3 Similarity

• Theorem 1.3.7 Let A ∈ Mn be given. Then

  1. A is similar to a block matrix of the form

     [ Λ  C ]
     [ 0  D ],   Λ = diag(λ1, ..., λk), D ∈ Mn−k, 1 ≤ k < n

     iff there are k linearly independent vectors in C^n, each of which is an eigenvector of A.

  2. The matrix A is diagonalizable iff it has n linearly independent eigenvectors.

  3. If x^(1), ..., x^(n) are linearly independent eigenvectors of A and if S = [x^(1), ..., x^(n)], then S^{-1}AS is a diagonal matrix.

  4. If A is similar to a matrix of the above form, then the diagonal entries of Λ are eigenvalues of A. If A is similar to a diagonal matrix Λ, then the diagonal entries of Λ are all of the eigenvalues of A.

• Lemma 1.3.8 Let λ1, ..., λk (k ≥ 2) be distinct eigenvalues of A ∈ Mn and suppose that x^(i) is an eigenvector associated with λi for each i = 1, ..., k. Then the vectors x^(1), ..., x^(k) are linearly independent.

• Theorem 1.3.9 If A ∈ Mn has n distinct eigenvalues, then A is diagonalizable.

• Lemma 1.3.10 Let B1 ∈ Mn1, ..., Bd ∈ Mnd be given and let B be the direct sum

  B = [ B1      0 ]
      [    ...    ] = B1 ⊕ · · · ⊕ Bd
      [ 0      Bd ]

  Then B is diagonalizable iff each of B1, ..., Bd is diagonalizable.

• Theorem 1.3.12 Let A, B ∈ Mn be diagonalizable. Then AB = BA iff they are simultaneously diagonalizable.

• If F ⊆ Mn is a commuting family, then there is an x ∈ C^n that is an eigenvector of every A ∈ F

• If F is a family of diagonalizable matrices, then F is a commuting family ⇔ it is a simultaneously diagonalizable family.

• If A ∈ Mm,n, B ∈ Mn,m, m ≤ n, then p_{BA}(t) = t^{n−m} p_{AB}(t)

• Theorem 1.3.28 Let S ∈ Mn be nonsingular and let S = C + iD, in which C, D ∈ Mn(R). There is a real number τ such that T = C + τD is nonsingular.

• Theorem 1.3.29 Two real matrices that are similar over C are similar over R.

• Theorem 1.3.31 (Mirsky) Let an integer n ≥ 2 and complex scalars λ1, ..., λn and d1, ..., dn be given. There is an A ∈ Mn with eigenvalues λ1, ..., λn and main diagonal entries d1, ..., dn iff Σ_{i=1}^n λi = Σ_{i=1}^n di. If λ1, ..., λn and d1, ..., dn are all real and have the same sums, there is an A ∈ Mn(R) with eigenvalues λ1, ..., λn and main diagonal entries d1, ..., dn.
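The coefficient formulas in 1.2 can be checked numerically: for a 3-by-3 matrix, p_A(t) = t^3 − E1(A)t^2 + E2(A)t − E3(A), where E_k(A) sums the k-by-k principal minors. Comparing det(tI − A) with this polynomial at n + 1 points pins down the degree-n polynomial. A sketch, not from the notes; the matrix is arbitrary:

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

A = [[1, 2, 0],
     [3, -1, 4],
     [0, 5, 2]]
n = len(A)

def E(k):
    """Sum of all k-by-k principal minors of A."""
    return sum(det([[A[i][j] for j in idx] for i in idx])
               for idx in combinations(range(n), k))

# p_A(t) = t^3 - E1 t^2 + E2 t - E3; compare with det(tI - A) at n + 1 points
for t in range(n + 1):
    tI_minus_A = [[(t if i == j else 0) - A[i][j] for j in range(n)]
                  for i in range(n)]
    assert det(tI_minus_A) == t ** 3 - E(1) * t ** 2 + E(2) * t - E(3)
```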
2 Unitary equivalence and normal matrices

2.1 Unitary matrices

• Orthogonal: The vectors x1, ..., xk ∈ C^n form an orthogonal set if x_i^H x_j = 0 for all pairs 1 ≤ i < j ≤ k.

• Orthonormal: If the vectors of an orthogonal set are normalized, x_i^H x_i = 1 for all i = 1, ..., k, then the set is called orthonormal. An orthonormal set of vectors is linearly independent.

• A matrix U ∈ Mn is said to be unitary if U^H U = I. If, in addition, U ∈ Mn(R), U is said to be real orthogonal.

• Theorem: For every x ∈ C^n and unitary U, the Euclidean length of y = Ux is the same as that of x; that is, y^H y = x^H x.

• Theorem: Let A ∈ Mn be a nonsingular matrix. Then A^{-1} is similar to A^H iff there is a nonsingular matrix B ∈ Mn such that A = B^{-1}B^H

2.2 Unitary equivalence

• Def: A matrix B ∈ Mn is said to be unitarily equivalent to A ∈ Mn if there is a unitary matrix U ∈ Mn such that B = U^H AU. If U may be taken to be real, then B is said to be (real) orthogonally equivalent to A.

• Thr: If A = [aij] and B = [bij] ∈ Mn are unitarily equivalent, then:

  Σ_{i,j=1}^n |bij|^2 = Σ_{i,j=1}^n |aij|^2

• Householder transformations: Let w ∈ C^n be a nonzero vector and define Uw ∈ Mn by Uw = I − tww^H, in which t = 2(w^H w)^{-1}. Then:

  ◦ Uw x = x if x ⊥ w, and Uw w = −w

  ◦ Uw is both unitary and Hermitian

• Specht's Thr: Two given matrices A, B ∈ Mn are unitarily equivalent iff tr W(A, A^H) = tr W(B, B^H) for every word W(s, t) in two noncommuting variables, where

  W(A, A^H) = A^{m1}(A^H)^{n1} A^{m2}(A^H)^{n2} · · · A^{mk}(A^H)^{nk}

• Pearcy's Thr: Two given matrices A, B ∈ Mn are unitarily equivalent iff tr W(A, A^H) = tr W(B, B^H) for every word W(s, t) of degree at most 2n^2

2.3 Schur's unitary triangularization theorem

• Schur's Thr: Given A ∈ Mn with eigenvalues λ1, ..., λn in any prescribed order, there is a unitary matrix U ∈ Mn such that

  U^H AU = T = [tij]

  is upper triangular, with diagonal entries tii = λi, i = 1, ..., n. That is, every square matrix A is unitarily equivalent to a triangular matrix whose diagonal entries are the eigenvalues of A in a prescribed order.

• Thr: Let F ⊆ Mn be a commuting family. There is a unitary matrix U ∈ Mn such that U^H AU is upper triangular for every A ∈ F

• Thr: If A ∈ Mn(R), there is a real orthogonal matrix Q ∈ Mn(R) such that:

           [ A1  *       *  ]
  Q^T AQ = [     A2         ] ∈ Mn(R),   1 ≤ k ≤ n   (13)
           [        ...     ]
           [ 0           Ak ]

  where each Ai is a real 1-by-1 matrix, or a real 2-by-2 matrix with a non-real pair of complex conjugate eigenvalues. The diagonal blocks Ai may be arranged in any prescribed order.

• Thr: Let F ⊆ Mn(R) be a commuting family. There is a real orthogonal matrix Q ∈ Mn(R) such that Q^T AQ is of the form (13) for every A ∈ F.

2.4 Some implications of Schur's theorem

• Lem: Suppose that R = [rij], T = [tij] ∈ Mn are upper triangular, that rij = 0 for 1 ≤ i, j ≤ k < n, and that t_{k+1,k+1} = 0. Let T′ = [t′ij] = RT. Then t′ij = 0 for 1 ≤ i, j ≤ k + 1.

• Cayley–Hamilton Thr: Let p_A(t) be the characteristic polynomial of A ∈ Mn. Then p_A(A) = 0.

• Thr: Let A = [aij] ∈ Mn. For every ε > 0, there exists a matrix A(ε) = [aij(ε)] ∈ Mn that has n distinct eigenvalues (and is therefore diagonalizable) and is such that:

  Σ_{i,j=1}^n |aij − aij(ε)|^2 < ε

• Thr: Let A ∈ Mn. For every ε > 0, there exists a nonsingular matrix S ∈ Mn such that

  S^{-1}AS = T(ε) = [tij(ε)]

  is upper triangular and |tij(ε)| < ε for all 1 ≤ i < j ≤ n
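The Cayley–Hamilton theorem is easy to check directly for a 2-by-2 matrix, where p_A(t) = t^2 − (tr A)t + det A. A quick sketch, not from the notes, with an arbitrary integer matrix:

```python
A = [[3, 1],
     [4, 2]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# p_A(A) = A^2 - (tr A) A + (det A) I should be the zero matrix
A2 = matmul(A, A)
pA = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
       for j in range(2)] for i in range(2)]
assert pA == [[0, 0], [0, 0]]
```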
• Thr: Suppose that A ∈ Mn has eigenvalues λi with multiplicity ni, i = 1, ..., k, and that λ1, ..., λk are distinct. Then A is similar to a matrix of the form

  [ T1          0  ]
  [     T2         ]
  [         ...    ]
  [ 0           Tk ]

  where Ti ∈ Mni is upper triangular with all diagonal entries equal to λi, i = 1, ..., k. If A ∈ Mn(R) and all the eigenvalues of A are real, then the same result holds, and the similarity matrix may be taken to be real.

• Thr: Let A, B ∈ Mn have eigenvalues α1, ..., αn and β1, ..., βn respectively. If A and B commute, there is a permutation i1, ..., in of the indices 1, ..., n such that the eigenvalues of A + B are α1 + β_{i1}, ..., αn + β_{in}. In particular, σ(A + B) ⊆ σ(A) + σ(B) if A and B commute.

2.5 Normal matrices

• Def: A matrix A ∈ Mn is said to be normal if A^H A = AA^H.

• Thm: If A = [aij] ∈ Mn has eigenvalues λ1, ..., λn, the following statements are equivalent:

  1. A is normal;
  2. A is unitarily diagonalizable;
  3. Σ_{i,j=1}^n |aij|^2 = Σ_{i=1}^n |λi|^2; and
  4. there is an orthonormal set of n eigenvectors of A.

• A normal matrix is nondefective (the geometric and algebraic multiplicities are the same)

• If A ∈ Mn is normal and x ∈ C^n, then Ax = λx ⇔ x^H A = λx^H

• Thm: If N ⊆ Mn is a commuting family of normal matrices, then N is simultaneously unitarily diagonalizable.

• Lem: If A ∈ Mn is Hermitian and x^H Ax ≥ 0 for all x ∈ C^n, then all the eigenvalues of A are nonnegative. If, in addition, tr A = 0, then A = 0.

4 Hermitian and symmetric matrices

• Thm: A ∈ Mn is Hermitian iff any of the following holds:

  1. x^H Ax is real for all x ∈ C^n;
  2. A is normal and all the eigenvalues of A are real; or
  3. S^H AS is Hermitian for all S ∈ Mn

• Thm: Let A ∈ Mn be given. Then A is Hermitian iff there is a unitary matrix U ∈ Mn and a real diagonal matrix Λ ∈ Mn such that A = UΛU^H

4.2 Variational characterizations of eigenvalues of Hermitian matrices

• Thm: Let A ∈ Mn be a Hermitian matrix with eigenvalues λ1 ≤ λ2 ≤ · · · ≤ λn, and let k be a given integer with 1 ≤ k ≤ n. Then

  min_{w1,...,w_{n−k} ∈ C^n}  max_{x≠0, x⊥w1,...,w_{n−k}}  (x^H Ax)/(x^H x) = λk

  max_{w1,...,w_{k−1} ∈ C^n}  min_{x≠0, x⊥w1,...,w_{k−1}}  (x^H Ax)/(x^H x) = λk

4.3 Some apps of the variational characterizations

• Thm: Let A, B ∈ Mn be Hermitian. Let the respective eigenvalues of A, B, and A + B be {λ1(A) ≤ · · · ≤ λn(A)}, {λ1(B) ≤ · · · ≤ λn(B)}, and {λ1(A + B) ≤ · · · ≤ λn(A + B)}. Then:

  λi(A + B) ≤ λ_{i+j}(A) + λ_{n−j}(B),   i = 1, ..., n; j = 0, 1, ..., n − i

• Thm: Let A, B ∈ Mn be Hermitian and let the eigenvalues λi(A), λi(B), and λi(A + B) be arranged in increasing order. For each k = 1, 2, ..., n, we have:

  λk(A) + λ1(B) ≤ λk(A + B) ≤ λk(A) + λn(B)

• Corollary: Let A, B ∈ Mn be Hermitian. Assume that B is positive semidefinite and that the eigenvalues of A and A + B are arranged in increasing order. Then: λk(A) ≤ λk(A + B) for all k = 1, 2, ..., n

• Thm 4.3.4: Let A ∈ Mn be Hermitian and let z ∈ C^n be a given vector. If the eigenvalues of A and A ± zz^H are arranged in increasing order, we have:

  1. λk(A ± zz^H) ≤ λ_{k+1}(A) ≤ λ_{k+2}(A ± zz^H), k = 1, 2, ..., n − 2
  2. λk(A) ≤ λ_{k+1}(A ± zz^H) ≤ λ_{k+2}(A)

• Let A, B ∈ Mn be Hermitian and suppose that B has rank at most r. Then:
  1. if j + k ≥ n + 1, then λ_{j+k−n}(A + B) ≤ λj(A) + λk(B);
  2. if j + k ≤ n + 1, then λj(A) + λk(B) ≤ λ_{j+k−1}(A + B)

• Thm: Let A be Hermitian and let

  Â = [ A    y ]
      [ y^H  a ]

  Then: λ̂1 ≤ λ1 ≤ λ̂2 ≤ λ2 ≤ · · · ≤ λ̂n ≤ λn ≤ λ̂_{n+1}

• Thm: Let A be Hermitian and let Ar be an r-by-r principal submatrix of A. Then:

  λk(A) ≤ λk(Ar) ≤ λ_{k+n−r}(A),   1 ≤ k ≤ r

• Def: The vector β is said to majorize the vector α if the sum of the k smallest entries of β is at least the sum of the k smallest entries of α, for k = 1, ..., n, with equality for k = n

• Thm: If A is Hermitian, then the vector of diagonal entries of A majorizes the vector of eigenvalues of A.

• Thm: If a1 ≤ · · · ≤ an, λ1 ≤ · · · ≤ λn, and the vector a = [ai] majorizes the vector λ = [λi], then there exists a real symmetric A = [aij] such that aii = ai and {λi} is the set of eigenvalues of A

• Weyl's Thm: Let A, B ∈ Mn be Hermitian. Let the respective eigenvalues of A, B, and A + B be {λ1(A) ≤ · · · ≤ λn(A)}, {λ1(B) ≤ · · · ≤ λn(B)}, and {λ1(A + B) ≤ · · · ≤ λn(A + B)}. Then:

  λ_{i−k+1}(A) + λk(B) ≤ λi(A + B) ≤ λ_{i+j}(A) + λ_{n−j}(B),
  i = 1, ..., n; j = 0, 1, ..., n − i; k = 1, ..., i

  ◦ If B has exactly π positive eigenvalues and µ negative eigenvalues:

    λi(A + B) ≤ λ_{i+π}(A),   i = 1, ..., n − π   (14)
    λ_{i−µ}(A) ≤ λi(A + B),   i = µ + 1, ..., n   (15)

  ◦ If rank B = r < n:

    λi(A + B) ≤ λ_{i+r}(A),   i = 1, ..., n − r   (16)
    λ_{i−r}(A) ≤ λi(A + B),   i = r + 1, ..., n   (17)

  ◦ If 0 ≠ z ∈ C^n:

    λi(A) ≤ λi(A + zz^H) ≤ λ_{i+1}(A),   i = 1, ..., n − 1   (18)
    λn(A) ≤ λn(A + zz^H)   (19)

  ◦ λk(A) + λ1(B) ≤ λk(A + B) ≤ λk(A) + λn(B),   k = 1, ..., n

• Cauchy's Thm: Let B ∈ Mn be Hermitian, let y ∈ C^n and a ∈ R be given, and let

  A = [ B    y ]
      [ y^H  a ] ∈ Mn+1

  Then: λ1(A) ≤ λ1(B) ≤ λ2(A) ≤ · · · ≤ λn(A) ≤ λn(B) ≤ λ_{n+1}(A)

• Thm 4.3.10 If λ̂1 ≤ λ1 ≤ λ̂2 ≤ λ2 ≤ · · · ≤ λ̂n ≤ λn ≤ λ̂_{n+1}, let Λ = diag(λ1, ..., λn). Then there exist a ∈ R and y ∈ R^n such that {λ̂1, ..., λ̂_{n+1}} is the set of eigenvalues of the real symmetric matrix

  [ Λ    y ]
  [ y^T  a ]

• If λ1 ≤ µ1 ≤ λ2 ≤ µ2 ≤ · · · ≤ λn ≤ µn, let Λ = diag(λ1, ..., λn). There exists z ∈ R^n such that σ(Λ + zz^T) = {µ1, ..., µn}

5 Norms for vectors and matrices

5.4 Analytic properties of vector norms

• Let B = {x^(1), ..., x^(n)} be a basis for V. A function f: V → R is said to be a pre-norm if it satisfies:

  a) Positive: f(x) ≥ 0 ∀x; f(x) = 0 ⇔ x = 0
  b) Homogeneous: f(αx) = |α|f(x)
  c) Continuous: f(x(z)) is continuous on F^n, where z = [z1, ..., zn]^T ∈ F^n and x(z) = z1 x^(1) + · · · + zn x^(n)

  A norm is always a pre-norm.

• Let f(·) be a pre-norm on V = R^n or C^n. The function

  f^D(y) = max_{f(x)=1} Re y^H x

  is called the dual norm of f. Also: f^D(y) = max_{f(x)=1} |y^H x|. The dual norm of a pre-norm is always a norm.

  ◦ |y^H x| ≤ f(x)f^D(y)
  ◦ (‖·‖1)^D = ‖·‖∞; (‖·‖∞)^D = ‖·‖1
  ◦ (‖·‖2)^D = ‖·‖2
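Returning to 4.3: the bounds λk(A) + λ1(B) ≤ λk(A + B) ≤ λk(A) + λn(B) and the majorization theorem for diagonal entries can be spot-checked on 2-by-2 real symmetric matrices, whose eigenvalues have a closed form. A sketch, not from the notes; the matrices are arbitrary examples:

```python
from math import sqrt, isclose

def eigs2(M):
    """Increasingly ordered eigenvalues of a real symmetric 2x2 matrix."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    m, r = (a + c) / 2, sqrt(((a - c) / 2) ** 2 + b * b)
    return [m - r, m + r]

A = [[2.0, 1.0], [1.0, -1.0]]
B = [[0.5, -2.0], [-2.0, 3.0]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
lamA, lamB, lamS = eigs2(A), eigs2(B), eigs2(S)

# lambda_k(A) + lambda_1(B) <= lambda_k(A+B) <= lambda_k(A) + lambda_n(B)
for k in range(2):
    assert lamA[k] + lamB[0] <= lamS[k] <= lamA[k] + lamB[1]

# Majorization, in the convention of these notes: partial sums of the k
# smallest diagonal entries dominate those of the eigenvalues; equality
# at k = n (both sums equal tr A)
C = [[4.0, 2.0], [2.0, 1.0]]
diag = sorted([C[0][0], C[1][1]])
lam = eigs2(C)
assert diag[0] >= lam[0]
assert isclose(diag[0] + diag[1], lam[0] + lam[1])
```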
  ◦ ((‖·‖)^D)^D = ‖·‖

• Let x ∈ C^n be a given vector and let ‖·‖ be a given vector norm on C^n. The set

  {y ∈ C^n : ‖y‖^D ‖x‖ = y^H x = 1}

  is said to be the dual of x with respect to ‖·‖

5.5 Matrix norms

• We call a function ‖·‖: Mn → R a matrix norm if, for all A, B ∈ Mn, it satisfies the following five axioms:

  (1) ‖A‖ ≥ 0 — nonnegative
  (1a) ‖A‖ = 0 if and only if A = 0 — positive
  (2) ‖cA‖ = |c|‖A‖, ∀c ∈ C — homogeneous
  (3) ‖A + B‖ ≤ ‖A‖ + ‖B‖ — triangle inequality
  (4) ‖AB‖ ≤ ‖A‖‖B‖ — submultiplicative

• Let ‖·‖ be a vector norm on C^n. Define ‖·‖ on Mn by:

  ‖A‖ = max_{‖x‖=1} ‖Ax‖

  Then ‖·‖ is a matrix norm: the matrix norm induced by the vector norm ‖·‖.

• The maximum column sum matrix norm ‖·‖1:

  ‖A‖1 = max_{1≤j≤n} Σ_{i=1}^n |aij|

• The maximum row sum matrix norm ‖·‖∞:

  ‖A‖∞ = max_{1≤i≤n} Σ_{j=1}^n |aij|

• The spectral norm ‖·‖2 is defined on Mn by:

  ‖A‖2 = max{√λ : λ is an eigenvalue of A^H A}

• The spectral radius ρ(A) of a matrix A ∈ Mn is:

  ρ(A) = max{|λ| : λ is an eigenvalue of A}

• If ‖·‖ is any matrix norm, then ρ(A) ≤ ‖A‖.

• Let A ∈ Mn and ε > 0 be given. There is a matrix norm ‖·‖ such that:

  ρ(A) ≤ ‖A‖ ≤ ρ(A) + ε

• If A ∈ Mn, then the series Σ_{k=0}^∞ a_k A^k converges if there is a matrix norm ‖·‖ on Mn such that the numerical series Σ_{k=0}^∞ |a_k|‖A‖^k converges, or even if the partial sums of this series are bounded.

• Define ‖A‖^H = ‖A^H‖

• A matrix norm ‖·‖ on Mn is a minimal matrix norm if the only matrix norm N(·) on Mn such that N(A) ≤ ‖A‖ for all A ∈ Mn is N(·) = ‖·‖

• Thm: Let ‖·‖ be a given matrix norm on Mn. Then:

  a) ‖·‖ is an induced norm if and only if ‖·‖^H is an induced norm.
  b) If the matrix norm ‖·‖ is induced by the vector norm ‖·‖, then ‖·‖^H is induced by the dual norm ‖·‖^D
  c) The spectral norm ‖·‖2 is the only matrix norm on Mn that is both induced and self-adjoint.

5.7 Vector norms on matrices

• If f is a pre-norm on Mn, then lim_{k→∞} [f(A^k)]^{1/k} exists for all A ∈ Mn and:

  lim_{k→∞} [f(A^k)]^{1/k} = ρ(A)

• For each vector norm G(·) on Mn, there is a finite positive constant c(G) such that c(G)G(·) is a matrix norm on Mn. If ‖·‖ is a matrix norm on Mn and if:

  Cm‖A‖ ≤ G(A) ≤ CM‖A‖ for all A ∈ Mn

  then c(G) ≤ CM/Cm^2

• The vector norm ‖·‖ on C^n is said to be compatible with the vector norm G(·) on Mn if:

  ‖Ax‖ ≤ G(A)‖x‖, ∀x ∈ C^n, A ∈ Mn

• If ‖·‖ is a matrix norm on Mn, then there is some vector norm on C^n that is compatible with it.
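The bound ρ(A) ≤ ‖A‖ can be checked against the maximum column sum and maximum row sum norms; for a real 2-by-2 matrix the spectral radius follows from the characteristic polynomial t^2 − (tr A)t + det A. A sketch, not from the notes, with an arbitrary example matrix:

```python
from math import sqrt

A = [[1.0, -3.0],
     [2.0,  4.0]]

col_sum_norm = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))  # ||A||_1
row_sum_norm = max(sum(abs(A[i][j]) for j in range(2)) for i in range(2))  # ||A||_inf

# spectral radius via the characteristic polynomial t^2 - (tr A) t + det A
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4 * det
if disc >= 0:
    rho = max(abs((tr + sqrt(disc)) / 2), abs((tr - sqrt(disc)) / 2))
else:
    # complex conjugate pair for real A: |lambda|^2 = det
    rho = sqrt(det)

assert rho <= col_sum_norm and rho <= row_sum_norm
```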