LECTURE NOTE Matrix
Kalyanmoy Chatterjee
Abstract
$$a_1 x_1 + a_2 x_2 + a_3 x_3 = 0$$
$$b_1 x_1 + b_2 x_2 + b_3 x_3 = 0$$
$$c_1 x_1 + c_2 x_2 + c_3 x_3 = 0 \tag{1}$$

or, in matrix form,

$$MX = 0 \tag{2}$$

and

$$a^T x = 0; \quad b^T x = 0; \quad c^T x = 0 \tag{3}$$

where

$$M = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}$$

and

$$a = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}; \quad b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}; \quad c = \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}; \quad x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$$
The three vector equations in Eqn. 3 have the geometrical interpretation that x is orthogonal to a, b, and c. The volume spanned by a, b, and c is given by the scalar triple product

$$(a \times b) \cdot c = \det(a, b, c) = \det\begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix} \tag{4}$$

If a, b, and c are not coplanar, a non-zero x cannot be perpendicular to all of them at the same time. If they are coplanar, the volume spanned by them is zero. Therefore the condition for a nontrivial solution of Eqn. 1 is that the determinant in Eqn. 4 should be zero.
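As a quick numerical illustration (a NumPy sketch with made-up vectors, not part of the original notes): when c lies in the plane of a and b, the determinant in Eqn. 4 vanishes, and a nontrivial x orthogonal to all three exists.

```python
import numpy as np

# Hypothetical coplanar example: c is a linear combination of a and b
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = 2.0 * a - b                 # lies in the plane spanned by a and b

M = np.vstack([a, b, c])
det = np.linalg.det(M)          # vanishes because the rows are coplanar

# A nontrivial solution of MX = 0: any vector orthogonal to both a and b
x = np.cross(a, b)
solves = np.allclose(M @ x, 0.0)
```

Here `x` is orthogonal to a and b by construction of the cross product, and to c automatically, since c lies in their plane.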
Guides and tutorials Lecture Note: Matrix
Orthogonal Matrix Real matrices for which the transpose is also the inverse:

$$S^{-1} = S^T, \quad \text{or} \quad SS^T = I$$
Unitary Matrix Matrices for which the adjoint is also the inverse. Such matrices are
identified as unitary matrices. One way of expressing this relationship is
UU† = U† U = I or U† = U−1
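As a numerical check (a NumPy sketch; the particular matrix is a made-up example), a rotation multiplied by a unit-modulus phase is unitary, and its adjoint coincides with its inverse:

```python
import numpy as np

# Hypothetical unitary matrix: a real rotation times a complex phase of unit modulus
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]) * np.exp(0.5j)

Udag = U.conj().T               # the adjoint (conjugate transpose)
I2 = np.eye(2)

ok1 = np.allclose(U @ Udag, I2)              # U U† = I
ok2 = np.allclose(Udag @ U, I2)              # U† U = I
ok3 = np.allclose(Udag, np.linalg.inv(U))    # U† = U^{-1}
```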
3 Eigenvalue spectrum
3.1 Characteristic Equation and Eigenvalues
If M is an (n × n) matrix, its characteristic equation is

$$\det(M - \lambda I) = 0, \quad \text{i.e.} \quad \lambda^n + c_1 \lambda^{n-1} + c_2 \lambda^{n-2} + \dots + c_n = 0 \tag{5}$$

This equation has n roots in the complex plane. Therefore we can write

$$(\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_n) = 0$$
Some of the roots may be repeated. The set of eigenvalues is the spectrum of the matrix
M. (More generally, the spectrum of an operator is the set of its eigenvalues.)
The coefficients ci in the characteristic equation are determined by the elements of the
matrix M. It follows from elementary algebra that
$$\lambda_1 + \lambda_2 + \lambda_3 + \dots + \lambda_n = -c_1$$
$$\lambda_1\lambda_2 + \lambda_1\lambda_3 + \dots + \lambda_{n-1}\lambda_n = c_2$$
$$\vdots$$
$$\lambda_1 \lambda_2 \lambda_3 \cdots \lambda_n = (-1)^n c_n$$
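The first and last of these relations can be verified numerically (a NumPy sketch with a made-up matrix; `np.poly` returns the monic characteristic polynomial coefficients computed from the eigenvalues):

```python
import numpy as np

# Hypothetical test matrix
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
n = M.shape[0]

# Coefficients [1, c1, c2, ..., cn] of lambda^n + c1*lambda^(n-1) + ... + cn
coeffs = np.poly(M)
c1, cn = coeffs[1], coeffs[-1]

lam = np.linalg.eigvals(M)

sum_ok  = np.isclose(lam.sum(), -c1)            # lambda_1 + ... + lambda_n = -c1
prod_ok = np.isclose(lam.prod(), (-1)**n * cn)  # lambda_1 * ... * lambda_n = (-1)^n cn
```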
3.2 Eigenvectors
Suppose X and M are two matrices of the form
$$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}; \quad M = \begin{pmatrix} m_{11} & m_{12} & \dots & m_{1n} \\ m_{21} & m_{22} & \dots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1} & m_{n2} & \dots & m_{nn} \end{pmatrix}$$
Then the linear transformation Y = MX transforms X into Y. If the matrix M transforms X into itself, or into a vector parallel to itself, we can write

$$MX = \lambda X, \quad \text{or} \quad (M - \lambda I)X = 0 \tag{8}$$

A nontrivial solution requires

$$\det(M - \lambda I) = 0$$

which is the characteristic equation 5. In such a case the vector X is known as an eigenvector (or latent vector) of the matrix M.
1. Any square matrix M and its transpose M^T have the same eigenvalues.
2. The eigenvalues of a triangular matrix are just the diagonal elements of that matrix
Cor. eigenvalues of a diagonal matrix are just the diagonal elements of that matrix.
4. Sum of the eigenvalues of a matrix is the sum of the diagonal elements of that matrix.
8. If λi are the eigenvalues of a matrix M, then λi^n are the eigenvalues of the matrix M^n,
where n is a positive integer.
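Several of the properties above can be checked directly (a NumPy sketch; the matrices are arbitrary made-up examples):

```python
import numpy as np

# Hypothetical test matrix for properties 1, 4, and 8 above
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
lam = np.sort(np.linalg.eigvals(M))

# 1. M and its transpose have the same eigenvalues
transpose_ok = np.allclose(lam, np.sort(np.linalg.eigvals(M.T)))

# 2. The eigenvalues of a triangular matrix are its diagonal elements
T = np.array([[5.0, 7.0],
              [0.0, 2.0]])
triangular_ok = np.allclose(np.sort(np.linalg.eigvals(T)), np.sort(np.diag(T)))

# 4. The sum of the eigenvalues equals the sum of the diagonal elements (the trace)
trace_ok = np.isclose(lam.sum(), np.trace(M))

# 8. The eigenvalues of M^n are lam_i^n (here n = 3)
power_ok = np.allclose(np.sort(np.linalg.eigvals(np.linalg.matrix_power(M, 3))),
                       np.sort(lam**3))
```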
A natural question that arises is the following: Suppose we find, quite independently of
a knowledge of its eigenvalues, that an (n × n) matrix M satisfies an nth order polynomial
equation of the form
$$M^n + b_1 M^{n-1} + b_2 M^{n-2} + \dots + b_n I = 0. \tag{10}$$

Can we then conclude that the polynomial $\lambda^n + b_1 \lambda^{n-1} + b_2 \lambda^{n-2} + \dots + b_{n-1}\lambda + b_n$ must
necessarily be the characteristic polynomial PM (λ) of the matrix M? The answer is NO!
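A concrete counterexample (a NumPy sketch, not from the original notes): the 2 × 2 identity matrix satisfies the quadratic M² − 3M + 2I = 0, yet its characteristic polynomial is (λ − 1)² = λ² − 2λ + 1.

```python
import numpy as np

I2 = np.eye(2)

# I2 satisfies M^2 - 3M + 2I = 0, since (M - I)(M - 2I) = 0 when M = I ...
Z = I2 @ I2 - 3 * I2 + 2 * np.eye(2)
annihilates = np.allclose(Z, 0.0)

# ... but lambda^2 - 3*lambda + 2 is NOT the characteristic polynomial of I2,
# which is lambda^2 - 2*lambda + 1
char_coeffs = np.poly(I2)
is_char = np.allclose(char_coeffs, [1.0, -3.0, 2.0])
```

In general, any polynomial divisible by the minimal polynomial of M annihilates M, so the annihilating polynomial need not be the characteristic one.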
3.5 Degeneracy
If the characteristic equation has a multiple or repeated root, the eigensystem is said to be degenerate, or to exhibit degeneracy. On the other hand, a simple root of the characteristic equation corresponds to a nondegenerate eigenvalue.
$$Hc_i = \lambda_i c_i$$
$$\text{or,} \quad c_j^\dagger H c_i = \lambda_i c_j^\dagger c_i \qquad \text{multiplying both sides by } c_j^\dagger$$
$$\text{or,} \quad (c_j^\dagger H c_i)^\dagger = (\lambda_i c_j^\dagger c_i)^\dagger$$
$$\text{or,} \quad (H c_i)^\dagger c_j = \lambda_i^* c_i^\dagger c_j \qquad \text{using } (AB)^\dagger = B^\dagger A^\dagger$$
$$\text{or,} \quad c_i^\dagger H^\dagger c_j = \lambda_i^* c_i^\dagger c_j \qquad \text{using } (AB)^\dagger = B^\dagger A^\dagger$$
$$\text{or,} \quad c_i^\dagger H c_j = \lambda_i^* c_i^\dagger c_j \qquad \text{using } H^\dagger = H$$
$$\text{or,} \quad \lambda_j c_i^\dagger c_j = \lambda_i^* c_i^\dagger c_j \qquad \text{since } Hc_j = \lambda_j c_j$$
$$\text{or,} \quad (\lambda_j - \lambda_i^*) c_i^\dagger c_j = 0 \tag{11}$$
Equation 11 has further implications. Setting j = i gives $(\lambda_i - \lambda_i^*) c_i^\dagger c_i = 0$; since $c_i^\dagger c_i \neq 0$, it follows that $\lambda_i = \lambda_i^*$, i.e. the eigenvalues of a hermitian matrix are real. Keeping this in mind, consider $i \neq j$. Now, if $\lambda_j \neq \lambda_i$,

$$c_i^\dagger c_j = 0$$

that is, eigenvectors belonging to distinct eigenvalues are orthogonal.
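Both consequences can be illustrated numerically (a NumPy sketch with a made-up Hermitian matrix; `eigh` is NumPy's solver specialized for Hermitian matrices):

```python
import numpy as np

# Hypothetical Hermitian matrix: H equals its own conjugate transpose
H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
hermitian = np.allclose(H, H.conj().T)

lam, C = np.linalg.eigh(H)     # columns of C are the eigenvectors c_i

real_ok = np.allclose(lam.imag, 0.0)                 # eigenvalues are real
orth_ok = np.isclose(C[:, 0].conj() @ C[:, 1], 0.0)  # c_i† c_j = 0 for i != j
```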
$$MX = \lambda X$$
$$\text{or,} \quad M S^{-1} S X = \lambda X$$
$$\text{or,} \quad S M S^{-1} (SX) = \lambda (SX) \qquad \text{multiplying from the left by } S$$
$$\text{or,} \quad M' X' = \lambda X' \qquad \text{where } M' = S M S^{-1},\ X' = SX$$
Our original eigenvalue equation has been converted into one in which M has been replaced by the transformed matrix SMS⁻¹ and the eigenvector X has also been transformed by S, but the value of λ remains unchanged. The transformation M → M′ = SMS⁻¹, or equivalently M′ → M = S⁻¹M′S, is known as a similarity transformation. This gives us an important result: the eigenvalues of a matrix remain unchanged when the matrix is subjected to a similarity transformation.
In case U is unitary, we can write:
M X = λX → U M U † (U X) = λ(U X)
$$\det(M') = \det(S M S^{-1}) = \det(S)\det(M)\det(S^{-1}) = \det(M)$$

$$\operatorname{Tr}(M') = \operatorname{Tr}(S M S^{-1}) = \operatorname{Tr}(M S^{-1} S) = \operatorname{Tr}(M I) = \operatorname{Tr}(M) \qquad \text{using the property } \operatorname{Tr}(AB) = \operatorname{Tr}(BA)$$
As we have seen, similar matrices have the same eigenvalues, so this result is to be expected.
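The invariance of the spectrum, determinant, and trace under a similarity transformation can be checked numerically (a NumPy sketch; M and S are arbitrary made-up matrices, with S chosen invertible):

```python
import numpy as np

# Hypothetical matrices: any M, and an invertible S
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
S = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # det(S) = 3, so S is invertible

Mp = S @ M @ np.linalg.inv(S)    # M' = S M S^{-1}

lam  = np.sort_complex(np.linalg.eigvals(M))
lamp = np.sort_complex(np.linalg.eigvals(Mp))

eig_ok = np.allclose(lam, lamp)                           # spectrum unchanged
det_ok = np.isclose(np.linalg.det(Mp), np.linalg.det(M))  # determinant unchanged
tr_ok  = np.isclose(np.trace(Mp), np.trace(M))            # trace unchanged
```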
5 Diagonalization of a Matrix
A matrix can be brought to diagonal form by a change of basis. The similarity matrix that effects the diagonalization can be constructed by using the normalized eigenvectors as the columns of the S⁻¹ matrix.
5.1 Example
Let the matrix

$$M = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix}$$

have eigenvalues λ1, λ2, λ3 and eigenvectors

$$X_1 = \begin{pmatrix} x_{11} \\ x_{21} \\ x_{31} \end{pmatrix}, \quad X_2 = \begin{pmatrix} x_{12} \\ x_{22} \\ x_{32} \end{pmatrix}, \quad X_3 = \begin{pmatrix} x_{13} \\ x_{23} \\ x_{33} \end{pmatrix}.$$

Next, we construct a matrix S⁻¹ using X1, X2, X3 as its columns, so that

$$S^{-1} = \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix}$$

Now

$$M S^{-1} = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix} \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix} = \begin{pmatrix} \lambda_1 x_{11} & \lambda_2 x_{12} & \lambda_3 x_{13} \\ \lambda_1 x_{21} & \lambda_2 x_{22} & \lambda_3 x_{23} \\ \lambda_1 x_{31} & \lambda_2 x_{32} & \lambda_3 x_{33} \end{pmatrix}$$

since each column satisfies $MX_i = \lambda_i X_i$. Pulling the λ's out column by column (a diagonal factor acting from the right scales the columns),

$$= \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}$$

or, $M S^{-1} = S^{-1} D$
or, $S M S^{-1} = D$

where

$$D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}$$
If the eigenvectors X1, X2, X3 are orthonormal, as in the case of hermitian matrices (3.6), then SS† = I, i.e. S will be unitary, and the matrix M will be diagonalizable by the unitary transformation U M U† = D.
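The construction above can be carried out numerically (a NumPy sketch with a made-up real symmetric matrix, so its normalized eigenvectors are orthonormal):

```python
import numpy as np

# Hypothetical real symmetric (hence hermitian) matrix
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, Sinv = np.linalg.eigh(M)   # columns of Sinv are the normalized eigenvectors
S = Sinv.conj().T               # with orthonormal columns, S^{-1} is unitary: S = (S^{-1})†

D = S @ M @ Sinv                # S M S^{-1} = D

diag_ok = np.allclose(D, np.diag(lam))
unitary_ok = np.allclose(S @ Sinv, np.eye(2))
```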
• And there exist normal matrices that are not in any of these categories.
6 Commutator
It turns out that two matrices can have a common set of eigenvectors if and only if they commute. The proof is simple if the eigenvectors of either A or B are nondegenerate.
Assume that Xi are a set of eigenvectors of both A and B with respective eigenvalues ai
and bi. Then for any i,

$$BA X_i = B a_i X_i = b_i a_i X_i$$
$$AB X_i = A b_i X_i = a_i b_i X_i$$

so $ABX_i = BAX_i$; since the $X_i$ span the space, AB = BA.
For the converse, we assume that A and B commute, that Xi is an eigenvector of A with eigenvalue ai, and that this eigenvector of A is nondegenerate. Then we form

$$A(BX_i) = (AB)X_i = (BA)X_i = B(AX_i) = a_i (BX_i)$$
This equation shows that BXi is also an eigenvector of A with eigenvalue ai . Since the
eigenvector of A was assumed nondegenerate, BXi must be proportional to Xi , meaning
that Xi is also an eigenvector of B.
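The statement can be illustrated numerically (a NumPy sketch; the commuting pair is constructed artificially by making both matrices diagonal in the same hypothetical basis P):

```python
import numpy as np

# Two hypothetical commuting matrices: both diagonalized by the same orthogonal P
P = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)
A = P @ np.diag([1.0, 2.0]) @ P.T    # eigenvalues a_i = 1, 2
B = P @ np.diag([5.0, 7.0]) @ P.T    # eigenvalues b_i = 5, 7

commute = np.allclose(A @ B, B @ A)

# The columns of P are common eigenvectors of A and B
x0 = P[:, 0]
common = np.allclose(A @ x0, 1.0 * x0) and np.allclose(B @ x0, 5.0 * x0)
```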