
Lecture Note: Matrix

Kalyanmoy Chatterjee

Abstract

This note summarizes key concepts related to matrices: homogeneous linear equations and their matrix form; important types of matrices (orthogonal, unitary, and Hermitian); eigenvalue spectra, including characteristic equations, eigenvalues, eigenvectors, and properties of eigenvalues; and the Cayley-Hamilton theorem, which states that every matrix satisfies its own characteristic equation.

1 Homogeneous Linear Equation:


Suppose we have three equations with three unknowns $x_1, x_2, x_3$:

$$a_1 x_1 + a_2 x_2 + a_3 x_3 = 0$$
$$b_1 x_1 + b_2 x_2 + b_3 x_3 = 0$$
$$c_1 x_1 + c_2 x_2 + c_3 x_3 = 0 \qquad (1)$$

Then we can write:

$$M x = 0 \qquad (2)$$
$$\text{and} \quad a^T x = 0;\ \ b^T x = 0;\ \ c^T x = 0 \qquad (3)$$

where

$$M = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}$$

and

$$a = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix};\quad b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix};\quad c = \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix};\quad x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$$

The three vector equations in Eqn. 3 have the geometrical interpretation that $x$ is orthogonal to $a$, $b$, and $c$. The volume spanned by $a$, $b$, and $c$ is given by the scalar triple product

$$(a \times b) \cdot c = \det(a, b, c) = \det \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix} \qquad (4)$$

If $a$, $b$, and $c$ are not coplanar, a non-zero $x$ cannot be perpendicular to all of them at the same time. If they are coplanar, the volume spanned by them is zero. Therefore the condition for a non-trivial solution of Eqn. 1 is that the determinant in Eqn. 4 vanishes.
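This criterion is easy to check numerically. A minimal sketch (the vectors $a$, $b$, $c$ below are hypothetical examples, chosen with $c = a + b$ so that they are coplanar):

```python
import numpy as np

# Hypothetical coefficient vectors; c = a + b makes them coplanar,
# so det(M) = 0 and a nontrivial solution exists.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = a + b

M = np.vstack([a, b, c])   # rows are a^T, b^T, c^T

print(np.linalg.det(M))    # ~0: the determinant in Eqn. 4 vanishes

# A nontrivial solution is any vector orthogonal to all three rows;
# x = a x b is orthogonal to a and b, hence also to c = a + b.
x = np.cross(a, b)
print(M @ x)               # the zero vector: M x = 0 with x != 0
```

Perturbing any single component of $c$ makes the three vectors non-coplanar, and the determinant becomes nonzero: only the trivial solution $x = 0$ remains.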

2 A Few Important Types of Matrices


Orthogonal Matrix A real matrix (one whose elements are real) is termed orthogonal if its transpose is equal to its inverse. Thus, if $S$ is orthogonal, we may write

$$S^{-1} = S^T, \quad \text{or} \quad S S^T = I$$

Unitary Matrix Matrices for which the adjoint is also the inverse are identified as unitary matrices. One way of expressing this relationship is

$$U U^\dagger = U^\dagger U = I \quad \text{or} \quad U^\dagger = U^{-1}$$

Important property: $\det(U) = e^{i\theta}$ and $\det(U^\dagger) = e^{-i\theta}$, i.e. the determinant of a unitary matrix is a complex number of unit modulus.
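These defining relations can be verified numerically. A minimal sketch, using a hypothetical $2 \times 2$ unitary matrix built as a rotation times an overall phase:

```python
import numpy as np

# Hypothetical unitary matrix: a real rotation multiplied by a phase e^{i/2}.
theta = 0.3
U = np.exp(0.5j) * np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])

# U†U = I (the adjoint is the inverse)
print(np.allclose(U.conj().T @ U, np.eye(2)))

# det(U) is a unit-modulus complex number e^{iθ}
print(abs(np.linalg.det(U)))
```

The same check with `U.conj().T` replaced by `U.T` fails for this matrix, since a complex unitary matrix is generally not orthogonal.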


Hermitian Matrix The adjoint of a matrix $A$, denoted $A^\dagger$, is obtained by both complex conjugating and transposing it (the same result is obtained if these operations are performed in either order). Thus,

$$(A^\dagger)_{ij} = a^*_{ji}$$

A matrix is identified as Hermitian, or, synonymously, self-adjoint, if it is equal to its adjoint. To be self-adjoint, a matrix $H$ must be square, and in addition its elements must satisfy

$$(H^\dagger)_{ij} = H_{ij} \quad \rightarrow \quad h^*_{ji} = h_{ij}$$

If $(H^\dagger)_{ij} = -H_{ij}$, $H$ is called anti-Hermitian.

1. The array of elements in a self-adjoint complex matrix exhibits a reflection symmetry about the principal diagonal. As a corollary to this observation, or by direct reference, we see that the diagonal elements of a self-adjoint matrix must be real.
2. If all the elements of a self-adjoint matrix are real, then the condition of self-adjointness causes the matrix also to be symmetric.
3. If two matrices $A$ and $B$ are Hermitian, it is not necessarily true that $AB$ or $BA$ is Hermitian; however, $AB + BA$, if nonzero, is Hermitian, and $AB - BA$, if nonzero, is anti-Hermitian, meaning that $(AB - BA)^\dagger = -(AB - BA)$.
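Point 3 can be illustrated numerically. A minimal sketch, building hypothetical random Hermitian matrices via the standard trick $H = (X + X^\dagger)/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    # (X + X†)/2 is Hermitian for any complex matrix X
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

A, B = random_hermitian(3), random_hermitian(3)
anticomm = A @ B + B @ A   # AB + BA
comm = A @ B - B @ A       # AB - BA

print(np.allclose(anticomm.conj().T, anticomm))   # Hermitian
print(np.allclose(comm.conj().T, -comm))          # anti-Hermitian
print(np.allclose((A @ B).conj().T, A @ B))       # AB alone: generally not Hermitian
```

The proof behind the check is one line: $(AB + BA)^\dagger = B^\dagger A^\dagger + A^\dagger B^\dagger = BA + AB$, while $(AB - BA)^\dagger = BA - AB$.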

3 Eigenvalue spectrum
3.1 Characteristic Equation and Eigenvalues
If $M$ is an $(n \times n)$ matrix such that

$$P_M(\lambda) = \det(\lambda I - M) = 0 \qquad (5)$$

where $P_M(\lambda)$ is a polynomial of order $n$ in $\lambda$, Equation 5 is called the characteristic equation or secular equation of the matrix $M$.
Equation 5 can be written as

$$P_M(\lambda) = \lambda^n + c_1 \lambda^{n-1} + c_2 \lambda^{n-2} + \ldots + c_n = 0 \qquad (6)$$



This equation has $n$ roots in the complex plane. Therefore we can write

$$P_M(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2)\ldots(\lambda - \lambda_n) = 0$$

$$P_M(\lambda) = \prod_{i=1}^{n} (\lambda - \lambda_i) = 0 \qquad (7)$$

Some of the roots may be repeated. The set of eigenvalues is the spectrum of the matrix $M$. (More generally, the spectrum of an operator is the set of its eigenvalues.)
The coefficients $c_i$ in the characteristic equation are determined by the elements of the matrix $M$. It follows from elementary algebra that

$$\lambda_1 + \lambda_2 + \lambda_3 + \ldots + \lambda_n = -c_1$$
$$\lambda_1\lambda_2 + \lambda_1\lambda_3 + \ldots + \lambda_{n-1}\lambda_n = c_2$$
$$\vdots$$
$$\lambda_1\lambda_2\lambda_3\ldots\lambda_n = (-1)^n c_n$$
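These relations between the roots and the coefficients can be checked numerically. A minimal sketch with a hypothetical $3 \times 3$ matrix, using `numpy.poly` to obtain the characteristic-polynomial coefficients $[1, c_1, c_2, c_3]$:

```python
import numpy as np

# Hypothetical test matrix
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals = np.linalg.eigvals(M)
coeffs = np.poly(M)   # coefficients of det(λI - M), highest power first

# λ1 + λ2 + λ3 = -c1 (which also equals Tr(M))
print(eigvals.sum(), -coeffs[1])

# λ1 λ2 λ3 = (-1)^n c_n with n = 3 (which also equals det(M))
print(eigvals.prod(), -coeffs[3])
```

The first relation is the familiar statement that the trace equals the sum of the eigenvalues; the last, that the determinant equals their product.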

3.2 Eigenvectors
Suppose $X$ and $M$ are two matrices of the form

$$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}; \qquad M = \begin{pmatrix} m_{11} & m_{12} & \ldots & m_{1n} \\ m_{21} & m_{22} & \ldots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1} & m_{n2} & \ldots & m_{nn} \end{pmatrix}$$

Then the linear transformation $Y = MX$ transforms $X$ into $Y$. In case the matrix $M$ transforms $X$ into itself, or into a vector parallel to itself, we can write:

$$MX = \lambda X$$
$$\text{or} \quad (M - \lambda I)X = 0 \qquad (8)$$

Equation 8 represents $n$ homogeneous linear equations. In Section 1 we have seen that a non-trivial solution of such equations exists if the determinant of the coefficient matrix vanishes, i.e.

$$\det(M - \lambda I) = 0$$

which is the characteristic equation 5. In that case, the vector $X$ is known as an eigenvector, or latent vector, of the matrix $M$.

3.3 Properties of Eigenvalues


(proved in class)

1. Any square matrix $M$ and its transpose $M^T$ have the same eigenvalues.


2. The eigenvalues of a triangular matrix are just the diagonal elements of that matrix.
   Corollary: the eigenvalues of a diagonal matrix are just its diagonal elements.

3. The eigenvalues of an idempotent matrix are either zero or unity.

4. The sum of the eigenvalues of a matrix is the sum of the diagonal elements (the trace) of that matrix.

5. The product of the eigenvalues of a matrix equals its determinant.

6. If $\lambda$ is an eigenvalue of a matrix $M$, then $1/\lambda$ is an eigenvalue of $M^{-1}$.

7. If $\lambda$ is an eigenvalue of an orthogonal matrix, then $1/\lambda$ is also an eigenvalue of it.

8. If the $\lambda_i$ are the eigenvalues of a matrix $M$, then the $\lambda_i^n$ are the eigenvalues of the matrix $M^n$, where $n$ is a positive integer.
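Several of these properties can be spot-checked numerically. A minimal sketch with a hypothetical invertible $2 \times 2$ matrix:

```python
import numpy as np

# Hypothetical invertible test matrix (eigenvalues 5 and 2)
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

ev = np.sort(np.linalg.eigvals(M))

# Property 1: M and M^T share eigenvalues
print(np.allclose(ev, np.sort(np.linalg.eigvals(M.T))))
# Properties 4 and 5: sum = trace, product = determinant
print(np.isclose(ev.sum(), np.trace(M)), np.isclose(ev.prod(), np.linalg.det(M)))
# Property 6: eigenvalues of M^{-1} are 1/λ
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(M))), np.sort(1 / ev)))
# Property 8: eigenvalues of M^3 are λ^3
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.matrix_power(M, 3))),
                  np.sort(ev ** 3)))
```

The matrix here has real, distinct eigenvalues, so simple sorting suffices to pair them up between the two sides of each check.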

3.4 Cayley-Hamilton Theorem


Every $(n \times n)$ matrix $M$ satisfies its own characteristic equation, i.e. $M$ satisfies the matrix equation:

$$P_M(M) = M^n + c_1 M^{n-1} + c_2 M^{n-2} + \ldots + c_n I = \prod_{i=1}^{n} (M - \lambda_i I) = 0 \qquad (9)$$

where the coefficients $c_i$ are the same as in Equation 6.


This theorem guarantees that $M^n$ can be expressed as a linear combination of $I, M, M^2, \ldots, M^{n-1}$. Therefore all higher powers $M^{n+k}$ can be expressed in the same way. In principle, we can even express $e^M$ in terms of $I, M, M^2, \ldots, M^{n-1}$. In addition, this theorem provides an alternative method of finding the inverse of a matrix. For example, multiplying Equation 9 by $M^{-1}$:

$$M^{n-1} + c_1 M^{n-2} + c_2 M^{n-3} + \ldots + c_{n-1} I + c_n M^{-1} = 0$$

$$\text{or,} \quad M^{n-1} + c_1 M^{n-2} + c_2 M^{n-3} + \ldots + c_{n-1} I = -c_n M^{-1}$$

$$\text{or,} \quad M^{-1} = \frac{-1}{c_n}\left[M^{n-1} + c_1 M^{n-2} + c_2 M^{n-3} + \ldots + c_{n-1} I\right]$$

A natural question that arises is the following: suppose we find, quite independently of a knowledge of its eigenvalues, that an $(n \times n)$ matrix $M$ satisfies an $n$th-order polynomial equation of the form

$$M^n + b_1 M^{n-1} + b_2 M^{n-2} + \ldots + b_n I = 0. \qquad (10)$$

Can we then conclude that the polynomial $\lambda^n + b_1 \lambda^{n-1} + b_2 \lambda^{n-2} + \ldots + b_{n-1}\lambda + b_n$ must necessarily be the characteristic polynomial $P_M(\lambda)$ of the matrix $M$? The answer is NO! (For example, $M = I$ with $n = 2$ satisfies $M^2 - 3M + 2I = (M - I)(M - 2I) = 0$, yet its characteristic polynomial is $(\lambda - 1)^2$.)
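Both Equation 9 and the inverse formula derived from it can be verified numerically. A minimal sketch with a hypothetical $2 \times 2$ matrix, again using `numpy.poly` for the coefficients $c_i$:

```python
import numpy as np

# Hypothetical test matrix
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = M.shape[0]

c = np.poly(M)   # [1, c1, ..., cn] of det(λI - M)

# Cayley-Hamilton: P_M(M) = M^n + c1 M^{n-1} + ... + cn I = 0
P = sum(c[k] * np.linalg.matrix_power(M, n - k) for k in range(n + 1))
print(np.allclose(P, 0))

# Inverse from the theorem: M^{-1} = -(M^{n-1} + c1 M^{n-2} + ... + c_{n-1} I) / cn
Minv = -sum(c[k] * np.linalg.matrix_power(M, n - 1 - k) for k in range(n)) / c[n]
print(np.allclose(Minv, np.linalg.inv(M)))
```

For this matrix the characteristic polynomial is $\lambda^2 - 5\lambda - 2$, so the inverse formula reduces to $M^{-1} = (M - 5I)/2$.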

3.5 Degeneracy
If the characteristic equation has a multiple or repeated root, the eigensystem is said to be degenerate, or to exhibit degeneracy. On the other hand, a simple root of the characteristic equation corresponds to a nondegenerate eigenvalue.


3.6 Eigenvalues of a Hermitian Matrix


Let $H$ be a Hermitian matrix, i.e. $H = H^\dagger$, and let $c_i$ and $c_j$ be two of its eigenvectors corresponding, respectively, to the eigenvalues $\lambda_i$ and $\lambda_j$. Then,

$$H c_i = \lambda_i c_i$$
$$\text{or,} \quad c_j^\dagger H c_i = \lambda_i\, c_j^\dagger c_i \qquad \text{multiplying both sides by } c_j^\dagger$$
$$\text{or,} \quad (c_j^\dagger H c_i)^\dagger = (\lambda_i\, c_j^\dagger c_i)^\dagger$$
$$\text{or,} \quad c_i^\dagger H^\dagger c_j = \lambda_i^*\, c_i^\dagger c_j \qquad \text{using } (AB)^\dagger = B^\dagger A^\dagger$$
$$\text{or,} \quad c_i^\dagger H c_j = \lambda_i^*\, c_i^\dagger c_j \qquad \text{using } H^\dagger = H$$
$$\text{or,} \quad \lambda_j\, c_i^\dagger c_j = \lambda_i^*\, c_i^\dagger c_j \qquad \text{since } H c_j = \lambda_j c_j$$
$$\text{or,} \quad (\lambda_j - \lambda_i^*)\, c_i^\dagger c_j = 0 \qquad (11)$$

When $i = j$, Equation 11 becomes:

$$(\lambda_i - \lambda_i^*)(c_i^\dagger c_i) = 0$$

$(c_i^\dagger c_i)$ is a positive quantity, which implies that $\lambda_i = \lambda_i^*$, i.e. $\lambda_i$ is real.

Eigenvalues of Hermitian matrix are real.

Equation 11 has a further implication. Keeping in mind that the eigenvalues of a Hermitian matrix are real, when $i \neq j$,

$$(\lambda_j - \lambda_i^*)\, c_i^\dagger c_j = 0$$
$$\text{or,} \quad (\lambda_j - \lambda_i)\, c_i^\dagger c_j = 0$$

Now, if $\lambda_j \neq \lambda_i$,
$$c_i^\dagger c_j = 0$$

Eigenvectors of a Hermitian matrix corresponding to different eigenvalues are orthogonal.

Summary 1. Eigenvalues of a Hermitian matrix are real.

2. Eigenvectors of a Hermitian matrix corresponding to different eigenvalues are orthogonal.
3. "The eigenvectors of a Hermitian matrix form a complete set." This means that if the matrix is of order $n$, any vector of dimension $n$ can be written as a linear combination of the orthonormal eigenvectors, with coefficients determined by the rules for orthogonal expansions.
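All three points of the summary can be checked numerically with `numpy.linalg.eigh`, which is designed for Hermitian matrices. A minimal sketch with a hypothetical $3 \times 3$ Hermitian matrix:

```python
import numpy as np

# Hypothetical Hermitian matrix: real diagonal, conjugate-symmetric off-diagonal
H = np.array([[2.0, 1 - 1j, 0.0],
              [1 + 1j, 3.0, 1j],
              [0.0, -1j, 1.0]])
print(np.allclose(H, H.conj().T))          # confirm H = H†

w, V = np.linalg.eigh(H)   # eigenvalues (ascending) and eigenvectors as columns

print(np.allclose(w.imag, 0))              # 1. eigenvalues are real
print(np.allclose(V.conj().T @ V, np.eye(3)))  # 2./3. eigenvectors are orthonormal
print(np.allclose(H @ V, V * w))           # H c_i = λ_i c_i, column by column
```

Because the eigenvector matrix $V$ is unitary, any 3-component vector $v$ expands as $v = \sum_i (c_i^\dagger v)\, c_i$, which is the completeness statement of point 3.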


4 Similarity Transformation and Unitary Transformation

Suppose $M$ is an $n \times n$ matrix with eigenvalue $\lambda$ and eigenvector $X$, and $S$ is an invertible matrix of the same dimension. Let us consider the linear vector transformation $X' = SX$. Then

$$M X = \lambda X$$
$$\text{or,} \quad M S^{-1} S X = \lambda X$$
$$\text{or,} \quad S M S^{-1} (S X) = \lambda (S X) \qquad \text{multiplying from the left by } S$$
$$\text{or,} \quad M' X' = \lambda X' \qquad \text{where } M' = S M S^{-1}$$

Our original eigenvalue equation has been converted into one in which $M$ has been replaced by a transformed matrix $S M S^{-1}$ and the eigenvector $X$ has also been transformed by $S$, but the value of $\lambda$ remains unchanged. The transformation $M' = S M S^{-1}$, or equivalently $M = S^{-1} M' S$, is known as a similarity transformation. This gives us an important result: the eigenvalues of a matrix remain unchanged when the matrix is subjected to a similarity transformation.
In case $U$ is unitary, we can write:

$$M X = \lambda X \quad \rightarrow \quad U M U^\dagger (U X) = \lambda (U X)$$

We call $M' = U M U^\dagger$, or equivalently $M = U^\dagger M' U$, a unitary transformation.

Determinants of similar matrices are equal:

$$\det(M') = \det(S M S^{-1}) = \det(S)\det(M)\det(S^{-1}) = \det(M)$$

Traces of similar matrices are equal:

$$\mathrm{Tr}(M') = \mathrm{Tr}(S M S^{-1}) = \mathrm{Tr}(M S^{-1} S) \qquad \text{using the property } \mathrm{Tr}(AB) = \mathrm{Tr}(BA)$$
$$= \mathrm{Tr}(M I) = \mathrm{Tr}(M)$$

Since, as we have seen, similar matrices have the same eigenvalues, these results are to be expected: the determinant is the product of the eigenvalues, and the trace is their sum.
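The three invariants — eigenvalues, determinant, and trace — can be checked together. A minimal sketch using hypothetical random matrices (a generic random $S$ is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))      # generic, hence invertible
Mp = S @ M @ np.linalg.inv(S)        # M' = S M S^{-1}

# Eigenvalues (possibly complex) agree after sorting
print(np.allclose(np.sort(np.linalg.eigvals(Mp)),
                  np.sort(np.linalg.eigvals(M))))
print(np.isclose(np.linalg.det(Mp), np.linalg.det(M)))   # same determinant
print(np.isclose(np.trace(Mp), np.trace(M)))             # same trace
```

Note `np.sort` on a complex array orders lexicographically by real part, then imaginary part, which pairs up the two spectra for the comparison.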

5 Diagonalization of a Matrix
A matrix can be brought to diagonal form by a change of basis. The similarity matrix that effects the diagonalization can be constructed by using the normalized eigenvectors as the columns of the matrix $S^{-1}$.


5.1 Example
 
Let the matrix

$$M = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix}$$

have eigenvalues $\lambda_1, \lambda_2, \lambda_3$ and eigenvectors

$$X_1 = \begin{pmatrix} x_{11} \\ x_{21} \\ x_{31} \end{pmatrix},\quad X_2 = \begin{pmatrix} x_{12} \\ x_{22} \\ x_{32} \end{pmatrix},\quad X_3 = \begin{pmatrix} x_{13} \\ x_{23} \\ x_{33} \end{pmatrix}.$$

Next, we construct a matrix $S^{-1}$ using $X_1$, $X_2$, $X_3$ as its columns, so that

$$S^{-1} = \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix}$$

Now

$$M S^{-1} = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{pmatrix} \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix}$$

Each column of this product is $M X_i = \lambda_i X_i$, so

$$M S^{-1} = \begin{pmatrix} \lambda_1 x_{11} & \lambda_2 x_{12} & \lambda_3 x_{13} \\ \lambda_1 x_{21} & \lambda_2 x_{22} & \lambda_3 x_{23} \\ \lambda_1 x_{31} & \lambda_2 x_{32} & \lambda_3 x_{33} \end{pmatrix} = \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}$$

$$\text{or,} \quad M S^{-1} = S^{-1} D$$
$$\text{or,} \quad S M S^{-1} = D$$

where

$$D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}$$

If the eigenvectors $X_1$, $X_2$, $X_3$ are orthonormal, as they can be chosen in the case of Hermitian matrices (Section 3.6), then $S^{-1}$ satisfies $(S^{-1})^\dagger S^{-1} = I$, i.e. $S^{-1}$ is unitary, and the matrix $M$ is diagonalized by the unitary transformation $U M U^\dagger = D$ with $U = S$.
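The construction above translates directly into code. A minimal sketch with a hypothetical symmetric $2 \times 2$ matrix (symmetric so that numpy's eigenvectors come out orthonormal):

```python
import numpy as np

# Hypothetical symmetric matrix, eigenvalues 3 and 1
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, X = np.linalg.eig(M)   # columns of X are the eigenvectors X1, X2
S_inv = X                  # S^{-1} has the eigenvectors as its columns
S = np.linalg.inv(S_inv)

D = S @ M @ S_inv
print(np.allclose(D, np.diag(w)))   # S M S^{-1} is diagonal with the eigenvalues
```

Because $M$ here is symmetric, `S_inv` is orthogonal and one could equally take `S = S_inv.T`, which is the unitary-transformation case.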

5.2 Normal Matrix


A matrix $A$ with the property $[A, A^\dagger] = 0$ is termed normal; normal matrices are exactly those that can be diagonalized by a unitary transformation.

• Clearly Hermitian matrices are normal, as $H^\dagger = H$.

• Unitary matrices are also normal, as $U$ commutes with its inverse $U^\dagger$.

• Anti-Hermitian matrices (with $A^\dagger = -A$) are also normal.

• And there exist normal matrices that are not in any of these categories.


6 Commutator
It turns out that two matrices can have a common set of eigenvectors if and only if they commute. The proof is simple if the eigenvectors of either $A$ or $B$ are nondegenerate.
Assume that the $X_i$ are a set of eigenvectors of both $A$ and $B$, with respective eigenvalues $a_i$ and $b_i$. Then for any $i$,

$$BA\, X_i = B\, a_i X_i = b_i a_i X_i$$
$$AB\, X_i = A\, b_i X_i = a_i b_i X_i$$

From the above two equations, we can write:

$$BA\, X_i = AB\, X_i \quad \text{for all } X_i$$

Since the $X_i$ span the whole space, an operator that annihilates every $X_i$ must vanish:

$$\text{or,} \quad AB - BA = 0$$
$$\text{or,} \quad [A, B] = 0$$

For the converse, we assume that $A$ and $B$ commute, that $X_i$ is an eigenvector of $A$ with eigenvalue $a_i$, and that this eigenvector of $A$ is nondegenerate. Then we form

$$AB\, X_i = BA\, X_i = B\, a_i X_i$$
$$\text{or} \quad A(B X_i) = B(A X_i)$$
$$\text{or} \quad A(B X_i) = B(a_i X_i)$$
$$\text{or} \quad A(B X_i) = a_i (B X_i)$$

This equation shows that $B X_i$ is also an eigenvector of $A$ with eigenvalue $a_i$. Since the eigenvector of $A$ was assumed nondegenerate, $B X_i$ must be proportional to $X_i$, meaning that $X_i$ is also an eigenvector of $B$.
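A concrete pair of commuting matrices makes the statement tangible. A minimal sketch using a hypothetical matrix $A$ and $B = A^2 + 3A$, which commutes with $A$ because it is a polynomial in $A$:

```python
import numpy as np

# Hypothetical symmetric matrix with nondegenerate eigenvalues 3 and 1
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = A @ A + 3 * A   # B = A^2 + 3A, so [A, B] = 0

print(np.allclose(A @ B, B @ A))   # the matrices commute

# Each eigenvector of A is also an eigenvector of B,
# with eigenvalue a_i^2 + 3 a_i.
wA, X = np.linalg.eigh(A)
for i in range(2):
    x = X[:, i]
    print(np.allclose(B @ x, (wA[i] ** 2 + 3 * wA[i]) * x))
```

Conversely, two matrices that fail to commute (e.g. the Pauli matrices $\sigma_x$ and $\sigma_z$) cannot share a full set of common eigenvectors.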
