Lec 3 Introduction To Spectral Theory

The document provides an introduction to spectral theory, focusing on eigenvalues, eigenvectors, and diagonalization. It defines key concepts, illustrates examples, and discusses the conditions under which a matrix can be diagonalized. Additionally, it highlights the significance of diagonalization in various applications, including differential equations and econometrics.


Introductory Math Econ
Lecture 4 Introduction to Spectral Theory

A. Banerji

September 2, 2024
Outline

1 Introduction
2 Eigenvalues-Eigenvectors
3 Diagonalization
Definition
Let T : V → V be a linear transformation (LT). A scalar λ is an eigenvalue of T if there exists a non-zero vector v ∈ V s.t. T(v) = λv. v is then an eigenvector of T, corresponding to the eigenvalue λ.
Note that (1) if v is an eigenvector corresponding to λ, then so is cv for every non-zero scalar c, because T(cv) = cT(v) = cλv = λ(cv). Moreover, if v, v′ are eigenvectors corresponding to the same eigenvalue λ, then for scalars c₁, c₂ we have T(c₁v + c₂v′) = c₁T(v) + c₂T(v′) = c₁λv + c₂λv′ = λ(c₁v + c₂v′). So c₁v + c₂v′ is also an eigenvector corresponding to λ (provided it is non-zero). That is, the set of all eigenvectors corresponding to a particular eigenvalue, along with the zero vector, is a subspace. It is called the eigenspace corresponding to λ.
Notions
(2) For a typical (n × n) matrix A and a typical (n × 1) vector v, the vector Av is some point in ℝⁿ (or ℂⁿ, if A is a complex matrix). In the eigenvalue problem, we are looking for vectors v such that Av lies on the same line through the origin as v itself.
Example
(1) A matrix for which all non-zero vectors are eigenvectors: the identity I (n × n). For all non-zero x ∈ ℝⁿ, Ix = 1x; all such x are eigenvectors corresponding to the eigenvalue 1.
(2) Using the appropriate rotation matrix, rotate the earth through, say, 90°. Typically, vectors x ∈ ℝ³ will change direction on rotation. But vectors lying on the axis of rotation will not change direction. For such vectors v, Av = 1v.
The set of all eigenvalues of a LT T is called the spectrum of T, and denoted σ(T).
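The rotation example can be checked numerically. A minimal NumPy sketch (NumPy assumed available; rotation by 90° about the z-axis is one concrete choice of axis):

```python
import numpy as np

# Rotation by 90 degrees about the z-axis in R^3.
A = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

# A vector on the axis of rotation is unchanged: Av = 1v.
v_axis = np.array([0.0, 0.0, 1.0])
print(A @ v_axis)   # same as v_axis

# A vector off the axis changes direction under the rotation.
x = np.array([1.0, 0.0, 0.0])
print(A @ x)        # rotated to (0, 1, 0)
```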
Characteristic Equation
T(v) = λv ⇒ (T − λI)(v) = 0. That is, eigenvectors v corresponding to the eigenvalue λ comprise the nullspace of the LT T − λI (along with the zero vector). In fact, the nullspace N(T − λI) is called the eigenspace w.r.t. λ.
For there to be a non-zero solution v to (T − λI)(v) = 0, we need Nullity(T − λI) > 0, or Rank(T − λI) < dim(V). Let the matrix A (n × n) represent T. So we have Rank(A − λI) < n, which implies the determinant det(A − λI) = 0. det(A − λI) is just

$$\det\begin{pmatrix}
a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda
\end{pmatrix}$$
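The characteristic polynomial can be computed and solved numerically; a NumPy sketch (the specific 2 × 2 matrix is the one used in Example 1 below):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [8.0, 1.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A):
# here lambda^2 - 2*lambda - 15.
coeffs = np.poly(A)
lam_from_poly = np.roots(coeffs)       # roots of the polynomial
lam_direct = np.linalg.eigvals(A)      # eigenvalues computed directly

print(np.sort(lam_from_poly))
print(np.sort(lam_direct))             # both give -3 and 5
```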
So det(A − λI) = 0 is a polynomial equation in λ, and has n solutions (roots), possibly repeated, and possibly complex. If A is real and λ is a complex root of the equation, its conjugate λ̄ is also a complex root.
We replaced a linear transformation T with a matrix A representing it. Can we do that? Indeed, all matrices representing T in different bases are similar to each other: if A and B are two such matrices, then A = SBS⁻¹ for some invertible (change of basis) matrix S.

Lemma
Similar matrices have an identical spectrum.

Proof.
Let Av = λv with v ≠ 0, so λ is an eigenvalue of A. If A = SBS⁻¹, then this implies SBS⁻¹v = λv. Premultiplying both sides by S⁻¹, B(S⁻¹v) = λ(S⁻¹v). Since S is invertible and v ≠ 0, S⁻¹v ≠ 0, so λ is an eigenvalue of B (and S⁻¹v a corresponding eigenvector). The reverse inclusion follows by the symmetric argument.
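The lemma is easy to illustrate numerically; a NumPy sketch with an arbitrary (randomly generated) B and change-of-basis matrix S:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))      # generically invertible
A = S @ B @ np.linalg.inv(S)         # A = S B S^{-1}, so A and B are similar

# Similar matrices have the same eigenvalues (up to ordering).
lam_A = np.sort_complex(np.linalg.eigvals(A))
lam_B = np.sort_complex(np.linalg.eigvals(B))
print(np.allclose(lam_A, lam_B))
```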
Examples

Note a general result: the sum of the diagonal elements of a square matrix A (called trace(A)) equals the sum of the eigenvalues, λ₁ + ... + λₙ. And det(A) = λ₁λ₂···λₙ, the product of the eigenvalues.
Recall that we can write a quadratic equation in a variable λ as λ² − (sum of roots)λ + (product of roots) = 0. So for a 2 × 2 matrix A, det(A − λI) = 0 is the equation λ² − trace(A)λ + det(A) = 0.
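A quick numerical check of the trace and determinant identities, sketched with NumPy on an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
lam = np.linalg.eigvals(A)

# trace(A) = sum of eigenvalues; det(A) = product of eigenvalues.
# Complex eigenvalues come in conjugate pairs, so the imaginary
# parts cancel and we can compare real parts.
print(np.trace(A), lam.sum().real)
print(np.linalg.det(A), lam.prod().real)
```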
Example 1. Let the 2 × 2 matrix A have rows (1, 2) and (8, 1). det(A − λI) = λ² − 2λ − 15 = 0 has solutions λ₁ = 5, λ₂ = −3. Eigenvectors for λ₁:

$$\begin{pmatrix} 1-5 & 2 \\ 8 & 1-5 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

The eigenspace is 1-dimensional, since Rank(A − 5I) = 1. It is {(x₁, x₂) | −4x₁ + 2x₂ = 0}. If we choose x₁ = 1, we solve and get x₂ = 2, so x = (1, 2)ᵀ is an eigenvector.
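The computations in Example 1 can be verified with NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [8.0, 1.0]])

# x = (1, 2)^T is an eigenvector for lambda_1 = 5: A x = 5 x.
x = np.array([1.0, 2.0])
print(A @ x)   # (5, 10), i.e. 5 * x

# Rank(A - 5I) = 1, so the eigenspace for lambda_1 is 1-dimensional.
print(np.linalg.matrix_rank(A - 5 * np.eye(2)))
```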
Example
Let the 2 × 2 matrix A have rows (1, 2) and (−2, 1). det(A − λI) = λ² − 2λ + 5 = 0 implies the roots equal (2 ± √(4 − 20))/2. So λ₁ = 1 + 2i and λ₂ = λ̄₁ = 1 − 2i. Eigenvector corresponding to λ₁:

$$(A - (1+2i)I)x = \begin{pmatrix} -2i & 2 \\ -2 & -2i \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

Rank(A − λ₁I) = 1 (the 2nd row is −i times the first). Solving −2ix₁ + 2x₂ = 0 by setting x₁ = 1, we get x₂ = i. So (1, i)ᵀ is an eigenvector corresponding to λ₁. The conjugate of this vector, (1, −i)ᵀ, is therefore an eigenvector corresponding to the conjugate root λ₂ (this is a general result for real matrices).
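NumPy handles the complex case transparently; a sketch verifying this example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-2.0, 1.0]])
lam, V = np.linalg.eig(A)
print(lam)                  # 1+2j and 1-2j (conjugate pair)

# Check that (1, i)^T is an eigenvector for lambda_1 = 1 + 2i.
v = np.array([1.0, 1j])
print(np.allclose(A @ v, (1 + 2j) * v))
```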
Diagonal Form of a Matrix

Theorem
Suppose the n × n matrix A has n LI eigenvectors. Then S⁻¹AS = Λ, where the columns of S are the n eigenvectors and Λ is the diagonal matrix whose diagonal entries are the corresponding eigenvalues.

Proof.
Note S = (v₁, ..., vₙ) is invertible (LI eigenvectors written as columns). Then AS = A(v₁ v₂ ... vₙ) = (Av₁ Av₂ ... Avₙ) = (λ₁v₁ λ₂v₂ ... λₙvₙ)

$$= \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix}\begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{pmatrix} = S\Lambda$$

Premultiplying both sides of AS = SΛ by S⁻¹, S⁻¹AS = Λ.
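The theorem is exactly what `np.linalg.eig` delivers; a sketch using the matrix from Example 1:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [8.0, 1.0]])
lam, S = np.linalg.eig(A)        # columns of S are eigenvectors

# S^{-1} A S is the diagonal matrix of eigenvalues.
Lam = np.linalg.inv(S) @ A @ S
print(np.round(Lam, 10))
print(lam)
```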
When can we Diagonalize?

Lemma
Suppose the n × n matrix A has n distinct eigenvalues λ₁, ..., λₙ. Then a set of eigenvectors v₁, ..., vₙ, with vᵢ corresponding to λᵢ, is LI.
PROOF (by induction). The statement is trivially true if n = 1. Suppose it is true for n = 1, 2, ..., k − 1; i.e. if λ₁, ..., λₖ₋₁ are distinct and v₁, ..., vₖ₋₁ are corresponding eigenvectors, then they are LI. Suppose λₖ is an eigenvalue distinct from λ₁, ..., λₖ₋₁ and vₖ is a corresponding eigenvector. Suppose

c₁v₁ + ... + cₖvₖ = 0    (1)

for some scalars c₁, ..., cₖ. So, A(c₁v₁ + ... + cₖvₖ) = A0 = 0. By linearity of A, this implies c₁Av₁ + ... + cₖAvₖ = 0. So

c₁λ₁v₁ + ... + cₖλₖvₖ = 0    (2)

Eq. (2) minus λₖ times Eq. (1) eliminates the vₖ term, giving c₁(λ₁ − λₖ)v₁ + ... + cₖ₋₁(λₖ₋₁ − λₖ)vₖ₋₁ = 0. By LI of {v₁, ..., vₖ₋₁}, cᵢ(λᵢ − λₖ) = 0 for all i = 1, ..., k − 1. Since λᵢ ≠ λₖ, cᵢ = 0 for all i = 1, ..., k − 1. Plugging these cᵢ's into Eq. (1), we get cₖ = 0 as well, so that {v₁, ..., vₖ} is LI. □
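An illustrative check of the lemma: for a matrix with distinct eigenvalues, the matrix of eigenvectors has full rank, i.e. its columns are LI (the triangular matrix below is just one convenient example with visibly distinct eigenvalues):

```python
import numpy as np

# Upper triangular, so the eigenvalues 1, 2, 3 sit on the diagonal
# and are distinct.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
lam, S = np.linalg.eig(A)

print(np.sort(lam))
# Distinct eigenvalues => eigenvectors are LI => S has full rank.
print(np.linalg.matrix_rank(S))
```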
Why Diagonalize?

(1) Interpretation. If A is diagonalizable, then A = SΛS⁻¹; that is, A and Λ are similar. Specifically, suppose x = (x₁, ..., xₙ) is a vector written in terms of the standard basis, and suppose the LI eigenvectors {v₁, ..., vₙ} are the new basis. If S (n × n) = (v₁ v₂ ... vₙ), then w.r.t. the new basis, we know that x is represented as y = S⁻¹x. Now, the matrix A carries x to Ax. In terms of the new basis, Ax is represented as S⁻¹Ax. Since x = Sy, S⁻¹Ax = S⁻¹ASy = Λy. So w.r.t. the new basis, the action of A on x becomes the easier-to-visualize action of Λ on y: Λ just stretches the coordinates of y by possibly different amounts.
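The change-of-basis identity S⁻¹Ax = Λy can be verified directly; a NumPy sketch (the vector x is an arbitrary choice for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [8.0, 1.0]])
lam, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)

x = np.array([3.0, -1.0])   # a vector in the standard basis
y = S_inv @ x               # the same vector in the eigenvector basis

# The action of A on x, re-expressed in the new basis, is just
# a coordinate-wise stretch: Lambda y = (lam_1 y_1, lam_2 y_2).
print(np.allclose(S_inv @ (A @ x), lam * y))
```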
(2) Uses, including more general uses of finding the spectrum σ(A): solving and interpreting solutions to differential equations; some specification testing in econometrics; interpreting quadratic forms; frequency-domain time series analysis; and so on.
(3) If A is real and symmetric, then we can find a set of eigenvectors that are LI as well as orthogonal. Then the matrix S of eigenvectors (normalized to unit length) is called an orthogonal matrix, with S⁻¹ = Sᵀ. These are generally rotation matrices; so A = SΛS⁻¹ means the action of A is a composition of a rotation, followed by a stretch, followed by another rotation. Examples of such matrices: (i) the (XᵀX) or (X*X) matrix discussed in the least squares application; (ii) the Hessian D²f of a twice continuously differentiable function f : ℝⁿ → ℝ.
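A sketch of the symmetric case with NumPy, using an XᵀX matrix as in the least squares example (`eigh` is NumPy's routine for symmetric/Hermitian matrices):

```python
import numpy as np

# X^T X is real and symmetric, as in the least squares application.
rng = np.random.default_rng(2)
X = rng.standard_normal((10, 3))
A = X.T @ X

lam, S = np.linalg.eigh(A)    # eigh: eigendecomposition for symmetric A

# S is orthogonal (S^{-1} = S^T), and A = S Lambda S^T.
print(np.allclose(S.T @ S, np.eye(3)))
print(np.allclose(A, S @ np.diag(lam) @ S.T))
print(lam)                    # real, and nonnegative for X^T X
```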
