
Eigenvalues and Dynamical Systems

 
Warm-up: P = (1/10) [1 3; 3 9] is the matrix of projection onto the line y = 3x in R².

(a) What do you get when you project [−1; 7] onto the line y = 3x, i.e. what is P [−1; 7]?

(b) What is P [1; 3]? What about P¹⁰⁰ [1; 3]?

(c) What is P [−3; 1]? What about P¹⁰⁰ [−3; 1]?

(d) Find P¹⁰⁰ [−1; 7]. (Hint: use your results in (b) and (c).)
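As a quick sanity check (separate from the warm-up questions themselves), one can verify numerically that P really behaves like a projection: every projection matrix is idempotent, i.e. P² = P. A minimal Python sketch:

```python
# Warm-up matrix: P = (1/10) * [[1, 3], [3, 9]], projection onto y = 3x.
P = [[0.1, 0.3], [0.3, 0.9]]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P2 = matmul(P, P)
# A projection is idempotent: P^2 = P (up to floating-point rounding).
print(P2)
```

This also explains the hint in (b): once P² = P, every higher power P¹⁰⁰ collapses back to P.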

Definition: A nonzero vector ~v is called an eigenvector of an n × n matrix A if A~v = λ~v for some scalar λ; λ is then called an eigenvalue of A.

1. Use geometric insight to find the eigenvalues and eigenvectors, if they exist, of the following linear transformations.

(a) Orthogonal projection onto a line L in R².


Solution. We are looking for vectors ~v and scalars λ so that A~v = λ~v . Since A is the matrix of
projL , A~v simply means projL ~v , the projection of ~v onto the line L. So, we’re looking for vectors
that get projected onto a scalar multiple of themselves.

Vectors in the line L get projected onto themselves, so they are eigenvectors with eigenvalue 1.
Vectors orthogonal to L (in other words, vectors in L⊥ ) get projected to ~0, so they are eigenvectors
with eigenvalue 0.
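This can be checked numerically with the warm-up's projection matrix P = (1/10) [1 3; 3 9] (projection onto the line y = 3x): a vector on the line should be fixed, and a vector orthogonal to it should be sent to ~0. A small Python sketch:

```python
# Projection onto the line y = 3x (the warm-up matrix).
P = [[0.1, 0.3], [0.3, 0.9]]

def matvec(A, v):
    """Product of a 2x2 matrix and a vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

on_line = [1.0, 3.0]   # lies on y = 3x: eigenvector with eigenvalue 1
perp = [-3.0, 1.0]     # orthogonal to the line: eigenvector with eigenvalue 0

print(matvec(P, on_line))  # approximately [1.0, 3.0]
print(matvec(P, perp))     # approximately [0.0, 0.0]
```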

(b) Reflection about a line in R2 .

(c) Rotation counter-clockwise by 90◦ in R2 .


Solution. Visually, R~x always points in a different direction from ~x, so R~x can never be a scalar multiple of ~x (assuming ~x is nonzero). So, R does not seem to have any eigenvalues or eigenvectors.

To confirm, we'll calculate the characteristic polynomial. The matrix is R = [0 −1; 1 0], and its characteristic polynomial is λ² + 1, which has no real roots. So, indeed, R has no real eigenvalues.

(d) Rotation about a line in space.

(e) Shear in the direction of ~v .

(f ) Dilation A = 5I2

2. In a California redwood forest, the spotted owl is the primary predator of the wood rat. Suppose
that the following system models the interaction between owls and rats. Here, o(t) represents the owl
population at time t and r(t) represents the rat population at time t.
o(t + 1) = 0.5 o(t) + 0.3 r(t)
r(t + 1) = −0.2 o(t) + 1.2 r(t)

(a) Let ~x(t) = [o(t); r(t)]. Find a matrix A so that ~x(t + 1) = A~x(t).
Solution. We can rewrite the given system as

[o(t + 1); r(t + 1)] = [0.5 0.3; −0.2 1.2] [o(t); r(t)],

so the matrix we are looking for is A = [0.5 0.3; −0.2 1.2].

(b) Suppose there are currently 2000 owls and 1000 rats. How many owls and rats are there next
year? The year after? In t years?
   
You are given that ~v1 = [100; 200] and ~v2 = [300; 100] are eigenvectors of A with eigenvalues 1.1 and 0.6, respectively.
Solution. It’s easy to tell how many there are next year: if we call the current time t = 0, then we are told that ~x(0) = [2000; 1000]. We’d like to know ~x(1), and (a) says that ~x(1) = A~x(0) = [0.5 0.3; −0.2 1.2] [2000; 1000] = [1300; 800]. That is, there will be 1300 owls and 800 rats next year.
The year after, there will be ~x(2) = A~x(1) = [0.5 0.3; −0.2 1.2] [1300; 800] = [890; 700].

We can continue this process to get ~x(t) = At ~x(0), but this is actually a pretty useless formula.
We can’t use it easily to find out, for instance, what the populations will be in 100 years, because
we don’t have an easy way to evaluate A100 .
   
We know that ~v1 = [100; 200] and ~v2 = [300; 100] are eigenvectors of A with eigenvalues 1.1 and 0.6, respectively. That is, A~v1 = 1.1~v1 and A~v2 = 0.6~v2. From this, it follows that A^t ~v1 = 1.1^t ~v1 and A^t ~v2 = 0.6^t ~v2.
Since ~v1 and ~v2 are not scalar multiples of each other, they form a basis of R². Thus, we can write our initial condition, ~x(0) = [2000; 1000], as a linear combination of ~v1 and ~v2. In this case, it turns out that ~x(0) = 2~v1 + 6~v2.(1) So,

~x(t) = A^t ~x(0)
      = A^t (2~v1 + 6~v2)
      = 2(A^t ~v1) + 6(A^t ~v2)
      = 2(1.1^t) ~v1 + 6(0.6^t) ~v2

If we then use the fact that ~v1 = [100; 200] and ~v2 = [300; 100], we can rewrite this as

~x(t) = 2(1.1^t) [100; 200] + 6(0.6^t) [300; 100]
      = [200(1.1^t) + 1800(0.6^t); 400(1.1^t) + 600(0.6^t)]

So, the owl population in t years is o(t) = 200(1.1t ) + 1800(0.6t ), and the rat population in t years
is r(t) = 400(1.1t ) + 600(0.6t ).
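The closed form can be checked against direct iteration of ~x(t + 1) = A~x(t); a short Python sketch of both computations:

```python
# Owl/rat system from Problem 2, iterated directly and via the closed form
# o(t) = 200(1.1^t) + 1800(0.6^t), r(t) = 400(1.1^t) + 600(0.6^t).
A = [[0.5, 0.3], [-0.2, 1.2]]
x = [2000.0, 1000.0]  # x(0): 2000 owls and 1000 rats

T = 10
for _ in range(T):
    x = [A[0][0]*x[0] + A[0][1]*x[1],
         A[1][0]*x[0] + A[1][1]*x[1]]

closed = [200 * 1.1**T + 1800 * 0.6**T,
          400 * 1.1**T + 600 * 0.6**T]
print(x)       # populations after T years, by iteration
print(closed)  # closed-form values; these agree up to rounding
```

The payoff of the eigenvector approach is that the closed form evaluates just as cheaply at T = 100 as at T = 10, whereas the iteration needs T matrix-vector products.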
Here is the corresponding trajectory, together with the two trajectories defined by ~v1 and ~v2:

[Figure: trajectories in the (owl, rat)-plane, with both axes running from 0 to 2000, showing the points ~x(0), ~x(1), ~x(2), ~x(3).]

In general, how do we find eigenvalues?

• λ is an eigenvalue of A, i.e. A~v = λ~v for some ~v ≠ ~0, if and only if det(A − λIn) = 0.


     
(1) How did we get this? Well, we wanted to write ~x(0) = c1~v1 + c2~v2. This is the same as saying [2000; 1000] = c1 [100; 200] + c2 [300; 100], which is the linear system

100c1 + 300c2 = 2000
200c1 + 100c2 = 1000

We can solve this using Gauss-Jordan elimination. Alternatively, expressing ~x(0) as a linear combination of ~v1, ~v2 is equivalent to expressing ~x(0) in the basis B = (~v1, ~v2). That is, we want to find [~x(0)]_B, and we know that we can use the change of basis matrix S = [~v1 ~v2] = [100 300; 200 100] to do this; it is simply [~x(0)]_B = S⁻¹ ~x(0).

We call fA(λ) = det(A − λIn) the characteristic polynomial. One can find the eigenvalues of a matrix A by finding the roots of the characteristic polynomial fA(λ).

How do we find eigenvectors?

• The eigenvectors of A with eigenvalue λ satisfy (A − λIn)~v = ~0. In other words, a nonzero vector ~v is an eigenvector of A with eigenvalue λ if and only if ~v ∈ ker(A − λIn).

Note: let P(x) be a polynomial of degree 2n + 1 (odd) with real coefficients; then P(x) must have a real root. Why?
The leading term in P(x) is a multiple of x^(2n+1). Assuming the leading coefficient is positive (the other case is similar), P(x) → +∞ as x → +∞ and P(x) → −∞ as x → −∞. Since P(x) is continuous, by the Intermediate Value Theorem there must be at least one x-intercept, which is a real root of P(x) = 0.

 
3. Find the characteristic polynomial and the eigenvalues of A = [1 0 0; 2 3 0; 4 5 6].

Solution. We have A − λI3 = [1−λ 0 0; 2 3−λ 0; 4 5 6−λ]. Since this matrix is lower triangular, its determinant is the product of its diagonal entries, so det(A − λI3) is the polynomial (1 − λ)(3 − λ)(6 − λ), which has roots 1, 3, and 6. Thus, the eigenvalues of A are 1, 3, and 6.
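One can confirm numerically that the diagonal entries, and nothing else, are roots of det(A − λI3); a small Python sketch that evaluates the characteristic polynomial directly:

```python
# Problem 3's matrix: lower triangular, so its eigenvalues sit on the diagonal.
A = [[1, 0, 0], [2, 3, 0], [4, 5, 6]]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def char_poly_at(lam):
    """Evaluate det(A - lambda * I)."""
    M = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(M)

print([char_poly_at(lam) for lam in (1, 3, 6)])  # -> [0, 0, 0]: all eigenvalues
print(char_poly_at(2))  # -> -4: nonzero, so 2 is not an eigenvalue
```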

The algebraic multiplicity of an eigenvalue is the multiplicity of the corresponding root of the characteristic polynomial. For example, the characteristic polynomial of [1 2 3; 0 1 1; 0 0 2] is (1 − λ)²(2 − λ). The algebraic multiplicity of the eigenvalue λ = 1 is 2 and the algebraic multiplicity of the eigenvalue λ = 2 is 1.

4. Find all eigenvalues and eigenvectors of the following matrices.

(a) A = [1 2; 2 1]

(b) B = [1 2; 0 1]

Solution. The characteristic polynomial is (1 − λ)(1 − λ) = 0, so there is only one eigenvalue, λ = 1. The corresponding eigenvectors lie in ker(B − I) = ker [0 2; 0 0]. So [1; 0] is an eigenvector.
 
(c) C = [1 0; 0 1]

Solution. The characteristic polynomial is (1 − λ)(1 − λ) = 0, so there is only one eigenvalue, λ = 1. The corresponding eigenvectors lie in ker(C − I) = ker [0 0; 0 0] = R². So there are two linearly independent eigenvectors associated with λ = 1: [1; 0] and [0; 1]. (In fact, every nonzero vector is an eigenvector of C.)
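Parts (b) and (c) illustrate that two matrices with the same characteristic polynomial, (1 − λ)², can have different numbers of independent eigenvectors. A quick Python check of the claims above:

```python
# B and C from Problem 4 share the characteristic polynomial (1 - lambda)^2,
# but have different numbers of independent eigenvectors for lambda = 1.
B = [[1, 2], [0, 1]]
C = [[1, 0], [0, 1]]

def matvec(A, v):
    """Product of a 2x2 matrix and a vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

print(matvec(B, [1, 0]))  # -> [1, 0]: eigenvector of B with eigenvalue 1
print(matvec(B, [0, 1]))  # -> [2, 1]: not a multiple of [0, 1], so not an eigenvector
print(matvec(C, [1, 0]), matvec(C, [0, 1]))  # -> [1, 0] [0, 1]: both are eigenvectors of C
```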

 
(d) D = [1 2 3; 0 1 1; 0 0 2]

Solution. The characteristic polynomial is (1 − λ)²(2 − λ), so there are two eigenvalues, λ = 1 and λ = 2. To find the corresponding eigenvectors, we just need to find the kernels of D − I and D − 2I. I will leave this to you.
 
5. Find the characteristic polynomial of A = [a b; c d].
Solution. By definition, the characteristic polynomial is

fA(λ) = det(A − λI)
      = det [a−λ b; c d−λ]
      = (a − λ)(d − λ) − bc
      = λ² − (a + d)λ + (ad − bc)
      = λ² − (tr A)λ + det A

Note: The sum of the eigenvalues equals the trace, and the product of the eigenvalues equals the determinant. Suppose λ1 and λ2 are the eigenvalues of A. Since they are the roots of the characteristic polynomial, fA(λ) = (λ − λ1)(λ − λ2).

We know (λ − λ1)(λ − λ2) = λ² − (λ1 + λ2)λ + λ1λ2. Comparing this with λ² − (tr A)λ + det A, we get λ1 + λ2 = tr(A) and λ1λ2 = det(A).
Question: If A is a 2 × 2 matrix with tr A = 5 and det A = −14, what are the eigenvalues of A?
Solution. Solve the system of equations

λ1 + λ2 = 5 and λ1λ2 = −14.

We get λ1 = 7, λ2 = −2. When you are trying to find the eigenvalues of a 2 × 2 matrix, it's often easiest to first find the trace and determinant of the matrix and then use the idea of this problem.
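The recipe above (find the trace and determinant, then solve the quadratic λ² − (tr A)λ + det A = 0) is easy to code; a minimal Python sketch, assuming the eigenvalues are real:

```python
import math

# Eigenvalues of a 2x2 matrix from its trace and determinant:
# they are the roots of lambda^2 - (tr A) lambda + det A = 0.
def eigenvalues_2x2(trace, det):
    disc = trace**2 - 4*det  # discriminant; assumed nonnegative (real eigenvalues)
    return ((trace + math.sqrt(disc)) / 2, (trace - math.sqrt(disc)) / 2)

# The matrix in the Question above: tr A = 5, det A = -14.
print(eigenvalues_2x2(5, -14))        # -> (7.0, -2.0)
# Problem 2's matrix: tr A = 1.7, det A = 0.66.
print(eigenvalues_2x2(1.7, 0.66))     # approximately (1.1, 0.6)
```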

6. The Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, ..., Fn, ... is defined recursively by letting F0 = 0, F1 = 1 and, for n ≥ 2,

Fn = Fn−2 + Fn−1.

Rewrite this in the matrix form below. What is A, and what are its eigenvalues?

[Fn; Fn−1] = A [Fn−1; Fn−2]

What is the ratio Fn/Fn−1 as n → ∞?

Solution. Here A = [1 1; 1 0], since Fn = Fn−1 + Fn−2 and Fn−1 = Fn−1; its characteristic polynomial is λ² − λ − 1, with roots (1 ± √5)/2. For the ratio, divide both sides of Fn − Fn−1 − Fn−2 = 0 by Fn−2: since Fn/Fn−2 = (Fn/Fn−1)(Fn−1/Fn−2), if the ratio Fn/Fn−1 tends to a limit λ, then that limit satisfies λ² − λ − 1 = 0, which is exactly the characteristic polynomial. Taking the positive root, Fn/Fn−1 → (1 + √5)/2 (the golden ratio!).
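The convergence of consecutive Fibonacci ratios to the golden ratio is easy to see numerically; a short Python sketch:

```python
import math

# Consecutive Fibonacci ratios F_n / F_{n-1} approach the golden ratio (1 + sqrt(5))/2.
a, b = 0, 1  # F_0, F_1
for _ in range(30):
    a, b = b, a + b  # step the recurrence F_n = F_{n-1} + F_{n-2}

ratio = b / a  # F_31 / F_30
golden = (1 + math.sqrt(5)) / 2
print(ratio, golden)  # the two agree to many decimal places
```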

7. Markov Chain: this example is from https://en.wikipedia.org/wiki/Examples_of_Markov_chains

The probabilities of the weather conditions (modeled as either sunny or rainy) are represented by a vector [s; r]. Given the weather on the preceding day, the matrix

P = [0.9 0.5; 0.1 0.5]

represents the weather model in which a sunny day is 90% likely to be followed by another sunny day, and a rainy day is 50% likely to be followed by another rainy day. For example, suppose the weather on day 0 is known to be sunny, which corresponds to the vector [1; 0]. Then the weather on day 1 can be predicted by

[0.9 0.5; 0.1 0.5] [1; 0] = [0.9; 0.1]

In this example, predictions for the weather on more distant days are increasingly inaccurate and tend
towards a steady state vector. This vector represents the probabilities of sunny and rainy weather on
all days, and is independent of the initial weather.
Find the steady state, i.e. an eigenvector of the matrix P associated to the eigenvalue 1.
 
Solution. The 1-eigenspace of P is ker(P − I), which is spanned by the vector [5/6; 1/6]. This means that in the long term, about 83.3% of days are sunny.
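The claim that predictions tend toward this steady state regardless of the initial weather can be observed by iterating P; a small Python sketch starting from a sunny day 0:

```python
# Weather Markov chain from Problem 7: iterating P drives the initial
# distribution toward the steady state (5/6, 1/6).
P = [[0.9, 0.5], [0.1, 0.5]]
x = [1.0, 0.0]  # day 0: certainly sunny

for _ in range(50):
    x = [P[0][0]*x[0] + P[0][1]*x[1],
         P[1][0]*x[0] + P[1][1]*x[1]]

print(x)  # approaches [5/6, 1/6], i.e. about [0.8333, 0.1667]
```

Starting from [0; 1] (a rainy day 0) gives the same limit; only the eigenvalue-1 direction survives, because the other eigenvalue of P has absolute value less than 1.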

A matrix is called a Markov matrix if every entry is a nonnegative real number representing a probability and each of its columns sums to 1. Any Markov matrix has 1 as an eigenvalue.

Lemma: A and A^T have the same eigenvalues.

Why? det(A − λI) = det((A − λI)^T) = det(A^T − λI), so the characteristic polynomials of A and A^T are the same!
