
22 Solving linear systems of ODE with constant coefficients.
Part III. Eigenvalues with multiplicities two and more. Matrix exponent

MATH266: Intro to ODE by Artem Novozhilov. Spring 2024
Dealing with eigenvalues of multiplicity 2 and above is mathematically challenging in full generality, so I divide this lecture into two parts. In the first I deal somewhat heuristically with the case of a two by two matrix A that has an eigenvalue λ of (algebraic) multiplicity two, and I consider the two possible cases. In the second part, which in principle covers any situation, I introduce the so-called matrix exponent. The second part of this lecture is usually optional for an introductory ODE class.

22.1 The case of a two by two matrix


Assume that we must solve a system of first order ODEs of the form

ẏ = Ay, y(t) ∈ R²,

and it turns out that the matrix A has only one eigenvalue λ of (algebraic) multiplicity two. For the corresponding eigenvector(s) two quite different situations are possible in this case. First, it is feasible to have two linearly independent eigenvectors corresponding to the same eigenvalue λ. As an example take, e.g., the matrix I₂, which clearly has only one eigenvalue λ = 1 of (algebraic) multiplicity 2. Solving the system

(I₂ − λI₂)v = 0

however yields that any vector v ∈ R² is a solution, and to characterize all the solutions we can take any two linearly independent vectors, which form a basis of R². I can, e.g., take e₁ and e₂ as a basis. In this case the general solution to the corresponding system of ODE is written as

y(t) = C₁v₁e^{λt} + C₂v₂e^{λt},

where v₁, v₂ are two linearly independent eigenvectors corresponding to λ (one says that λ has geometric multiplicity two as well in this case). How often can this happen? Not very often. It can be proved (an exercise for a mathematically inclined student) that a two by two matrix A has one eigenvalue λ of (algebraic) multiplicity 2 with two linearly independent eigenvectors if and only if A = aI₂, i.e., A is a scalar multiple of the identity matrix. Not a very interesting case.
In most cases one finds that if A has one eigenvalue λ with multiplicity two, there is only one linearly independent eigenvector. That is, we know one solution to the ODE system but lack the second one. Let me, using our experience with linear ODE with constant coefficients, try to look for a solution in the form

y(t) = vte^{λt} + we^{λt}.

Plugging this expression into my equation I find

ve^{λt} + λvte^{λt} + λwe^{λt} = Avte^{λt} + Awe^{λt}.


After cancelling the exponent and comparing the coefficients at the same powers of t I get

Av = λv,
Aw = λw + v,

which gives me the recipe for finding the second solution. The first equation is simply the eigenvector problem for v, and hence the second equation is a system of linear equations with respect to the unknown w, which (it requires proof!) can always be solved. Here is an example.
Example 1. Solve

ẏ = \begin{pmatrix} 3 & -18 \\ 2 & -9 \end{pmatrix} y.
The characteristic equation is

λ² + 6λ + 9 = (λ + 3)² = 0,

hence there is one eigenvalue −3 of multiplicity two. By solving the corresponding homogeneous system I find that there is only one linearly independent eigenvector v = (3, 1)⊤, and hence one linearly independent solution is

y₁(t) = (3, 1)⊤ e^{−3t}.

Now to find w = (w₁, w₂)⊤ I need to solve

\begin{pmatrix} 6 & -18 \\ 2 & -6 \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 1 \end{pmatrix},

which has infinitely many solutions. I need only one; hence, using the first equation

w₁ = 3w₂ + 1/2,

I can take

w = (1/2, 0)⊤,
and therefore my second solution is

y₂(t) = (3, 1)⊤ te^{−3t} + (1/2, 0)⊤ e^{−3t},

and the general solution to my problem is given by

y(t) = C₁y₁(t) + C₂y₂(t).
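For readers who like to double-check by machine, here is a minimal sympy sketch (my addition, not part of the lecture) that verifies the second solution of Example 1 symbolically; the matrix, v, and w are copied from the example above.

```python
import sympy as sp

# Verify Example 1: y_2(t) = v*t*e^{-3t} + w*e^{-3t} should solve ydot = A y.
t = sp.symbols('t')
A = sp.Matrix([[3, -18], [2, -9]])
v = sp.Matrix([3, 1])                   # eigenvector for lambda = -3
w = sp.Matrix([sp.Rational(1, 2), 0])   # solves (A + 3I) w = v

y2 = (v * t + w) * sp.exp(-3 * t)
residual = sp.simplify(sp.diff(y2, t) - A * y2)
print(residual)  # Matrix([[0], [0]]) -- y_2 is indeed a solution
```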

Remark 2. To connect this subsection with the following one, I note that the equation

Aw = λw + v

implies (why?) that

(A − λI)²w = 0,

which sometimes can be used to determine the unknown w.
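In fact, for the matrix of Example 1 the stronger statement (A − λI)² = 0 holds (this is the Cayley–Hamilton theorem at work, since (λ + 3)² is the characteristic polynomial), so every w satisfies the displayed equation. A quick numerical sketch of this observation:

```python
import numpy as np

# For Example 1, (A - lambda*I)^2 is the zero matrix, so
# (A - lambda*I)^2 w = 0 automatically holds for any w.
A = np.array([[3.0, -18.0], [2.0, -9.0]])
B = A - (-3.0) * np.eye(2)              # A - lambda*I with lambda = -3
print(B @ B)                            # [[0. 0.], [0. 0.]]
```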


22.2 Matrix exponent

Since "guessing" a solution is not very mathematically attractive, here I present a rigorous and general approach to dealing with repeated eigenvalues. It is based on a mathematical object called the matrix exponent.
Consider the following first order differential equation of the form

y′ = ay, a ∈ R,

with the initial condition

y(0) = y₀.

Of course, we know that the solution to this IVP is given by

y(t) = e^{at}y₀.

However, let us apply the method of iterations to this equation. First note that instead of the differential equation plus the initial condition we can write one integral equation

y(t) = y₀ + ∫₀ᵗ ay(τ) dτ.

Now we plug y(τ) = y₀ into the right-hand side and find the first iteration y₁(t):

y₁(t) = y₀ + ∫₀ᵗ ay₀ dτ = y₀ + ay₀t = (1 + at)y₀.

I plug it into the right-hand side again and find the second iteration

y₂(t) = y₀ + ∫₀ᵗ ay₁(τ) dτ = (1 + at + a²t²/2) y₀.

In general we find

yₙ(t) = y₀ + ∫₀ᵗ ayₙ₋₁(τ) dτ = (1 + at/1! + a²t²/2! + ... + aⁿtⁿ/n!) y₀.

You should recognize inside the parentheses the partial sums of the Taylor series of e^{at}, hence we recover again our familiar solution

y(t) = e^{at}y₀.
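As a quick illustration (my own, with arbitrarily chosen values a = 2, y₀ = 1, t = 0.5), the partial sums produced by the iterations indeed approach e^{at}y₀:

```python
import math

# Picard iterations for y' = a*y reduce to partial sums of exp(a*t)*y0.
a, y0, t = 2.0, 1.0, 0.5
partial, term = 0.0, 1.0                # term holds a^n t^n / n!
for n in range(15):
    partial += term * y0
    term *= a * t / (n + 1)
print(partial)                          # ~2.718281828...
print(math.exp(a * t) * y0)             # same value, e^{a t} y0
```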
So what is the point of these iterations? Let us do the same trick with the system

ẏ = Ay, y(0) = y₀. (1)

Instead of (1) we can write the integral equation

y(t) = y₀ + ∫₀ᵗ Ay(τ) dτ,

where the integral of a vector is understood as the vector of componentwise integrals. I plug y₀ into the right-hand side and find the first iteration

y₁(t) = y₀ + tAy₀ = (I + At)y₀.

Similarly to the previous computation, we find

yₙ(t) = (I + At + A²t²/2! + ... + Aⁿtⁿ/n!) y₀.

The expression in the parentheses is a sum of n × n matrices, and hence a matrix itself. Therefore it is natural to define a matrix, which is called the matrix exponent, as the infinite sum of the form

e^{At} = exp At := I + At + A²t²/2! + ... + Aⁿtⁿ/n! + ...

Note that we can absorb the scalar t into the matrix A.

Definition 3. The matrix exponent e^A of A is the series

e^A = exp A := I + A + A²/2! + ... + Aⁿ/n! + ...   (2)
2! n!
To make sure that the definition makes sense we need to specify what we mean by an infinite series of matrices. I will skip this point here and just mention that series (2) converges absolutely for any matrix A, which allows us to multiply this series by another matrix, differentiate it term by term, or integrate it if necessary.
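A small numerical sketch of this claim (assuming an arbitrary 2 × 2 test matrix of my choosing): truncating series (2) after enough terms reproduces scipy's built-in matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Partial sums of series (2) versus scipy's matrix exponential.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
S = np.zeros((2, 2))
term = np.eye(2)                        # current term A^n / n!
for n in range(1, 30):
    S += term
    term = term @ A / n
print(np.allclose(S, expm(A)))          # True
```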
The matrix exponent has a lot of properties similar to those of the usual exponent. Here are the ones that I will need in what follows:

1. As I already mentioned, series (2) converges absolutely, which means that there is a well defined
limit of partial sums of this series.

2. d/dt e^{At} = Ae^{At} = e^{At}A.

   This property can be proved by term-by-term differentiation and factoring out A (left as an exercise). Note here that both A and e^{At} are n × n matrices, and it is not obvious that Ae^{At} = e^{At}A. Matrices for which AB = BA are called commuting.

3. If A and B commute, then

   e^{A+B} = e^A e^B.

   In particular, A and B commute if one of them is a scalar matrix, i.e., it has the form λI.

4. e^{λIt}v = e^{λt}v,

   for any λ ∈ R and v ∈ Rⁿ. The proof follows from the definition.
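Here is a quick numerical check of property 3 (a sketch with an arbitrary test matrix A; B is taken to be a scalar matrix, which commutes with everything):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, 3.0]])  # arbitrary test matrix
B = 0.5 * np.eye(2)                     # scalar matrix: commutes with A
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # True
# For non-commuting A and B this equality generally fails.
```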

Before using the matrix exponent to solve problems with equal eigenvalues, I would like to state the fundamental theorem of linear first order homogeneous ODE with constant coefficients:

Theorem 4. Consider problem (1). Then this problem has the unique solution

y(t) = e^{At}y₀.

Moreover, for any vector v ∈ Rⁿ, y(t) = e^{At}v is a solution to the system ẏ = Ay.
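In code, Theorem 4 says that expm(A*t) @ y0 solves the IVP. The sketch below (reusing the matrix of Example 1 and an arbitrary initial vector of my choosing) compares it against a numerical ODE solver:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[3.0, -18.0], [2.0, -9.0]])   # matrix of Example 1
y0 = np.array([1.0, 0.0])                   # arbitrary initial condition
t_end = 1.0

sol = solve_ivp(lambda s, y: A @ y, (0.0, t_end), y0,
                rtol=1e-10, atol=1e-12)
print(expm(A * t_end) @ y0)                 # solution by Theorem 4
print(sol.y[:, -1])                         # numerical solution, agrees
```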

22.3 Dealing with equal eigenvalues


It is important to note that the matrix exponent is not that easy to calculate for each particular example. However, the expression e^{At}v can be easily calculated for some special vectors v without knowing the explicit form of e^{At}.
Example 5. For an eigenvector v with eigenvalue λ we have that

e^{At}v = e^{λt}v.

To show this, express At = λIt + At − λIt; then

e^{At}v = e^{λIt + (A−λI)t}v = (by property 3)
= e^{λIt}e^{(A−λI)t}v = (by property 4) = e^{λt}e^{(A−λI)t}v = (by definition)
= e^{λt}(I + (A − λI)t + (A − λI)²t²/2! + ...)v
= e^{λt}(Iv + t(A − λI)v + t²(A − λI)²v/2! + ...) = (by the properties of eigenvectors)
= e^{λt}(Iv + 0 + 0 + ...) = e^{λt}v.

We have actually found exactly those solutions to the system ẏ = Ay that we already knew how to write down using eigenvectors with distinct eigenvalues.
Definition 6. A nonzero vector v is called a generalized eigenvector of the matrix A associated with the eigenvalue λ of algebraic multiplicity k > 1 if

(A − λI)ᵏv = 0.

Now assume that the vector v is a generalized eigenvector with k = 2. Exactly as in the last example, we find that

e^{At}v = e^{λIt + (A−λI)t}v = (by property 3)
= e^{λIt}e^{(A−λI)t}v = (by property 4) = e^{λt}e^{t(A−λI)}v = (by definition)
= e^{λt}(I + t(A − λI) + t²(A − λI)²/2! + t³(A − λI)³/3! + ...)v
= e^{λt}(Iv + t(A − λI)v + t²(A − λI)²v/2! + t³(A − λI)³v/3! + ...) = (by the definition of a generalized eigenvector)
= e^{λt}(Iv + t(A − λI)v + 0 + 0 + ...) = e^{λt}(I + t(A − λI))v.

Hence we found that for a generalized eigenvector v with k = 2, the solution to our system can be taken as

y(t) = e^{λt}(I + t(A − λI))v,

which does not require much computation. The only remaining question is whether we are always able to find enough linearly independent generalized eigenvectors for a given matrix. The answer is positive. Hence we obtain the following algorithm for matrices with equal eigenvalues:

• Assume that we have a real eigenvalue λᵢ of multiplicity 2 and we found only one linearly independent eigenvector vᵢ corresponding to this eigenvalue (if we are able to find two, the problem is solved). Then the first particular solution is given, as before, by

  yᵢ(t) = vᵢ e^{λᵢt}.

  To find a second particular solution to account for this multiplicity, we need to look for a generalized eigenvector that solves the equation

  (A − λᵢI)²uᵢ = 0.

  Note that we are looking for a uᵢ such that the previous equation holds and

  (A − λᵢI)uᵢ ≠ 0.

  We can always find a solution uᵢ of this system which is linearly independent of vᵢ. In this case the second particular solution is given by

  yᵢ₊₁(t) = e^{λᵢt}(I + (A − λᵢI)t)uᵢ.

  This case can be generalized to the case when the multiplicity of an eigenvalue is bigger than 2 (see an example below, and the code sketch right after this algorithm) and to complex conjugate eigenvalues of multiplicity two and higher (we will not need this case for the quizzes and exams).
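The following sympy sketch implements this algorithm for the matrix of Example 1 (an assumption on my part; any 2 × 2 matrix with a double eigenvalue and a single eigenvector works the same way):

```python
import sympy as sp

# Algorithm sketch: eigenvector v, then a generalized eigenvector u
# with (A - lam*I)^2 u = 0 but (A - lam*I) u != 0.
A = sp.Matrix([[3, -18], [2, -9]])
lam = -3
B = A - lam * sp.eye(2)

v = B.nullspace()[0]                    # ordinary eigenvector, (3, 1)^T
candidates = (B**2).nullspace()         # all generalized eigenvectors
u = next(c for c in candidates if B * c != sp.zeros(2, 1))

t = sp.symbols('t')
y_second = sp.exp(lam * t) * (sp.eye(2) + B * t) * u
print(sp.simplify(sp.diff(y_second, t) - A * y_second))  # zero vector
```

Note that the u found this way may differ from the w of Example 1; any admissible choice yields a valid second solution.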

Example 7. Find the general solution to

ẏ = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} y.

The eigenvalues are λ₁ = 2 and λ₂ = 1 (multiplicity 2). An eigenvector for λ₁ can be taken as

v₁ = (0, 0, 1)⊤.

For λ₂ we find that

v₂ = (1, 0, 0)⊤,

and we are short one more linearly independent solution to form a basis for the solution set. Consider

(A − λ₂I)²u = 0,

which has linearly independent solutions

u₁ = (1, 0, 0)⊤ and u₂ = (0, 1, 0)⊤.

The first one is exactly v₂, therefore we keep only u₂.
Finally, one finds that

y₃(t) = e^t(I + (A − λ₂I)t)u₂ = (t, 1, 0)⊤ e^t,

and the general solution is

y(t) = C₁v₁e^{2t} + C₂v₂e^t + C₃y₃(t).
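A numerical cross-check of this example (my addition, with an arbitrary test time t = 0.7): since (A − I)²u₂ = 0, the closed form e^t(I + (A − I)t)u₂ must agree with expm(A t) u₂.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
u2 = np.array([0.0, 1.0, 0.0])
t = 0.7                                  # arbitrary test time
closed = np.exp(t) * (np.eye(3) + (A - np.eye(3)) * t) @ u2
print(closed)                            # (t, 1, 0) * e^t
print(np.allclose(closed, expm(A * t) @ u2))   # True
```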

Example 8. Solve the IVP

ẏ = \begin{pmatrix} 1 & 2 & -3 \\ 1 & 1 & 2 \\ 1 & -1 & 4 \end{pmatrix} y,   y(0) = (1, 0, 0)⊤.

I find that λ = 2 is the only eigenvalue, of multiplicity 3. Its eigenvector is

v = (−1, 1, 1)⊤,

and a first linearly independent solution is given by

y₁(t) = (−1, 1, 1)⊤ e^{2t}.

To find two more linearly independent solutions we need to look for generalized eigenvectors. Consider first

(A − 2I)²u = 0,

which has two vectors as a solution basis:

u₁ = (1, 0, 0)⊤ and u₂ = (0, 1, 1)⊤.
Note that v, u₁, u₂ are linearly dependent (why?), and therefore we can keep only one of these vectors, e.g., u₁. The particular solution in this case is

y₂(t) = e^{2t}(I + (A − 2I)t)u₁ = (1 − t, t, t)⊤ e^{2t}.

To find one more linearly independent solution, let us look for a generalized eigenvector with k = 3:

(A − 2I)³w = 0.

Note that (A − 2I)³ = 0, therefore any vector w will do; the only requirement is that it be linearly independent of v and u₁. For instance we can take

w = (0, 0, 1)⊤.

Then the last solution is given by

y₃(t) = e^{2t}(I + (A − 2I)t + (A − 2I)²t²/2)w = (−3t + t²/2, 2t − t²/2, 1 + 2t − t²/2)⊤ e^{2t},

and hence the general solution to our problem is

y(t) = C₁(−1, 1, 1)⊤e^{2t} + C₂(1 − t, t, t)⊤e^{2t} + C₃(−3t + t²/2, 2t − t²/2, 1 + 2t − t²/2)⊤e^{2t}.

Applying the initial conditions, one finds C₁ = C₃ = 0, C₂ = 1, and finally the solution is

y(t) = (1 − t, t, t)⊤ e^{2t}.
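As before, the answer can be cross-checked numerically; a sketch (not from the lecture) with an arbitrary test time:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0, -3.0],
              [1.0, 1.0, 2.0],
              [1.0, -1.0, 4.0]])
y0 = np.array([1.0, 0.0, 0.0])
t = 0.3                                  # arbitrary test time
closed = np.exp(2 * t) * np.array([1 - t, t, t])
print(np.allclose(closed, expm(A * t) @ y0))   # True
```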

22.4 How to find e^{At}


Recall that if we are given the system

ẏ = Ay,   (3)

then its solution space is an n-dimensional vector space with a basis {y₁, ..., yₙ}. The matrix Φ(t) that has yᵢ(t) as its i-th column is called a fundamental matrix solution:

Φ(t) = (y₁(t) | ... | yₙ(t)).

Theorem 9. Let Φ(t) be a fundamental matrix solution of (3). Then

e^{At} = Φ(t)Φ⁻¹(0).

Proof. First note that if Φ(t) is a fundamental matrix solution, then it solves the matrix differential equation

Φ̇ = AΦ.

Moreover, since the columns of Φ(t) are linearly independent, det Φ(0) ≠ 0. Now, since

d/dt e^{At} = Ae^{At},   e^{A·0} = I,

e^{At} is a fundamental matrix solution itself. For any two fundamental matrix solutions X(t) and Y(t) it is true that

X(t) = Y(t)C,

where C is a constant matrix. The last equality is true since each column of X(t) can be expressed as a linear combination of the columns of Y(t). Therefore, writing e^{At} = Φ(t)C and plugging in t = 0, we find

I = Φ(0)C  ⟹  C = Φ⁻¹(0).

This approach is usually not the best way to find e^{At}, since it requires quite a few calculations.

Example 10. Consider system (3) with

A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 3 & 2 \\ 0 & 0 & 5 \end{pmatrix}.

To find a fundamental matrix solution, we find the eigenvalues and eigenvectors of A: the eigenvalues are 1, 3, 5 with eigenvectors (1, 0, 0)⊤, (1, 2, 0)⊤, (1, 2, 2)⊤, hence

Φ(t) = \begin{pmatrix} e^t & e^{3t} & e^{5t} \\ 0 & 2e^{3t} & 2e^{5t} \\ 0 & 0 & 2e^{5t} \end{pmatrix}.

Next,

Φ⁻¹(0) = \begin{pmatrix} 1 & -1/2 & 0 \\ 0 & 1/2 & -1/2 \\ 0 & 0 & 1/2 \end{pmatrix}.
Finally,

e^{At} = Φ(t)Φ⁻¹(0) = \begin{pmatrix} e^t & -e^t/2 + e^{3t}/2 & -e^{3t}/2 + e^{5t}/2 \\ 0 & e^{3t} & -e^{3t} + e^{5t} \\ 0 & 0 & e^{5t} \end{pmatrix}.
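The whole computation of Example 10 can be reproduced symbolically. The sketch below (my addition) builds Φ(t) from sympy's eigenvector routine; the eigenvectors may come out scaled differently from the ones above, but Φ(t)Φ⁻¹(0) is the same matrix regardless of the scaling.

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 1, 1], [0, 3, 2], [0, 0, 5]])

# Assemble a fundamental matrix solution from the eigenpairs of A.
cols = []
for lam, mult, vecs in A.eigenvects():
    for v in vecs:
        cols.append(v * sp.exp(lam * t))
Phi = sp.Matrix.hstack(*cols)

# e^{At} = Phi(t) Phi(0)^{-1}, by Theorem 9.
print(sp.simplify(Phi * Phi.subs(t, 0).inv()))
```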
