Lecture 22
Part III. Eigenvalues with multiplicities two and more. Matrix exponential
Dealing with eigenvalues of multiplicity two and higher is mathematically challenging in full generality; therefore I divide this lecture into two parts. In the first one I somewhat heuristically deal with the case of a two by two matrix A that has an eigenvalue λ of (algebraic) multiplicity two and consider the two possible cases. In the second part, which in principle covers any situation, I introduce the so-called matrix exponential. The second part of this lecture is usually optional for an introductory ODE class.
22.1 Two by two matrices with a repeated eigenvalue

Consider the system

ẏ = Ay, y(t) ∈ R²,

and suppose that matrix A has only one eigenvalue λ of (algebraic) multiplicity two. For the corresponding eigenvector(s) two quite different situations are possible. First, it is feasible to have two linearly independent eigenvectors corresponding to the same eigenvalue λ. As an example take, e.g., the matrix I₂, which clearly has only one eigenvalue λ = 1 of (algebraic) multiplicity 2. Solving the system
(I₂ − λI₂)v = 0
however yields that any vector v ∈ R² is a solution, and to characterize all the solutions we can take any two linearly independent vectors, which form a basis of R². I can, e.g., take e1 and e2 as a basis.
In this case the general solution to the corresponding system of ODEs is written as

y(t) = C1eλt v1 + C2eλt v2,

where v1, v2 are two linearly independent eigenvectors corresponding to λ (it is said that λ has geometric multiplicity two as well in this case). How often can this happen? Not very often. It can
be proved (an exercise for a mathematically inclined student) that a two by two matrix A has one eigenvalue λ of (algebraic) multiplicity 2 with two linearly independent eigenvectors if and only if A = aI₂, i.e., A is a scalar multiple of the identity matrix. Not a very interesting case.
In most cases one finds that if A has one eigenvalue λ of multiplicity two, there is only one linearly independent eigenvector. That is, we know one solution to the ODE system but lack the second one. Let me, using our experience with linear ODEs with constant coefficients, look for a solution of the form

y(t) = vteλt + weλt.
Plugging this expression into my equation, I find

veλt + λvteλt + λweλt = Avteλt + Aweλt.

After cancelling the exponent and comparing the coefficients of equal powers of t, I get
Av = λv,
Aw = λw + v,
which gives me the recipe to find the second solution. The first equation is simply the eigenvector problem for v, and hence the second equation is a system of linear equations with respect to the unknown w, which (this requires proof!) can always be solved. Here is an example.
Example 1. Solve

ẏ = [3 −18; 2 −9] y.
The characteristic equation is
λ² + 6λ + 9 = (λ + 3)² = 0,
hence there is one eigenvalue λ = −3 of multiplicity two. By solving the corresponding homogeneous system I
find that there is only one linearly independent eigenvector v = (3, 1)⊤, and hence one linearly independent solution is

y1(t) = (3, 1)⊤e−3t.
To find the second solution I need to solve (A − λI)w = v, i.e.,

[6 −18; 2 −6] w = (3, 1)⊤,

which has infinitely many solutions. I need only one; hence, from the first equation,

w1 = 3w2 + 1/2,
I can take

w = (1/2, 0)⊤,
and therefore my second solution is

y2(t) = (3, 1)⊤te−3t + (1/2, 0)⊤e−3t,

and the general solution is y(t) = C1y1(t) + C2y2(t).
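As a quick check of this example, here is a minimal Python sketch (NumPy only; the variable names are my own):

```python
import numpy as np

A = np.array([[3.0, -18.0],
              [2.0, -9.0]])
lam = -3.0
v = np.array([3.0, 1.0])   # eigenvector found above
w = np.array([0.5, 0.0])   # generalized eigenvector found above

# The recipe: A v = lam v and A w = lam w + v.
print(np.allclose(A @ v, lam * v))       # True
print(np.allclose(A @ w, lam * w + v))   # True

# Check that y2(t) = v t e^{lam t} + w e^{lam t} solves y' = A y.
t = 0.7
y2 = (v * t + w) * np.exp(lam * t)
dy2 = (v + lam * (v * t + w)) * np.exp(lam * t)  # product rule
print(np.allclose(dy2, A @ y2))          # True
```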
Remark 2. To connect this subsection with the following one, I note that the equation

Aw = λw + v

can be rewritten as (A − λI)w = v, and hence

(A − λI)²w = (A − λI)v = 0,

i.e., w satisfies exactly the condition that defines a generalized eigenvector below.
22.2 Matrix exponential
Since “guessing” a solution is not very attractive mathematically, here I present a rigorous and general approach to dealing with repeated eigenvalues. It is based on a mathematical object called the matrix exponential.
Consider the first order differential equation

y′ = ay, a ∈ R,

with the initial condition y(0) = y0, whose unique solution, as we know, is

y(t) = eat y0.
Now let us apply the method of iterations to this equation. First note that the differential equation together with the initial condition can be replaced by one integral equation

y(t) = y0 + ∫₀ᵗ ay(τ) dτ.
Now we plug y(τ) = y0 into the right hand side and find the first iteration y1(t):

y1(t) = y0 + ∫₀ᵗ ay0 dτ = y0 + ay0t = (1 + at)y0.
I plug it in again and find the second iteration

y2(t) = y0 + ∫₀ᵗ ay1(τ) dτ = ( 1 + at + a²t²/2 ) y0.
In general we find

yn(t) = y0 + ∫₀ᵗ ayn−1(τ) dτ = ( 1 + at/1! + a²t²/2! + ... + aⁿtⁿ/n! ) y0.
You should recognize inside the parentheses the partial sums of the Taylor series of eat, hence we again recover our familiar solution

y(t) = eat y0.
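These iterations can also be carried out numerically. Here is a minimal Python sketch of the Picard iteration for y′ = ay (the grid, the number of iterations, and the tolerance are my own choices):

```python
import numpy as np

# Picard iterations for y' = a*y, y(0) = y0, on a uniform grid.
a, y0 = 0.8, 2.0
t = np.linspace(0.0, 1.0, 201)
y = np.full_like(t, y0)                  # y_0(t) = y0
for _ in range(25):
    integrand = a * y
    # cumulative trapezoid approximation of the integral from 0 to t
    integral = np.concatenate(([0.0], np.cumsum(
        (integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))
    y = y0 + integral                    # y_{n+1}(t) = y0 + int_0^t a*y_n

# The iterates converge to the exact solution e^{at} y0.
print(np.max(np.abs(y - y0 * np.exp(a * t))) < 1e-4)  # True
```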
So what is the point of these iterations? Let us do the same trick with the system

ẏ = Ay, y(0) = y0, y(t) ∈ Rⁿ, (1)

which is equivalent to the integral equation

y(t) = y0 + ∫₀ᵗ Ay(τ) dτ,

where the integral of a vector is understood componentwise. I plug y(τ) = y0 into the right hand side, find the first iteration, and continue exactly as before to obtain

yn(t) = ( I + At + A²t²/2! + ... + Aⁿtⁿ/n! ) y0.

The expression in the parentheses is a sum of n × n matrices, and hence a matrix itself. Therefore, it
is natural to define a matrix, called the matrix exponential, as the infinite sum of the form

eAt = exp At := I + At + A²t²/2! + ... + Aⁿtⁿ/n! + ...
Note that we can absorb the scalar t into the matrix A, so it suffices to define

eA = exp A := I + A + A²/2! + ... + Aⁿ/n! + ... (2)
To make sure that this definition makes sense, we need to specify what we mean by an infinite series of matrices. I will skip this point here and just mention that series (2) converges absolutely for any matrix A, which allows us to multiply this series by another matrix, differentiate it term by term, or integrate it if necessary.
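To see definition (2) in action, here is a minimal Python sketch comparing the partial sums of series (2) with the library matrix exponential scipy.linalg.expm (the test matrix is my own):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -18.0], [2.0, -9.0]])

# Partial sums of series (2): I + A + A^2/2! + ... + A^n/n!
S, term = np.eye(2), np.eye(2)
for n in range(1, 30):
    term = term @ A / n      # A^n / n! built incrementally
    S += term

print(np.allclose(S, expm(A)))  # True: the partial sums converge to e^A
```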
The matrix exponential has a lot of properties similar to those of the usual exponential. Here are those that I will need in the following:
1. As I already mentioned, series (2) converges absolutely, which means that there is a well-defined limit of the partial sums of this series.
2. (d/dt) eAt = AeAt = eAt A.

This property can be proved by term-by-term differentiation and factoring out A (left as an exercise). Note here that both A and eAt are n × n matrices, and it is not obvious that AeAt = eAt A. Matrices for which AB = BA are called commuting.
3. If AB = BA, then eA+B = eA eB; in general this is false for non-commuting matrices.

4. eλIt v = eλt v, for any λ ∈ R and v ∈ Rn. The proof follows from the definition.
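A minimal Python sketch illustrating properties 2–4 (the matrices are my own examples; scipy.linalg.expm computes the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -18.0], [2.0, -9.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])
t, lam, v = 0.3, -3.0, np.array([3.0, 1.0])

# Property 2: e^{At} commutes with A.
E = expm(A * t)
print(np.allclose(A @ E, E @ A))                     # True

# Property 3: e^{A+B} = e^A e^B needs AB = BA; here A, B do not commute.
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # False

# Property 4: e^{lam*I*t} v = e^{lam*t} v.
print(np.allclose(expm(lam * t * np.eye(2)) @ v, np.exp(lam * t) * v))  # True
```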
Before using the matrix exponential to solve problems with repeated eigenvalues, I would like to state the fundamental theorem for linear first order homogeneous ODEs with constant coefficients:
Theorem 4. Consider problem (1). Then this problem has the unique solution
y(t) = eAt y0.

Moreover, for any vector v ∈ Rn, y(t) = eAt v is a solution to the system ẏ = Ay.
In particular, if v is an eigenvector of A corresponding to the eigenvalue λ, then, using properties 3 and 4 and the fact that (A − λI)v = 0,

eAt v = eλIt e(A−λI)t v = eλt ( I + t(A − λI) + (t²/2!)(A − λI)² + ... ) v = eλt v.

We have thus found exactly those solutions to the system ẏ = Ay that can be written down using distinct eigenvalues.
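A quick numerical illustration of Theorem 4, sketched in Python (the matrix, the initial condition, and the finite-difference check are my own choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -18.0], [2.0, -9.0]])
y0 = np.array([1.0, 1.0])

def y(t):
    # Theorem 4: y(t) = e^{At} y0 solves y' = A y, y(0) = y0.
    return expm(A * t) @ y0

# Central finite difference check that y'(t) = A y(t).
t, h = 0.5, 1e-6
dy = (y(t + h) - y(t - h)) / (2 * h)
print(np.allclose(dy, A @ y(t), atol=1e-4))  # True
print(np.allclose(y(0.0), y0))               # True
```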
Definition 6. A nonzero vector v is called a generalized eigenvector of matrix A associated with the eigenvalue λ of algebraic multiplicity k > 1 if

(A − λI)ᵏv = 0.
Now assume that vector v is a generalized eigenvector with k = 2. Exactly as in the computation above, we find that

eAt v = eλt e(A−λI)t v = eλt ( I + t(A − λI) ) v,

since (A − λI)²v = 0 makes all the higher order terms of the series vanish.
Hence we found that for a generalized eigenvector v with k = 2, a solution to our system can be taken as

y(t) = eλt ( I + t(A − λI) ) v,

which does not require much computation. The only remaining question is whether we are always able to find enough linearly independent generalized eigenvectors for a given matrix. The answer is positive. Hence we obtain an algorithm for matrices with repeated eigenvalues:
Assume that we have a real eigenvalue λi of multiplicity 2 and we have found only one linearly independent eigenvector vi corresponding to this eigenvalue (if we are able to find two, the problem is solved). Then the first particular solution is given, as before, by

yi(t) = vi eλi t.
To find a second particular solution accounting for this multiplicity, we need to look for a generalized eigenvector that solves the equation

(A − λiI)²ui = 0.

Note that we are looking for a ui such that the previous equation holds and

(A − λiI)ui ≠ 0.
We can always find such a ui, which is linearly independent of vi. In this case the second particular solution is given by

yi+1(t) = eλi t ( I + (A − λiI)t ) ui.
This recipe can be generalized to the case when the multiplicity of an eigenvalue is bigger than 2 (see the example below and the sketch right after this algorithm) and to complex conjugate eigenvalues of multiplicity two and higher (we will not need this case for the quizzes and exams).
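This algorithm is straightforward to run numerically. Here is a minimal Python sketch for the matrix of Example 1 (scipy.linalg.null_space returns an orthonormal basis of the kernel; the names are mine):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[3.0, -18.0], [2.0, -9.0]])
lam = -3.0
B = A - lam * np.eye(2)

v = null_space(B)       # eigenvectors: a single column here
U = null_space(B @ B)   # solutions of (A - lam I)^2 u = 0
# Pick a generalized eigenvector with (A - lam I) u != 0:
u = next(c for c in U.T if not np.allclose(B @ c, 0.0))

def y2(t):
    # Second particular solution: e^{lam t} (I + (A - lam I) t) u
    return np.exp(lam * t) * (np.eye(2) + B * t) @ u

print(v.ravel())  # proportional to (3, 1)
print(y2(0.0))    # equals u at t = 0
```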
Consider, as an example, the system

ẏ = [1 1 0; 0 1 0; 0 0 2] y.

The eigenvalues are λ1 = 2 and λ2 = 1 (multiplicity 2). An eigenvector for λ1 can be taken as

v1 = (0, 0, 1)⊤.
An eigenvector for λ2 can be taken as v2 = (1, 0, 0)⊤, which gives two linearly independent solutions v1e2t and v2et; we are short one more linearly independent solution to form a basis for the solution set.
Consider
(A − λ2I)²u = 0,

which has linearly independent solutions

u1 = (1, 0, 0)⊤ and u2 = (0, 1, 0)⊤.
The first one is exactly v2, therefore we keep only u2.
Finally, one finds that

y3(t) = et ( I + (A − λ2I)t ) u2 = (t, 1, 0)⊤et,
and the general solution is

y(t) = C1v1e2t + C2v2et + C3y3(t).
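The generalized-eigenvector formula can be checked against the matrix exponential directly. A minimal Python sketch for this example (assuming the 3 × 3 matrix written above):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
u2 = np.array([0.0, 1.0, 0.0])

t = 0.9
# y3(t) = e^t (I + (A - I) t) u2 versus e^{At} u2:
y3_formula = np.exp(t) * (np.eye(3) + (A - np.eye(3)) * t) @ u2
y3_expm = expm(A * t) @ u2
print(np.allclose(y3_formula, y3_expm))  # True
print(y3_formula / np.exp(t))            # [t, 1, 0]
```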
In the next example the eigenvalue λ2 = 2 has algebraic multiplicity three, with a single linearly independent eigenvector v. To find two more linearly independent solutions we need to look for generalized eigenvectors.
Consider first
(A − λ2I)²u = 0,
which has two vectors as a solution basis,

u1 = (1, 0, 0)⊤ and u2 = (0, 1, 1)⊤.
Note that v, u1, u2 are linearly dependent (why?) and therefore we can keep only one of the vectors,
e.g., u1. The particular solution in this case is

y2(t) = e2t ( I + (A − 2I)t ) u1 = (1 − t, t, t)⊤e2t.
To find one more linearly independent solution let us look for a generalized eigenvector with k = 3:
(A − λ2I)³w = 0.
Note that (A − λ2I)³ = 0, therefore any vector w will do; the only thing we need is that it be linearly independent of v and u1. For instance we can take
w = (0, 0, 1)⊤.
The corresponding third solution is

y3(t) = e2t ( I + t(A − 2I) + (t²/2)(A − 2I)² ) w,

and the general solution is y(t) = C1ve2t + C2y2(t) + C3y3(t).
Applying the initial conditions, one finds C1 = C3 = 0, C2 = 1, and finally the solution is

y(t) = (1 − t, t, t)⊤e2t.
Recall that a fundamental matrix solution Φ(t) of the system

ẏ = Ay (3)

is a matrix whose columns form n linearly independent solutions.

Theorem 9. Let Φ(t) be a fundamental matrix solution of (3). Then

eAt = Φ(t)Φ−1(0).
Proof. First note that if Φ(t) is a fundamental matrix solution it solves the matrix differential equation
Φ̇ = AΦ.
Moreover, since the columns of Φ(t) are linearly independent, det Φ(0) ≠ 0. Now, since
(d/dt) eAt = AeAt, eA·0 = I,
then eAt is a fundamental matrix solution itself. For any two fundamental matrix solutions X(t) and
Y (t) it is true that
X(t) = Y (t)C,
where C is a constant matrix. The last equality is true since each column of X(t) can be expressed
as a linear combination of the columns of Y(t). Therefore, taking X(t) = eAt and Y(t) = Φ(t) and plugging in t = 0, we find I = Φ(0)C, hence C = Φ−1(0) and eAt = Φ(t)Φ−1(0), as claimed.
This approach is usually not the best way to find eAt and requires quite a few calculations. For example, for the system

ẏ = [1 1 1; 0 3 2; 0 0 5] y

the eigenvalues are 1, 3, 5 with eigenvectors (1, 0, 0)⊤, (1, 2, 0)⊤, (1, 2, 2)⊤, hence

Φ(t) = [et e3t e5t; 0 2e3t 2e5t; 0 0 2e5t].

Next,
Φ−1(0) = [1 −1/2 0; 0 1/2 −1/2; 0 0 1/2].
Finally,

eAt = Φ(t)Φ−1(0) = [et  −(1/2)et + (1/2)e3t  −(1/2)e3t + (1/2)e5t; 0  e3t  −e3t + e5t; 0  0  e5t].
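A minimal Python check of this computation (Φ(t) is built from the eigenvector solutions above; scipy.linalg.expm serves as the reference):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 5.0]])

def Phi(t):
    # Columns are the eigenvector solutions v_i e^{lambda_i t}.
    V = np.array([[1.0, 1.0, 1.0],
                  [0.0, 2.0, 2.0],
                  [0.0, 0.0, 2.0]])
    return V * np.exp(np.array([1.0, 3.0, 5.0]) * t)

t = 0.4
# Theorem 9: e^{At} = Phi(t) Phi(0)^{-1}.
print(np.allclose(Phi(t) @ np.linalg.inv(Phi(0)), expm(A * t)))  # True
```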