SMSTC (2024/25)
Dynamical Systems and Conservation Laws
Chapter 1: Autonomous Systems of Ordinary Differential
Equations (ODEs)
Lucia Scardia
Original notes: Jack Carr HWU
www.smstc.ac.uk
Contents
1.1 What is a scalar autonomous ODE? . . . . . . . . . . . . . . . . . . . . . . . 1–1
1.2 Existence, Uniqueness and Maximal intervals . . . . . . . . . . . . . . . . . 1–2
1.3 Autonomous Systems, Phase Portraits and Stability . . . . . . . . . . . . . 1–6
1.3.1 Stability of linear systems with constant coefficients . . . . . . . . . . . . . . . 1–11
1.3.2 Stability of nonlinear systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–16
1.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–18
where the (highest) derivative is expressed in terms of everything else, with x : I → Rn , and f : Ã → Rn .
In components, we can write (1.1) as
x1′(t) = f1(t, x1(t), x2(t), · · · , xn(t)),
x2′(t) = f2(t, x1(t), x2(t), · · · , xn(t)),
                   ...                                  (1.2)
xn′(t) = fn(t, x1(t), x2(t), · · · , xn(t)),
where

x(t) = (x1(t), · · · , xn(t))ᵀ,  and  f(t, x) = (f1(t, x1, · · · , xn), · · · , fn(t, x1, · · · , xn))ᵀ.   (1.3)
We say that x : I → Rn is a solution of (1.1) if x is differentiable in I, (t, x(t)) ∈ Ã for every t ∈ I, and x′(t) = f(t, x(t)) for every t ∈ I.
A system like (1.1) is called autonomous if it is of the form

x′(t) = f(x(t)),   (1.4)

i.e., if there is no explicit t dependence in the right-hand side, with f : A → Rn, and A ⊂ Rn an open set.
SMSTC: Dynamical Systems and Conservation Laws
Remark 1.1.1 (Why only first order?) While the general form (1.1) apparently involves only a first
derivative, it can include differential equations of higher order as a special case.
To see this, consider an nth-order differential (scalar) equation of the form
dⁿx/dtⁿ = f(t, x, dx/dt, . . . , dⁿ⁻¹x/dtⁿ⁻¹).   (1.5)
This can be written as a first-order equation for the vector-valued function x in (1.3) as
d/dt (x1, x2, · · · , xn)ᵀ = (x2, x3, · · · , xn, f(t, x1, . . . , xn))ᵀ.   (1.6)
We recover the differential equation (1.5) by identifying x = x1 . Then the first n − 1 components of (1.6)
tell us that
x2 = dx/dt,  x3 = dx2/dt = d²x/dt²,  . . . ,  xn = dⁿ⁻¹x/dtⁿ⁻¹,
so that the last component is precisely equation (1.5). In conclusion, higher-order equations can be
transformed into a system of first-order equations, and this is why first-order systems are all we will focus
on.
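This reduction is also the standard route for solving higher-order equations numerically. As a minimal sketch (the integrator and all names below are mine, not from the notes), the following hand-rolled classical RK4 scheme integrates x″ = −x by first rewriting it as the system x1′ = x2, x2′ = −x1:

```python
import math

def rk4(f, x0, t0, t1, n):
    """Integrate the autonomous system x'(t) = f(x) with classical RK4."""
    h = (t1 - t0) / n
    x = list(x0)
    for _ in range(n):
        k1 = f(x)
        k2 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k1)])
        k3 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k2)])
        k4 = f([xi + h * ki for xi, ki in zip(x, k3)])
        x = [xi + (h / 6.0) * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
    return x

# x'' = -x with x(0) = 1, x'(0) = 0 becomes x1' = x2, x2' = -x1
f = lambda x: [x[1], -x[0]]
x = rk4(f, [1.0, 0.0], 0.0, 1.0, 1000)
# exact solution: x1(t) = cos t, x2(t) = -sin t
```

The numerical solution agrees with (cos t, −sin t) to roughly the fourth-order accuracy of the scheme.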
In general, it is not necessarily true that these two basic properties - existence and uniqueness - are
satisfied, as shown in the following examples. For the examples we focus on the scalar case n = 1.
That is however not the only solution of the IVP (1.8). In fact, for any c > 0, the following is also a solution:

xc(t) = 0 for t ≤ c,    xc(t) = (1/4)(t − c)² for t > c.
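One can check this family numerically. The sketch below assumes the IVP (1.8) is x′(t) = √x(t), x(0) = 0, which is consistent with the piecewise formula above (an assumption, since (1.8) is not restated here); it compares a central-difference approximation of xc′ with √xc at a few sample times:

```python
def xc(t, c):
    # the one-parameter family: identically 0 up to time c, then a parabola
    return 0.0 if t <= c else 0.25 * (t - c) ** 2

c, h = 1.0, 1e-6
for t in [0.5, 1.0, 2.0, 3.0]:
    deriv = (xc(t + h, c) - xc(t - h, c)) / (2 * h)   # approximates xc'(t)
    assert abs(deriv - xc(t, c) ** 0.5) < 1e-5        # xc'(t) = sqrt(xc(t))
```

Every member of the family passes the check, illustrating the loss of uniqueness.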
Example 1.2.2 (Existence for only a finite time) Solve the IVP
x′(t) = x²(t),
x(0) = 1.
This is again an autonomous ODE. It is easy to show that the function x : (−∞, 1) → R defined as
x(t) = 1/(1 − t)
is a solution of the IVP (check!). However the function is not defined for every t ∈ R, nor is a solution
for every t ∈ R (even if f is defined everywhere!). In fact the function x blows up at time t = 1, namely
x(t) → ∞ as t → 1.
Note: As a function, x(t) = (1 − t)⁻¹ is defined for all t ≠ 1, so it is defined in a domain much bigger than the maximal interval where it is a solution. We will recall the definition of maximal intervals below.
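A quick numerical check (a sketch of mine, not part of the notes) confirms both claims: x(t) = 1/(1 − t) satisfies x′ = x² wherever it is defined, and it blows up as t → 1⁻:

```python
x = lambda t: 1.0 / (1.0 - t)

h = 1e-6
for t in [-2.0, 0.0, 0.5, 0.9]:
    deriv = (x(t + h) - x(t - h)) / (2 * h)   # central-difference x'(t)
    assert abs(deriv - x(t) ** 2) < 1e-3      # x'(t) = x(t)^2 holds

# blow-up as t -> 1^-: the solution leaves every bounded set
assert x(0.999) > 999
```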
Example 1.2.3 (Non-existence) If f is not continuous then (1.7) may not have a solution. For example, consider the IVP

x′(t) = f(t, x(t)),  x(0) = 0,   (1.10)

with f(t, x) = 1 if t ≥ 0, and f(t, x) = 0 if t < 0.
If x is a solution of (1.10), then it must be x(t) = t for t ≥ 0 and x(t) = 0 for t < 0. But this function is
not differentiable at t = 0, and hence it cannot be a solution (since the interval of existence must contain
the initial time t0 = 0).
An important theorem by C.E. Picard (1856-1914) says that, under fairly mild assumptions on f , IVPs
(initial value problems) have unique solutions (at least ‘locally’, close to the initial time t0 ).
The assumptions of Theorem 1.1 can be weakened, although, as observed above, continuity has to be
required, otherwise even existence could be lost.
Both local existence and uniqueness theorems for the IVP (1.12) can be obtained, provided that the
function f satisfies a certain property, called Lipschitz continuity. This property is roughly between
continuity and differentiability, i.e. it is stronger than continuity, but weaker than differentiability. More
precisely, we have the following.
‖f(s, x) − f(s, y)‖ ≤ L(t,z) ‖x − y‖,  for every (s, x), (s, y) ∈ U(t,z).
The function f is said to be locally Lipschitz-continuous with respect to its second variable in
à if it is locally Lipschitz-continuous with respect to its second variable at all points (t, x) ∈ Ã. Finally,
f is said to be globally Lipschitz-continuous with respect to its second variable, if there is a
constant L > 0 such that
‖f(s, x) − f(s, y)‖ ≤ L ‖x − y‖,  for every (s, x), (s, y) ∈ Ã.
The following is known as the Picard-Lindelöf Theorem; it is a version of Theorem 1.1 with weaker assumptions on f.
Theorem 1.2 (Existence and uniqueness (Picard-Lindelöf )) Let à ⊂ R × Rn be open, and let
f : Ã → Rn be a continuous function. Assume, in addition, that f is locally Lipschitz-continuous with
respect to its second variable x in Ã. Then, for every (t0 , x0 ) ∈ Ã there exists δ > 0 such that the solution
to the initial value problem

x′(t) = f(t, x(t)),  x(t0) = x0,
exists and is unique in the interval I := (t0 − δ, t0 + δ).
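The standard proof of Theorem 1.2 constructs the solution as the limit of the Picard iterates x_{n+1}(t) = x0 + ∫_{t0}^{t} f(s, xn(s)) ds. The sketch below (my own illustration, not from the notes) runs this iteration for x′ = x, x(0) = 1, representing each iterate by its polynomial coefficients; the iterates converge to the Taylor series of the true solution e^t:

```python
import math

def picard_step(coeffs):
    """One Picard iteration for x' = x, x(0) = 1, on polynomial
    coefficients: x_new(t) = 1 + integral_0^t x_old(s) ds."""
    return [1.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

x = [1.0]              # x_0(t) = 1, the constant initial guess
for _ in range(10):
    x = picard_step(x)

# the iterates reproduce the Taylor coefficients of e^t, namely 1/k!
for k, c in enumerate(x):
    assert abs(c - 1.0 / math.factorial(k)) < 1e-12
```

After n iterations the iterate agrees with e^t up to order n, mirroring the contraction argument in the proof.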
Can we weaken the assumptions of Picard-Lindelöf’s Theorem further? Well, Example 1.2.1 tells us that
we can lose uniqueness if f is not locally Lipschitz continuous in the variable x.
This suggests that, if we want uniqueness, we cannot weaken the assumptions of the Picard-Lindelöf Theorem,
Theorem 1.2. On the other hand, in Example 1.2.1 we still have existence, even if the assumptions of
Theorem 1.2 are not satisfied. However, we know by Example 1.2.3 that continuity is necessary for
existence. It turns out that continuity is in fact sufficient to guarantee local existence of solutions of IVP,
but no uniqueness is guaranteed.
Theorem 1.3 (Peano Existence Theorem) Let Ã ⊂ R × Rn be open, and let f : Ã → Rn be a continuous function. Then, for every (t0, x0) ∈ Ã there exists δ > 0 such that the initial value problem

x′(t) = f(t, x(t)),  x(t0) = x0,

has at least one solution in the interval (t0 − δ, t0 + δ).
and assume that the assumptions of Picard’s (or Picard-Lindelöf) Theorem are satisfied. Then we know
that there exists δ > 0 such that we have existence and uniqueness for a solution x of the IVP inside the
small interval (t0 − δ, t0 + δ). However, the function x may be a solution of the IVP in a larger interval,
even in R. How do we find the largest interval (or maximal interval) where x is a solution of the IVP?
It turns out that, for every initial value problem satisfying the assumptions of Picard-Lindelöf Theorem,
there exists a maximal interval of existence I = (t− , t+ ). Essentially, the solution exists until (t, x(t))
reaches the “boundary” of the domain à of f .
SMST C: Dynamical Systems and Conservation Laws 1–5
(i) t+ = +∞;
(ii) t+ < +∞, and lim_{t→(t+)−} ‖x(t)‖ = +∞;
(iii) t+ < +∞, and (t, x(t)) approaches the boundary ∂Ã as t → (t+)−.
We now give examples to illustrate the three different possibilities for t+ and t− .
Example 1.2.6 Let us give some examples in which the three different possibilities for t− or t+ occur,
in the scalar case.
x(t) = tanh t, I = R.
and

lim_{t→(t+)−} x(t) = lim_{t→(π/2)−} tan t = +∞.
In this case, Ã = {(t, x) ∈ R2 : x ≠ 0}. The solution x and the maximal interval of existence are given by

x(t) = √(2t + 1),    I = (−1/2, +∞).
In this case, t− = −1/2 and t+ = +∞. Moreover,
lim_{t→(t−)+} (t, x(t)) = lim_{t→(−1/2)+} (t, x(t)) = (−1/2, 0) ∈ ∂Ã.
Thus, when t → (t− )+ the graph of the solution approaches the boundary of à .
If we assume that f is sufficiently smooth (e.g. continuous in an open set A ⊂ Rn with continuous matrix-valued derivative ∂f/∂x : A → Rn×n), then Picard's theorem tells us that the solution to the initial-value problem

x′(t) = f(x(t)),  x(t0) = x0   (1.16)

for any t0 ∈ R and for x0 ∈ A is unique.
Note: In the autonomous case, in (1.18) there is no loss of generality in assuming that t0 = 0. To see this let x be a solution of (1.18) and set y(t) := x(t + t0). Then y′(t) = x′(t + t0) = f(x(t + t0)) = f(y(t)) and y(0) = x(t0) = x0, so y solves the same equation with initial time 0.
for x0 ∈ A. We recall that Ix0 denotes the maximal interval where the solution exists and satisfies the
IVP (1.17).
Ω = {(t, x0 ) ∈ R × A : t ∈ Ix0 }.
We define φ : Ω → Rn as the function such that, for x0 ∈ A, φ(·, x0) : Ix0 → Rn is the solution of (1.17), defined on its maximal interval (namely ∂φ/∂t (t, x0) = f(φ(t, x0)) and φ(0, x0) = x0).
φt (x0 ) = φ(t, x0 ).
The function φt, defined for those x0 with t ∈ Ix0, is called the flow of the system x′(t) = f(x(t)), or alternatively, the flow of
the vector field f .
Remark 1.3.2 (The flow of a linear system) In the special case of a linear system

x′(t) = Jx(t),

the flow is given by the matrix exponential of J. We recall that for t ∈ R one can define

e^{Jt} := Σ_{k=0}^{∞} (J^k t^k) / k! .
Computing e^{Jt}, for a given matrix J ∈ Rn×n, is equivalent to solving the system x′(t) = Jx(t). Indeed, it turns out that the unique solution of the initial value problem x′(t) = Jx(t), x(0) = x0, is

x(t) = e^{Jt} x0.
Uniqueness is known. To see that the function above defines a solution of the system, note that
x′(t) = d/dt (e^{Jt}) x0 = J e^{Jt} x0 = J x(t),

and x(0) = e^0 x0 = I x0 = x0.
(Essentially, eJt is the matrix whose columns are, up to multiplicative constants, the n independent
solutions of the system.)
In terms of the general notation of the flow, we would have that φt(x0) = e^{Jt} x0, with φt : Rn → Rn in this case. Hence, φt = e^{Jt}.
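The defining series can be used directly to compute the flow of a small linear system. Below is a sketch (helper names are mine) that truncates the series for a 2×2 matrix; for the harmonic-oscillator matrix J = (0 1; −1 0), e^{Jt} should be a rotation matrix with entries cos t and ± sin t:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_2x2(J, t, terms=30):
    """e^{Jt} via the truncated series sum_k (Jt)^k / k!, 2x2 case."""
    Jt = [[t * a for a in row] for row in J]
    result = [[1.0, 0.0], [0.0, 1.0]]     # k = 0 term: the identity
    power = [[1.0, 0.0], [0.0, 1.0]]      # running power (Jt)^k
    for k in range(1, terms):
        power = mat_mul(power, Jt)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(2)] for i in range(2)]
    return result

# J for the harmonic oscillator x1' = x2, x2' = -x1:
E = expm_2x2([[0.0, 1.0], [-1.0, 0.0]], 1.0)
# e^{Jt} = ( cos t  sin t ; -sin t  cos t ), a rotation matrix
```

Truncating the series works here because the powers of Jt stay bounded; for general matrices, production code would use a dedicated routine such as a Padé-based matrix exponential instead.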
Definition 1.3.3 (Trajectory) The set of points {x(t) ∈ Rn : t ∈ Ix0}, where x satisfies (1.15) with x(0) = x0, is called the trajectory of (1.15) through x0.
Remark 1.3.4 (Trajectories and flow) Let x0 ∈ A be fixed. Then the trajectory of the system through
x0 is the set {φ(t, x0 ) ∈ Rn : t ∈ Ix0 }.
On the other hand, if we let x0 vary in a set K ⊂ A, then the flow φt : K → Rn can be viewed as the
motion of all the points in K.
Definition 1.3.5 (Phase portrait) The phase portrait (or phase diagram) of the autonomous ODE
(1.15) consists of Rn with the trajectories of (1.15) drawn through each point. In the scalar case n = 1
the phase portrait is often called 'phase line', while for n = 2 it is called 'phase plane'.
A phase portrait represents the directional behaviour of a system of ODEs. It is often possible to sketch
the phase portrait for autonomous equations without solving the equations completely, and then deduce
the qualitative nature of the solutions from the portrait.
If xe is an equilibrium point of (1.15), x(t) ≡ xe is a constant solution of (1.15). Hence the trajectory of
(1.15) through xe is equal to {xe }; that is, the trajectory is the single point xe .
Example 1.3.7 Draw the phase diagram of the equation x′(t) = x(t) (corresponding to f(x) = x).
First note that f (x) = 0 if and only if x = 0, hence 0 is the only equilibrium point of the ODE. Now we
study the sign of f . Clearly f (x) > 0 for x > 0 and f (x) < 0 if x < 0.
Then it follows that if x(t) > 0 we have x′(t) = f(x(t)) > 0, and if x(t) < 0 we have x′(t) = f(x(t)) < 0.
The phase diagram consists of the set of possible trajectories in R and is of the form in Figure 1.1. In the
region of f > 0 we draw an arrow in the positive direction, when f < 0 we draw an arrow in the negative
direction.
Figure 1.1: Phase line of x′(t) = x(t).
Note that in this simple case we can write the solution exactly: x(t) = x(0) e^t.
Example 1.3.8 Draw the phase diagram of the equation x′(t) = λx(t), for λ ∈ R (corresponding to f(x) = λx).
Note that 0 is the only equilibrium point of the ODE for λ ≠ 0. Now we study the sign of f.
If λ > 0, the phase diagram is as in the previous exercise, see Figure 1.2. The picture suggests that 0 is
unstable.
Figure 1.2: Phase line of x′(t) = λx(t) for λ > 0.
If λ < 0, the phase diagram will be of the same type as in Figure 1.2, but the direction of the arrows will
be reversed, see Figure 1.3. The picture suggests that 0 is stable.
Figure 1.3: Phase line of x′(t) = λx(t) for λ < 0.
Finally, if λ = 0, then every x0 ∈ R is an equilibrium of the system (meaning that every constant function is a solution of the ODE). The phase diagram then consists of infinitely many point trajectories, one for every point of R. There are no arrows! The diagram is shown in Figure 1.4.
Figure 1.4: Phase line of x′(t) = 0: every point of R is an equilibrium.
First note that f (x) = 0 if and only if x = 0 or x = 10, hence 0 and 10 are the only equilibrium points
of the ODE. Now we study the sign of f . Clearly f (x) > 0 for 0 < x < 10 and f (x) < 0 if x < 0 and if
x > 10.
Then it follows that if x(t) < 0 or x(t) > 10, we have x′(t) = f(x(t)) < 0, and if 0 < x(t) < 10 we have x′(t) = f(x(t)) > 0. The phase diagram consists of the set of possible trajectories in R and is of the form
in Figure 1.5.
Figure 1.5: Phase line of x′(t) = (1/2)(1 − x(t)/10) x(t).
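The sign analysis above is easy to verify mechanically. A small check (assuming, as in the caption of Figure 1.5, that the right-hand side is f(x) = (1/2)(1 − x/10)x):

```python
def f(x):
    # assumed right-hand side from the caption of Figure 1.5
    return 0.5 * (1.0 - x / 10.0) * x

# equilibria: f vanishes exactly at x = 0 and x = 10
assert f(0.0) == 0.0 and f(10.0) == 0.0
# the sign of f gives the arrows on the phase line:
assert f(-1.0) < 0    # x < 0: arrows point left, away from 0
assert f(5.0) > 0     # 0 < x < 10: arrows point right, towards 10
assert f(11.0) < 0    # x > 10: arrows point left, towards 10
```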
First note that f(x) = 0 if and only if x = −1 or x = ±2, hence ±2 and −1 are the only equilibrium points of the ODE. Now we study the sign of f. Clearly f(x) > 0 if x < −2 and if x > 2, while f(x) < 0 if −2 < x < 2 (except at the equilibrium x = −1, where f(−1) = 0).
Then it follows that if x(t) < −2 or x(t) > 2, we have x′(t) = f(x(t)) > 0, and if −2 < x(t) < 2 we have x′(t) = f(x(t)) < 0. The phase diagram consists of the set of possible trajectories in R and is of the form
in Figure 1.6.
Figure 1.6: Phase line of x′(t) = (x²(t) − 4)(x(t) + 1)², with equilibria −2, −1 and 2.
In the two-dimensional case the study of the equilibria and their stability is more complicated. Sometimes
it can be done ‘by hand’, as in the following example.
Example 1.3.11 Find the phase plane of x″(t) = −x(t). This is the equation of the simple harmonic oscillator. We are interested in sketching the phase plane of the associated system.
The second-order equation may equivalently be rewritten as a system of two first-order scalar equations:

x1′(t) = x2(t),
x2′(t) = −x1(t).
The only equilibrium point of the system is (x1 , x2 ) = (0, 0). Now we would like to sketch some trajectories.
Note that, if t 7→ x(t) = (x1 (t), x2 (t)) is a solution of the system, then it satisfies
d/dt ‖x(t)‖² = d/dt (x1²(t) + x2²(t)) = 2 x1(t) x1′(t) + 2 x2(t) x2′(t) = 2 x1(t) x2(t) − 2 x2(t) x1(t) = 0,
hence for a solution we have that ‖x(t)‖² is constant in time: the trajectories are circles centred at the origin.

Figure: circular trajectories in the (x1, x2) phase plane.
The fact that x1′(t) > 0 when x2(t) > 0 gives the direction of the arrows in this picture. Since the
trajectories are closed they correspond to periodic solutions.
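The conservation of ‖x(t)‖² can also be observed numerically. The sketch below (the RK4 integrator is my own, not from the notes) integrates the system over one full period 2π and checks that the squared norm returns to its initial value up to discretisation error:

```python
import math

def step(x1, x2, h):
    """One RK4 step for the system x1' = x2, x2' = -x1."""
    f = lambda a, b: (b, -a)
    k1 = f(x1, x2)
    k2 = f(x1 + 0.5 * h * k1[0], x2 + 0.5 * h * k1[1])
    k3 = f(x1 + 0.5 * h * k2[0], x2 + 0.5 * h * k2[1])
    k4 = f(x1 + h * k3[0], x2 + h * k3[1])
    return (x1 + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            x2 + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

x1, x2 = 1.0, 0.0
h = 0.01
for _ in range(int(2 * math.pi / h)):   # roughly one full period
    x1, x2 = step(x1, x2, h)

# ||x(t)||^2 = x1^2 + x2^2 is conserved along the trajectory
assert abs(x1 * x1 + x2 * x2 - 1.0) < 1e-8
```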
Let us go back to the issue of stability of equilibria. Suppose that the system is slightly displaced from
an equilibrium state xe . That is, suppose that we consider a solution x(t) of (1.15) that passes through
the point x0 at time 0, where the distance kx0 − xe k is small. What happens to the system when we start
at a point x0 different from xe but close to xe ? Will the resulting motion remain close to xe for t ≥ 0?
Will the solution x(t) not only stay close to xe , but tend to it as t → +∞?
An equilibrium point xe of (1.15) is said to be stable if for every ε > 0 there exists δ > 0 (depending on ε) such that if ‖x(0) − xe‖ < δ, then the solution φ(t, x(0)) exists for every t ≥ 0, and ‖φ(t, x(0)) − xe‖ < ε for t ≥ 0.
The equilibrium solution xe is said to be unstable if it is not stable.
Figure: stability — a solution starting within distance δ of xe remains within distance ε of xe for all t ≥ 0.
The equilibrium xe is said to be asymptotically stable if it is stable and if there exists a number δ0 > 0 such that if x(t) is any solution of (1.20) having ‖x(0) − xe‖ < δ0, then lim_{t→+∞} x(t) = xe.
Figure: asymptotic stability — solutions starting within distance δ0 of xe converge to xe as t → +∞.
Note: In words, an equilibrium is stable if, whenever a solution starts close to it, it stays close to it. An equilibrium is asymptotically stable if, whenever a solution starts close enough to it, it also gets closer and closer to it as t → +∞.
In Example 1.3.9, 0 is an unstable equilibrium and 10 is an asymptotically stable equilibrium. In Example 1.3.8, for λ = 0, all x0 ∈ R are stable but not asymptotically stable. Also, the origin is a stable equilibrium in Example 1.3.11, but not asymptotically stable.
In general, however, it is not so easy, in dimension n ≥ 2, to establish the nature of an equilibrium by
looking at the system. If the system is linear there is a criterion for stability.
x′(t) = Jx(t),   (1.21)

where J ∈ Rn×n is a constant (t-independent) matrix. This is the simplest general system for which stability questions are easily and completely decided.
Clearly, xe = 0 is always an equilibrium point of (1.21) and will appear in this phase diagram. We are
going to study the stability of this equilibrium point, in terms of the eigenvalues of J.
As a first step, we recall the expression of a general solution of the linear system (1.21) in the simplest
case.
Lemma 1.1 Let J ∈ Rn×n , and assume that J has n linearly independent eigenvectors v1 , . . . , vn ∈ Cn
(which are then a basis of Cn ) with corresponding eigenvalues λ1 , . . . , λn ∈ C (not necessarily different
from each other). Then the general solution of (1.21) is given by
x(t) = Σ_{k=1}^{n} ck e^{λk t} vk,   ck ∈ C,
namely the functions xk : R → Cn, xk(t) = e^{λk t} vk, form a basis of n linearly independent solutions.
One can always obtain a general real-valued solution, even if some eigenvalues and corresponding eigen-
vectors are complex. Indeed, if λ, λ̄ ∈ C are complex-conjugate eigenvalues with corresponding complex-
conjugate eigenvectors v, v̄ ∈ Cn, then one can replace the functions e^{λt} v and e^{λ̄t} v̄ by the real-valued solutions Re(e^{λt} v) and Im(e^{λt} v).
Remark 1.3.14 (Real and complex eigenvectors) Assume that J has 2m linearly independent com-
plex eigenvectors, say uk ± iwk , with uk , wk ∈ Rn for k = 1, . . . , m, and n − 2m real eigenvectors
u2m+k ∈ Rn, with k = 1, . . . , n − 2m. Then not only do these n vectors form a basis of Cn, but also

spanR{u1, w1, . . . , um, wm, u2m+1, . . . , un} = Rn.
To see the latter statement, let x ∈ Rn. Since x ∈ Cn, there exist αk ∈ C such that

x = Re x = Re( Σ_{k=1}^{m} αk (uk ± i wk) + Σ_{k=2m+1}^{n} αk uk )
  = Σ_{k=1}^{m} ( Re(αk) uk ∓ Im(αk) wk ) + Σ_{k=2m+1}^{n} Re(αk) uk ∈ spanR{u1, w1, . . . , um, wm, u2m+1, . . . , un}.
Unfortunately not every matrix has n linearly independent eigenvectors, and in that case one has to
resort to generalised eigenvectors in order to write the general solution of the system. We briefly recall
the procedure in the two-dimensional case n = 2.
Example 1.3.15 continued For the pair λ = 1, v2 = (0, 1)ᵀ, Lemma 1.2 provides the solution x2(t) = e^t ((0, 1)ᵀ + t (1, 0)ᵀ). The general solution of the system is then

x(t) = c1 x1(t) + c2 x2(t) = c1 e^t (1, 0)ᵀ + c2 e^t ((0, 1)ᵀ + t (1, 0)ᵀ).
We can now go back to studying the stability of the origin for the system x′(t) = Jx(t), in terms of the
eigenvalues of J. We have the following.
Theorem 1.4 If all of the eigenvalues of J have nonpositive (namely ≤ 0) real parts, and all the eigen-
values with zero real parts are simple (namely with algebraic multiplicity a = 1), then the solution x(t) ≡ 0
of
x′(t) = Jx(t),   (1.22)
is stable. If (and only if ) all eigenvalues of J have negative (namely < 0) real parts, then the solution
x(t) ≡ 0 of (1.22) is asymptotically stable.
If one or more eigenvalues of J have a positive real part, the zero solution of (1.22) is unstable.
Remark 1.3.16 The case with some eigenvalues with zero real part not simple, assuming the remaining
eigenvalues have negative real parts, requires some special investigation.
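For 2×2 matrices the criterion of Theorem 1.4 is easy to implement from the trace and determinant. The sketch below is mine (function names are not from the notes), and its zero-real-part branch assumes the eigenvalues are simple, as the theorem requires:

```python
import cmath

def eigenvalues_2x2(J):
    """Eigenvalues of a 2x2 matrix via its trace and determinant."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def classify_origin(J):
    """Stability of 0 for x' = Jx; eigenvalues with zero real part
    are assumed simple, as in Theorem 1.4."""
    l1, l2 = eigenvalues_2x2(J)
    if l1.real < 0 and l2.real < 0:
        return "asymptotically stable"
    if l1.real > 0 or l2.real > 0:
        return "unstable"
    return "stable (not asymptotically)"

print(classify_origin([[0, 1], [1, 0]]))     # unstable (eigenvalues +1, -1)
print(classify_origin([[0, 1], [-1, -4]]))   # asymptotically stable
print(classify_origin([[0, 1], [-1, 0]]))    # stable (not asymptotically)
```

The three calls correspond to Example 1.3.17, Example 1.3.18 with k = 2, and the harmonic oscillator of Example 1.3.11.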
Example 1.3.17 Study the stability of equilibria of the equation x″(t) − x(t) = 0.
This equation can be written as a system x′(t) = Jx(t), with

J = ( 0  1
      1  0 ).

The eigenvalues of J are λ1 = 1 and λ2 = −1. Since one eigenvalue has positive real part, we conclude that the zero solution of the system (and hence of the scalar equation) is unstable.
Example 1.3.18 Study the stability of equilibria of the equation x″(t) + 2k x′(t) + x(t) = 0, with k > 0.
This equation can be written as a system x′(t) = Jx(t), with

J = (  0    1
      −1  −2k ).

The eigenvalues of J are λ = −k ± √(k² − 1). Now, if k ≥ 1, then both eigenvalues are real and (strictly)
negative, and hence the origin is an asymptotically stable equilibrium. If, instead, 0 < k < 1, then the
eigenvalues are complex and conjugate, but they both have negative real parts, so also in this case the
origin is an asymptotically stable equilibrium.
Here we consider the general case of a two-dimensional linear system with constant coefficients.
For simplicity, we only consider the case det J 6= 0. This means that the origin is the only equilibrium
and that zero is not an eigenvalue of J.
We now discuss the different possibilities that occur for the eigenvalues λ1 , λ2 of a real matrix J: each
case generates a different type of phase plane, and we consider them in turn.
Case 1: Real and distinct eigenvalues, λ1, λ2 ∈ R, λ1 ≠ λ2. Since the eigenvalues are distinct, there
are two linearly independent eigenvectors, v1 eigenvector to λ1 and v2 eigenvector to λ2 . The general
solution is
x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2,   c1, c2 ∈ R.   (1.23)
Note that the lines through the origin and parallel to v1 and v2 are trajectories in the phase plane: they
are 4 trajectories (4 rays), corresponding to c2 = 0, with c1 > 0 or c1 < 0, and to c1 = 0, with c2 > 0 or
c2 < 0 respectively. These 4 rays exclude the origin, since it cannot be reached in finite time. Moreover, the origin itself is a trajectory! So what looks like 2 intersecting trajectories (which would violate Picard's Theorem) is in fact 5 non-intersecting trajectories. The direction on the 4 rays will depend on the sign of the eigenvalues.
To be more precise, we consider 3 sub-cases.
Case 1 (a) λ1 < λ2 < 0 (eigenvalues negative and distinct). In this case the directions on the 4 rays
given by the eigenvectors are all towards the equilibrium, and the trajectories of the other solutions of
(1.23) are ‘parabola-like’ curves following the 4 rays. For large and positive t (and hence close to the
origin), since λ1 < λ2 < 0, we can neglect the term with e^{λ1 t} in (1.23), and approximate

x(t) ≈ c2 e^{λ2 t} v2.
Namely, close to the origin the solution x(t) follows the ray v2 .
Similarly, for large and negative t (and hence far from the origin), since λ1 < λ2 < 0, we can neglect the term with e^{λ2 t} in (1.23), and approximate

x(t) ≈ c1 e^{λ1 t} v1.

Namely, far from the origin the solution follows the ray v1. The equilibrium point 0 is asymptotically stable and is called a node or a nodal sink.

Figure 1.8: Phase portrait of a nodal sink (λ1 < λ2 < 0), with eigendirections v1 and v2.
Case 1 (b) λ1 > λ2 > 0 (eigenvalues positive and distinct). In this case the directions on the 4 rays given
by the eigenvectors are all away from the equilibrium. The trajectories of the other solutions of (1.23)
are ‘parabola-like’ curves following the 4 rays, exactly as in the previous case. The only difference is the
orientation, which in this case is opposite, away from the equilibrium (see Figure 1.9). The equilibrium
point 0 is unstable and is called a node or a nodal source.
Case 1 (c) λ1 > 0 > λ2 (eigenvalues of opposite sign). In this case the directions on the 4 rays given by
the eigenvectors are slightly more complicated.
Along v1 , which is the eigenvector corresponding to the positive eigenvalue, the directions will be away
from the equilibrium, since eλ1 t → +∞ as t → +∞. Along v2 , which is the eigenvector corresponding to
the negative eigenvalue, the directions will be towards the equilibrium, since eλ2 t → 0 as t → +∞.
The trajectories of the other solutions of (1.23) will be ‘hyperbola-like’ curves having the 4 rays as
asymptotes (see Figure 1.10). The equilibrium point 0 is unstable and is called a saddle point.
Figure 1.9: Phase portrait of a nodal source (λ1 > λ2 > 0), with eigendirections v1 and v2.

Figure 1.10: Phase portrait of a saddle (λ1 > 0 > λ2), with eigendirections v1 and v2.
Case 2: Purely imaginary eigenvalues, λ1 = iβ, λ2 = −iβ, β > 0. The corresponding eigenvectors are u ± iw with u, w ∈ R2, and two real solutions are cos(βt) u − sin(βt) w and sin(βt) u + cos(βt) w (where the exponential is missing since the real part of the eigenvalues is zero). So the general solution is of the form
x(t) = cos(βt)(c1 u + c2 w) + sin(βt)(−c1 w + c2 u), c1 , c2 ∈ R.
Since every solution is periodic, with period 2π/β, the moving point representing it in the phase plane
retraces its path at intervals of 2π/β. The trajectories therefore are closed curves; ellipses, in fact (see
Figure 1.11). Sketching the ellipse is a little troublesome. For this course, it will be enough to determine
whether the motion is clockwise or counterclockwise. This can be done by using the system x′(t) = Jx(t)
to calculate a single velocity vector; from this the sense of motion can be determined by inspection.
The equilibrium point 0 is stable but not asymptotically stable and is called a center.
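Computing that single velocity vector takes one line. For instance, using the harmonic-oscillator matrix from Example 1.3.11 as a stand-in (my own example, not one worked in the notes): at the point (1, 0) on the positive x1-axis, the velocity Jx points straight down, so the motion is clockwise:

```python
def velocity(J, p):
    # the vector field of x' = Jx evaluated at the point p
    return [J[0][0] * p[0] + J[0][1] * p[1],
            J[1][0] * p[0] + J[1][1] * p[1]]

# J from Example 1.3.11: x1' = x2, x2' = -x1
v = velocity([[0, 1], [-1, 0]], [1, 0])
# v = [0, -1]: at (1, 0) the motion is downward, hence clockwise
```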
Case 3: Complex eigenvalues with non-zero real part, λ1 = α + iβ, λ2 = λ̄1 = α − iβ, α ∈ R, α ≠ 0, β > 0. The corresponding eigenvectors are u ± iw with u, w ∈ R2, and two real solutions are

e^{αt} (cos(βt) u − sin(βt) w)  and  e^{αt} (sin(βt) u + cos(βt) w).

The solutions are like in the previous case, except for the factor e^{αt}, which decreases to zero if α < 0
or increases to +∞ if α > 0. So the trajectories are similar to ellipses, but the distance from the origin
keeps steadily shrinking or expanding. The result is a trajectory which spirals to the origin (if α < 0) or
spirals away from the origin (if α > 0).
So the equilibrium point 0, called a spiral point, is asymptotically stable if α < 0, and it is unstable if
α > 0.
Case 4: Repeated (real) eigenvalues λ1 = λ2 ≠ 0. Here there are two sub-cases, depending
on whether the repeated eigenvalue admits two different eigenvectors, or instead we need generalised
eigenvectors to construct a basis.
Case 4 (a) (linearly independent eigenvectors). In this case the general solution is

x(t) = e^{λ1 t} (c1 v1 + c2 v2),   c1, c2 ∈ R,

so every nonzero trajectory lies on a ray through the origin.
An equilibrium point xe of (1.27) is called hyperbolic if f(xe) = 0 and none of the eigenvalues of the (constant) matrix J = Df(xe) have zero real part. The linear system y′(t) = Jy(t), with J = Df(xe), is called the linearisation of (1.27) at xe.
Remark 1.3.20 Heuristically, if we set x := xe + y with kyk small, namely we perturb the equilibrium
a bit, we have that, by Taylor-expanding f around xe ,
f(x) = f(xe + y) = Df(xe) y + (1/2) D²f(xe)(y, y) + . . .
So, since the linear part Df(xe) y is a good approximation of f close to xe, it is reasonable to expect that the system (1.27) close to xe is well approximated by its linearisation. More precisely, since

y′(t) = x′(t) = f(x(t)) = f(xe + y(t)) ≈ Df(xe) y(t),

we expect that y behaves, close to xe, like a solution of the linear system y′(t) = Df(xe) y(t), and hence that to analyse the nature of the equilibrium xe of the initial system one can instead study the nature of the origin for the linearised equation.
We now classify equilibria for (1.27) according to the sign of the real parts of the eigenvalues of the matrix
Df (xe ).
An equilibrium point xe of (1.27) is called a sink if all of the eigenvalues of the matrix Df(xe) have negative real part; it is called a source
if all of the eigenvalues of Df (xe ) have positive real part; and it is called a saddle if it is a hyperbolic
equilibrium point and Df (xe ) has at least one eigenvalue with a positive real part and at least one with a
negative real part.
The linearisation gives a very good description of the behaviour of the nonlinear system, at least in the
neighbourhood of the equilibrium, provided no eigenvalue has real part equal to zero. In particular, the
linearisation correctly characterises the stability of the equilibrium in this case.
We will see that, if xe is a hyperbolic equilibrium point of the nonlinear system x′(t) = f(x(t)), then the local behaviour of the nonlinear system is topologically equivalent to the local behaviour of x′(t) = Df(xe) x(t). This means that we will be able to map trajectories of the nonlinear system, close to xe, into trajectories of the linear system close to the origin, and we will in addition be preserving the direction of the flow along the trajectories.
If instead an eigenvalue has zero real part, then the linearisation is uninformative regarding stability: in
such cases, nonlinear terms determine stability.
The stability for nonlinear systems is summarised in the following theorem.
Remark 1.3.22 In the scalar case, we have seen in Examples 1.3.7-1.3.10 that one can study the phase line just by looking at the zeros and the sign of the right-hand side f. That is a special feature of dimension one which does not extend to higher dimensions. Moreover, it can sometimes be tricky to determine the sign of f 'globally' in its domain.
Alternatively, one can always apply the criterion illustrated in Theorem 1.5. For n = 1 it simplifies as follows: if f′(xe) < 0, then xe is asymptotically stable; if instead f′(xe) > 0, then xe is unstable.
Looking back at Example 1.3.8 we see that f′(0) = λ, hence the equilibrium x = 0 is unstable if λ > 0 and asymptotically stable if λ < 0. In Example 1.3.10 we have that f(x) = (x² − 4)(x + 1)², so f′(x) = 4x³ + 6x² − 6x − 8. Hence, f′(−1) = 0, so we cannot draw conclusions for the stability of x = −1 with this criterion (while the sign analysis gave us the full picture). Instead, f′(2) = 36 > 0, hence x = 2 is unstable, while f′(−2) = −4 < 0, hence the equilibrium x = −2 is asymptotically stable.
If f′(xe) = 0, then the stability properties of xe depend on higher-order terms. For example, consider the equilibrium point 0 for x′(t) = a x²(t) + b x³(t), with a, b ∈ R. If a ≠ 0 then 0 is unstable. If a = 0 then the stability depends on the sign of b.
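The scalar criterion can be checked without differentiating by hand. A sketch for Example 1.3.10 (the central-difference helper and the tolerance are mine):

```python
def f(x):
    # right-hand side from Example 1.3.10
    return (x * x - 4.0) * (x + 1.0) ** 2

def fprime(x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f'(-2) = -4 < 0: asymptotically stable; f'(2) = 36 > 0: unstable;
# f'(-1) = 0: the criterion is inconclusive
for xe, expected in [(-2.0, -4.0), (-1.0, 0.0), (2.0, 36.0)]:
    assert abs(fprime(xe) - expected) < 1e-3
```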
1.4 Exercises
1–1. Find the equilibrium points and draw the phase portrait for the ODE
Sketch the phase portrait in the three cases: β < 0, 0 < β < 1, β > 1.
1–4. Consider the differential equation x′(t) = f(x(t)) where f is a C¹(R) function with exactly two (distinct) equilibrium points x1, x2 ∈ R, say x1 < x2.
• If both equilibria are hyperbolic, what can you deduce about the stability of the two equilibrium
points?
• Give explicit examples to show how the situation changes if either one or two of the equilibria
are non-hyperbolic.
Show that for the ODE x′(t) = f(x(t)) the point xe = 0 is a non-hyperbolic equilibrium point.
Moreover show that in any neighbourhood of the non-hyperbolic equilibrium point xe = 0 there are
infinitely many equilibria. What are the stability properties of the equilibria?
Find the two equilibrium points of the first-order system. Determine for each equilibrium point whether there is a value of µ at which it is not hyperbolic.
Compute the equilibrium points, study their stability, and sketch the phase diagram.