
SMSTC (2024/25)
Dynamical Systems and Conservation Laws
Chapter 1: Autonomous Systems of Ordinary Differential
Equations (ODEs)
Lucia Scardia
Original notes: Jack Carr HWU

www.smstc.ac.uk

Contents
1.1 What is a scalar autonomous ODE?
1.2 Existence, Uniqueness and Maximal intervals
1.3 Autonomous Systems, Phase Portraits and Stability
    1.3.1 Stability of linear systems with constant coefficients
    1.3.2 Stability of nonlinear systems
1.4 Exercises

1.1 What is a scalar autonomous ODE?


In this chapter we will consider systems of first-order ODEs of a special type.
Generally speaking, a first-order ODE is an equation that relates a function with its first-order derivative.
To fix the notation, let n ∈ N; let I ⊂ R be an open interval and à ⊂ Rn+1 an open set. Typically, we
will consider systems of ODEs of explicit form,
 
dx/dt (t) = x'(t) = f(t, x(t)),    (1.1)

where the (highest) derivative is expressed in terms of everything else, with x : I → Rn , and f : Ã → Rn .
In components, we can write (1.1) as
\[
\begin{cases}
x_1'(t) = f_1(t, x_1(t), x_2(t), \dots, x_n(t)),\\
x_2'(t) = f_2(t, x_1(t), x_2(t), \dots, x_n(t)),\\
\quad\vdots\\
x_n'(t) = f_n(t, x_1(t), x_2(t), \dots, x_n(t)),
\end{cases}
\tag{1.2}
\]
where
\[
x(t) = \begin{pmatrix} x_1(t)\\ \vdots\\ x_n(t) \end{pmatrix},
\qquad
f(t, x) = \begin{pmatrix} f_1(t, x_1, \dots, x_n)\\ \vdots\\ f_n(t, x_1, \dots, x_n) \end{pmatrix}.
\tag{1.3}
\]

We say that x : I → Rn is a solution of (1.1) if x is differentiable in I, (t, x(t)) ∈ Ã for every t ∈ I, and x'(t) = f(t, x(t)) for every t ∈ I.
A system like (1.1) is called autonomous if it is of the form

x'(t) = f(x(t))    (1.4)

i.e., if there is no explicit t dependence in the right-hand side, with f : A → Rn , and A ⊂ Rn an open
set.


Remark 1.1.1 (Why only first order?) While the general form (1.1) apparently involves only a first
derivative, it can include differential equations of higher order as a special case.
To see this, consider an nth-order differential (scalar) equation of the form

\[
\frac{d^n x}{dt^n} = f\!\left(t, x, \frac{dx}{dt}, \dots, \frac{d^{n-1}x}{dt^{n-1}}\right).
\tag{1.5}
\]

This can be written as a first-order equation for the vector-valued function x in (1.3) as
   
\[
\frac{d}{dt}\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}
= \begin{pmatrix} x_2\\ x_3\\ \vdots\\ f(t, x_1, \dots, x_n) \end{pmatrix}.
\tag{1.6}
\]

We recover the differential equation (1.5) by identifying x = x1 . Then the first n − 1 components of (1.6)
tell us that
\[
x_2 = \frac{dx}{dt}, \qquad x_3 = \frac{dx_2}{dt} = \frac{d^2 x}{dt^2}, \qquad \dots, \qquad x_n = \frac{d^{n-1} x}{dt^{n-1}},
\]
so that the last component is precisely equation (1.5). In conclusion, higher-order equations can be
transformed into a system of first-order equations, and this is why first-order systems are all we will focus
on.
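
As an illustration of this reduction (a sketch not taken from the notes; it assumes numpy and scipy are available, and the choice of the equation x'' = -x is purely illustrative), the following code rewrites a second-order equation as a first-order system and integrates it numerically:

```python
# Sketch (not from the notes): rewriting the 2nd-order equation x'' = -x as the
# first-order system (x1, x2)' = (x2, -x1) and integrating it numerically.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # x[0] plays the role of x, x[1] the role of dx/dt
    return [x[1], -x[0]]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)

t = np.linspace(0.0, 10.0, 5)
print(sol.sol(t)[0])   # numerically close to cos(t), the exact solution for x(0)=1, x'(0)=0
print(np.cos(t))
```

The same pattern extends to any nth-order equation: the state vector simply collects x and its first n − 1 derivatives.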

1.2 Existence, Uniqueness and Maximal intervals


As a first step, we would like to know whether a solution exists and whether it is unique. More precisely,
we ask whether the following initial-value problem (IVP) has a unique solution:
\[
\begin{cases}
x'(t) = f(t, x(t)),\\
x(t_0) = x_0.
\end{cases}
\tag{1.7}
\]

In general, it is not necessarily true that these two basic properties - existence and uniqueness - are
satisfied, as shown in the following examples. For the examples we focus on the scalar case n = 1.

Example 1.2.1 (Non-uniqueness: Classical example) Solve the IVP


\[
\begin{cases}
x'(t) = |x(t)|^{1/2},\\
x(0) = 0.
\end{cases}
\tag{1.8}
\]

This is an autonomous ODE.


One solution of the IVP is clearly the constant function x ≡ 0, namely the constant function

x : R → R, t 7→ x(t) = 0 for every t ∈ R.

That is however not the only solution of the IVP (1.8). In fact for any c > 0, the following is a solution:

\[
x_c(t) = \begin{cases} 0 & t \le c,\\ \tfrac{1}{4}(t - c)^2 & t > c. \end{cases}
\]

Hence there are infinitely many solutions, even uncountably many!


To see that xc is a solution of the IVP for every c > 0, we start by observing that xc (0) = 0, namely xc
satisfies the initial condition at t = 0 (note that this is the reason why we require c > 0!).
Now we check that xc is differentiable for every t ∈ R. Clearly this is true for t ≠ c. For t = c we show that the right derivative of xc coincides with the left derivative of xc. Since the left derivative of xc is zero (xc is constant for t ≤ c), we only need to show that the right derivative is zero as well. In fact
\[
\lim_{t \to c^+} \frac{x_c(t) - x_c(c)}{t - c} = \lim_{t \to c^+} \frac{\tfrac{1}{4}(t - c)^2 - 0}{t - c} = \lim_{t \to c^+} \frac{1}{4}(t - c) = 0,
\]

proving the differentiability of xc at t = c and hence for every t ∈ R.


Finally, we show that xc solves the ODE, namely that xc'(t) = |xc(t)|^{1/2}. For the left-hand side of the ODE we have that
\[
x_c'(t) = \begin{cases} 0 & t \le c,\\ \tfrac{1}{2}(t - c) & t > c. \end{cases}
\]
For the right-hand side, again for t > c, we have that
\[
\sqrt{|x_c(t)|} = \sqrt{\tfrac{1}{4}(t - c)^2} = \tfrac{1}{2}|t - c| = \tfrac{1}{2}(t - c),
\]
where we removed the absolute value since t > c. In conclusion, we have that

\[
\sqrt{|x_c(t)|} = \begin{cases} 0 & t \le c,\\ \tfrac{1}{2}(t - c) & t > c, \end{cases}
\tag{1.9}
\]
and hence xc is a solution of the IVP (1.8) in R.
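
The computation above can also be checked symbolically. The following sketch (assuming sympy is available; not part of the original notes) verifies the ODE for xc on the region t > c by writing t = c + s with s > 0:

```python
# Sketch: a symbolic check that x_c(t) = (t - c)^2 / 4 satisfies x' = |x|^(1/2) for t > c.
# Work on the region t > c by writing t = c + s with s > 0 (sympy assumed available).
import sympy as sp

c, s = sp.symbols('c s', positive=True)
x_c = s**2 / 4                      # x_c(c + s) = ((c + s) - c)^2 / 4
dx_c = sp.diff(x_c, s)              # equals d/dt x_c(t) at t = c + s, since dt = ds
rhs = sp.sqrt(sp.Abs(x_c))          # right-hand side |x_c|^(1/2)
print(sp.simplify(dx_c - rhs))      # prints 0, so the ODE holds on t > c
```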

Example 1.2.2 (Existence for only a finite time) Solve the IVP
\[
\begin{cases}
x'(t) = x^2(t),\\
x(0) = 1.
\end{cases}
\]
This is again an autonomous ODE. It is easy to show that the function x : (−∞, 1) → R defined as
\[
x(t) = \frac{1}{1 - t}
\]
is a solution of the IVP (check!). However, the function is not defined for every t ∈ R, nor is it a solution for every t ∈ R (even though f is defined everywhere!). In fact, the function x blows up at time t = 1, namely x(t) → ∞ as t → 1 from the left.
Note: As a function, x(t) = (1 − t)−1 is defined for all t 6= 1, so it is defined in a domain much bigger
than the maximal interval where it is a solution. We will recall the definition of maximal intervals
below.
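
A numerical experiment illustrates the blow-up. The sketch below (assuming scipy is available; the stopping threshold 10^6 is an arbitrary choice) integrates the IVP and stops once the solution becomes very large; the integration terminates just before t = 1:

```python
# Sketch: integrating x' = x^2, x(0) = 1 forwards in time; a terminal event stops the
# solver once x becomes very large, illustrating existence only up to t = 1.
from scipy.integrate import solve_ivp

def rhs(t, x):
    return x**2

def blow_up(t, x):                  # event: triggers when x reaches the threshold 1e6
    return x[0] - 1e6
blow_up.terminal = True

sol = solve_ivp(rhs, (0.0, 2.0), [1.0], events=blow_up, rtol=1e-10)
print(sol.t[-1])                    # close to 1, where 1/(1 - t) blows up
```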

Example 1.2.3 (Non-existence) If f is not continuous then (1.7) may not have a solution. For example, consider the IVP
\[
\begin{cases}
x'(t) = f(t, x(t)),\\
x(0) = 0,
\end{cases}
\tag{1.10}
\]
with
\[
f(t, x) = \begin{cases} 1 & \text{if } t \ge 0,\\ 0 & \text{if } t < 0. \end{cases}
\]
If x is a solution of (1.10), then it must be x(t) = t for t ≥ 0 and x(t) = 0 for t < 0. But this function is
not differentiable at t = 0, and hence it cannot be a solution (since the interval of existence must contain
the initial time t0 = 0).

An important theorem by C.E. Picard (1856-1941) says that, under fairly mild assumptions on f, IVPs
(initial value problems) have unique solutions (at least ‘locally’, close to the initial time t0 ).

Theorem 1.1 (Picard’s Theorem) Let n ∈ N, let Ã ⊂ R × Rn be open, and let f : Ã → Rn be a continuous function. Assume, in addition, that the matrix-valued function ∂f/∂x : Ã → Rn×n is continuous, where we denote
\[
\frac{\partial f}{\partial x} =
\begin{pmatrix}
\frac{\partial f_1}{\partial x_1} & \dots & \frac{\partial f_1}{\partial x_n}\\
\vdots & & \vdots\\
\frac{\partial f_n}{\partial x_1} & \dots & \frac{\partial f_n}{\partial x_n}
\end{pmatrix}.
\tag{1.11}
\]
Then, for every (t0 , x0 ) ∈ Ã there exists δ > 0 such that the solution to the initial value problem
\[
\begin{cases}
x'(t) = f(t, x(t)),\\
x(t_0) = x_0,
\end{cases}
\tag{1.12}
\]
exists and is unique in the interval (t0 − δ, t0 + δ).

The assumptions of Theorem 1.1 can be weakened, although, as observed above, continuity has to be
required, otherwise even existence could be lost.
Both local existence and uniqueness theorems for the IVP (1.12) can be obtained, provided that the
function f satisfies a certain property, called Lipschitz continuity. This property is roughly between
continuity and differentiability, i.e. it is stronger than continuity, but weaker than differentiability. More
precisely, we have the following.

Definition 1.2.4 Let à ⊂ Rn+1 be open. A function f : à → Rn is said to be locally Lipschitz-


continuous with respect to its second variable at a point (t, z) ∈ Ã if there exists a neighbourhood
U(t,z) ⊂ Ã of (t, z) and a constant L(t,z) > 0 such that

‖f(s, x) − f(s, y)‖ ≤ L(t,z) ‖x − y‖,    for every (s, x), (s, y) ∈ U(t,z).

The function f is said to be locally Lipschitz-continuous with respect to its second variable in
à if it is locally Lipschitz-continuous with respect to its second variable at all points (t, x) ∈ Ã. Finally,
f is said to be globally Lipschitz-continuous with respect to its second variable, if there is a
constant L > 0 such that

‖f(s, x) − f(s, y)‖ ≤ L ‖x − y‖,    for every (s, x), (s, y) ∈ Ã.
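
As a rough numerical illustration of the definition (a sketch assuming numpy; the sample points are arbitrary), the quotient ‖f(x) − f(y)‖/‖x − y‖ stays bounded near 0 for f(x) = x² but blows up for f(x) = |x|^{1/2}, which is why Example 1.2.1 escapes the uniqueness theorems below:

```python
# Sketch: the quotient |f(x) - f(0)| / |x - 0| near 0 for f(x) = |x|^(1/2), which is not
# locally Lipschitz at 0, versus f(x) = x^2, which is.
import numpy as np

x = np.logspace(-1, -8, 8)                     # points approaching 0
q_sqrt = np.sqrt(np.abs(x)) / np.abs(x)        # = 1 / sqrt(x): unbounded as x -> 0
q_square = np.abs(x**2) / np.abs(x)            # = x: stays bounded (tends to 0)
print(q_sqrt)
print(q_square)
```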

The following is known as the Picard-Lindelöf Theorem; it is a version of Theorem 1.1 under weaker assumptions.

Theorem 1.2 (Existence and uniqueness (Picard-Lindelöf )) Let à ⊂ R × Rn be open, and let
f : Ã → Rn be a continuous function. Assume, in addition, that f is locally Lipschitz-continuous with
respect to its second variable x in Ã. Then, for every (t0 , x0 ) ∈ Ã there exists δ > 0 such that the solution
to the initial value problem
\[
\begin{cases}
x'(t) = f(t, x(t)),\\
x(t_0) = x_0,
\end{cases}
\]
exists and is unique in the interval I := (t0 − δ, t0 + δ).

Can we weaken the assumptions of Picard-Lindelöf’s Theorem further? Well, Example 1.2.1 tells us that
we can lose uniqueness if f is not locally Lipschitz continuous in the variable x.
This suggests that, if we want uniqueness, we cannot weaken the assumptions of the Picard-Lindelöf Theorem, Theorem 1.2. On the other hand, in Example 1.2.1 we still have existence, even if the assumptions of Theorem 1.2 are not satisfied. However, we know from Example 1.2.3 that continuity cannot be dropped if we want existence in general. It turns out that continuity is in fact sufficient to guarantee local existence of solutions of the IVP, although uniqueness is not guaranteed.

Theorem 1.3 (Peano Existence Theorem) Let à ⊂ R × Rn be open, and let f : à → Rn be a continuous
function. Then, for every (t0 , x0 ) ∈ Ã there exists δ > 0 such that the initial value problem
\[
\begin{cases}
x'(t) = f(t, x(t)),\\
x(t_0) = x_0,
\end{cases}
\]

admits a solution in the interval I := (t0 − δ, t0 + δ).

Finally, we discuss maximal intervals. Consider the IVP

x'(t) = f(t, x(t)),    x(t0) = x0,    (1.13)

and assume that the assumptions of Picard’s (or Picard-Lindelöf) Theorem are satisfied. Then we know
that there exists δ > 0 such that we have existence and uniqueness for a solution x of the IVP inside the
small interval (t0 − δ, t0 + δ). However, the function x may be a solution of the IVP in a larger interval,
even in R. How do we find the largest interval (or maximal interval) where x is a solution of the IVP?
It turns out that, for every initial value problem satisfying the assumptions of Picard-Lindelöf Theorem,
there exists a maximal interval of existence I = (t− , t+ ). Essentially, the solution exists until (t, x(t))
reaches the “boundary” of the domain à of f .

Definition 1.2.5 (Continuation and maximal solution) Let x : I ⊂ R → Rn be a solution of the


system
x'(t) = f(t, x(t)).    (1.14)
We say that x̂ : Î → Rn is a continuation of x (or extension of x) if x̂ is itself a solution of the system (1.14), I ⊂ Î, and x̂(t) = x(t) for every t ∈ I. A solution is called maximal if no continuation exists, i.e., if I is the maximal interval on which the solution exists.

More precisely, we have the following proposition.

Proposition 1.1 Let à ⊂ R × Rn be open, and let f : à → Rn be a continuous function. Assume, in


addition, that f is locally Lipschitz-continuous with respect to x in Ã. For (t0, x0) ∈ Ã consider the initial value problem
\[
\begin{cases}
x'(t) = f(t, x(t)),\\
x(t_0) = x_0,
\end{cases}
\]
and let x : Ix0 → Rn be the maximal solution, where Ix0 = (t− , t+ ) ⊂ R is the maximal existence interval,
and t0 ∈ Ix0 . Then, one and only one of the following three possibilities occurs for t+ (and similarly for
t− ):

(i) t+ = +∞;

(ii) t+ < +∞, and lim_{t→t+} ‖x(t)‖ = +∞;

(iii) t+ < +∞, and lim_{t→t+} x(t) = ξ exists, with (t+, ξ) ∈ ∂Ã.

We now give examples to illustrate the three different possibilities for t+ and t− .

Example 1.2.6 Let us give some examples in which the three different possibilities for t− or t+ occur,
in the scalar case.

(i) Consider the IVP
\[
\begin{cases}
x'(t) = 1 - (x(t))^2,\\
x(0) = 0.
\end{cases}
\]
The solution x and the maximal interval of existence I are given by

x(t) = tanh t, I = R.

In this case the solution is defined globally, so t− = −∞ and t+ = +∞.

(iia) The solution to the IVP
\[
\begin{cases}
x'(t) = (x(t))^2 + 1,\\
x(0) = 0,
\end{cases}
\]
is given by
\[
x(t) = \tan t, \qquad I = \left(-\frac{\pi}{2}, \frac{\pi}{2}\right).
\]
In this case, t− = −π/2 and t+ = π/2. Moreover,

\[
\lim_{t \to (t_-)^+} x(t) = \lim_{t \to (-\pi/2)^+} \tan t = -\infty,
\]
and
\[
\lim_{t \to (t_+)^-} x(t) = \lim_{t \to (\pi/2)^-} \tan t = +\infty.
\]

Therefore, the solution tends to ±∞ at the boundary of I.



(iib) Consider the IVP (see Example 1.2.2)
\[
\begin{cases}
x'(t) = (x(t))^2,\\
x(0) = 1.
\end{cases}
\]
The solution x and the maximal interval of existence are given by
\[
x(t) = \frac{1}{1 - t}, \qquad I = (-\infty, 1).
\]
In this case, t− = −∞ and t+ = 1. Moreover,
\[
\lim_{t \to (t_+)^-} x(t) = \lim_{t \to 1^-} \frac{1}{1 - t} = +\infty.
\]

(iii) Consider the IVP
\[
\begin{cases}
x'(t) = \dfrac{1}{x(t)},\\
x(0) = 1.
\end{cases}
\]

In this case, Ã = {(t, x) ∈ R2 : x ≠ 0}. The solution x and the maximal interval of existence are given by
\[
x(t) = \sqrt{2t + 1}, \qquad I = \left(-\tfrac{1}{2}, +\infty\right).
\]
In this case, t− = −1/2 and t+ = +∞. Moreover,
 
\[
\lim_{t \to (t_-)^+} (t, x(t)) = \lim_{t \to (-1/2)^+} (t, x(t)) = \left(-\tfrac{1}{2}, 0\right) \in \partial \tilde{A}.
\]

Thus, when t → (t− )+ the graph of the solution approaches the boundary of à .
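
Case (iii) can also be observed numerically. The sketch below (assuming scipy; the threshold 10^-6 is an arbitrary choice) integrates x' = 1/x, x(0) = 1 backwards in time and stops just before the solution reaches the boundary x = 0:

```python
# Sketch: integrating x' = 1/x, x(0) = 1 backwards in time; a terminal event stops the
# solver just before x reaches 0, i.e. before (t, x(t)) hits the boundary of the domain.
from scipy.integrate import solve_ivp

def rhs(t, x):
    return 1.0 / x

def near_boundary(t, x):            # triggers when x drops to the (arbitrary) level 1e-6
    return x[0] - 1e-6
near_boundary.terminal = True

sol = solve_ivp(rhs, (0.0, -1.0), [1.0], events=near_boundary, rtol=1e-10)
print(sol.t[-1], sol.y[0, -1])      # approximately (-0.5, 0), matching t_- = -1/2
```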

1.3 Autonomous Systems, Phase Portraits and Stability


In this section we use graphical methods to try and understand the behaviour of autonomous systems

x'(t) = f(x(t)).    (1.15)

If we assume that f is sufficiently smooth (e.g. continuous in an open set A ⊂ Rn with continuous matrix-valued derivative ∂f/∂x : A → Rn×n), then Picard’s theorem tells us that the solution to the initial-value
problem
x'(t) = f(x(t)),    x(t0) = x0    (1.16)
for any t0 ∈ R and for x0 ∈ A is unique.
Note: In the autonomous case, in (1.16) there is no loss of generality in assuming that t0 = 0. To see this, let x be a solution of (1.16) and set y(t) := x(t + t0). Then

y'(t) = x'(t + t0) = f(x(t + t0)) = f(y(t)),    and    y(0) = x(t0) = x0.

Hence we consider only IVPs of the type

x'(t) = f(x(t)),    x(0) = x0    (1.17)

for x0 ∈ A. We recall that Ix0 denotes the maximal interval where the solution exists and satisfies the
IVP (1.17).

Definition 1.3.1 (Flow) Let Ω ⊂ R × A be the set

Ω = {(t, x0 ) ∈ R × A : t ∈ Ix0 }.

We define φ : Ω → Rn as the function such that, for x0 ∈ A, φ(·, x0) : Ix0 → Rn is the solution of (1.17) defined on its maximal interval (namely, ∂φ/∂t (t, x0) = f(φ(t, x0)) and φ(0, x0) = x0).

We will also use the alternative notation φt to denote

φt (x0 ) = φ(t, x0 ).

The function φt, for t ∈ Ix0, is called the flow of the system x'(t) = f(x(t)), or alternatively, the flow of the vector field f.

Remark 1.3.2 (The flow of a linear system) In the special case of a linear system

x'(t) = Jx(t),    J ∈ Rn×n,

the flow is given by the exponential matrix of J. We recall that for t ∈ R one can define

\[
e^{Jt} := \sum_{k=0}^{\infty} \frac{J^k t^k}{k!}.
\]

Computing eJt, for a given matrix J ∈ Rn×n, is equivalent to solving the system x'(t) = Jx(t). Indeed,
it turns out that the unique solution of the initial value problem

x'(t) = Jx(t),    x(0) = x0,    x0 ∈ Rn

can be written in terms of the exponential function as

x(t) = eJt x0 .

Uniqueness is known. To see that the function above defines a solution of the system, note that
\[
x'(t) = \frac{d}{dt}\bigl(e^{Jt}\bigr)x_0 = J e^{Jt} x_0 = J x(t),
\]
and x(0) = e^0 x0 = Ix0 = x0.
(Essentially, eJt is the matrix whose columns are, up to multiplicative constants, the n independent
solutions of the system.)
In terms of the general notation of the flow, we would have that φt (x0 ) = eJt x0 , with φt : Rn → Rn in
this case. Hence, φt = eJt .
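
For concreteness, the following sketch (assuming scipy is available; the matrix J chosen here is the harmonic-oscillator matrix that reappears in Example 1.3.11) compares the flow computed via the matrix exponential with a direct numerical integration:

```python
# Sketch: the flow of x' = Jx is phi_t(x0) = expm(J t) x0; here the matrix exponential
# is compared with a direct numerical integration (J is an illustrative choice).
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])
t = 2.0

x_expm = expm(J * t) @ x0
sol = solve_ivp(lambda s, x: J @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)
print(x_expm)
print(sol.y[:, -1])                 # agrees with x_expm to high accuracy
```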

Definition 1.3.3 (Trajectory) The set of points {x(t) ∈ Rn : t ∈ Ix0 }, where x satisfies

x'(t) = f(x(t)),    x(0) = x0    (1.18)

for some x0 ∈ A, is called the trajectory of (1.15) passing through x0 .


Note: Sometimes trajectories are also called orbits, although the latter is more commonly used to denote
special trajectories, namely periodic ones.

Remark 1.3.4 (Trajectories and flow) Let x0 ∈ A be fixed. Then the trajectory of the system through
x0 is the set {φ(t, x0 ) ∈ Rn : t ∈ Ix0 }.
On the other hand, if we let x0 vary in a set K ⊂ A, then the flow φt : K → Rn can be viewed as the
motion of all the points in K.

Definition 1.3.5 (Phase portrait) The phase portrait (or phase diagram) of the autonomous ODE
(1.15) consists of Rn with the trajectories of (1.15) drawn through each point. In the scalar case n = 1
the phase portrait is often called ‘phase line’, while for n = 2 it is called ‘phase plane’.

A phase portrait represents the directional behaviour of a system of ODEs. It is often possible to sketch
the phase portrait for autonomous equations without solving the equations completely, and then deduce
the qualitative nature of the solutions from the portrait.

Definition 1.3.6 (Equilibrium point) A point xe ∈ A is called an equilibrium point of (1.15) if


f (xe ) = 0.

If xe is an equilibrium point of (1.15), x(t) ≡ xe is a constant solution of (1.15). Hence the trajectory of
(1.15) through xe is equal to {xe }; that is, the trajectory is the single point xe .

Sketching the phase diagram: First steps.


The first step in finding the phase diagram is to find all the equilibrium points, which are the simplest
solutions to (1.15). Note that, since we are considering systems, solving f (xe ) = 0 means solving a system
of n equations that need to be satisfied simultaneously.
Next, you can try to sketch some other (non-constant) solutions/trajectories. Note that if f has continuous derivative then trajectories cannot intersect. This is a consequence of the existence and uniqueness theorem for systems of the form (1.15), which implies that in a region where the (partial) derivatives of f are continuous, through any point there passes one and only one trajectory.
Another important step is to study the stability of equilibria. Namely, we ask the question: Will a
solution starting at a point close to xe stay close to xe ? In the one-dimensional case the answer to this
question can be given by studying the sign of f .

Example 1.3.7 Draw the phase diagram of the equation x'(t) = x(t) (corresponding to f(x) = x).
First note that f (x) = 0 if and only if x = 0, hence 0 is the only equilibrium point of the ODE. Now we
study the sign of f . Clearly f (x) > 0 for x > 0 and f (x) < 0 if x < 0.
Then it follows that if x(t) > 0 we have x'(t) = f(x(t)) > 0, and if x(t) < 0 we have x'(t) = f(x(t)) < 0. The phase diagram consists of the set of possible trajectories in R and is of the form in Figure 1.1. In the region where f > 0 we draw an arrow in the positive direction; where f < 0 we draw an arrow in the negative direction.

Figure 1.1: Phase line of x'(t) = x(t).

The picture suggests that 0 is unstable.

Note that in this simple case we can write the solution exactly: x(t) = x(0)et .

Example 1.3.8 Draw the phase diagram of the equation x'(t) = λx(t), for λ ∈ R (corresponding to f(x) = λx).
Note that 0 is the only equilibrium point of the ODE for λ ≠ 0. Now we study the sign of f.
If λ > 0, the phase diagram is as in the previous exercise, see Figure 1.2. The picture suggests that 0 is
unstable.
Figure 1.2: Phase line of x'(t) = λx(t), with λ > 0.

If λ < 0, the phase diagram will be of the same type as in Figure 1.2, but the direction of the arrows will
be reversed, see Figure 1.3. The picture suggests that 0 is stable.

Figure 1.3: Phase line of x'(t) = λx(t), with λ < 0.

Finally, if λ = 0, then every x0 ∈ R is an equilibrium of the system (meaning that every constant function is a solution of the ODE). The phase diagram consists of infinitely many equilibrium points, namely all the points of R. There are no arrows! The diagram is shown in Figure 1.4.

Figure 1.4: Phase line of x'(t) = λx(t), with λ = 0.

Example 1.3.9 Draw the phase diagram of the equation


 
\[
x'(t) = \frac{1}{2}\left(1 - \frac{x(t)}{10}\right)x(t).
\]

First note that f (x) = 0 if and only if x = 0 or x = 10, hence 0 and 10 are the only equilibrium points
of the ODE. Now we study the sign of f . Clearly f (x) > 0 for 0 < x < 10 and f (x) < 0 if x < 0 and if
x > 10.
Then it follows that if x(t) < 0 or x(t) > 10, we have x'(t) = f(x(t)) < 0, and if 0 < x(t) < 10 we have x'(t) = f(x(t)) > 0. The phase diagram consists of the set of possible trajectories in R and is of the form
in Figure 1.5.

Figure 1.5: Phase line of x'(t) = (1/2)(1 − x(t)/10) x(t).

The picture suggests that 0 is unstable and 10 is stable.

Example 1.3.10 Draw the phase diagram of the equation

x'(t) = (x²(t) − 4)(x(t) + 1)².

First note that f(x) = 0 if and only if x = −1 or x = ±2, hence ±2 and −1 are the only equilibrium points of the ODE. Now we study the sign of f. Clearly f(x) > 0 if x < −2 and if x > 2, while f(x) < 0
if −2 < x < 2.
Then it follows that if x(t) < −2 or x(t) > 2, we have x'(t) = f(x(t)) > 0, and if −2 < x(t) < 2 we have x'(t) = f(x(t)) < 0. The phase diagram consists of the set of possible trajectories in R and is of the form
in Figure 1.6.

Figure 1.6: Phase line of x'(t) = (x²(t) − 4)(x(t) + 1)².

The picture suggests that −2 is stable, 2 is unstable and −1 is something in between.
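
The equilibria and the sign analysis in these scalar examples can be automated. The sketch below (assuming sympy; the sample points, one per interval between equilibria, are an arbitrary choice) reproduces the analysis of Example 1.3.10:

```python
# Sketch: reproducing the equilibria and the sign analysis of this example with sympy;
# the sample points (one per interval between equilibria) are an arbitrary choice.
import sympy as sp

x = sp.symbols('x', real=True)
f = (x**2 - 4) * (x + 1)**2

print(sorted(sp.solve(sp.Eq(f, 0), x)))        # equilibria: [-2, -1, 2]
for sample in [-3, -1.5, 0, 3]:
    print(sample, sp.sign(f.subs(x, sample)))  # sign of f: +, -, -, +
```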

In the two-dimensional case the study of the equilibria and their stability is more complicated. Sometimes
it can be done ‘by hand’, as in the following example.

Example 1.3.11 Find the phase plane of x''(t) = −x(t). This is the equation of the simple harmonic oscillator. We are interested in sketching the phase plane of the associated system.
The second-order equation may equivalently be rewritten as a system of two first-order scalar equations:
\[
\begin{cases}
x_1'(t) = x_2(t),\\
x_2'(t) = -x_1(t).
\end{cases}
\]

The only equilibrium point of the system is (x1 , x2 ) = (0, 0). Now we would like to sketch some trajectories.

Note that, if t ↦ x(t) = (x1(t), x2(t)) is a solution of the system, then it satisfies
\[
\frac{d}{dt}\|x(t)\|^2 = \frac{d}{dt}\bigl(x_1^2(t) + x_2^2(t)\bigr) = 2x_1(t)x_1'(t) + 2x_2(t)x_2'(t) = 2x_1(t)x_2(t) - 2x_2(t)x_1(t) = 0,
\]
hence for a solution we have that

\[
\|x(t)\|^2 = \text{constant} \quad \Longleftrightarrow \quad x_1^2(t) + x_2^2(t) = \text{constant},
\]

namely trajectories are circles centred in (0, 0) (see Figure 1.7).

Figure 1.7: Phase plane of x''(t) = −x(t).

The fact that x1'(t) > 0 when x2(t) > 0 gives the direction of the arrows in this picture. Since the trajectories are closed they correspond to periodic solutions.
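
The conservation of ‖x(t)‖² can also be confirmed numerically; the sketch below (assuming scipy) integrates the system and checks that x1² + x2² stays essentially constant along the computed trajectory:

```python
# Sketch: checking numerically that x1^2 + x2^2 is (essentially) constant along a
# solution of x1' = x2, x2' = -x1, so the trajectory lies on a circle.
from scipy.integrate import solve_ivp

def rhs(t, x):
    return [x[1], -x[0]]

sol = solve_ivp(rhs, (0.0, 20.0), [2.0, 0.0], rtol=1e-10, atol=1e-12)
radius_sq = sol.y[0]**2 + sol.y[1]**2
print(radius_sq.min(), radius_sq.max())        # both very close to 4
```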

Let us go back to the issue of stability of equilibria. Suppose that the system is slightly displaced from
an equilibrium state xe . That is, suppose that we consider a solution x(t) of (1.15) that passes through
the point x0 at time 0, where the distance ‖x0 − xe‖ is small. What happens to the system when we start
at a point x0 different from xe but close to xe ? Will the resulting motion remain close to xe for t ≥ 0?
Will the solution x(t) not only stay close to xe , but tend to it as t → +∞?

Definition 1.3.12 (Stable equilibrium) An equilibrium solution xe of

x'(t) = f(x(t))    (1.19)

is said to be stable if for every ε > 0 there exists δ > 0 (depending on ε) such that if ‖x(0) − xe‖ < δ, then the solution φ(t, x(0)) exists for every t ≥ 0, and ‖φ(t, x(0)) − xe‖ < ε for every t ≥ 0.
The equilibrium solution xe is said to be unstable if it is not stable.


Definition 1.3.13 (Asymptotically stable equilibrium) An equilibrium solution xe of

x'(t) = f(x(t))    (1.20)

is said to be asymptotically stable if it is stable and if there exists a number δ0 > 0 such that if x(t) is
any solution of (1.20) having ‖x(0) − xe‖ < δ0, then lim_{t→+∞} x(t) = xe.

Note: In words, an equilibrium is stable if, whenever a solution starts close to it, it stays close to it. An
equilibrium is asymptotically stable if, whenever a solution starts close enough to it, then it gets closer
and closer as t → +∞.
In Example 1.3.9, 0 is an unstable equilibrium and 10 is an asymptotically stable equilibrium. In Example 1.3.8, for λ = 0, all x0 ∈ R are stable but not asymptotically stable. Also, the origin is a stable equilibrium in Example 1.3.11, but not asymptotically stable.
In general, however, it is not so easy, in dimension n ≥ 2, to establish the nature of an equilibrium by
looking at the system. If the system is linear there is a criterion for stability.

1.3.1 Stability of linear systems with constant coefficients


Consider the case of linear systems with constant coefficients:

x'(t) = Jx(t),    (1.21)

where J ∈ Rn×n is a constant (t-independent) matrix. This is the simplest general system for which
stability questions are easily and completely decided.
Clearly, xe = 0 is always an equilibrium point of (1.21) and will appear in its phase diagram. We are
going to study the stability of this equilibrium point, in terms of the eigenvalues of J.
As a first step, we recall the expression of a general solution of the linear system (1.21) in the simplest
case.

Lemma 1.1 Let J ∈ Rn×n , and assume that J has n linearly independent eigenvectors v1 , . . . , vn ∈ Cn
(which are then a basis of Cn ) with corresponding eigenvalues λ1 , . . . , λn ∈ C (not necessarily different
from each other). Then the general solution of (1.21) is given by
\[
x(t) = \sum_{k=1}^{n} c_k e^{\lambda_k t} v_k, \qquad c_k \in \mathbb{C},
\]

namely the functions xk : R → Cn , xk (t) = eλk t vk , form a basis of n linearly independent solutions.
One can always obtain a general real-valued solution, even if some eigenvalues and corresponding eigen-
vectors are complex. Indeed, if λ, λ̄ ∈ C are complex-conjugate eigenvalues with corresponding complex-
conjugate eigenvectors v, v̄ ∈ Cn , then one can replace the functions

\[
e^{\lambda t} v, \qquad e^{\bar\lambda t} \bar v
\]
of the basis of solutions with the real-valued functions
\[
\Re\bigl(e^{\lambda t} v\bigr), \qquad \Im\bigl(e^{\lambda t} v\bigr).
\]

Remark 1.3.14 (Real and complex eigenvectors) Assume that J has 2m linearly independent com-
plex eigenvectors, say uk ± iwk , with uk , wk ∈ Rn for k = 1, . . . , m, and n − 2m real eigenvectors
u2m+k ∈ Rn , with k = 1, . . . , n − 2m. Then not only

spanC {u1 ± iw1 , . . . , um ± iwm , u2m+1 , . . . , un } = Cn ,

but also
spanR {u1 , w1 , . . . , um , wm , u2m+1 , . . . , un } = Rn .

To see the latter statement, let x ∈ Rn . Since x ∈ Cn , there exist αk ∈ C such that
\[
x = \Re x = \Re\Bigl(\sum_{k=1}^{m} \alpha_k (u_k \pm i w_k) + \sum_{k=2m+1}^{n} \alpha_k u_k\Bigr)
= \Re\Bigl(\sum_{k=1}^{m} \alpha_k (u_k \pm i w_k)\Bigr) + \sum_{k=2m+1}^{n} \Re(\alpha_k) u_k
\]
\[
= \sum_{k=1}^{m} \bigl(\Re(\alpha_k) u_k \mp \Im(\alpha_k) w_k\bigr) + \sum_{k=2m+1}^{n} \Re(\alpha_k) u_k \in \operatorname{span}_{\mathbb{R}}\{u_1, w_1, \dots, u_m, w_m, u_{2m+1}, \dots, u_n\}.
\]

The other inclusion is obvious.

Unfortunately not every matrix has n linearly independent eigenvectors, and in that case one has to
resort to generalised eigenvectors in order to write the general solution of the system. We briefly recall
the procedure in the two-dimensional case n = 2.

Example 1.3.15 Consider the linear system


 
\[
x'(t) = Jx(t), \qquad J = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}.
\]

We want to write the general solution of the system.


As a first step, we compute the eigenvalues of J, by solving det(J − µI) = (µ − 1)2 = 0. It turns out that
λ = 1 is the only eigenvalue of J, and it has (algebraic) multiplicity a = 2.
The corresponding eigenvector is any vector v ∈ R2, v ≠ 0, satisfying (J − I)v = 0, namely
\[
(J - I)v = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} v = 0.
\]
   
Thus, any vector v = (α, 0)ᵀ with α ≠ 0 is an eigenvector, e.g. v1 = (1, 0)ᵀ.
To compute generalised eigenvectors, since the (algebraic) multiplicity of λ = 1 is a = 2, it is sufficient to
solve (J − I)2 v = 0 for v. Clearly v1 above satisfies (J − I)2 v1 = 0, so we will focus on finding a solution
linearly independent from v1 . We have
 
\[
(J - I)^2 v = \begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix} v = 0,
\]

which is satisfied by any vector in R2. In other words, any vector v = (α, β)ᵀ is a generalised eigenvector (of degree 2). In particular, a generalised eigenvector with β ≠ 0 is linearly independent from the only eigenvector v1, and we can e.g. choose v2 = (0, 1)ᵀ.
It remains to construct two linearly independent solutions to the system.
   
For the pair λ = 1, v1 = (1, 0)ᵀ we proceed as in Lemma 1.1, and obtain the solution x1(t) = e^t (1, 0)ᵀ.
For the pair λ = 1, v2 = (0, 1)ᵀ we need to use the following Lemma.

Lemma 1.2 Let v ∈ C2 be a generalised eigenvector of J ∈ R2×2 of degree 2 corresponding to λ ∈ C,


namely such that (J − λI)2 v = 0. Then we take as corresponding element of the basis of solutions of
x'(t) = Jx(t) the function defined as
\[
e^{\lambda t}\bigl[v + t(J - \lambda I)v\bigr].
\]
Note that, if v is an eigenvector, then by definition (J − λI)v = 0, and so the formula above reduces to
the well-known eλt v.

 
Example 1.3.15 continued For the pair λ = 1, v2 = (0, 1)ᵀ, Lemma 1.2 provides the solution x2(t) = e^t [(0, 1)ᵀ + t(1, 0)ᵀ]. The general solution of the system is then
\[
x(t) = c_1 x_1(t) + c_2 x_2(t) = c_1 e^{t} \begin{pmatrix} 1\\ 0 \end{pmatrix} + c_2 e^{t}\left[\begin{pmatrix} 0\\ 1 \end{pmatrix} + t\begin{pmatrix} 1\\ 0 \end{pmatrix}\right].
\]
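
As a quick sanity check (a sketch assuming scipy; the initial condition is arbitrary), the general solution above can be written as x(t) = e^t (c1 + c2 t, c2)ᵀ with (c1, c2) = x(0), and compared with e^{Jt} x0 computed via the matrix exponential of Remark 1.3.2:

```python
# Sketch: the general solution above gives x(t) = e^t (c1 + c2 t, c2) with (c1, c2) = x(0);
# this is compared with expm(J t) x0 for an arbitrary initial condition.
import numpy as np
from scipy.linalg import expm

J = np.array([[1.0, 1.0], [0.0, 1.0]])
x0 = np.array([2.0, -3.0])          # so c1 = 2, c2 = -3
t = 1.5

by_formula = np.exp(t) * np.array([x0[0] + x0[1] * t, x0[1]])
by_expm = expm(J * t) @ x0
print(by_formula)
print(by_expm)                      # the two coincide
```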

We can now go back to studying the stability of the origin for the system x'(t) = Jx(t), in terms of the eigenvalues of J. We have the following.

Theorem 1.4 If all of the eigenvalues of J have nonpositive (namely ≤ 0) real parts, and all the eigen-
values with zero real parts are simple (namely with algebraic multiplicity a = 1), then the solution x(t) ≡ 0
of
x'(t) = Jx(t),    (1.22)
is stable. If (and only if ) all eigenvalues of J have negative (namely < 0) real parts, then the solution
x(t) ≡ 0 of (1.22) is asymptotically stable.
If one or more eigenvalues of J have a positive real part, the zero solution of (1.22) is unstable.

Remark 1.3.16 The case in which some eigenvalues with zero real part are not simple (while the remaining eigenvalues have negative real parts) requires special investigation.
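
In practice the criterion of Theorem 1.4 amounts to inspecting the real parts of the eigenvalues. A minimal sketch (assuming numpy; the tolerance and the test matrices are arbitrary choices, and the simple-eigenvalue condition for zero real parts is not checked):

```python
# Sketch: deciding the stability of x = 0 for x' = Jx from the eigenvalues of J, as in
# the theorem above. The simple-eigenvalue caveat for zero real parts is not checked,
# so borderline cases are reported as needing further inspection; the tolerance is arbitrary.
import numpy as np

def classify(J, tol=1e-12):
    re = np.linalg.eigvals(J).real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    return "check multiplicities of eigenvalues with zero real part"

print(classify(np.array([[0.0, 1.0], [-1.0, -0.5]])))   # asymptotically stable
print(classify(np.array([[0.0, 1.0], [1.0, 0.0]])))     # unstable (eigenvalues 1 and -1)
```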

Example 1.3.17 Study the stability of equilibria of the equation x''(t) − x(t) = 0.
This equation can be written as a system x'(t) = Jx(t), with
\[
J = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}.
\]

The eigenvalues of J are λ1 = 1 and λ2 = −1. Since one eigenvalue has positive real part, we conclude that the zero solution of the system (and hence of the scalar equation) is unstable.

Example 1.3.18 Study the stability of equilibria of the equation x''(t) + 2k x'(t) + x(t) = 0, with k > 0.
This equation can be written as a system x'(t) = Jx(t), with
\[
J = \begin{pmatrix} 0 & 1\\ -1 & -2k \end{pmatrix}.
\]
The eigenvalues of J are λ = −k ± √(k² − 1). Now, if k ≥ 1, then both eigenvalues are real and (strictly)
negative, and hence the origin is an asymptotically stable equilibrium. If, instead, 0 < k < 1, then the
eigenvalues are complex and conjugate, but they both have negative real parts, so also in this case the
origin is an asymptotically stable equilibrium.

Phase plane for linear systems with constant coefficients

Here we consider the general case of a two-dimensional linear system with constant coefficients.
For simplicity, we only consider the case det J 6= 0. This means that the origin is the only equilibrium
and that zero is not an eigenvalue of J.
We now discuss the different possibilities that occur for the eigenvalues λ1 , λ2 of a real matrix J: each
case generates a different type of phase plane, and we consider them in turn.
Case 1: Real and distinct eigenvalues, λ1, λ2 ∈ R, λ1 ≠ λ2. Since the eigenvalues are distinct, there
are two linearly independent eigenvectors, v1 eigenvector to λ1 and v2 eigenvector to λ2 . The general
solution is
x(t) = c1 eλ1 t v1 + c2 eλ2 t v2 , c1 , c2 ∈ R. (1.23)
Note that the lines through the origin and parallel to v1 and v2 are trajectories in the phase plane: they
are 4 trajectories (4 rays), corresponding to c2 = 0, with c1 > 0 or c1 < 0, and to c1 = 0, with c2 > 0 or

c2 < 0 respectively. These 4 rays exclude the origin, since it cannot be reached in finite time. Moreover,
the origin itself is a trajectory! So, what looks like 2 intersecting trajectories (which would violate Picard’s Theorem) is in fact 5 non-intersecting trajectories. The direction on the 4 rays will depend on
the sign of the eigenvalues.
To be more precise, we consider 3 sub-cases.

Case 1 (a) λ1 < λ2 < 0 (eigenvalues negative and distinct). In this case the directions on the 4 rays
given by the eigenvectors are all towards the equilibrium, and the trajectories of the other solutions of
(1.23) are ‘parabola-like’ curves following the 4 rays. For large and positive t (and hence close to the
origin), since λ1 < λ2 < 0, we can neglect the term with eλ1 t in (1.23), and approximate

x(t) ∼ c2 eλ2 t v2    for t large and positive.

Namely, close to the origin the solution x(t) follows the ray v2 .
Similarly, for large and negative t (and hence far from the origin), since λ1 < λ2 < 0, we can neglect the
term with eλ2 t in (1.23), and approximate

x(t) ∼ c1 eλ1 t v1    for t large and negative.

Namely, far from the origin x(t) follows the ray v1 .


The phase diagram is in Figure 1.8. The equilibrium point 0 is asymptotically stable and is called a node or a nodal sink.

Figure 1.8: Phase plane for Case 1 (a): nodal sink.

Case 1 (b) λ1 > λ2 > 0 (eigenvalues positive and distinct). In this case the directions on the 4 rays given
by the eigenvectors are all away from the equilibrium. The trajectories of the other solutions of (1.23)
are ‘parabola-like’ curves following the 4 rays, exactly as in the previous case. The only difference is the
orientation, which in this case is opposite, away from the equilibrium (see Figure 1.9). The equilibrium
point 0 is unstable and is called a node or a nodal source.

Case 1 (c) λ1 > 0 > λ2 (eigenvalues of opposite sign). In this case the directions on the 4 rays given by
the eigenvectors are slightly more complicated.
Along v1 , which is the eigenvector corresponding to the positive eigenvalue, the directions will be away
from the equilibrium, since eλ1 t → +∞ as t → +∞. Along v2 , which is the eigenvector corresponding to
the negative eigenvalue, the directions will be towards the equilibrium, since eλ2 t → 0 as t → +∞.
The trajectories of the other solutions of (1.23) will be ‘hyperbola-like’ curves having the 4 rays as
asymptotes (see Figure 1.10). The equilibrium point 0 is unstable and is called a saddle point.

Case 2: Purely imaginary eigenvalues, λ1 = iβ, λ2 = λ̄1 = −iβ, β > 0. The corresponding


eigenvectors are u ± iw with u, w ∈ R2 , and two real solutions are

x1 (t) = (cos(βt)u − sin(βt)w), x2 (t) = (sin(βt)u + cos(βt)w), (1.24)



Figure 1.9: Phase plane for Case 1 (b): nodal source.

Figure 1.10: Phase plane for Case 1 (c): saddle point.

(where the exponential is missing since the real part of the eigenvalues is zero). So the general solution
is of the form
x(t) = cos(βt)(c1 u + c2 w) + sin(βt)(−c1 w + c2 u), c1 , c2 ∈ R.
Since every solution is periodic, with period 2π/β, the moving point representing it in the phase plane
retraces its path at intervals of 2π/β. The trajectories therefore are closed curves; ellipses, in fact (see
Figure 1.11). Sketching the ellipse is a little troublesome. For this course, it will be enough to determine
whether the motion is clockwise or counterclockwise. This can be done by using the system x0 (t) = Jx(t)
to calculate a single velocity vector; from this the sense of motion can be determined by inspection.
The equilibrium point 0 is stable but not asymptotically stable and is called a center.
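
As an illustration of the ‘single velocity vector’ trick (a sketch assuming numpy; the matrix chosen is the harmonic-oscillator matrix of Example 1.3.11), evaluating Jx at one point of an ellipse reveals the sense of rotation:

```python
# Sketch: finding the sense of rotation by computing one velocity vector J x, here for
# the harmonic-oscillator matrix of Example 1.3.11.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(J @ np.array([1.0, 0.0]))     # (0, -1): at (1, 0) the motion points downwards,
                                    # so these ellipses (circles) are traversed clockwise
```
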
Case 3: Complex eigenvalues with non-zero real part, λ1 = α + iβ, λ2 = λ̄1 = α − iβ, α ∈ R,
β > 0. The corresponding eigenvectors are u ± iw with u, w ∈ R2 , and two real solutions are

x1 (t) = eαt (cos(βt)u − sin(βt)w), x2 (t) = eαt (sin(βt)u + cos(βt)w), (1.25)

so the general solution is of the form

x(t) = eαt cos(βt)(c1 u + c2 w) + eαt sin(βt)(−c1 w + c2 u), c1 , c2 ∈ R.

The solutions are like in the previous case, except for the factor eαt , which decreases to zero if α < 0
or increases to +∞ if α > 0. So the trajectories are similar to ellipses, but the distance from the origin
keeps steadily shrinking or expanding. The result is a trajectory which spirals to the origin (if α < 0) or
spirals away from the origin (if α > 0).
So the equilibrium point 0, called a spiral point, is asymptotically stable if α < 0, and it is unstable if
α > 0.

Figure 1.11: Phase plane for Case 2: centre.

Case 4: Repeated (real) eigenvalues λ1 = λ2 ≠ 0. Here there are two sub-cases, depending
on whether the repeated eigenvalue admits two different eigenvectors, or instead we need generalised
eigenvectors to construct a basis.
Case 4 (a) (linearly independent eigenvectors). In this case the general solution is

x(t) = eλt (c1 v1 + c2 v2 ), c1 , c2 ∈ R,


so the trajectories are half-lines from the origin, directed towards the origin if λ < 0, and outwards if
λ > 0. The equilibrium point 0 is called a proper node, or a star node, and is asymptotically stable for
λ < 0 and unstable if λ > 0.
Case 4 (b) (one eigenvector and one generalised eigenvector). In this case the general solution is

x(t) = eλt (c1 v1 + c2 (v2 + t(J − λI)v2)),    c1, c2 ∈ R,

where v1 is the eigenvector and v2 is the generalised eigenvector. Note that (J − λI)v2 = v1 (up to a multiplicative constant), so the solution simplifies to
x(t) = eλt (c1 v1 + c2 (v2 + t v1)),    c1, c2 ∈ R.
Note that if c2 = 0, then we have the family of solutions c1 eλt v1 , which correspond to the two trajectories
given by two rays from the origin with direction v1 or −v1 (and oriented towards or away from zero
depending on the sign of λ).
For the other trajectories, note that, due to the factor t, the term c2 teλt v1 is dominant both for t → +∞
and for t → −∞. The resulting trajectories are ‘S’-shaped (see Figures 1.12 and 1.13 illustrating the case
of λ < 0). The equilibrium point 0 is called an improper node and is asymptotically stable for λ < 0 and
unstable if λ > 0.

1.3.2 Stability of nonlinear systems


For nonlinear systems we show that the ‘local’ behaviour of the nonlinear system
x'(t) = f(x(t))    (1.26)
near a ‘good’ equilibrium point xe is qualitatively determined by the behaviour of the linear system
y'(t) = Jy(t),
with the matrix
\[
J = Df(x_e) = \frac{\partial f}{\partial x}(x_e) =
\begin{pmatrix}
\frac{\partial f_1}{\partial x_1}(x_e) & \dots & \frac{\partial f_1}{\partial x_n}(x_e)\\
\vdots & & \vdots\\
\frac{\partial f_n}{\partial x_1}(x_e) & \dots & \frac{\partial f_n}{\partial x_n}(x_e)
\end{pmatrix},
\]
near the origin.
The linear function Jy = Df (xe )y is called the linear part of f at xe .

Figure 1.12: Phase plane for Case 4 (b): improper node (λ < 0).

Figure 1.13: Phase plane for Case 4 (b): improper node (λ < 0).

Definition 1.3.19 A point xe ∈ A is called a hyperbolic equilibrium point of the system

x'(t) = f(x(t))    (1.27)

if f(xe) = 0 and none of the eigenvalues of the (constant) matrix J = Df(xe) have zero real part. The linear system y'(t) = Jy(t), with J = Df(xe), is called the linearisation of (1.27) at xe.

Remark 1.3.20 Heuristically, if we set x := xe + y with ‖y‖ small, namely we perturb the equilibrium
a bit, we have that, by Taylor-expanding f around xe ,
\[
f(x) = f(x_e + y) = Df(x_e)y + \tfrac{1}{2}D^2 f(x_e)(y, y) + \dots
\]
So since the linear part Df (xe )y is a good approximation of f close to xe , it is reasonable to expect that
the system (1.27) close to xe is well approximated by its linearisation at zero. More precisely, since

x0 (t) = (xe + y)0 (t) = y 0 (t),

we expect that
x0 (t) = f (x(t)) ≈ y 0 (t) = Df (xe )y(t)
close to xe , and hence that to analyse the nature of the equilibrium xe of the initial system one can instead
study the nature of the origin for the linearised equation.

We now classify equilibria for (1.27) according to the sign of the real parts of the eigenvalues of the matrix
Df (xe ).

Definition 1.3.21 An equilibrium point xe of

x'(t) = f(x(t))    (1.28)



is called a sink if all of the eigenvalues of the matrix Df (xe ) have negative real part; it is called a source
if all of the eigenvalues of Df (xe ) have positive real part; and it is called a saddle if it is a hyperbolic
equilibrium point and Df (xe ) has at least one eigenvalue with a positive real part and at least one with a
negative real part.

The linearisation gives a very good description of the behaviour of the nonlinear system, at least in the
neighbourhood of the equilibrium, provided no eigenvalue has real part equal to zero. In particular, the
linearisation correctly characterises the stability of the equilibrium in this case.
We will see that, if xe is a hyperbolic equilibrium point of the nonlinear system x'(t) = f(x(t)), then the local behaviour of the nonlinear system is topologically equivalent to the local behaviour of x'(t) = Df(xe)x(t). This means that we will be able to map trajectories of the nonlinear system, close
to xe , into trajectories of the linear system close to the origin, and we will in addition be preserving the
direction of the flow along the trajectories.
If instead an eigenvalue has zero real part, then the linearisation is uninformative regarding stability: in
such cases, nonlinear terms determine stability.
The stability for nonlinear systems is summarised in the following theorem.

Theorem 1.5 Let f : A ⊂ Rn → Rn be a differentiable function with continuous (matrix-valued) deriva-


tive, with A open subset of Rn . Suppose that xe ∈ A is a hyperbolic equilibrium point of the autonomous
system x'(t) = f(x(t)).
If all the eigenvalues of the matrix Df (xe ) have negative real part (sink), then xe is asymptotically stable.
If instead, at least one eigenvalue of Df (xe ) has positive real part, then xe is unstable.
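
To apply Theorem 1.5 in practice one computes Df(xe) and its eigenvalues. The sketch below (assuming sympy; the planar system is an illustrative choice, not taken from these notes) linearises a nonlinear system at two of its equilibria: the eigenvalues at (0, 0) have negative real part (a sink, hence asymptotically stable), while at (π, 0) one eigenvalue is positive and one negative (a saddle, hence unstable):

```python
# Sketch: linearising a planar system at its equilibria with sympy and reading off the
# eigenvalues of Df(x_e). The system (a damped pendulum-type example) is illustrative
# and not taken from the notes.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Matrix([x2, -sp.sin(x1) - x2 / 2])      # right-hand side of x' = f(x)
Df = f.jacobian([x1, x2])                      # matrix of partial derivatives

for xe in [(0, 0), (sp.pi, 0)]:                # two equilibria of this system
    evs = Df.subs({x1: xe[0], x2: xe[1]}).eigenvals()
    print(xe, list(evs))                       # (0,0): negative real parts (sink);
                                               # (pi,0): one positive, one negative (saddle)
```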

Remark 1.3.22 In the scalar case, we have seen in Examples 1.3.7-1.3.10 that one can study the phase line just by looking at the zeros and the sign of the right-hand side f. That is a special case which does not extend to higher dimensions. Moreover, it can sometimes be tricky to determine the sign of f ‘globally’ in its domain.
Hence one can always apply the criterion illustrated in Theorem 1.5. For n = 1 it simplifies as follows: if f'(xe) < 0, then xe is asymptotically stable. If, instead, f'(xe) > 0, then xe is unstable.
Looking back at Example 1.3.8 we see that f'(0) = λ, hence the equilibrium x = 0 is unstable if λ > 0 and asymptotically stable if λ < 0. In Example 1.3.10 we have that f(x) = (x² − 4)(x + 1)², so f'(x) = 4x³ + 6x² − 6x − 8. Hence f'(−1) = 0, so we cannot draw conclusions on the stability of x = −1 with this criterion (while the sign analysis gave us the full picture). Instead, f'(2) = 36 > 0, hence x = 2 is unstable, while f'(−2) = −4 < 0, hence the equilibrium x = −2 is asymptotically stable.
If f'(xe) = 0, then the stability properties of xe depend on higher-order terms. For example, consider the equilibrium point 0 for x'(t) = a x²(t) + b x³(t), for a, b ∈ R. If a ≠ 0 then 0 is unstable. If a = 0 then the stability depends on the sign of b.
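
The derivative values quoted above for Example 1.3.10 can be reproduced with a short symbolic computation (a sketch assuming sympy):

```python
# Sketch: reproducing the values f'(-2) = -4, f'(-1) = 0 and f'(2) = 36 quoted above
# for Example 1.3.10 (sympy assumed available).
import sympy as sp

x = sp.symbols('x', real=True)
f = (x**2 - 4) * (x + 1)**2
df = sp.diff(f, x)

for xe in [-2, -1, 2]:
    print(xe, df.subs(x, xe))       # -4, 0, 36
```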

1.4 Exercises
1–1. Find the equilibrium points and draw the phase portrait for the ODE

x'(t) = (x(t) + 1)(x(t) − 2)(x(t) + 2).

1–2. Find the equilibrium points for the ODE

x'(t) = x(t)(β − e^{x(t)}).

Sketch the phase portrait in the three cases: β < 0, 0 < β < 1, β > 1.

1–3. Find the equilibrium points for

x'(t) = x(t)(µ − x²(t))(x(t) − µ + 1),

and sketch the phase portrait in the two cases: µ = −1 and µ = 4.



1–4. Consider the differential equation x'(t) = f(x(t)) where f is a C¹(R) function with exactly two
(distinct) equilibrium points x1 , x2 ∈ R, say x1 < x2 .

• If both equilibria are hyperbolic, what can you deduce about the stability of the two equilibrium
points?
• Give explicit examples to show how the situation changes if either one or two of the equilibria
are non-hyperbolic.

1–5. Let f be the C¹(R) function given by
\[
f(x) = \begin{cases} 0 & \text{if } x = 0,\\ -x^3 \sin\!\left(\dfrac{1}{x}\right) & \text{if } x \neq 0. \end{cases}
\]

Show that for the ODE x'(t) = f(x(t)) the point xe = 0 is a non-hyperbolic equilibrium point.
Moreover show that in any neighbourhood of the non-hyperbolic equilibrium point xe = 0 there are
infinitely many equilibria. What are the stability properties of the equilibria?

1–6. Consider the system
\[
\begin{cases}
x'(t) = -2x(t) - y(t) + 2,\\
y'(t) = x(t)\,y(t).
\end{cases}
\]
Find the equilibrium points of the system and determine their stability.

1–7. Consider the second-order ODE

z''(t) + µz'(t) + z(t) + z²(t) = 0.

Write the ODE as a first-order system in the form
\[
\begin{cases}
x_1'(t) = f(x_1(t), x_2(t), \mu),\\
x_2'(t) = g(x_1(t), x_2(t), \mu).
\end{cases}
\]

Find the two equilibrium points of the first-order system. Determine, for each equilibrium point, whether there is a value of µ at which it is not hyperbolic.

1–8. Consider the linear system
\[
x'(t) = Jx(t), \qquad \text{with } J = \begin{pmatrix} -1 & 3\\ 2 & -6 \end{pmatrix}.
\]

Compute the equilibrium points, study their stability, and sketch the phase diagram.
