
DIFFERENTIAL EQUATIONS (MTL102)

ANANTA KUMAR MAJEE

1. Introduction and motivation


A differential equation is an equation involving an unknown function and its derivatives.
In general the unknown function may depend on several variables and the equation may
include various partial derivatives.
Definition 1.1. A differential equation involving ordinary derivatives is called an ordinary
differential equation (ODE).
• The most general ODE has the form
\[ F\big(x, y, y', \ldots, y^{(n)}\big) = 0, \tag{1.1} \]
where $F$ is a given function of $(n+2)$ variables and $y = y(x)$ is an unknown function of a real variable $x$.
• The maximum order $n$ of the derivative $y^{(n)}$ in (1.1) is called the order of the ODE.
Applications: Differential equations play a central role not only in mathematics but also
in almost all areas of science and engineering, economics, and social sciences:
• Flow of current in a conductor: consider an RC circuit with resistance $R$ and capacitance $C$, with no external current. Let $x(t)$ be the capacitor voltage and $I(t)$ the current circulating in the circuit. Then, according to Kirchhoff's law,
\[ R\, I(t) + x(t) = 0. \]
Moreover, the constitutive law of the capacitor yields
\[ I(t) = C\, \frac{dx(t)}{dt}. \]
Hence we get the first order differential equation
\[ x'(t) + \frac{x(t)}{RC} = 0. \]
• Population dynamics: Let $x(t)$ be the number of individuals of a population at time $t$, $b$ the birth rate of the population and $d$ its death rate. Then, according to the simple "Malthus model", the growth rate of the population is proportional to the number of newborn individuals minus the number of deaths. Hence we get the first order ODE
\[ x'(t) = k\, x(t), \quad \text{where } k = b - d. \]
• The pendulum equation: Consider a point $P$ of mass $m$ suspended from a pivot by a cord of fixed length $l$, so that $P$ moves in a vertical plane passing through the pivot. The tangential component of the force acting on $P$ is $-mg\sin(\theta)$, while the tangential component of the acceleration is $l\theta''$. Thus, by Newton's law,
\[ \theta''(t) = -\frac{g}{l}\, \sin(\theta(t)). \]

• An example of a second order equation is $y'' + y = 0$, which arises naturally in the study of electrical and mechanical oscillations.
• Motion of a missile; the behavior of a mixture; the spread of disease; etc.
Definition 1.2. A function $y : I \to \mathbb{R}$, where $I \subset \mathbb{R}$ is an open interval, is said to be a solution of the $n$-th order ODE (1.1) if it is $n$-times differentiable and satisfies (1.1) for all $x \in I$.
Example 1.1. Consider the ODE $y' = y$. Let us first find all positive solutions. Note that $\frac{y'}{y} = (\ln y)'$, and therefore we obtain $(\ln y)' = 1$. This implies that
\[ \ln y = x + C \implies y = C_1 e^x, \quad \text{where } C_1 = e^C > 0, \ C \in \mathbb{R}. \]
If $y(x) < 0$ for all $x$, then use $\frac{y'}{y} = (\ln(-y))'$ to obtain $y = -C_1 e^x$ with $C_1 > 0$. Combining these two cases, any solution $y(x)$ has the form
\[ y(x) = C e^x, \quad C \in \mathbb{R}. \]
1.1. Linear ODE. An $n$-th order linear ODE is a relation of the form
\[ a_n(t) u^{(n)}(t) + a_{n-1}(t) u^{(n-1)}(t) + \cdots + a_1(t) u'(t) + a_0(t) u(t) = b(t) \quad \forall t \in I, \text{ with } a_n \neq 0. \]
Definition 1.3. We say that the linear ODE is homogeneous if b(t) = 0 for all t ∈ I.
Otherwise we say that it is non-homogeneous.
Theorem 1.1. Consider the linear homogeneous ODE
\[ u^{(m)}(t) + a_{m-1}(t) u^{(m-1)}(t) + \cdots + a_1(t) u'(t) + a_0(t) u(t) = 0, \quad t \in I. \tag{1.2} \]
Let $X = \{ u : I \to \mathbb{R} \,:\, u \text{ is a solution of the ODE} \}$. Then $X$ is a real vector space under the usual addition of functions and scalar multiplication by real numbers.
Since $a_n \neq 0$, without loss of generality we may assume that $a_n(t) = 1$ for all $t \in I$.
Let us first consider the linear ODE of first order of the form
\[ y' + \alpha(x)\, y = b(x), \tag{1.3} \]
where $\alpha$ and $b$ are given functions defined on $I$. A linear ODE can be solved as follows:
Theorem 1.2 (The method of variation of parameters). Let the functions $\alpha$ and $b$ be continuous on $I$. Then the general solution of (1.3) has the form
\[ y(x) = e^{-A(x)} \Big[ C + \int b(x)\, e^{A(x)}\, dx \Big], \]
where $A(x)$ is a primitive of $\alpha(x)$ on $I$, i.e., $A'(x) = \alpha(x)$.


Proof. We want to find a differentiable function $\mu(x) > 0$ such that
\[ \mu(x) y'(x) + \mu(x)\alpha(x) y(x) = \big( \mu(x) y(x) \big)'. \]
Note that $\mu(x) = e^{A(x)}$ does the job (check!). This $\mu(x)$ is called an integrating factor.
Let us make the change of unknown function
\[ u(x) = y(x) e^{A(x)} \iff y(x) = u(x) e^{-A(x)}. \]
Substituting this in the given ODE, we obtain
\[ (u e^{-A})' + \alpha u e^{-A} = b \implies u' e^{-A} - u e^{-A} A' + \alpha u e^{-A} = b. \]
Since $A' = \alpha$, we have
\[ u' = b\, e^{A} \implies u(x) = C + \int b(x) e^{A(x)}\, dx \implies y(x) = e^{-A(x)} \Big[ C + \int b(x) e^{A(x)}\, dx \Big]. \qquad \square \]

Example 1.2. Find the general solution of $x'(t) + 4t\, x(t) = 8t$.
Here $\alpha(t) = 4t$, and hence $A(t) = 2t^2$. Therefore, using the method of variation of parameters, the general solution of the given ODE is
\[ x(t) = e^{-2t^2} \Big[ C + \int 8t\, e^{2t^2}\, dt \Big] = 2 + C e^{-2t^2}. \]
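Computations of this kind are easy to double-check with a computer algebra system. Below is a minimal sketch for Example 1.2 (assuming sympy is available; purely illustrative):

    import sympy as sp

    t = sp.symbols("t")
    x = sp.Function("x")

    # the linear ODE x'(t) + 4*t*x(t) = 8*t from Example 1.2
    ode = sp.Eq(x(t).diff(t) + 4*t*x(t), 8*t)
    print(sp.dsolve(ode))   # expect something equivalent to x(t) = 2 + C1*exp(-2*t**2)
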
Exercise 1.1. Let m = 1 in Theorem 1.1. Show that dim X = 1.


1st order linear IVP: Suppose we are interested in solving the initial value problem
\[ y' + \alpha(x) y = b(x), \quad y(x_0) = y_0, \quad \text{where } x_0 \in I. \]
From the previous calculation we see that $u' = b e^{A}$ with $A' = \alpha$. Integrating from $x_0$ to $x$ and taking $A(x) = \int_{x_0}^x \alpha(s)\, ds$, we have
\[ u(x) - u(x_0) = \int_{x_0}^x b(s)\, e^{\int_{x_0}^s \alpha(r)\, dr}\, ds. \]
Note that $u(x_0) = y(x_0) e^{A(x_0)} = y(x_0) e^0 = y_0$. Thus,
\[ u(x) = y_0 + \int_{x_0}^x b(s)\, e^{\int_{x_0}^s \alpha(r)\, dr}\, ds \implies y(x) = e^{-\int_{x_0}^x \alpha(s)\, ds} \Big[ y_0 + \int_{x_0}^x b(s)\, e^{\int_{x_0}^s \alpha(r)\, dr}\, ds \Big]. \tag{1.4} \]
Example 1.3. Find the solution of $x'(t) + k x(t) = h$, $x(0) = x_0$, where $h$ and $k$ are constants. This equation arises in the RC circuit when there is a generator of constant voltage $h$. Using (1.4), we get
\[ x(t) = e^{-kt} \Big[ x_0 + \int_0^t h\, e^{ks}\, ds \Big] = e^{-kt} \Big[ x_0 + h\, \frac{e^{kt} - 1}{k} \Big] = \Big( x_0 - \frac{h}{k} \Big) e^{-kt} + \frac{h}{k}. \]
Notice that $x(t) \to \frac{h}{k}$ as $t \to \infty$, from below if $x_0 < \frac{h}{k}$ and from above if $x_0 > \frac{h}{k}$. Moreover, the capacitor voltage $x(t)$ does not decay to $0$ but tends to the constant voltage $\frac{h}{k}$.
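As a quick numerical sanity check of this limit behaviour (a sketch, assuming numpy; the constants are made up for illustration):

    import numpy as np

    h, k, x0 = 3.0, 2.0, 0.0                                # illustrative constants
    x = lambda t: (x0 - h / k) * np.exp(-k * t) + h / k     # closed form from (1.4)
    print(x(0.0))    # = x0
    print(x(10.0))   # ~ h/k = 1.5, approached from below since x0 < h/k
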
Exercise 1.2. Find a general solution of the linear ODE $y' + \frac{1}{x} y = e^{x^2}$ in the domain $x > 0$.
Next, we consider the general first order ODE of the form
\[ y' = \bar{f}(x, y), \]
where $\bar{f}$ is some continuous function. We have seen that if $\bar{f}(x, y) = \alpha(x) y + b(x)$, where $\alpha$ and $b$ are continuous functions defined on some interval $I$, then the above ODE has a solution in explicit form (cf. the method of variation of parameters).
Equations with variables separated: Let us consider a separable ODE
\[ y' = f(x)\, g(y), \tag{1.5} \]
where $f$ and $g$ are given functions.
• $y(x) = k$ is called an equilibrium solution if and only if $g(k) = 0$.
Any separable equation can be solved by means of the following theorem.
Theorem 1.3 (The method of separation of variables). Let $f$ and $g$ be continuous functions on some intervals $I$ and $J$ respectively, with $g \neq 0$ on $J$. Let $F$ (resp. $G$) be a primitive of $f$ (resp. of $\frac{1}{g}$) on $I$ (resp. $J$). Then a function $y$, defined on some subinterval of $I$, solves equation (1.5) if and only if
\[ G(y(x)) = F(x) + C \tag{1.6} \]
for all $x$ in the domain of $y$, where $C$ is a real constant.
Proof. Let $y(x)$ solve (1.5). Since $F' = f$ and $G' = \frac{1}{g}$, equation (1.5) is equivalent to
\[ y'\, G'(y) = F'(x) \implies \big( G(y(x)) \big)' = F'(x) \implies G(y(x)) = F(x) + C. \]
Conversely, if a function $y$ satisfies (1.6) and is known to be differentiable in its domain, then differentiating (1.6) in $x$ we obtain $y'\, G'(y) = F'(x)$. Arguing backwards, we arrive at (1.5).
Let us show that $y$ is differentiable. Since $g(y) \neq 0$, either $g(y) > 0$ or $g(y) < 0$ on the whole domain. Then $G$ is either strictly increasing or strictly decreasing on the whole domain. In both cases, the inverse function $G^{-1}$ is well-defined and differentiable. It follows from (1.6) that
\[ y(x) = G^{-1}\big( F(x) + C \big). \]
Since both $F$ and $G^{-1}$ are differentiable, we conclude that $y$ is differentiable. □


Example 1.4. Consider the ODE
\[ y'(x) = y(x), \quad x \in \mathbb{R}, \quad y(x) > 0. \]
Then $f(x) \equiv 1$ and $g(y) = y \neq 0$. Note that $F(x) = x$ and $G(y) = \log(y)$. Equation (1.6) becomes
\[ \log(y) = x + C \implies y(x) = C_1 e^x, \]
where $C_1 = e^C$ is any positive constant.
Example 1.5. Consider the equation
\[ y' = \sqrt{|y|}, \]
which is defined for all $y \in \mathbb{R}$. Note that $y = 0$ is a trivial solution. In the domains $y > 0$ and $y < 0$ the equation can be solved using separation of variables. In the domain $y > 0$ we obtain
\[ \int \frac{dy}{\sqrt{y}} = \int dx \implies 2\sqrt{y} = x + C \implies y = \frac{1}{4}(x + C)^2. \]
Since $y > 0$, we must have $x > -C$, which follows from the second expression. Similarly, in the domain $y < 0$ we have
\[ y = -\frac{1}{4}(x + C)^2, \quad x < -C. \]
We see that the integral curves in the domain $y > 0$ touch the line $y = 0$, and so do the integral curves in the domain $y < 0$.
Example 1.6. The logistic equation:
\[ y'(x) = y(x)\big( \alpha - \beta y(x) \big), \quad \alpha, \beta > 0. \]
In this model, $y(x)$ represents the population of some species, and therefore $y(x) \ge 0$. Note that $y(x) = 0$ and $y(x) = \frac{\alpha}{\beta}$ are two equilibrium solutions. Such solutions play an important role in analyzing the trajectories of solutions in general. In order to solve the logistic equation, we separate the variables and obtain (assuming $y \neq 0$ and $y \neq \frac{\alpha}{\beta}$)
\[ \frac{dy}{y(\alpha - \beta y)} = dx \implies \frac{1}{\alpha}\frac{dy}{y} + \frac{\beta}{\alpha}\frac{dy}{\alpha - \beta y} = dx \implies \frac{1}{\alpha}\log|y| - \frac{1}{\alpha}\log|\alpha - \beta y| = x + c \]
\[ \implies \frac{1}{\alpha}\log\Big| \frac{y}{\alpha - \beta y} \Big| = x + c \implies \Big| \frac{y}{\alpha - \beta y} \Big| = k e^{\alpha x}, \]
where $k = e^{c\alpha}$. This is the general solution in implicit form. To solve for $y$, consider first the case $0 < y(x) < \frac{\alpha}{\beta}$. Then $y(x) = \frac{\alpha k e^{\alpha x}}{1 + \beta k e^{\alpha x}}$. For the case $y > \frac{\alpha}{\beta}$, the solution takes the form $y(x) = \frac{-\alpha k e^{\alpha x}}{1 - \beta k e^{\alpha x}}$. In either case, $\lim_{x \to \infty} y(x) = \frac{\alpha}{\beta}$. This shows that all non-constant solutions approach the equilibrium solution $y(x) = \frac{\alpha}{\beta}$ as $x \to \infty$, some from above the line $y = \frac{\alpha}{\beta}$ and others from below.
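This qualitative conclusion is easy to illustrate numerically. A minimal sketch (assuming scipy is available; the parameter values below are made up for illustration):

    from scipy.integrate import solve_ivp

    alpha, beta = 2.0, 1.0                           # equilibrium at alpha/beta = 2
    logistic = lambda x, y: y * (alpha - beta * y)

    for y0 in (0.1, 3.0):                            # one orbit from below, one from above
        sol = solve_ivp(logistic, (0.0, 10.0), [y0])
        print(y0, "->", round(float(sol.y[0, -1]), 4))   # both end up near 2.0
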
Exercise 1.3. Find $p$ such that the non-trivial solutions of
\[ y' = -(p + 1)\, x^p\, y^2 \]
tend to $0$ as $x \to \infty$.

Exact equations: Suppose that the first order equation $y' = \bar{f}(x, y)$ is written in the form
\[ M(x, y) + N(x, y)\, y' = 0, \tag{1.7} \]
where $M$, $N$ are real-valued functions defined for real $x, y$ on some rectangle $R$.
Definition 1.4. We say that equation (1.7) is exact in $R$ if there exists a function $F$ having continuous first partial derivatives such that
\[ \frac{\partial F}{\partial x} = M, \qquad \frac{\partial F}{\partial y} = N. \tag{1.8} \]
Theorem 1.4. Suppose equation (1.7) is exact in a rectangle $R$, and $F$ is a real-valued function such that $\frac{\partial F}{\partial x} = M$, $\frac{\partial F}{\partial y} = N$ in $R$. Then every differentiable function $\phi$ defined implicitly by a relation
\[ F(x, y) = c \quad (c = \text{constant}) \]
is a solution of (1.7), and every solution of (1.7) whose graph lies in $R$ arises in this way.
Proof. Under the assumptions of the theorem, equation (1.7) becomes
\[ \frac{\partial F}{\partial x}(x, y) + \frac{\partial F}{\partial y}(x, y)\, y' = 0. \]
If $\phi$ is any solution on some interval $I$, then
\[ \frac{\partial F}{\partial x}(x, \phi(x)) + \frac{\partial F}{\partial y}(x, \phi(x))\, \phi'(x) = 0, \quad \forall x \in I. \tag{1.9} \]
If $\Phi(x) = F(x, \phi(x))$, then from the above equation we see that $\Phi'(x) = 0$, and hence $F(x, \phi(x)) = c$, where $c$ is some constant. Thus the solution $\phi$ must be a function given implicitly by the relation $F(x, \phi(x)) = c$. Conversely, if $\phi$ is a differentiable function on some interval $I$ defined implicitly by the relation $F(x, y) = c$, then
\[ F(x, \phi(x)) = c, \quad \forall x \in I. \]
Differentiation, along with the property $\frac{\partial F}{\partial x} = M$, $\frac{\partial F}{\partial y} = N$, yields that $\phi$ is a solution of (1.7). This completes the proof. □
Example 1.7. Consider the equation
\[ x - (y^4 - 1)\, y'(x) = 0. \]
Here $M = x$ and $N = 1 - y^4$. Define $F(x, y) = \frac{1}{2}x^2 + y - \frac{1}{5}y^5$. Then the above equation is exact. Hence the solution is given by
\[ F(x, y) = c \implies 2y^5 - 10y = 5x^2 + c. \]
How do we recognize when an equation is exact? The following theorem gives a necessary and sufficient condition.
Theorem 1.5. Let $M, N$ be two real-valued functions which have continuous first partial derivatives on some rectangle
\[ R := \big\{ (x, y) \in \mathbb{R}^2 : |x - x_0| \le a, \ |y - y_0| \le b \big\}. \]
Then equation (1.7) is exact in $R$ if and only if
\[ \frac{\partial M}{\partial y} = \frac{\partial N}{\partial x} \tag{1.10} \]
in $R$.
Proof. It is easy to see that if equation (1.7) is exact, then (1.10) holds. Now suppose that (1.10) holds in the rectangle $R$. We want to find a function $F$ having continuous first partial derivatives such that $\frac{\partial F}{\partial x} = M$ and $\frac{\partial F}{\partial y} = N$. If we had such a function, then
\[ F(x, y) - F(x_0, y_0) = \int_{x_0}^x \frac{\partial F(s, y)}{\partial x}\, ds + \int_{y_0}^y \frac{\partial F(x_0, t)}{\partial y}\, dt = \int_{x_0}^x M(s, y)\, ds + \int_{y_0}^y N(x_0, t)\, dt. \]
Similarly, by writing $F(x, y) - F(x_0, y_0) = F(x, y) - F(x, y_0) + F(x, y_0) - F(x_0, y_0)$, we could have
\[ F(x, y) - F(x_0, y_0) = \int_{x_0}^x M(s, y_0)\, ds + \int_{y_0}^y N(x, t)\, dt. \tag{1.11} \]
We now define $F$ by the formula
\[ F(x, y) = \int_{x_0}^x M(s, y)\, ds + \int_{y_0}^y N(x_0, t)\, dt. \tag{1.12} \]
Then $F(x_0, y_0) = 0$ and $\frac{\partial F}{\partial x}(x, y) = M(x, y)$ for all $(x, y)$ in $R$. In view of (1.11), we could also define $F$ by the formula
\[ F(x, y) = \int_{x_0}^x M(s, y_0)\, ds + \int_{y_0}^y N(x, t)\, dt. \tag{1.13} \]
It is clear from (1.13) that $\frac{\partial F}{\partial y}(x, y) = N(x, y)$ for all $(x, y)$ in $R$. Therefore, we need to show that (1.13) is valid when $F$ is defined by (1.12). Now, using condition (1.10), we have
\[ F(x, y) - \Big[ \int_{x_0}^x M(s, y_0)\, ds + \int_{y_0}^y N(x, t)\, dt \Big] = \int_{x_0}^x \big[ M(s, y) - M(s, y_0) \big]\, ds - \int_{y_0}^y \big[ N(x, t) - N(x_0, t) \big]\, dt \]
\[ = \int_{x_0}^x \int_{y_0}^y \Big[ \frac{\partial M}{\partial y}(s, t) - \frac{\partial N}{\partial x}(s, t) \Big]\, dt\, ds = 0. \]
This completes the proof. □
Example 1.8. Find the general solution of the ODE
\[ 2xy\, dx + (x^2 + y^2)\, dy = 0. \]
Here $M(x, y) = 2xy$ and $N(x, y) = x^2 + y^2$. Note that $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x} = 2x$. Thus the equation is exact. Define the function $F$ by (taking $(x_0, y_0) = (0, 0)$)
\[ F(x, y) = \int_0^x M(s, y)\, ds + \int_0^y N(x_0, t)\, dt = \int_0^x 2sy\, ds + \int_0^y t^2\, dt = yx^2 + \frac{y^3}{3}. \]
Therefore, the general solution is given by the formula $yx^2 + \frac{y^3}{3} = c$, where $c$ is an arbitrary real constant.

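The exactness test (1.10) and the construction (1.12) are mechanical, so they are easy to script. A minimal sketch of Example 1.8 with sympy (illustrative only):

    import sympy as sp

    x, y, s, t = sp.symbols("x y s t")
    M, N = 2*x*y, x**2 + y**2

    print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0)   # True: the equation is exact
    # potential F from (1.12) with (x0, y0) = (0, 0):
    F = sp.integrate(M.subs(x, s), (s, 0, x)) + sp.integrate(N.subs({x: 0, y: t}), (t, 0, y))
    print(F)                                                  # x**2*y + y**3/3
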
The integrating factor: Sometimes, when equation (1.7) is NOT exact, one can find a function $u$, nowhere zero, such that the equation
\[ u(x, y) M(x, y)\, dx + u(x, y) N(x, y)\, dy = 0 \]
is exact. Such a function is called an integrating factor. For example, $y\, dx - x\, dy = 0$ $(x > 0, y > 0)$ is not exact, but multiplying the equation by $u(x, y) = \frac{1}{y^2}$ makes it exact. Note that all three functions
\[ \frac{1}{xy}, \quad \frac{1}{x^2}, \quad \frac{1}{y^2} \]
are integrating factors of the above ODE. Thus, integrating factors need not be unique.
Remark 1.1. In view of Theorem 1.5, a function $u$ on a rectangle $R$, having continuous partial derivatives, is an integrating factor of equation (1.7) if and only if
\[ u \Big( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \Big) = N \frac{\partial u}{\partial x} - M \frac{\partial u}{\partial y}. \tag{1.14} \]
i) If $u$ is an integrating factor which is a function of $x$ only, then
\[ p = \frac{1}{N} \Big( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \Big) \]
is a continuous function of $x$ alone, provided $N(x, y) \neq 0$ in $R$.
ii) If $u$ is an integrating factor which is a function of $y$ only, then
\[ q = \frac{1}{M} \Big( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \Big) \]
is a continuous function of $y$ alone, provided $M(x, y) \neq 0$ in $R$.
Example 1.9. Find an integrating factor of
\[ (2y^3 + 2)\, dx + 3xy^2\, dy = 0, \quad x \neq 0, \ y \neq 0, \]
and solve the ODE.
Here $M(x, y) = 2y^3 + 2$ and $N(x, y) = 3xy^2$. Note that the equation is not exact. Now $\frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} = 3y^2$, and hence $\frac{1}{N}\big( \frac{\partial M}{\partial y} - \frac{\partial N}{\partial x} \big) = \frac{1}{x}$ is a continuous function of $x$ alone. Thus the integrating factor should be a function of $x$ only. Note that $u(x) = x$ satisfies the relation (1.14). After multiplication by the integrating factor, the equation becomes
\[ \tilde{M}(x, y)\, dx + \tilde{N}(x, y)\, dy = 0, \quad \text{where } \tilde{M}(x, y) = 2xy^3 + 2x, \ \tilde{N}(x, y) = 3x^2 y^2. \]
To find $\tilde{F}$, we know that $\frac{\partial \tilde{F}}{\partial x} = 2xy^3 + 2x$, and hence
\[ \tilde{F}(x, y) = x^2 y^3 + x^2 + f(y), \]
where $f$ is independent of $x$. Again, $\frac{\partial \tilde{F}}{\partial y} = \tilde{N}$ gives
\[ 3x^2 y^2 + f'(y) = 3x^2 y^2 \implies f'(y) = 0 \implies f(y) = \text{constant}. \]
Thus, the general solution is given implicitly by
\[ x^2 (y^3 + 1) = c, \quad c \in \mathbb{R}. \]

Bernoulli equations: We are interested in ODEs of the form
\[ y' + p(x)\, y = q(x)\, y^n, \]
where $p$ and $q$ are continuous functions. Note that for $n = 0$ or $n = 1$ the equation is linear, and we already know how to solve it. For $n \neq 0, 1$, we first divide the ODE by $y^n$ to get $y^{-n} y' + p(x) y^{1-n} = q(x)$. Take $v = y^{1-n}$. Then $v' = (1 - n) y^{-n} y'$, and plugging this in, we have
\[ \frac{1}{1 - n}\, v' + p(x)\, v = q(x), \]
which is a linear ODE.
Example 1.10. Find all solutions of the ODE $3y^2 y' + y^3 = e^{-x}$.
We rewrite the equation in Bernoulli form as $y' + \frac{1}{3} y = \frac{e^{-x}}{3}\, y^{-2}$. Here $p(x) = \frac{1}{3}$, $q(x) = \frac{e^{-x}}{3}$ and $n = -2$. Thus, taking $v = y^3$, we get $v' + v = e^{-x}$. Using the method of variation of parameters, the general solution is $v(x) = e^{-x}(C + x)$. Thus, the general solution of the given ODE is
\[ y(x) = (C + x)^{1/3}\, e^{-x/3}, \quad \forall x \in \mathbb{R}. \]
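One can verify the final formula by direct substitution; here is a minimal sketch with sympy (illustrative only):

    import sympy as sp

    x, C = sp.symbols("x C")
    y = (C + x)**sp.Rational(1, 3) * sp.exp(-x/3)       # candidate general solution

    residual = 3*y**2*sp.diff(y, x) + y**3 - sp.exp(-x)
    print(sp.simplify(residual))                         # expect 0
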
Theorem 1.6. Let $I$ be an interval in $\mathbb{R}$, $t_0 \in I$, $\alpha_0, \ldots, \alpha_{m-1} \in \mathbb{R}$, and let $a_0, a_1, \ldots, a_{m-1}, b$ be continuous functions defined on $I$. Then the linear non-homogeneous ODE
\[ u^{(m)}(t) + a_{m-1}(t) u^{(m-1)}(t) + \cdots + a_1(t) u'(t) + a_0(t) u(t) = b(t), \quad u(t_0) = \alpha_0, \ u'(t_0) = \alpha_1, \ \ldots, \ u^{(m-1)}(t_0) = \alpha_{m-1} \]
has a unique solution.
Proof. We only prove the uniqueness result; the existence result will be discussed later. Before proving uniqueness, let us recall some results from analysis which will be used frequently.
• Fundamental theorem of calculus: Let $f : [a, b] \to \mathbb{R}$ be a differentiable function. Then
\[ f(x) - f(a) = \int_a^x f'(t)\, dt, \quad x \in [a, b]. \]
• Let $g : [a, b] \to \mathbb{R}$ be continuous. Define $f(x) = \int_a^x g(t)\, dt$. Then $f$ is continuously differentiable and $f'(x) = g(x)$ for all $x \in (a, b)$.
Lemma 1.7 (Gronwall's inequality). Let $g, h : [a, b] \to [0, \infty)$ be continuous functions satisfying
\[ g(t) \le C + \int_a^t g(s) h(s)\, ds, \quad t \in [a, b], \]
where $C$ is a nonnegative constant. Then
\[ g(t) \le C\, e^{\int_a^t h(s)\, ds}, \quad \forall t \in [a, b]. \]

Uniqueness proof of Theorem 1.6: Let $u_1, u_2 : I \to \mathbb{R}$ be two solutions of the above problem. We will show that $u_1(t) = u_2(t)$ for all $t \in I$. Define $u(t) = u_1(t) - u_2(t)$. Then $u$ satisfies
\[ u^{(m)}(t) + a_{m-1}(t) u^{(m-1)}(t) + \cdots + a_1(t) u'(t) + a_0(t) u(t) = 0, \quad u(t_0) = u'(t_0) = \ldots = u^{(m-1)}(t_0) = 0. \tag{1.15} \]
Define $g(t) = \sum_{i=0}^{m-1} |u^{(i)}(t)|$. First we show that $u(t) = 0$ for all $t > t_0$. Fix $T > t_0$. Since the $a_i$ are continuous functions, there exists $M > 0$ such that
\[ \max_{t \in [t_0, T]} |a_i(t)| \le M, \quad i = 0, 1, \ldots, m-1. \]
Note that for $i = 0, 1, \ldots, m-1$, $u^{(i)}(t) = u^{(i)}(t_0) + \int_{t_0}^t u^{(i+1)}(s)\, ds = \int_{t_0}^t u^{(i+1)}(s)\, ds$. Thus, if $i \le m-2$,
\[ |u^{(i)}(t)| = \Big| \int_{t_0}^t u^{(i+1)}(s)\, ds \Big| \le \int_{t_0}^t |u^{(i+1)}(s)|\, ds \le \int_{t_0}^t g(s)\, ds. \]
Now by (1.15), we have
\[ |u^{(m-1)}(t)| \le \int_{t_0}^t |u^{(m)}(s)|\, ds \le \int_{t_0}^t \big( |a_{m-1}(s)||u^{(m-1)}(s)| + \ldots + |a_0(s)||u(s)| \big)\, ds \le M \int_{t_0}^t \sum_{i=0}^{m-1} |u^{(i)}(s)|\, ds = M \int_{t_0}^t g(s)\, ds. \]
Thus,
\[ g(t) \le \sum_{i=0}^{m-2} |u^{(i)}(t)| + |u^{(m-1)}(t)| \le (m - 1 + M) \int_{t_0}^t g(s)\, ds. \]
By Gronwall's inequality (with $C = 0$), $g(t) = 0$ for all $t \in [t_0, T]$. Since $T > t_0$ is arbitrary, $g(t) = 0$ for all $t > t_0$.
Let $\tilde{I} := t_0 - I = \{ t_0 - s : s \in I \}$. Define $v(t) := u(t_0 - t)$, $t \in \tilde{I}$. Then $v(0) = u(t_0)$ and $v^{(i)}(t) = (-1)^i u^{(i)}(t_0 - t)$. Since $u$ satisfies (1.15), we have
\[ (-1)^m v^{(m)}(t) + (-1)^{m-1} a_{m-1}(t_0 - t) v^{(m-1)}(t) + \ldots + a_0(t_0 - t) v(t) = 0, \quad v(0) = v'(0) = \ldots = v^{(m-1)}(0) = 0. \]
This implies that
\[ v^{(m)}(t) + \tilde{a}_{m-1}(t) v^{(m-1)}(t) + \cdots + \tilde{a}_1(t) v'(t) + \tilde{a}_0(t) v(t) = 0 \ \ \forall t \in \tilde{I}, \quad v(0) = v'(0) = \ldots = v^{(m-1)}(0) = 0. \]
It follows from the previous case that $v(t) = 0$ for all $t > 0$, $t \in \tilde{I}$. Hence $u(t_0 - t) = 0$ for all $t > 0$ with $t_0 - t \in I$, i.e. $u(t) = 0$ for all $t \in I$ with $t < t_0$. This completes the proof. □

2. Existence and uniqueness of solutions to first order ODE


2.1. The method of successive approximations: Let us consider the initial value problem
\[ y' = f(x, y), \quad y(x_0) = y_0, \tag{2.1} \]
where $f$ is any continuous real-valued function defined on some rectangle
\[ R := \big\{ (x, y) \in \mathbb{R}^2 : |x - x_0| \le a, \ |y - y_0| \le b \big\} \quad (a, b > 0). \]
Our aim is to show that on some interval $I$ containing $x_0$ there is a solution of the above initial value problem. Let us first show the following.
Theorem 2.1. A function $\phi$ is a solution of (2.1) on some interval $I$ if and only if it is a solution of the integral equation (on $I$)
\[ y = y_0 + \int_{x_0}^x f(s, y(s))\, ds. \tag{2.2} \]

Proof. Suppose $\phi$ is a solution of the initial value problem on $I$. Then
\[ \phi'(t) = f(t, \phi(t)) \implies \phi(x) = \phi(x_0) + \int_{x_0}^x f(t, \phi(t))\, dt = y_0 + \int_{x_0}^x f(t, \phi(t))\, dt, \]
where in the middle equality we used the fact that $f(t, \phi(t))$ is continuous on $I$. Hence $\phi$ is a solution of (2.2). Conversely, suppose that $\phi$ is a solution of (2.2), i.e.,
\[ \phi(x) = y_0 + \int_{x_0}^x f(s, \phi(s))\, ds, \quad \forall x \in I. \]
Then $\phi(x_0) = y_0$ and $\phi'(x) = f(x, \phi(x))$ for all $x \in I$. Thus $\phi$ is a solution of the initial value problem (2.1). □
We now want to solve the integral equation via approximation. Define Picard's approximations by
\[ \phi_0(x) = y_0, \qquad \phi_{k+1}(x) = y_0 + \int_{x_0}^x f(s, \phi_k(s))\, ds \quad (k = 0, 1, \ldots). \tag{2.3} \]
First we show that all the functions $\phi_k$, $k = 0, 1, 2, \ldots$ exist on some interval.
Theorem 2.2. The approximations $\phi_k$ exist as continuous functions on
\[ I := \Big\{ x \in \mathbb{R} : |x - x_0| \le \alpha = \min\big\{ a, \tfrac{b}{M} \big\} \Big\}, \]
and $(x, \phi_k(x)) \in R$ for all $x \in I$, where $M > 0$ is such that $|f(x, y)| \le M$ for all $(x, y)$ in $R$. Indeed,
\[ |\phi_k(x) - y_0| \le M |x - x_0|, \quad \forall x \in I. \tag{2.4} \]
Note that for $x \in I$, $|x - x_0| \le \frac{b}{M}$, and hence $(x, \phi_k(x))$ is in $R$ for all $x \in I$.
Proof. We will prove it by induction. Clearly $\phi_0$ exists on $I$ and satisfies (2.4). Now
\[ |\phi_1(x) - y_0| = \Big| \int_{x_0}^x f(s, y_0)\, ds \Big| \le M |x - x_0|, \]
and $\phi_1$ is continuous on $I$. Now assume the claim is valid for the functions $\phi_0, \phi_1, \ldots, \phi_k$. Define the function $F_k(t) = f(t, \phi_k(t))$, which exists for $t \in I$ and is continuous on $I$. Therefore $\phi_{k+1}$ exists as a continuous function on $I$. Moreover,
\[ |\phi_{k+1}(x) - y_0| \le \Big| \int_{x_0}^x F_k(t)\, dt \Big| \le M |x - x_0|. \qquad \square \]

Definition 2.1. Let $f$ be a function defined for $(x, y)$ in a set $S$. We say that $f$ is Lipschitz in $y$ if there exists a constant $K > 0$ such that
\[ |f(x, y_1) - f(x, y_2)| \le K |y_1 - y_2|, \quad \forall (x, y_1), (x, y_2) \in S. \]
Exercise 2.1. Show that if the partial derivative $f_y$ exists and is bounded in a rectangle $R$, then $f$ is Lipschitz in $y$ in $R$.
We now show that the approximations $\phi_k$ converge on $I$ to a solution of the initial value problem under certain assumptions on $f$.
Theorem 2.3. Let $f$ be a continuous function defined on the rectangle $R$. Further suppose that $f$ is Lipschitz in $y$. Then $\{\phi_k\}$ converges to a solution $\phi$ of the initial value problem (2.1) on $I$.
Proof. Note that
\[ \phi_k(x) = \phi_0(x) + \sum_{i=1}^k [\phi_i(x) - \phi_{i-1}(x)], \quad \forall x \in I. \]
Therefore, it suffices to show that the series
\[ \phi_0(x) + \sum_{i=1}^{\infty} [\phi_i(x) - \phi_{i-1}(x)] \]
converges. Let us estimate the terms $\phi_i(x) - \phi_{i-1}(x)$. Observe that, since $f$ satisfies the Lipschitz condition in $R$,
\[ |\phi_2(x) - \phi_1(x)| = \Big| \int_{x_0}^x \big[ f(t, \phi_1(t)) - f(t, \phi_0(t)) \big]\, dt \Big| \le K \Big| \int_{x_0}^x |\phi_1(t) - \phi_0(t)|\, dt \Big| \le K M \Big| \int_{x_0}^x |t - x_0|\, dt \Big| \le K M \frac{|x - x_0|^2}{2}. \]
Claim:
\[ |\phi_i(x) - \phi_{i-1}(x)| \le \frac{M K^{i-1}}{i!} |x - x_0|^i = \frac{M}{K} \frac{K^i |x - x_0|^i}{i!}, \quad i = 1, 2, \ldots \tag{2.5} \]
We shall prove (2.5) via induction. Note that (2.5) is true for $i = 1$ and $i = 2$. Assume now that (2.5) holds for $i = m$. Let us assume that $x \ge x_0$ (the proof is similar for $x \le x_0$). Using the Lipschitz condition and the induction hypothesis, we have
\[ |\phi_{m+1}(x) - \phi_m(x)| \le K \int_{x_0}^x |\phi_m(t) - \phi_{m-1}(t)|\, dt \le K \int_{x_0}^x \frac{M K^{m-1}}{m!} |t - x_0|^m\, dt = \frac{M K^m}{(m+1)!} |x - x_0|^{m+1}. \]
Hence (2.5) holds for $i = 1, 2, \ldots$. It follows that the $i$-th term of the series $|\phi_0(x)| + \sum_{i=1}^\infty |\phi_i(x) - \phi_{i-1}(x)|$ is at most $\frac{M}{K}$ times the $i$-th term of the power series for $e^{K|x - x_0|}$. Hence the series $\phi_0(x) + \sum_{i=1}^\infty [\phi_i(x) - \phi_{i-1}(x)]$ is convergent for all $x \in I$, and therefore the sequence $\{\phi_k\}$ converges to a limit $\phi(x)$ as $k \to \infty$.
Properties of the limit function $\phi$: We first show that $\phi$ is continuous on $I$. Indeed, for any $x, \tilde{x} \in I$ we have, using the boundedness of $f$,
\[ |\phi_{k+1}(x) - \phi_{k+1}(\tilde{x})| = \Big| \int_{\tilde{x}}^x f(t, \phi_k(t))\, dt \Big| \le M |x - \tilde{x}| \implies |\phi(x) - \phi(\tilde{x})| \le M |x - \tilde{x}|. \]
This shows that $\phi$ is continuous on $I$. Moreover, taking $\tilde{x} = x_0$ in the above estimate, we see that
\[ |\phi(x) - y_0| \le M |x - x_0|, \quad \forall x \in I, \]
and hence $(x, \phi(x))$ is in $R$ for all $x \in I$.
We now estimate $|\phi(x) - \phi_k(x)|$. Since $\phi(x)$ is the limit of the series $\phi_0(x) + \sum_{i=1}^\infty [\phi_i(x) - \phi_{i-1}(x)]$ and $\phi_k(x) = \phi_0(x) + \sum_{i=1}^k [\phi_i(x) - \phi_{i-1}(x)]$, we see that
\[ |\phi(x) - \phi_k(x)| = \Big| \sum_{i=k+1}^\infty [\phi_i(x) - \phi_{i-1}(x)] \Big| \le \frac{M}{K} \sum_{i=k+1}^\infty \frac{K^i |x - x_0|^i}{i!} \le \frac{M}{K} \sum_{i=k+1}^\infty \frac{K^i \alpha^i}{i!} \]
\[ = \frac{M}{K} \frac{(K\alpha)^{k+1}}{(k+1)!} \Big\{ 1 + \frac{K\alpha}{k+2} + \frac{(K\alpha)^2}{(k+2)(k+3)} + \ldots \Big\} \le \frac{M}{K} \frac{(K\alpha)^{k+1}}{(k+1)!} \sum_{i=0}^\infty \frac{K^i \alpha^i}{i!} = \frac{M}{K} \frac{(K\alpha)^{k+1}}{(k+1)!} e^{K\alpha}. \tag{2.6} \]
Note that $\frac{(K\alpha)^{k+1}}{(k+1)!} \to 0$ as $k \to \infty$.
Now we show that $\phi$ is a solution of the integral equation (2.2). Note that
\[ \phi_{k+1}(x) = y_0 + \int_{x_0}^x f(t, \phi_k(t))\, dt, \]
and $\phi_{k+1}(x) \to \phi(x)$ as $k \to \infty$. Thus it suffices to show that $\int_{x_0}^x f(t, \phi_k(t))\, dt \to \int_{x_0}^x f(t, \phi(t))\, dt$. Indeed, thanks to the Lipschitz condition on $f$ and (2.6), we have
\[ \Big| \int_{x_0}^x f(t, \phi_k(t))\, dt - \int_{x_0}^x f(t, \phi(t))\, dt \Big| \le K \Big| \int_{x_0}^x |\phi(t) - \phi_k(t)|\, dt \Big| \le M \frac{(K\alpha)^{k+1}}{(k+1)!} e^{K\alpha} |x - x_0| \to 0 \quad (k \to \infty). \]
This completes the proof. □
Let us illustrate this method on the following example:
\[ y' = y, \quad y(0) = 1. \]
Here $\phi_0(x) = 1$. Moreover, the approximations are given by
\[ \phi_1(x) = 1 + \int_0^x \phi_0(s)\, ds = 1 + x, \]
\[ \phi_2(x) = 1 + \int_0^x \phi_1(s)\, ds = 1 + x + \frac{x^2}{2!}, \]
\[ \phi_3(x) = 1 + \int_0^x \phi_2(s)\, ds = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}, \]
and by induction
\[ \phi_k(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots + \frac{x^k}{k!}, \quad k = 0, 1, 2, \ldots. \]
Clearly $\phi_k(x) \to e^x$ as $k \to \infty$, and the function $y(x) = e^x$ indeed solves the above IVP.
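The iteration (2.3) can also be carried out symbolically. Below is a minimal sketch of the computation above (assuming sympy; purely illustrative):

    import sympy as sp

    x, s = sp.symbols("x s")
    f = lambda t, y: y                  # right-hand side of y' = y

    phi = sp.Integer(1)                 # phi_0 = y0 = 1
    for k in range(4):
        phi = 1 + sp.integrate(f(s, phi.subs(x, s)), (s, 0, x))
        print(f"phi_{k+1}(x) =", sp.expand(phi))
    # the iterates are exactly the Taylor partial sums of exp(x)
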
Definition 2.2. We say that $J \subset \mathbb{R}$ is the maximal interval of definition of the solution $y(x)$ of the IVP (2.1) if every interval $I$ on which $y(x)$ is defined is contained in $J$, and $y$ cannot be extended to an interval larger than $J$.
Example 2.1. Consider the IVP
\[ y' = y^2, \quad y(x_0) = a \neq 0. \]
Note that the function $\phi(x, c) = -\frac{1}{x - c}$ solves $y' = y^2$. Thus, we need to impose the requirement $\phi(x_0, c) = a \iff c = x_0 + \frac{1}{a}$. Let $c_a = x_0 + \frac{1}{a}$. Hence for $a > 0$, the solution of the above problem is
\[ y_a(x) = -\frac{1}{x - c_a}, \quad x < c_a. \]
Thus, in this case, the maximal interval of definition is $(-\infty, c_a)$. Similarly, for $a < 0$, one can show that the maximal interval of definition is $(c_a, \infty)$.
Non-local existence of solutions: Theorem 2.3 guarantees a solution only for $x$ near the initial point $x_0$. In some cases a solution may exist on the whole interval $|x - x_0| \le a$. For example, consider an ODE of the form $y' + g(x) y = h(x)$, where $g, h$ are continuous on $|x - x_0| \le a$. Let us define the strip
\[ S := \{ |x - x_0| \le a, \ |y| < \infty \}. \]
Since $g$ is continuous on $|x - x_0| \le a$, there exists $K > 0$ such that $|g(x)| \le K$. Then the function $f(x, y) = -g(x) y + h(x)$ is Lipschitz continuous (in $y$) on the strip $S$.
Theorem 2.4. Let $f$ be a real-valued continuous function on the strip $S$, which is Lipschitz continuous on $S$ with constant $K > 0$. Then Picard's iterations $\{\phi_k\}$ for the problem (2.1) exist on the entire interval $|x - x_0| \le a$, and converge there to a solution of (2.1).
Proof. Note that $(x, \phi_0(x)) \in S$. Since $f$ is continuous on $S$, there exists $M > 0$ such that $|f(x, y_0)| \le M$ for $|x - x_0| \le a$, and hence
\[ |\phi_1(x)| \le |y_0| + \Big| \int_{x_0}^x f(t, y_0)\, dt \Big| \le |y_0| + M |x - x_0| \le |y_0| + M a < \infty. \]
Moreover, each $\phi_k$ is continuous on $|x - x_0| \le a$. Now assume that the points
\[ (x, \phi_0(x)), (x, \phi_1(x)), \ldots, (x, \phi_k(x)) \]
are in $S$ for $|x - x_0| \le a$. We show that $(x, \phi_{k+1}(x))$ lies in $S$. Indeed,
\[ |\phi_{k+1}(x)| \le |y_0| + \Big| \int_{x_0}^x f(t, \phi_k(t))\, dt \Big| \le |y_0| + M |x - x_0| + \Big| \int_{x_0}^x \big| f(t, \phi_k(t)) - f(t, y_0) \big|\, dt \Big| \le |y_0| + M a + K \Big| \int_{x_0}^x \big( |\phi_k(t)| + |y_0| \big)\, dt \Big|. \]
Since $\phi_k$ is continuous on $|x - x_0| \le a$, we see that $|\phi_{k+1}(x)| < \infty$ for $|x - x_0| \le a$. Hence, by induction, the points $(x, \phi_k(x))$ are in $S$.
To show the convergence of the sequence $\{\phi_k\}$ to a limit function $\phi$, we can mimic the proof of Theorem 2.3, once we note that
\[ |\phi_1(x) - \phi_0(x)| \le \Big| \int_{x_0}^x |f(t, y_0)|\, dt \Big| \le M |x - x_0|. \]
Next we show that $\phi$ is continuous. Observe that, thanks to (2.5) (which holds in this case),
\[ |\phi_k(x) - y_0| = \Big| \sum_{i=1}^k [\phi_i(x) - \phi_{i-1}(x)] \Big| \le \sum_{i=1}^k |\phi_i(x) - \phi_{i-1}(x)| \le \frac{M}{K} \sum_{i=1}^k \frac{K^i |x - x_0|^i}{i!} \le \frac{M}{K} \sum_{i=1}^\infty \frac{K^i a^i}{i!} = \frac{M}{K} \big( e^{Ka} - 1 \big) := b. \]
Taking the limit as $k \to \infty$, we obtain
\[ |\phi(x) - y_0| \le b, \quad (|x - x_0| \le a). \]
Note that $f$ is continuous on the rectangle
\[ R := \big\{ (x, y) \in \mathbb{R}^2 : |x - x_0| \le a, \ |y - y_0| \le b \big\}, \]
and hence there exists a constant $N > 0$ such that $|f(x, y)| \le N$ for $(x, y) \in R$. Let $x, \tilde{x}$ be two points in the interval $|x - x_0| \le a$. Then
\[ |\phi_{k+1}(x) - \phi_{k+1}(\tilde{x})| = \Big| \int_{\tilde{x}}^x f(t, \phi_k(t))\, dt \Big| \le N |x - \tilde{x}| \implies |\phi(x) - \phi(\tilde{x})| \le N |x - \tilde{x}|. \]
The rest of the proof is a repetition of the analogous parts of the proof of Theorem 2.3, with $\alpha$ replaced by $a$ everywhere. □
Example 2.2. Consider the IVP $y' = y + \lambda x^2 \sin(y)$, $y(0) = 1$, where $\lambda$ is a real constant such that $|\lambda| \le 1$. Then the solution of the IVP exists for $|x| \le 1$.
Here $f(x, y) = y + \lambda x^2 \sin(y)$. Consider the strip $S = \{ |x| \le 1, |y| < \infty \}$. Then $f$ is continuous on $S$ and Lipschitz continuous on $S$, as $|\partial_y f(x, y)| \le 2$ on $S$. Thus, by Theorem 2.4, the solution of the given problem exists on the entire interval $|x| \le 1$.
In view of Theorem 2.4, we arrive at the following corollary.
Corollary 2.5. Let $f$ be a real-valued continuous function on the plane $|x| < \infty$, $|y| < \infty$, which satisfies a Lipschitz condition on each strip $S_a$ defined by
\[ S_a := \{ |x| \le a, \ |y| < \infty \} \quad (a > 0). \]
Then every initial value problem
\[ y' = f(x, y), \quad y(x_0) = y_0 \]
has a solution which exists for all real $x$.
Proof. For any real number $x$, there exists $a > 0$ such that $x$ is contained in the interval $|x - x_0| \le a$. Consider now the strip
\[ S := \{ |x - x_0| \le a, \ |y| < \infty \}. \]
Since $S$ is contained in the strip
\[ \tilde{S} := \{ |x| \le |x_0| + a, \ |y| < \infty \}, \]
$f$ satisfies all the conditions of Theorem 2.4 on $S$. Thus $\{\phi_k(x)\}$ tends to $\phi(x)$, where $\phi$ is a solution of the initial value problem. This completes the proof. □
Example 2.3. Consider the equation
\[ y' = h_1(x)\, p(\cos(y)) + h_2(x)\, q(\sin(y)) := f(x, y), \]
where $h_1$ and $h_2$ are continuous for all real $x$, and $p, q$ are polynomials. Consider the strip $S_a := \{ |x| \le a, \ |y| < \infty \}$, where $a > 0$. Note that, since $h_1, h_2$ are continuous, there exists $N_a > 0$ such that $|h_i(x)| \le N_a$ for $|x| \le a$. Again, since $p, q$ are polynomials, there exists a constant $C > 0$ such that
\[ \max\big\{ |p'(\xi)|, |q'(\xi)| : \xi \in [-1, 1] \big\} \le C. \]
Let us check that $f$ is Lipschitz continuous on the strip $S_a$. Indeed, for any $(x, y_1), (x, y_2) \in S_a$,
\[ |f(x, y_1) - f(x, y_2)| \le |h_1(x)||p'(\xi_1)||\cos(y_1) - \cos(y_2)| + |h_2(x)||q'(\xi_2)||\sin(y_1) - \sin(y_2)| \le 2 N_a C |y_1 - y_2|. \]
Thus, thanks to Corollary 2.5, every initial value problem for this equation has a solution which exists for all real $x$.
Continuous dependence estimate: Suppose we have two IVPs
\[ y' = f(x, y), \quad y(x_0) = y_1, \tag{2.7} \]
\[ y' = g(x, y), \quad y(x_0) = y_2, \tag{2.8} \]
where $f$ and $g$ are both real-valued continuous functions on the rectangle $R$, and $(x_0, y_1), (x_0, y_2)$ are points in $R$.
Theorem 2.6. Let $f, g$ be continuous functions on $R$, and let $f$ satisfy a Lipschitz condition there with Lipschitz constant $K$. Let $\phi, \psi$ be solutions of (2.7), (2.8) respectively on an interval $I$ containing $x_0$, with graphs contained in $R$. Suppose that the following inequalities are valid:
\[ |f(x, y) - g(x, y)| \le \varepsilon, \quad (x, y) \in R, \tag{2.9} \]
\[ |y_1 - y_2| \le \delta, \tag{2.10} \]
for some non-negative constants $\varepsilon, \delta$. Then
\[ |\phi(x) - \psi(x)| \le \delta\, e^{K|x - x_0|} + \frac{\varepsilon}{K} \big( e^{K|x - x_0|} - 1 \big), \quad \forall x \in I. \tag{2.11} \]
Proof. Since $\phi, \psi$ are solutions of (2.7), (2.8) respectively on an interval $I$ containing $x_0$, we see that
\[ \phi(x) - \psi(x) = y_1 - y_2 + \int_{x_0}^x \big[ f(t, \phi(t)) - g(t, \psi(t)) \big]\, dt = y_1 - y_2 + \int_{x_0}^x \big[ f(t, \phi(t)) - f(t, \psi(t)) \big]\, dt + \int_{x_0}^x \big[ f(t, \psi(t)) - g(t, \psi(t)) \big]\, dt. \]
Assume that $x \ge x_0$. Then, in view of (2.9), (2.10), and the Lipschitz condition on $f$ with Lipschitz constant $K$, we obtain from the above expression
\[ |\phi(x) - \psi(x)| \le \delta + K \int_{x_0}^x |\phi(s) - \psi(s)|\, ds + \varepsilon (x - x_0). \tag{2.12} \]
Define $E(x) = \int_{x_0}^x |\phi(s) - \psi(s)|\, ds$. Then $E'(x) = |\phi(x) - \psi(x)|$ and $E(x_0) = 0$. Therefore, (2.12) becomes
\[ E'(x) - K E(x) \le \delta + \varepsilon (x - x_0). \]
Multiplying this inequality by $e^{-K(x - x_0)}$ and then integrating from $x_0$ to $x$, we have
\[ E(x) e^{-K(x - x_0)} \le \delta \int_{x_0}^x e^{-K(t - x_0)}\, dt + \varepsilon \int_{x_0}^x (t - x_0) e^{-K(t - x_0)}\, dt = \frac{\delta}{K} \big( 1 - e^{-K(x - x_0)} \big) + \frac{\varepsilon}{K^2} + \frac{\varepsilon}{K^2} \big( -K(x - x_0) - 1 \big) e^{-K(x - x_0)}. \]
Multiplying both sides of this inequality by $e^{K(x - x_0)}$, we have
\[ E(x) \le \frac{\delta}{K} \big( e^{K(x - x_0)} - 1 \big) - \frac{\varepsilon}{K^2} \big( K(x - x_0) + 1 \big) + \frac{\varepsilon}{K^2} e^{K(x - x_0)}. \]
We now use this estimate in (2.12) to arrive at the required result for $x \ge x_0$. A similar proof holds in the case $x \le x_0$. This completes the proof. □
As a consequence of Theorem 2.6, we have:
i) Uniqueness theorem: Let $f$ be continuous and satisfy a Lipschitz condition on $R$. If $\phi$ and $\psi$ are two solutions of the IVP (2.1) on an interval $I$ containing $x_0$, then $\phi(x) = \psi(x)$ for all $x \in I$.
ii) Let $f$ be continuous and satisfy a Lipschitz condition on $R$, and let $g_k$ $(k = 1, 2, \ldots)$ be continuous on $R$ such that
\[ |f(x, y) - g_k(x, y)| \le \varepsilon_k, \quad (x, y) \in R, \]
with $\varepsilon_k \to 0$ as $k \to \infty$. Let $y_k \to y_0$ as $k \to \infty$, let $\psi_k$ be a solution of the IVP
\[ y' = g_k(x, y), \quad y(x_0) = y_k, \]
and let $\phi$ be a solution of the IVP (2.1) on some interval $I$ containing $x_0$. Then $\psi_k(x) \to \phi(x)$ on $I$.
Remark 2.1. The Lipschitz condition on $f$ on the rectangle $R$ cannot be dropped if one wants uniqueness of the solution of the IVP (2.1). To see this, consider the IVP
\[ y' = 3 y^{2/3}, \quad y(0) = 0. \]
It is easy to check that for any positive number $k$ the function
\[ \phi_k(x) = \begin{cases} 0, & -\infty < x \le k, \\ (x - k)^3, & k \le x < \infty, \end{cases} \]
is a solution of the above IVP. So there are infinitely many solutions on any rectangle $R$ containing the origin. But note that $f$ does NOT satisfy a Lipschitz condition on $R$.
Remark 2.2. We have shown the existence of a solution of the IVP under the stronger Lipschitz condition on $f$. One can relax the Lipschitz condition and guarantee the existence of a solution of the IVP under the continuity assumption on $f$ alone. This is the content of Peano's theorem.
Peano's theorem: If the function $f(x, y)$ is continuous on a rectangle $R$ and $(x_0, y_0) \in R$, then the IVP $y' = f(x, y)$, $y(x_0) = y_0$ has a solution in a neighborhood of $x_0$.
3. Existence and uniqueness for systems and higher order equations
Consider the system of differential equations in normal form
\[ y_1' = f_1(x, y_1, y_2, \ldots, y_n), \quad y_2' = f_2(x, y_1, y_2, \ldots, y_n), \quad \ldots, \quad y_n' = f_n(x, y_1, y_2, \ldots, y_n). \tag{3.1} \]
Let $\tilde{\Omega} \subset \mathbb{R}^n$, and $\Omega = I \times \tilde{\Omega}$. We introduce
\[ \vec{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} \in \mathbb{R}^n; \qquad \vec{f}(x, \vec{y}) = \begin{pmatrix} f_1(x, \vec{y}) \\ f_2(x, \vec{y}) \\ \vdots \\ f_n(x, \vec{y}) \end{pmatrix} \in \mathbb{R}^n. \]
Then $\vec{f} : \Omega \to \mathbb{R}^n$, and the system of equations (3.1) can be written in the compact form
\[ \vec{y}\,' = \vec{f}(x, \vec{y}). \]
An $n$-th order ODE $y^{(n)} = f(x, y, y', \ldots, y^{(n-1)})$ may also be treated as a system of the type (3.1). To see this, let $y_1 = y, y_2 = y', \ldots, y_n = y^{(n-1)}$. Then from the ODE $y^{(n)} = f(x, y, y', \ldots, y^{(n-1)})$, we have
\[ y_1' = y_2; \quad y_2' = y_3; \quad \ldots; \quad y_{n-1}' = y_n; \quad y_n' = f(x, y_1, y_2, \ldots, y_n). \tag{3.2} \]
Example 3.1. Consider the initial value problem $y^{(3)} + 2y' - (y')^3 + y = x^2 + 1$ with $y(0) = 0$, $y'(0) = 1$, $y''(0) = 1$. We want to convert it into a system of equations. Let $y_1 = y$, $y_2 = y'$ and $y_3 = y''$. Note that $y' = y_1' = y_2$. Using these, the required system takes the form
\[ y_1' = y_2; \quad y_2' = y_3; \quad y_3' = y_2^3 - 2y_2 - y_1 + x^2 + 1, \qquad y_1(0) = 0; \ y_2(0) = 1; \ y_3(0) = 1. \tag{3.3} \]
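Once a higher order IVP has been rewritten as the first order system (3.3), it can be handed to any standard ODE integrator. A minimal numerical sketch (assuming scipy; purely illustrative):

    from scipy.integrate import solve_ivp

    def rhs(x, y):
        y1, y2, y3 = y                  # (y, y', y'') as in (3.3)
        return [y2, y3, y2**3 - 2*y2 - y1 + x**2 + 1]

    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0, 1.0])
    print(sol.y[0, -1])                 # numerical approximation of y(1)
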

Definition 3.1. A solution φ ~ = (φ1 , φ2 , . . . , φn ) of the system ~y 0 = f~(x, ~y ) is a differen-


tiable function on a real interval I such that
ODES AND PDES 19

~
i) (x, φ(x)) ∈Ω
~ 0 ~ ~
ii) φ (x) = f (x, φ(x)) for all x ∈ I .
Theorem 3.1 (Local existence). Let $\vec{f}$ be a continuous vector-valued function defined on
\[ R = \{ |x - x_0| \le a, \ |\vec{y} - \vec{y}_0| \le b \}, \quad a, b > 0, \]
and suppose $\vec{f}$ satisfies a Lipschitz condition on $R$. Then the successive approximations $\{\vec{\phi}_k\}_{k=0}^\infty$,
\[ \vec{\phi}_0(x) = \vec{y}_0, \qquad \vec{\phi}_{k+1}(x) = \vec{y}_0 + \int_{x_0}^x \vec{f}(s, \vec{\phi}_k(s))\, ds, \quad k = 0, 1, 2, \ldots, \]
converge on the interval $I_{con} = \{ |x - x_0| \le \alpha = \min\{a, \frac{b}{M}\} \}$ to a solution $\vec{\phi}$ of the IVP $\vec{y}\,' = \vec{f}(x, \vec{y})$, $\vec{y}(x_0) = \vec{y}_0$ on $I_{con}$, where $M$ is a positive constant such that $|\vec{f}(x, \vec{y})| \le M$ for all $(x, \vec{y}) \in R$. Moreover,
\[ |\vec{\phi}_k(x) - \vec{\phi}(x)| \le \frac{M}{K} \frac{(K\alpha)^{k+1}}{(k+1)!}\, e^{K\alpha}, \quad \forall x \in I_{con}, \]
where $K$ is a Lipschitz constant of $\vec{f}$ on $R$.
Example 3.2. Consider the problem
\[ y_1' = y_2, \quad y_1(0) = 0; \qquad y_2' = -y_1, \quad y_2(0) = 1. \]
This can be written in the compact form $\vec{y}\,' = \vec{f}(x, \vec{y})$, $\vec{y}(0) = \vec{y}_0 = (0, 1)$, where $\vec{f}(x, \vec{y}) = (y_2, -y_1)$. Let us calculate the successive approximations $\vec{\phi}_k(x)$:
\[ \vec{\phi}_0(x) = \vec{y}_0 = (0, 1), \]
\[ \vec{\phi}_1(x) = (0, 1) + \int_0^x (1, 0)\, ds = (x, 1), \]
\[ \vec{\phi}_2(x) = (0, 1) + \int_0^x \vec{f}(s, \vec{\phi}_1(s))\, ds = (0, 1) + \int_0^x (1, -s)\, ds = \Big( x, 1 - \frac{x^2}{2} \Big), \]
\[ \vec{\phi}_3(x) = (0, 1) + \int_0^x \Big( 1 - \frac{s^2}{2}, -s \Big)\, ds = \Big( x - \frac{x^3}{3!}, 1 - \frac{x^2}{2} \Big), \]
\[ \vec{\phi}_4(x) = (0, 1) + \int_0^x \Big( 1 - \frac{s^2}{2}, -s + \frac{s^3}{3!} \Big)\, ds = \Big( x - \frac{x^3}{3!}, 1 - \frac{x^2}{2} + \frac{x^4}{4!} \Big). \]
It is not difficult to show that all the $\vec{\phi}_k$ exist for all real $x$ and $\vec{\phi}_k(x) \to (\sin(x), \cos(x))$. Thus, the unique solution of the given IVP is $\vec{\phi}(x) = (\sin(x), \cos(x))$.
Theorem 3.2 (Non-local existence). Let $\vec{f}$ be a continuous vector-valued function defined on
\[ S = \{ |x - x_0| \le a, \ |\vec{y}| < \infty \}, \quad a > 0, \]
which satisfies a Lipschitz condition there. Then the successive approximations $\{\vec{\phi}_k\}_{k=0}^\infty$ for the IVP $\vec{y}\,' = \vec{f}(x, \vec{y})$, $\vec{y}(x_0) = \vec{y}_0$ exist on $|x - x_0| \le a$, and converge there to a solution $\vec{\phi}$ of the IVP.

Corollary 3.3. Let $\vec{f}$ be a continuous vector-valued function defined on $|x| < \infty$, $|\vec{y}| < \infty$, which satisfies a Lipschitz condition on each strip
\[ S_a = \{ |x| \le a, \ |\vec{y}| < \infty \}, \quad a > 0. \]
Then every initial value problem $\vec{y}\,' = \vec{f}(x, \vec{y})$, $\vec{y}(x_0) = \vec{y}_0$ has a solution which exists for all $x \in \mathbb{R}$.
Example 3.3. Consider the system
\[ y_1' = 3y_1 + x y_3, \quad y_2' = y_2 + x^3 y_3, \quad y_3' = 2x y_1 - y_2 + e^x y_3. \]
This system of equations can be written in the compact form $\vec{y}\,' = \vec{f}(x, \vec{y})$, where
\[ \vec{y} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}, \quad \text{and} \quad \vec{f}(x, \vec{y}) = \begin{pmatrix} 3y_1 + x y_3 \\ y_2 + x^3 y_3 \\ 2x y_1 - y_2 + e^x y_3 \end{pmatrix}. \]
Note that $\vec{f}$ is a continuous vector-valued function defined on $|x| < \infty$, $|\vec{y}| < \infty$. It is Lipschitz continuous on each strip $S_a = \{ |x| \le a, \ |\vec{y}| < \infty \}$ $(a > 0)$, since for $(x, \vec{y}), (x, \vec{\tilde{y}}) \in S_a$,
\[ |\vec{f}(x, \vec{y}) - \vec{f}(x, \vec{\tilde{y}})| = |3(y_1 - \tilde{y}_1) + x(y_3 - \tilde{y}_3)| + |(y_2 - \tilde{y}_2) + x^3 (y_3 - \tilde{y}_3)| + |2x(y_1 - \tilde{y}_1) - (y_2 - \tilde{y}_2) + e^x (y_3 - \tilde{y}_3)| \]
\[ \le (3 + 2|x|)|y_1 - \tilde{y}_1| + 2|y_2 - \tilde{y}_2| + \big( |x| + e^x + |x|^3 \big)|y_3 - \tilde{y}_3| \le \big( 5 + 3a + e^a + a^3 \big)\, |\vec{y} - \vec{\tilde{y}}|. \]
Therefore, every initial value problem for this system has a solution which exists for all real $x$. Moreover, the solution is unique.
Example 3.4. For any Lipschitz continuous function $f$ on $\mathbb{R}$, consider the IVP
\[ y''(x) = f(y), \quad y(0) = 0, \ y'(0) = 0. \]
Then the solution of the above IVP is even. Indeed, since $f$ is Lipschitz continuous, by writing the above IVP in vector form one can show that the problem has a solution $y(x)$ defined on the whole real line. Let $z(x) = y(-x)$. Note that $z(x)$ satisfies the same IVP: $z''(x) = y''(-x) = f(y(-x)) = f(z(x))$, $z(0) = 0$, $z'(0) = -y'(0) = 0$. Hence, by uniqueness, $y(-x) = z(x) = y(x)$ for all $x \in \mathbb{R}$; in other words, $y(x)$ is even in $x$.
Exercise 3.1. Let $f$ be a Lipschitz continuous function on $\mathbb{R}$ which is an odd function. Show that the solution of the IVP
\[ y'''(x) = f(y), \quad y(0) = 0, \ y'''(0) = 0 \]
is odd in $x$.
As in the first order (scalar-valued) ODE case, we have the following continuous dependence estimate and uniqueness theorem.
Theorem 3.4 (Continuous dependence estimate). Let $\vec{f}, \vec{g}$ be two continuous vector-valued functions defined on a rectangle
\[ R = \{ |x - x_0| \le a, \ |\vec{y} - \vec{y}_0| \le b \}, \quad a, b > 0, \]
and suppose $\vec{f}$ satisfies a Lipschitz condition on $R$ with Lipschitz constant $K$. Let $\vec{\phi}, \vec{\psi}$ be solutions of the problems $\vec{y}\,' = \vec{f}(x, \vec{y})$, $\vec{y}(x_0) = \vec{y}_1$ and $\vec{y}\,' = \vec{g}(x, \vec{y})$, $\vec{y}(x_0) = \vec{y}_2$, respectively, on some interval $I$ containing $x_0$. If for $\varepsilon, \delta \ge 0$,
\[ |\vec{y}_1 - \vec{y}_2| \le \delta, \qquad |\vec{f}(x, \vec{y}) - \vec{g}(x, \vec{y})| \le \varepsilon \quad \forall (x, \vec{y}) \in R, \]
then
\[ |\vec{\phi}(x) - \vec{\psi}(x)| \le \delta\, e^{K|x - x_0|} + \frac{\varepsilon}{K} \big( e^{K|x - x_0|} - 1 \big), \quad \forall x \in I. \]
In particular, the problem $\vec{y}\,' = \vec{f}(x, \vec{y})$, $\vec{y}(x_0) = \vec{y}_0$ has at most one solution on any interval containing $x_0$.
Example 3.5. Consider the system
\[ y_1' = y_1 + \varepsilon y_2, \quad y_2' = \varepsilon y_1 + y_2, \tag{3.4} \]
where $\varepsilon$ is a positive constant.
i) Writing the system in vector form, we see that the vector-valued function $\vec{f}_\varepsilon(x, \vec{y}) = (y_1 + \varepsilon y_2, \ \varepsilon y_1 + y_2)$ is continuous, and also Lipschitz continuous on each strip $S_a$ with Lipschitz constant $K_a = 1 + \varepsilon$. Thus, in view of Corollary 3.3, every initial value problem for (3.4) has a solution which exists for all real $x$.
ii) Let $\vec{\phi}_\varepsilon$ be the solution of (3.4) satisfying $\vec{\phi}_\varepsilon(0) = (1, -1)$, and let $\vec{\psi}$ be the solution of $y_1' = y_1$, $y_2' = y_2$ (i.e. of $\vec{y}\,' = \vec{g}(x, \vec{y})$ with $\vec{g}(x, \vec{y}) = (y_1, y_2)$) satisfying $\vec{\psi}(0) = (1, -1)$. Then, for each real $x$, using the Lipschitz condition on $\vec{f}_\varepsilon$, we get
\[ |\vec{\phi}_\varepsilon(x) - \vec{\psi}(x)| = \Big| \int_0^x \big[ \vec{f}_\varepsilon(s, \vec{\phi}_\varepsilon(s)) - \vec{f}_\varepsilon(s, \vec{\psi}(s)) \big]\, ds + \int_0^x \big[ \vec{f}_\varepsilon(s, \vec{\psi}(s)) - \vec{g}(s, \vec{\psi}(s)) \big]\, ds \Big| \]
\[ \le \int_0^x \big| \vec{f}_\varepsilon(s, \vec{\phi}_\varepsilon(s)) - \vec{f}_\varepsilon(s, \vec{\psi}(s)) \big|\, ds + \int_0^x \big| \vec{f}_\varepsilon(s, \vec{\psi}(s)) - \vec{g}(s, \vec{\psi}(s)) \big|\, ds \le (1 + \varepsilon) \int_0^x \big| \vec{\phi}_\varepsilon(s) - \vec{\psi}(s) \big|\, ds + \varepsilon \int_0^x |\vec{\psi}(s)|\, ds. \]
An application of Gronwall's lemma then implies
\[ |\vec{\phi}_\varepsilon(x) - \vec{\psi}(x)| \le \varepsilon\, e^{(1 + \varepsilon)x} \int_0^x |\vec{\psi}(s)|\, ds \to 0 \quad \text{as } \varepsilon \to 0. \]

3.1. Existence and uniqueness for linear systems: Consider the linear system $\vec{y}\,' = \vec{f}(x, \vec{y})$, where the components of $\vec{f}$ are given by
\[ f_j(x, \vec{y}) = \sum_{k=1}^n a_{jk}(x)\, y_k + b_j(x), \quad j = 1, 2, \ldots, n, \]
and the functions $a_{jk}, b_j$ are continuous on an interval $I$ containing $x_0$. Now consider the strip $S_a = \{ |x - x_0| \le a, \ |\vec{y}| < \infty \}$, and suppose $a_{jk}, b_j$ are continuous on $|x - x_0| \le a$. Then there exists a constant $K > 0$ such that $\sum_{j=1}^n |a_{jk}(x)| \le K$ for all $k = 1, 2, \ldots, n$ and all $x$ satisfying $|x - x_0| \le a$. Note that
\[ \Big| \frac{\partial \vec{f}}{\partial y_k}(x, \vec{y}) \Big| = \big| \big( a_{1k}(x), a_{2k}(x), \ldots, a_{nk}(x) \big) \big| = \sum_{j=1}^n |a_{jk}(x)| \le K. \]
Thus $\vec{f}$ satisfies a Lipschitz condition on $S_a$ with Lipschitz constant $K$. Hence we arrive at the following theorem.
Theorem 3.5. Let $\vec{y}\,' = \vec{f}(x, \vec{y})$ be a linear system as described above. If $\vec{y}_0 \in \mathbb{R}^n$, there exists one and only one solution $\vec{\phi}$ of the IVP $\vec{y}\,' = \vec{f}(x, \vec{y})$, $\vec{y}(x_0) = \vec{y}_0$.
Note that a linear system can be written in the equivalent form
\[ \vec{y}\,'(x) = A(x)\, \vec{y}(x) + \vec{b}(x), \]
where
\[ A(x) = \begin{pmatrix} a_{11}(x) & a_{12}(x) & \cdots & a_{1n}(x) \\ a_{21}(x) & a_{22}(x) & \cdots & a_{2n}(x) \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}(x) & a_{n2}(x) & \cdots & a_{nn}(x) \end{pmatrix} \quad \text{and} \quad \vec{b}(x) = \begin{pmatrix} b_1(x) \\ b_2(x) \\ \vdots \\ b_n(x) \end{pmatrix}. \]
Example 3.6. Consider the homogeneous linear system $y_j' = \sum_{k=1}^n a_{jk} y_k$ $(j = 1, 2, \ldots, n)$, where the $a_{jk}$ are continuous on some interval $I$. It is easy to see that $\vec{\psi} = \vec{0}$ is a solution; this is called the trivial solution. Let $K$ be such that $\sum_{j=1}^n |a_{jk}(x)| \le K$. Let $x_0 \in I$, and let $\vec{\phi}$ be any solution of the linear homogeneous system. Now consider the two IVPs
\[ \vec{y}\,' = \vec{f}(x, \vec{y}), \ \vec{y}(x_0) = \vec{0}; \qquad \vec{y}\,' = \vec{f}(x, \vec{y}), \ \vec{y}(x_0) = \vec{\phi}(x_0), \]
where the components of $\vec{f}$ are $f_j(x, \vec{y}) = \sum_{k=1}^n a_{jk} y_k$. Then, according to the continuous dependence estimate theorem (with $\varepsilon = 0$), we have
\[ |\vec{\phi}(x)| = |\vec{\phi}(x) - \vec{0}| \le |\vec{\phi}(x_0) - \vec{0}|\, e^{K|x - x_0|} + \frac{0}{K} \big( e^{K|x - x_0|} - 1 \big) = |\vec{\phi}(x_0)|\, e^{K|x - x_0|}, \quad \forall x \in I. \]
For linear equations of order $n$, we have non-local existence.
Theorem 3.6. Let $a_0, a_1, \ldots, a_{n-1}$ and $b$ be continuous real-valued functions on an interval $I$ containing $x_0$. If $\alpha_0, \alpha_1, \ldots, \alpha_{n-1}$ are any $n$ constants, there exists one and only one solution $\phi$ of the ODE
\[ y^{(n)} + a_{n-1}(x) y^{(n-1)}(x) + \ldots + a_0(x) y = b(x) \quad \text{on } I, \]
\[ \phi(x_0) = \alpha_0, \ \phi'(x_0) = \alpha_1, \ \ldots, \ \phi^{(n-1)}(x_0) = \alpha_{n-1}. \]
Proof. Let $\vec{y}_0 = (\alpha_0, \alpha_1, \ldots, \alpha_{n-1})$. The given ODE can be written as a system of linear equations:
\[ y_j' = y_{j+1} \ (j = 1, 2, \ldots, n-1); \qquad y_n' = b(x) - a_{n-1}(x) y_n - \ldots - a_0(x) y_1. \]
Then, according to Theorem 3.5, the above problem has a unique solution $\vec{\phi} = (\phi_1, \phi_2, \ldots, \phi_n)$ on $I$ satisfying $\phi_1(x_0) = \alpha_0, \phi_2(x_0) = \alpha_1, \ldots, \phi_n(x_0) = \alpha_{n-1}$. But, since
\[ \phi_2 = \phi_1', \ \phi_3 = \phi_2' = \phi_1'', \ \ldots, \ \phi_n = \phi_1^{(n-1)}, \]
the function $\phi_1$ is the required solution on $I$. □

4. General solution of linear equations


4.0.1. Linear independence and Wronskian:
Definition 4.1. Let $I$ be an interval in $\mathbb{R}$ and let $u_1, u_2, \ldots, u_m$ be real-valued functions defined on $I$. We say that the functions $u_1, u_2, \ldots, u_m$ are
i) linearly dependent if there exist constants $c_1, c_2, \ldots, c_m$, not all zero, such that
\[ \sum_{i=1}^m c_i u_i(t) = 0 \quad \forall t \in I; \]
ii) linearly independent if they are not linearly dependent, i.e., $\sum_{i=1}^m c_i u_i(t) = 0 \ \forall t \in I \implies c_i = 0 \ \forall i = 1, 2, \ldots, m$.
Example 4.1. $\sin(t)$ and $\cos(t)$ are linearly independent on $\mathbb{R}$. To see this, let $c_1 \sin(t) + c_2 \cos(t) = 0$ for all $t$. Differentiating, $c_1 \cos(t) - c_2 \sin(t) = 0$. Multiplying the first equation by $\cos(t)$, the second one by $-\sin(t)$ and adding, we obtain $c_2 = 0$. Returning to the first equation, we get $c_1 \sin(t) = 0$ for all $t \in \mathbb{R}$, which implies $c_1 = 0$. Hence $\sin(t)$ and $\cos(t)$ are linearly independent on $\mathbb{R}$.
Example 4.2. If $u_1(t)$ and $u_2(t)$ are two linearly independent functions on the interval $I$, and $v(t)$ is a function such that $v(t) > 0$ on $I$, then $u_1 v$ and $u_2 v$ are linearly independent on $I$. To see this, let $c_1 u_1(t) v(t) + c_2 u_2(t) v(t) = 0$. Since $v(t) > 0$ on $I$, we may divide by $v(t)$ to get $c_1 u_1(t) + c_2 u_2(t) = 0$ for all $t \in I$. Since $u_1$ and $u_2$ are linearly independent, we have $c_1 = 0 = c_2$. In other words, $u_1 v$ and $u_2 v$ are linearly independent on $I$.
Definition 4.2 (Wronskian). Let $I$ be an interval in $\mathbb{R}$ and let $u_1, u_2, \ldots, u_m$ be real-valued functions defined on $I$. We define the Wronskian of $u_1, u_2, \ldots, u_m$, denoted by $W[u_1(t), u_2(t), \ldots, u_m(t)]$, by
\[ W[u_1(t), u_2(t), \ldots, u_m(t)] = \det \begin{pmatrix} u_1(t) & u_2(t) & \cdots & u_m(t) \\ u_1'(t) & u_2'(t) & \cdots & u_m'(t) \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(m-1)}(t) & u_2^{(m-1)}(t) & \cdots & u_m^{(m-1)}(t) \end{pmatrix}. \]
Theorem 4.1. Suppose that $W[u_1(t_0), u_2(t_0), \ldots, u_m(t_0)] \neq 0$ for some $t_0 \in I$. Then $u_1, u_2, \ldots, u_m$ are linearly independent.
Proof. Suppose that $u_1, u_2, \ldots, u_m$ are linearly dependent. Then there exist constants $c_1, c_2, \ldots, c_m$, not all zero, such that $\sum_{i=1}^m c_i u_i(t) = 0$ for all $t \in I$. Differentiating $(m-1)$ times, we have
\[ \begin{aligned} c_1 u_1(t_0) + c_2 u_2(t_0) + \ldots + c_m u_m(t_0) &= 0, \\ c_1 u_1'(t_0) + c_2 u_2'(t_0) + \ldots + c_m u_m'(t_0) &= 0, \\ &\ \ \vdots \\ c_1 u_1^{(m-1)}(t_0) + c_2 u_2^{(m-1)}(t_0) + \ldots + c_m u_m^{(m-1)}(t_0) &= 0. \end{aligned} \]
This can be written in the matrix form $A\, C = 0$, where
\[ A = \begin{pmatrix} u_1(t_0) & u_2(t_0) & \cdots & u_m(t_0) \\ u_1'(t_0) & u_2'(t_0) & \cdots & u_m'(t_0) \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(m-1)}(t_0) & u_2^{(m-1)}(t_0) & \cdots & u_m^{(m-1)}(t_0) \end{pmatrix}, \quad C = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{pmatrix}, \quad 0 = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \]
Since $W[u_1(t_0), u_2(t_0), \ldots, u_m(t_0)] \neq 0$, we see that $A$ is invertible, and therefore $c_1 = c_2 = \ldots = c_m = 0$, which is a contradiction. This completes the proof. □
Example 4.3. $f(t) = \sin(t)$ and $g(t) = t^3$ are linearly independent functions on the interval $(0, \pi)$. To see this, we need to show that the Wronskian is non-zero at some point. Note that
\[ W[\sin(t), t^3]\Big( \frac{\pi}{4} \Big) = \frac{1}{\sqrt{2}} \Big( \frac{\pi}{4} \Big)^2 \Big( 3 - \frac{\pi}{4} \Big) \neq 0. \]
Hence $f(t) = \sin(t)$ and $g(t) = t^3$ are linearly independent functions on the interval $(0, \pi)$.
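Wronskians of explicit functions are routine to compute by machine; sympy ships a helper for exactly this. A minimal sketch (assuming a recent sympy; illustrative only):

    import sympy as sp

    t = sp.symbols("t")
    W = sp.wronskian([sp.sin(t), t**3], t)        # sin(t)*3*t**2 - t**3*cos(t)
    print(sp.simplify(W.subs(t, sp.pi/4)))         # nonzero -> independence on (0, pi)
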
Remark 4.1. Let us make the following remarks:
i) If $u_1, u_2, \ldots, u_m$ are linearly dependent, then $W[u_1(t), u_2(t), \ldots, u_m(t)] = 0$ for all $t \in I$.
ii) The converse of i) is not true in general. For example, let $m = 2$, $u_1(t) = t^2$ and $u_2(t) = |t|t$. We first show that $u_1$ and $u_2$ are linearly independent. Let $c_1 u_1(t) + c_2 u_2(t) = 0$. Then
\[ \begin{cases} (c_1 + c_2) t^2 = 0, & t > 0, \\ (c_1 - c_2) t^2 = 0, & t < 0 \end{cases} \implies c_1 = c_2 = 0. \]
But
\[ W[t^2, |t|t] = \det \begin{pmatrix} t^2 & |t|t \\ 2t & 2|t| \end{pmatrix} = 0 \quad \forall t \in \mathbb{R}. \]
Theorem 4.2. Let $I$ be an interval in $\mathbb{R}$ and let $a_i : I \to \mathbb{R}$ $(i = 0, 1, \ldots, m-1)$ be continuous. Let $u_1, u_2, \ldots, u_m$ be $m$ solutions of the linear homogeneous ODE (1.2). Suppose that $u_1, u_2, \ldots, u_m$ are linearly independent. Then
\[ W[u_1(t), u_2(t), \ldots, u_m(t)] \neq 0 \quad \forall t \in I. \]
Proof. Suppose that the result is NOT true. Then there exists $t_0 \in I$ such that
\[ W[u_1(t_0), u_2(t_0), \ldots, u_m(t_0)] = 0. \]
Then there exists $C := (c_1, c_2, \ldots, c_m)^T \neq 0$ such that $AC = 0$, where
\[ A := \begin{pmatrix} u_1(t_0) & u_2(t_0) & \cdots & u_m(t_0) \\ u_1'(t_0) & u_2'(t_0) & \cdots & u_m'(t_0) \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(m-1)}(t_0) & u_2^{(m-1)}(t_0) & \cdots & u_m^{(m-1)}(t_0) \end{pmatrix}. \]
Now define $v(t) = \sum_{i=1}^m c_i u_i(t)$, $t \in I$. Then
\[ v(t_0) = c_1 u_1(t_0) + \ldots + c_m u_m(t_0) = 0, \quad v'(t_0) = c_1 u_1'(t_0) + \ldots + c_m u_m'(t_0) = 0, \quad \ldots, \quad v^{(m-1)}(t_0) = c_1 u_1^{(m-1)}(t_0) + \ldots + c_m u_m^{(m-1)}(t_0) = 0. \]
Since each $u_i$ $(i = 1, 2, \ldots, m)$ solves the linear homogeneous ODE (1.2), $v$ satisfies the following initial value problem:
\[ v^{(m)}(t) + a_{m-1}(t) v^{(m-1)}(t) + \cdots + a_1(t) v'(t) + a_0(t) v(t) = 0, \ t \in I; \quad v(t_0) = v'(t_0) = \ldots = v^{(m-1)}(t_0) = 0. \]
Thus, $v(t) = 0$ for all $t \in I$. In other words, $\sum_{i=1}^m c_i u_i(t) = 0$ for all $t \in I$ with the above choice of the constants $c_1, c_2, \ldots, c_m$, not all zero, which is a contradiction. This finishes the proof. □
Corollary 4.3. Let $u_1, u_2, \ldots, u_m$ be $m$ solutions of the linear homogeneous ODE (1.2). Then $u_1, u_2, \ldots, u_m$ are linearly independent if and only if
\[ W[u_1(t_0), u_2(t_0), \ldots, u_m(t_0)] \neq 0 \quad \text{for some } t_0 \in I. \]
Example 4.4. $\sin(x)$ and $\cos(x)$ are two linearly independent solutions of the homogeneous ODE $y''(x) + y(x) = 0$. Indeed, both $\sin(x)$ and $\cos(x)$ solve the ODE. Moreover,
\[ W[\sin(x), \cos(x)] = -1 \quad \forall x \in \mathbb{R}. \]
Therefore, they are two linearly independent solutions of the homogeneous ODE $y''(x) + y(x) = 0$.
Theorem 4.4. The real vector space $X$ defined in Theorem 1.1 is finite dimensional, and $\dim X = m$.
Corollary 4.5. If $u_1, u_2, \ldots, u_m$ are any linearly independent solutions of the linear homogeneous ODE (1.2), then any solution $u$ of (1.2) can be written as
\[ u(t) = c_1 u_1(t) + c_2 u_2(t) + \ldots + c_m u_m(t), \quad t \in I. \]
Example 4.5. The general solution of the ODE $y''(x) + y(x) = 0$ is given by
\[ y(x) = c_1 \sin(x) + c_2 \cos(x), \quad c_1, c_2 \in \mathbb{R}. \]
Theorem 4.6 (Abel's Theorem). Let $u_1, u_2, \ldots, u_m$ be $m$ solutions of the linear homogeneous ODE (1.2) on an interval $I$, and let $t_0$ be any point in $I$. Then
\[ W[u_1, u_2, \ldots, u_m](t) = \exp\Big( -\int_{t_0}^t a_{m-1}(s)\, ds \Big)\, W[u_1, u_2, \ldots, u_m](t_0). \]
As a consequence, $W[u_1, u_2, \ldots, u_m](t)$ is either identically zero or it never vanishes.

Proof. We first prove the result for $m = 2$. In this case $W[u_1, u_2] = u_1 u_2' - u_1' u_2$, and hence $W'[u_1, u_2] = u_1 u_2'' - u_1'' u_2$. Since $u_1$ and $u_2$ are solutions of the linear homogeneous ODE, we have
\[ u_i''(t) = -a_1(t) u_i'(t) - a_0(t) u_i(t), \quad i = 1, 2. \]
Thus
\[ W'[u_1, u_2] = -a_1 \big( u_1 u_2' - u_1' u_2 \big) = -a_1 W[u_1, u_2]. \]
We see that $W[u_1, u_2]$ satisfies the first order linear homogeneous equation $u'(t) + a_1(t) u(t) = 0$, and hence
\[ W[u_1(t), u_2(t)] = C \exp\Big( -\int_{t_0}^t a_1(s)\, ds \Big). \]
Putting $t = t_0$ in the above expression, we obtain $C = W[u_1(t_0), u_2(t_0)]$, and hence
\[ W[u_1(t), u_2(t)] = \exp\Big( -\int_{t_0}^t a_1(s)\, ds \Big)\, W[u_1(t_0), u_2(t_0)]. \]
For general $m$, one needs to make use of some general properties of the determinant. From the definition of $W = W[u_1, u_2, \ldots, u_m]$ as a determinant, it follows that its derivative $W'$ is a sum of $m$ determinants
\[ W' = V_1 + V_2 + \ldots + V_m, \]
where $V_k$ differs from $W$ only in its $k$-th row, and the $k$-th row of $V_k$ is obtained by differentiating the $k$-th row of $W$. The first $m - 1$ determinants are all zero, as each has two identical rows. Hence
\[ W' = \det \begin{pmatrix} u_1 & u_2 & \cdots & u_m \\ u_1' & u_2' & \cdots & u_m' \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(m)} & u_2^{(m)} & \cdots & u_m^{(m)} \end{pmatrix} = \det \begin{pmatrix} u_1 & u_2 & \cdots & u_m \\ u_1' & u_2' & \cdots & u_m' \\ \vdots & \vdots & \ddots & \vdots \\ -\sum_{j=0}^{m-1} a_j u_1^{(j)} & -\sum_{j=0}^{m-1} a_j u_2^{(j)} & \cdots & -\sum_{j=0}^{m-1} a_j u_m^{(j)} \end{pmatrix}. \]
The value of a determinant is unchanged if we multiply any row by a number and add it to the last row. If we multiply the first row by $a_0$, the second row by $a_1$, ..., the $(m-1)$-th row by $a_{m-2}$, and add these to the last row, we get
\[ W' = \det \begin{pmatrix} u_1 & u_2 & \cdots & u_m \\ u_1' & u_2' & \cdots & u_m' \\ \vdots & \vdots & \ddots & \vdots \\ -a_{m-1} u_1^{(m-1)} & -a_{m-1} u_2^{(m-1)} & \cdots & -a_{m-1} u_m^{(m-1)} \end{pmatrix} = -a_{m-1} W. \]
Thus, $W$ satisfies the linear first order equation $u'(t) + a_{m-1}(t) u(t) = 0$, and hence
\[ W[u_1, u_2, \ldots, u_m](t) = \exp\Big( -\int_{t_0}^t a_{m-1}(s)\, ds \Big)\, W[u_1, u_2, \ldots, u_m](t_0). \qquad \square \]
Theorem 4.7. Let $u_p$ be a particular solution of the non-homogeneous ODE
\[ L(u) := u^{(m)}(t) + a_{m-1}(t) u^{(m-1)}(t) + \ldots + a_0(t) u(t) = b(t) \quad \forall t \in I, \]
and let $v$ be any solution of the corresponding linear homogeneous ODE. Then any solution $u$ of the non-homogeneous ODE can be written as $u = v + u_p$.
Proof. Let $u$ be any solution of the non-homogeneous ODE and $u_p$ a particular solution. Then $u - u_p$ is a solution of the homogeneous ODE. Hence $u - u_p = c_1 v_1(t) + c_2 v_2(t) + \ldots + c_m v_m(t)$, where $v_1, v_2, \ldots, v_m$ are any linearly independent solutions of the linear homogeneous ODE. Since any solution $v$ of the homogeneous ODE can be written in the form $c_1 v_1(t) + c_2 v_2(t) + \ldots + c_m v_m(t)$, the result follows easily. □
Later, we will use the method of variation of parameters to find a particular solution $u_p$.
Example 4.6. Let $x_1, x_2, x_3$ and $x_4$ be solutions of the linear homogeneous ODE $x^{(4)}(t) - 3x^{(3)}(t) + 2x'(t) - 5x(t) = 0$ such that $W[x_1, x_2, x_3, x_4](0) = 5$. Then by Abel's theorem
\[ W[x_1, x_2, x_3, x_4](6) = \exp\Big( -\int_0^6 (-3)\, ds \Big)\, W[x_1, x_2, x_3, x_4](0) = 5 e^{18}. \]
Example 4.7. The functions $u_1(t) = \sin t$ and $u_2(t) = t^2$ cannot both be solutions of a differential equation $u''(t) + a_1(t) u'(t) + a_0(t) u(t) = 0$, where $a_0, a_1$ are continuous functions. To see this, consider the Wronskian of $u_1$ and $u_2$. Note that $W[u_1(t), u_2(t)] = 2t \sin t - t^2 \cos t$. Thus $W[u_1(\frac{\pi}{2}), u_2(\frac{\pi}{2})] = \pi \neq 0$, while $W[u_1(0), u_2(0)] = 0$. Thus, in view of the previous theorem, $u_1$ and $u_2$ cannot both be solutions.
Example 4.8. Let us explain why $e^t$, $\sin(t)$ and $t$ cannot be solutions of a third order homogeneous equation with continuous coefficients. Notice that
\[ W[e^t, \sin(t), t](0) = 0; \qquad W[e^t, \sin(t), t]\Big( \frac{\pi}{2} \Big) = \Big( 2 - \frac{\pi}{2} \Big) e^{\pi/2} \neq 0. \]
If they were solutions of a third order homogeneous equation with continuous coefficients, then by Abel's theorem the Wronskian would be either identically zero or never zero. Therefore, $e^t$, $\sin(t)$ and $t$ cannot be solutions of a third order homogeneous equation with continuous coefficients.
4.1. Linear homogeneous equations with constant coefficients. We are interested in the ODE
\[ u^{(m)}(t) + a_{m-1} u^{(m-1)}(t) + \ldots + a_0 u(t) = 0, \quad \text{where } a_i \in \mathbb{R}. \tag{4.1} \]
Define the differential operator $L$ with constant coefficients as
\[ L \equiv \sum_{i=0}^m a_i \frac{d^i}{dt^i}, \quad a_i \in \mathbb{R}, \text{ with } a_m = 1. \]
For $u : \mathbb{R} \to \mathbb{R}$ which is $m$-times differentiable, we define
\[ L u(t) = \sum_{i=0}^m a_i \frac{d^i u(t)}{dt^i}. \]
In this notation, we are interested in finding $u$ such that $Lu(t) = 0$ for all $t \in \mathbb{R}$. Define a polynomial
\[ p(\lambda) = \lambda^m + a_{m-1} \lambda^{m-1} + \ldots + a_1 \lambda + a_0. \tag{4.2} \]
The polynomial $p$ is called the characteristic polynomial of the operator $L$.
Remark 4.2. We observe the following:
a) For a given polynomial $p$ of degree $m$, we can associate a differential operator $L_p$ such that $p$ is the characteristic polynomial of $L_p$.
b) Let $p$ be a polynomial of degree $m$ such that $p = p_1 p_2$, where $p_1$ and $p_2$ are polynomials with real coefficients. Then for any $m$-times differentiable function $u$,
\[ L_p(u) = L_{p_1} L_{p_2}(u) = L_{p_2} L_{p_1}(u). \]
Thus, if $u$ is a solution of $L_{p_1} u = 0$ or $L_{p_2} u = 0$, then $u$ is a solution of $L_p u = 0$.
By the fundamental theorem of algebra, we can write the characteristic polynomial as
\[ p(\lambda) = (\lambda - \lambda_1) \cdots (\lambda - \lambda_m), \quad \lambda_i \in \mathbb{C}. \]
Note that the $\lambda_i$'s may not be distinct. Suppose that $\lambda_i$ is not real. Then $\bar{\lambda}_i$ is a root of $p$, and hence $\bar{\lambda}_i = \lambda_j$ for some $j$. Hence we can write
\[ p(\lambda) = (\lambda - \lambda_1)^{m_1} \cdots (\lambda - \lambda_{k_1})^{m_{k_1}} (\lambda - z_1)^{n_1} (\lambda - \bar{z}_1)^{n_1} \cdots (\lambda - z_{k_2})^{n_{k_2}} (\lambda - \bar{z}_{k_2})^{n_{k_2}}, \]
where $\lambda_i \in \mathbb{R}$, $z_j = x_j + i y_j$ with $y_j \neq 0$, and $\sum_{i=1}^{k_1} m_i + 2 \sum_{j=1}^{k_2} n_j = m$. Thus
\[ p(\lambda) = p_1 p_2 \cdots p_{k_1} q_1 q_2 \cdots q_{k_2}; \qquad p_i = (\lambda - \lambda_i)^{m_i}, \quad q_j = \big( (\lambda - z_j)(\lambda - \bar{z}_j) \big)^{n_j}. \]
Therefore, if we find $u$ with $L_{p_i}(u) = 0$ or $L_{q_j}(u) = 0$, then $u$ is a solution of $L_p u = 0$.


Finding the solutions of $L_{p_i}(u) = 0$: We need to find $u$ such that $\big( \frac{d}{dt} - \lambda_i \big)^{m_i} u = 0$, i.e. $\big( \frac{d}{dt} - \lambda_i \big) \big( \frac{d}{dt} - \lambda_i \big) \cdots \big( \frac{d}{dt} - \lambda_i \big) u = 0$. Now $\big( \frac{d}{dt} - \lambda_i \big) u = 0 \implies u' - \lambda_i u = 0$, and hence $u = e^{\lambda_i t}$. Suppose that $\big( \frac{d}{dt} - \lambda_i \big)^2 u = 0$. Set $v = \big( \frac{d}{dt} - \lambda_i \big) u$. Then $\big( \frac{d}{dt} - \lambda_i \big) v = 0$, and hence $v = e^{\lambda_i t}$. Thus,
\[ e^{\lambda_i t} = u' - \lambda_i u \implies u' e^{-\lambda_i t} - u \lambda_i e^{-\lambda_i t} = 1 \implies \big( u e^{-\lambda_i t} \big)' = 1 \implies u e^{-\lambda_i t} = t + C \implies u(t) = t e^{\lambda_i t} + C e^{\lambda_i t}. \]
Therefore, $u = t e^{\lambda_i t} + C e^{\lambda_i t}$ solves $L_{p_i} u = 0$. This yields that $L_{p_i}(t e^{\lambda_i t}) = 0$, and hence $e^{\lambda_i t}$ and $t e^{\lambda_i t}$ are solutions of $L_{p_i} u = 0$.
Theorem 4.8. The functions eλi t , teλi t , . . . , tmi −1 eλi t are mi linearly independent solution
of Lpi u = 0 and hence solution of Lu = 0.
Proof. We can easily check that Lpi (tj eλi t ) = 0 if j ≤ mi −1. Let us show the independence
P i −1
of the functions. Let m j=0 Cj+1 t e
j λi t
= 0. Putting t = 0, we get C1 = 0, and hence
Pmi −1 j−1 λi t
j=1 Cj+1 t e = 0. Again putting t = 0 in the above expression, we have C2 = 0.
Continuing like this, we get C1 = C2 = . . . = Cmi = 0. This completes the proof. 
Finding the solutions of Lqj (u) = 0: Consider the case nj = 1. Then we have ( dtd −
z̄j )( dtd − zj )u = 0, and hence ezj t is a solution. Now
ezj t = exj t+iyj t = exj t cos(yj t) + i sin(yj t)
 

Lqj u = 0 gives that its real part and imaginary part are zero. Again, if u = u1 + iu2 , then
Lu = L(u1 ) + iL(u2 ) and hence L(u) = 0 iff L(u1 ) = 0 and L(u2 ) = 0. This implies that
exj t cos(yj t), exj t sin(yj t) are solutions of Lqj u = 0, and they are linearly independent.
ODES AND PDES 29

Theorem 4.9. The functions


exj t sin(yj t), texj t sin(yj t), . . . , tnj −1 exj t sin(yj t) ,
exj t cos(yj t), texj t cos(yj t), . . . , tnj −1 exj t cos(yj t)
are 2nj -linearly independent solutions of Lqj u = 0. Moreover, a basis for the space of
solutions of Lu = 0 is given by
tj eλi t j = 0, 1, . . . , mi − 1 (i = 1, 2, . . . , k1 )
tj exi t sin yi t, tj exi t cos yi t j = 0, 1, . . . , ni − 1 (i = 1, 2, . . . , k2 ) .
Example 4.9. Solve the initial value problem
2y 00 + y 0 − y = 0, y(0) = 1, y 0 (0) = 2 .
We need to find two linearly independent solutions of the above ODE. The characteristic
polynomial p in this case is given by p(λ) = 2λ2 + λ − 1. Since p(λ) = (2λ − 1)(λ + 1),
the roots of p(λ) are −1 and 21 . Thus the general solution is given by
1
y(x) = c1 e−x + c2 e 2 x .
In view of the initial conditions, we have
1
c1 + c2 = 1, −c1 + c2 = 2 .
2
1
Solving the above equation, we get c1 = −1 and c2 = 2. Therefore, y(x) = −e−x + 2e 2 x is
the desired solution.
Example 4.10. Find a general solution of the ODE:
y (4) + y = 0.
This equation arises in the study of the deflection of beams. The characteristic polynomial
is given by p(λ) = λ4 + 1. Now p(λ) can be written as
√ √ √
p(λ) = (λ2 + 1)2 − ( 2λ)2 = λ2 + 2λ + 1 λ2 − 2λ + 1 .
 

Thus the roots of p(λ) are given by


1 1
√ (1 ± i), − √ (1 ± i) .
2 2
Thus every real solution has the form
x 
√ x x  −x
√  x x 
y(x) = e 2 c1 cos( √ ) + c2 sin( √ ) + e 2 c3 cos( √ ) + c4 sin( √ )
2 2 2 2
where c1 , c2 , c3 and c4 are real constants.
Example 4.11. Let us find the solution φ of the initial-value problem:
y (3) + y = 0, y(0) = 0, y 0 (0) = 1, y 00 (0) = 0 .
The characteristic

polynomial p(λ) = λ3 + 1 = (λ + 1)(λ2 − λ + 1). Thus root of p is given
by −1, 1±2 3i . Therefore, the general solution is given by
√ √
−x x 3 3 
y(x) = c1 e + e 2 c2 cos( x) + c3 sin( x) .
2 2
30 A. K. MAJEE

Note that
y(0) = 0 =⇒ c1 + c2 = 0

y 0 (0) = 1 =⇒ −2c1 + c2 + 3c3 = 2

y 00 (0) = 0 =⇒ c1 − c2 + 3c3 = 0 .

Solving the above equations, we get c1 = − 52 , c2 = 25 and c3 = 5√4 3 . Thus, the solution is
√ √
2 −x x 2 3 4 3 
y(x) = − e + e 2 cos( x) + √ sin( x) .
5 5 2 5 3 2
4.2. Finding particular solution to non-homogeneous ODE (Method of Varia-
tion of parameters). Let ui (1 ≤ i ≤ m) be m-linearly independent solutions to the
linear
Pm homogeneous ODE (1.2). We want to find functions ci (t) suchPthat up (t) =
m
c
i=1 i (t)u (t)
i P is a solution to the non-homogeneous ODE. Let u(t) = i=1 i (t)ui (t).
c
0 m  0 0
 Pm 0
Then u (t) = i=1 ci (t)ui (t) + ci (t)ui (t) . Assume that i=1 ci (t)ui (t) = 0. Then
m
X m
X
u0 (t) = ci (t)u0i (t) and hence u00 (t) =
 0
ci (t)u0i (t) + ci (t)u00 (t) .

i=1 i=1
Pm 0 0
Pm
Again assume that i=1 ci (t)ui (t) = 0. Then u00 (t) = 00
i=1 ci (t)ui (t). Therefore by
assuming
m
X m
X m
X (m−2)
c0i (t)ui (t) = 0, c0i (t)u0i (t) = 0, . . . , c0i (t)ui (t) = 0 ,
i=1 i=1 i=1

we get
m
X
(j) (j)
u (t) = ci (t)ui (t) = 0 j = 0, 1, . . . , m − 1 .
i=1

(m−1) (m)
Then, u(m) (t) = m
P  0 
i=1 ci (t)ui (t)+ci (t)ui (t) . Now u satisfies the non-homogeneous
equation iff
m
X m
X m
X
 0 (m−1) (m)  (m−1)
ci (t)ui (t) + ci (t)ui (t) + am−1 (t) ci (t)ui (t) + . . . + a0 (t) ci (t)ui (t) = b(t)
i=1 i=1 i=1
m
X m
X h i
(m−1) (m) (m−1)
⇔ c0i (t)ui (t) + ci (t) ui (t) + am−1 (t)ui (t) + . . . + a0 (t)ui (t) = b(t)
i=1 i=1
m
X (m−1)
⇔ c0i (t)ui (t) = b(t) .
i=1

We now arrive at the following theorem.


Theorem 4.10. The function u(t) = m
P
i=1 ci (t)ui (t) is a particular solution of the non-
homogeneous ODE
u(m) (t) + am−1 (t)u(m−1) (t) + . . . + a0 (t)u(t) = b(t) t∈I
ODES AND PDES 31

if c1 , c2 , . . . , cm satisfy the the following matrix equation A C = B, where


   0
u1 u2 ... um c1 0
 
 u01 u 0
2 . . . u 0
m 
 0
 c2  0
A =  .. .. ..  , C =   and B =  .  .

.. .
.  .. 
 . . . .   . 
(m−1) (m−1) (m−1)
u1 u2 . . . um c0m b
Remark 4.3. Note that, since u1 , u2 , . . . , um are linearly independent, the Wronskian is
non-zero. Thus the above matrix equation has an unique solution for C. Moreover, by
using Cramer’s rule we have explicit formula for c0i , namely
Wi (t)b(t)
c0i (t) = (1 ≤ i ≤ m)
W [u1 , u2 , . . . , um ](t)
where Wi is the determinant obtained from W [u R t1 , u0 2 , . . . , ui , . . . , um ] by replacing i-th
column by 0, 0, . . . , 0, 1. One can take ci (t) = t0 ci (s) ds for some t0 ∈ I. Thus, the
particular solution up now takes of the form
m Z t
X Wi (s)b(s)
up (t) = ui (t) ds .
i=1 t0
W [u1 , u 2 , . . . , u m ](s)
Remark 4.4. Fix t0 ∈ I. Then the particular solution up of the non-homogeneous ODE
u(m) (t) + am−1 (t)u(m−1) (t) + . . . + a0 (t)u(t) = b(t) t ∈ I
Rt
given by up (t) m Wi (s)b(s)
P
i=1 ui (t)ci (t), where ci (t) = t0 W [u1 ,u2 ,...,um ](s) ds satisfies the following
initial conditions
up (t0 ) = u0p (t0 ) = . . . = u(m−1)
p (t0 ) = 0 ,
Pm 0 (j)
Note that, since i=1 ci (t)ui (t) = 0 for all 0 ≤ j ≤ (m − 2), we have
Xm
(j)
u(j)
p (t) = ci (t)ui (t) ∀ 0 ≤ j ≤ (m − 1) .
i=1
(m−1)
Since ci (t0 ) = 0, it easily follows that up (t0 ) = u0p (t0 ) = . . . = up (t0 ) = 0.
Example 4.12. Find a general solution of the ODE
y (3) + y 00 + y 0 + y = 1 .
The characteristic polynomial associated to the homogeneous ODE p(λ) is given by
p(λ) = λ3 + λ2 + λ + 1 = (λ + 1)(λ2 + 1) = (λ + 1)(λ + i)(λ − i)
Thus, the roots of p are i, −i, and −1. Hence three independent solutions of the homoge-
neous ODE are given by
u1 (t) = e−t , u2 (t) = cos(t), u3 (t) = sin(t) .
Let us first calculate W [u1 , u2 , u3 ](0). Notice that
 
1 1 0
W [u1 , u2 , u3 ](0) = det −1 0 1 = 2 .
1 −1 0
32 A. K. MAJEE
Rt
Thus, W [u1 , u2 , u3 ](t) = exp[− 0 ds]W [u1 , u2 , u3 ](0) = 2e−t . Observe that c0i (t) = 21 et Wi (t).
To find a particular solution, we need to calculate Wi (t). Note that
 
0 cos(t) sin(t)
W1 (t) = det 0 − sin(t) cos(t)  = 1 ,
1 − cos(t) − sin(t)
 −t 
e 0 sin(t)
W2 (t) = det −e−t 0 cos(t)  = −e−t sin(t) + cos(t) ,
 
e−t 1 − sin(t)
 −t 
e cos(t) 0
W3 (t) = det −e−t − sin(t) 0 = e−t − sin(t) + cos(t) .
 
e−t − cos(t) 1
Thus,
Z t
1 1
c1 (t) = es ds = [et − 1]
2 0 2
1 t
Z
 1 
c2 (t) = − sin(s) + cos(s) ds = cos(t) − sin(t) − 1 ,
2 0 2
Z t
1  1 
c3 (t) = − sin(s) + cos(s) ds = cos(t) + sin(t) − 1 .
2 0 2
Thus the particular solution up is given by
3
X 1
cos(t) + sin(t) + e−t .

up (t) = ci (t)ui (t) = 1 −
i=1
2
Hence, a general solution to the non-homogeneous ODE is given by
3
X
u(t) = up + ci ui (t) = 1 + c1 e−t + c2 cos(t) + c3 sin(t) ,
i=1

where c1 , c2 and c3 are arbitrary constants.


Example 4.13. We would like to find general solution of the ODE
y 000 − y 0 = x.
The characteristic polynomial p(λ) = λ3 − λ, and its roots are given by 0, 1, −1. Thus,
three independent solution of the homogeneous ODE y 000 − y 0 = 0 are u1 (x) = 1, u2 (x) =
ex and u3 (x) = e−x . Note R t that W [u1 , u2 , u3 ](0) = 2, and hence by Abel’s theorem,
0
W [u1 , u2 , u3 ](t) = exp[− 0 0 ds]W [u1 , u2 , u3 ](0) = 2. Observe that ci (t) = 2t Wi (t). To
find a particular solution, we need to calculate Wi (t). Note that
   
0 et e−t 1 0 e−t
W1 (t) = det 0 et −e−t  = −2 , W2 (t) = det 0 0 −e−t  = e−t ,
1 et e−t 0 1 e−t
 
1 et 0
W3 (t) = det 0 et 0 = et .
0 et 1
ODES AND PDES 33

Thus,
t
t2 1 t −s
Z Z
1
se ds = 1 − e−t − te−t ,

c1 (t) = − s ds = , c2 (t) =
0 2 2 0 2
Z t
1 1 t2 1
ses ds = 1 − et + tet , and hence up = − 1 + (et − e−t ) .
 
c3 (t) =
2 0 2 2 2
Therefore, a general solution to the non-homogeneous ODE is given by
t2
u(t) = c1 + c2 et + c3 e−t + − 1,
2
where c1 , c2 and c3 are arbitrary constants.
4.2.1. Euler’s Equation: Consider the equation
tm u(m) (t) + am−1 tm−1 u(m−1) (t) + . . . + a1 tu0 (t) + a0 u(t) = b(t) (4.3)
where a0 , a1 , . . . , am−1 are constants. Consider t > 0. Let s = log(t) (for t < 0, we must
use s = log(−t)). Then
du du 1
=
dt ds t
2
du d2 u du  1
= −
dt2 ds2 ds t2
3 3
du du d2 u du  1
= − 3 + 2
dt3 ds3 ds2 ds t3
.. .. ..
. . .
m m m−1
d u  d u d u du  1
= + C m−1 + . . . + C 1 ,
dtm dsm dsm−1 ds tm
for some constants C1 , C2 , . . . , Cm−1 . Substituting these in (4.3), we obtain a non-homogeneous
ODE with constant coefficients
dm u dm−1 u du
m
+ Bm−1 m−1
+ . . . + B 1 + B0 u = b(es )
ds ds ds
for some constants B0 , B1 , . . . , Bm−1 . One can solve this ODE and then substitute s =
log(t) to get the solution of the Euler’s equation (4.3).
Example 4.14. Solve
x2 y 00 + xy 0 − y = x3 x > 0.
dy d2 y dy
Let x = eu . Then xy 0 = du
and x2 y 00 = du2
− du
. Therefore, we have
d2 y
− y = e3u u ∈ R .
du2
The characteristic polynomial corresponding to the homogeneous ODE is given by
p(λ) = λ2 − 1 = (λ + 1)(λ − 1) .
Therefore, y1 = eu and y2 = e−u are two linearly independent solutions. Note that
W [y1 , y2 ](u) = −2. Moreover
0 e−u
   u 
−u e 0
W1 (u) = det = −e , W2 (u) = det u = eu .
1 −e−u e 1
34 A. K. MAJEE

Hence
Z u Z u
1 1 1 1
e dr = e2u − 1 ,
2r
e4r dr = 1 − e4u ,
 
c1 (u) = c2 = −
2 0 4 2 0 8
and therefore the particular solution yp is given by
1 1 1
yp = e3u − eu + e−u .
8 4 8
Therefore the general solution is
1 C 2 x3
y = C1 eu + C2 e−u + e3u = C1 x + + ,
8 x 8
where C1 , C2 are arbitrary constants.
4.3. On Comparison theorems of Sturns: Consider a general 2nd order linear homo-
geneous ODE
y 00 (x) + p(x)y 0 (x) + q(x)y(x) = 0 , p, q ∈ C(I) . (4.4)
We know that, if y1 (x) is a solution of (4.4), then cy1 (x) is also a solution, where c is a
constant. Is c(x)y1 (x) a solution? Answer is yes, and it is given by the following theorem.
Theorem 4.11. Let y1 (x) be a solution of (4.4) with y1 (x) 6= 0 on I. Then
Z − R p(x) dx
e
y2 (x) = y1 (x) dx
y12 (x)
is a solution of (4.4). Moreover, y1 and y2 are linearly independent.
Proof. Let y2 (x) = v(x)y1 (x). We would like to find v(x) such that y2 (x) satisfies (4.4).
To do so, suppose y2 (x) satisfies (4.4). Then by calculating y200 (x) + p(x)y20 (x) + q(x)y2 (x),
we see that
v 0 (x)[2y10 (x) + p(x)y1 (x)] + y1 (x)v 00 (x) = 0
where we have used the fact that y1 is a solution of (4.4). Let w = v 0 . Then w satisfies a
first oder ODE given by
2y10 (x)
w0 (x) + [ + p(x)]w(x) = 0.
y1 (x)
Hence by the method of variation of parameters, we obtan
R
e− p(x) dx
w(x) = c 2 .
y1 (x)
Since we only needR one function v(x) so that v(x)y1 (x) is a solution, we can let c = 1 and
R e− p(x) dx R e− R p(x) dx
hence v(x) = y12 (x)
dx. Thus, y2 (x) = y1 (x) y12 (x)
dx is a solution of (4.4).
Let
R us calculate the Wronskian of y1 and y2 . It is easy to see that W [y1 , y2 ](x) =
− p(x) dx
e 6= 0. Therefore, y1 and y2 are linearly independent. 
Example 4.15. Find a general solution of
y0 1
y 00 − + 2 y = 0 , x > 0.
x x
ODES AND PDES 35

Note that y1 = x is a solution and y1 (x) 6=R 0. To find another independent solution, using
R − − 1 dx
above theorem, we obtain y2 (x) = x e x2x dx = x ln(x). Therefore, general solution
is given by
y(x) = c1 x + c2 x ln(x) .
Example 4.16. Find general solution of the ODE
1
(3t − 1)2 y 00 (t) + 3(3t − 1)y 0 (t) − 9y(t) = 0 , t >
.
3
Note that y1 (t) = (3t − 1) is a solution, and y1 (t) =
6 0. To find another independent
solution, using Theorem 4.11, we obtain
Z − R 3 dt
e 3t−1 1
y2 (t) = (3t − 1) dt = − .
(3t − 1)2 6(3t − 1)
Therefore, general solution is given by
1 1
y(t) = c1 (3t − 1) − c2 , t> .
6(3t − 1) 3
1
R
One can easily check that the substitution x(t) = y(t)e− 2 p(t) dt transform the equation
x + p(t)x0 + q(t)x = 0 into the form y 00 + P (t)y(t) = 0, where p, q are continuous
00

functions such that p0 is continuous. Therefore, instead of studing the equation of the
form x00 + p(t)x0 + q(t)x = 0, we will study the equation of the form
y 00 + α(x)y(x) = 0.
Oscillatory behavior of solutions: Consider the second order linear homogeneous
equation
y 00 + p(x)y(x) = 0 . (4.5)
For simplicity, we assume that p(x) is continuous everywhere.
Definition 4.3. We say that a nontrivial solution y(x) of (4.5) is oscillatory (or it os-
cillates) if for any number T , y(x) has infinitely many zeros in the interval (T, ∞); or
equivalently, for any number τ , there exists a number ξ > τ such that y(ξ) = 0. We also
call the equation (4.5) oscillatory if it has an oscillatory solution.
Consider the equation y 00 + 4y = 0. Two independent solutions are y1 (x) = sin(2x)
and y2 (x) = cos(2x). Note that y1 (x) three zeros on (0, 2π). Moreover, between two
consecutive zeros of y1 (x), there is only one zero of y2 (x). We have the following general
result.
Theorem 4.12 (Sturm Separation Theorem). Let y1 (x) and y2 (x) be two linearly inde-
pendent solutions, and suppose a and b are two consecutive zeros of y1 (x) with a < b.
Then y2 (x) has exactly one zero in the interval (a, b).
Proof. Notice that y2 (a) 6= 0 6= y2 (b)( otherwise y1 and y2 would have common zero
and hence their Wronskian would be zero, contradicting the fact that they are linearly
independent). Suppose y2 (x) 6= 0 on (a, b). Then y2 (x) 6= 0 on [a, b]. Define h(x) = yy21 (x)(x)
.
Then h satisfies all the conditions of Rolle’s theorem. Hence there exists c ∈ (a, b) such
that h0 (c) = 0. In other words, W [yy12,y(c)2 ](c) = 0. Since y2 (c) 6= 0, W [y1 , y2 ](c) = 0, a
2
36 A. K. MAJEE

contradiction as y1 and y2 are linearly independent. Thus, there exists c ∈ (a, b) such that
y2 (c) = 0.
We now show the uniqueness. Suppose there exist c1 , c2 ∈ (a, b) such that y2 (c1 ) =
y2 (c2 ) = 0. Then, by what we have just proved, there would exist a number d between c1
and c2 such that y1 (d) = 0, contradicting the fact that a and b are consecutive zeros of
y1 (x). 
Example 4.17. Show that between any two consecutive zeros of sin(t), there exists only
one zero of a1 sin(t) + a2 cos(t), where a1 , a2 ∈ R with a2 6= 0. To see this, we apply
Sturm Separation Theorem. Note that y1 (t) := sin(t) and y2 (t) := a1 sin(t) + a2 cos(t) are
two solutions of the ODE y 00 (t) + y(t) = 0. Since W [y1 , y2 ](t) = −a2 6= 0, y1 and y2 are
linearly independent. Therefore, by Theorem 4.12, between any two consecutive zeros of
sin(t), there exists only one zero of a1 sin(t) + a2 cos(t).
In view of Theorem 4.12, we arrive at the following corollary.
Corollary 4.13. If (4.5) has one oscillatory solution, then all of its solutions are oscil-
latory.
Theorem 4.14 (Sturm Comparison Theorem). Consider the equations
y 00 (x) + α(x)y(x) = 0 , (4.6)
y 00 (x) + β(x)y(x) = 0 . (4.7)
Suppose that yα (x) is a nontrivial solution of (4.6) with consecutive zeros at x = a and
x = b. Assume further that α, β ∈ C[a, b] and α(x) ≤ β(x), with strict inequality holding
at least at one point in the interval [a, b]. If yβ (x) is any nontrivial solution of (4.7) such
that yβ (a) = 0, then there exists a number c with a < c < b such that yβ (c) = 0.
Proof. Suppose that yβ (x) 6= 0 on (a, b). W.L.O.G, we assume that yβ (x) > 0 and
yα (x) > 0 on the interval (a, b). Multiplying (4.6) by yβ (x) and (4.7) by yα (x), and then
subtracting the resulting equations, we obtain
yβ (x)yα00 − yα (x)yβ00 + (α(x) − β(x))yα (x)yβ (x) = 0
0
=⇒ yβ yα0 − yα yβ0 = (β(x) − α(x))yα (x)yβ (x)
Z b Z b
0 0 0

=⇒ yβ yα − yα yβ dx = (β(x) − α(x))yα (x)yβ (x) dx
a a
Z b
0
=⇒ yβ (b)yα (b) = (β(x) − α(x))yα (x)yβ (x) dx .
a
Note that, since α, β ∈ C[a, b] and α(x0 ) < β(x0 ), for some x0 [a, b], in a nbd. of x0 ,
β(x) − α(x) > 0, and hence by positivity of yα and yβ on (a, b), we see that
Z b
(β(x) − α(x))yα (x)yβ (x) dx > 0 .
a
On the other hand, since yα > 0 on (a, b) and yα (b) = 0, we must have yα0 (b) ≤ 0, and
yβ (b) ≥ 0. Therefore,
Z b
0
0 ≥ yβ (b)yα (b) = (β(x) − α(x))yα (x)yβ (x) dx > 0 —a contradiction !
a

ODES AND PDES 37

Corollary 4.15. All solutions of (4.7) vanish between a and b.


Proof. Let z(x) be a given solution of (4.7). We have shown that yβ vanishes at a and
at some point c ∈ (a, b). By the Sturm Separation Theorem, z(x) has a zero between a
and c and hence between a and x if z and yβ are linearly independent. If they re linearly
dependent, then they are constant multiples of each other and have the same zeros. Since
yβ has a zero in (a, b), z(x) has zero in (a, b). 
Example 4.18. We now show that any solution ψ of y 00 + x2 y = 0 has infinitely many
zeros there. Note that all solutions of y 00 + y = 0 are oscillatory. Consider the problem
on [1, ∞). On this interval, 1 ≤ x2 . Hence by Sturm Comparison Theorem, there exists
infinitely many zeros of ψ in [1, ∞).

5. Sturm Liouville eigen-value problem:


Consider the boundary value problem
y 00 + A(x)y 0 + B(x)y(x) + λC(x)y(x) = 0 , y(a) = 0 = y(b) ,
where a <Rx
b, λ is a real parameter and A, B, C ∈ C[a, b]. Multiplying the equation by
A(s) ds
p(x) = e a , and setting r(x) = p(x)B(x), q(x) = p(x)C(x), the original equation
becomes
0
p(x)y 0 + r(x)y(x) + λq(x)y(x) = 0 .
Note here that p(t) > 0 and p ∈ C 1 [a, b]. We simplify the equation by letting r(x) = 0(
equivalent to letting B(x) = 0). Thus, we consider the following boundary-value problem
( 0
p(x)y 0 + λq(x)y(x) = 0 in [a, b]
(5.1)
y(a) = 0 = y(b)
where p ∈ C 1 [a, b] with p > 0 and 0 6= q ∈ C[a, b]. Note that one of the solutions of (5.1)
is the trivial solution y(x) ≡ 0.
Definition 5.1. We say that λ is an eigen value of (5.1) if it has a non-trivial solution,
called eigenfunction, corresponding to λ.
Example 5.1. Consider the problem
− u00 = λu in (0, 1) (5.2)
u(0) = 0 = u(1) , (5.3)
where λ is a given constant. Let us consider the following case.
Case 1: λ < 0. Then, λ = −k 2 for some k > 0. A solution of (5.3) takes the form
u(t) = c1 e−kt + c2 ekt .
Since u(0) = u(1) = 0, we get c1 + c2 = 0, and c1 e−k + c2 ek = 0. Thus, c1 = c2 = 0. This
implies that for λ < 0, this problem does not have a non-trivial solution.
Case 2: λ = 0. in this case, it is easy to check that the problem does not have any
non-trivial solution.
Case 3: λ > 0. Then a general solution of −u00 = λu is
√ √
u(t) = c1 sin( λt) + c2 cos( λt).
38 A. K. MAJEE
√ √
Now u(0) = 0 gives c2 = 0, and hence u(t) = c1 sin( √ λt). u(1) = 0 then
√ gives c1 sin( λ) =
0. Since we are looking for non-trivial solution, sin( λ) = 0, i.e., λ = nπ or
λ ∈ n2 π 2 n = 1, 2, . . . .


Conclusion: The problem (5.3) has a non-trivial solution only when λ ∈ n2 π 2 n =
1, 2, . . . .
Example 5.2. Consider the boundary value problem y 00 + λmy = 0 with y(a) = y(b) = 0.
Then, for m > 0, the eigenvalues are given by
k2π2
λk = , k = 1, 2, . . . .
m(b − a)2
Theorem 5.1. If q(x) > 0, then the eigen values of (5.1) are strictly positive.
Theorem 5.2. Let φ1 and φ2 be two eigen function of (5.1) associated to λ1 and λ2
respectively with λ1 6= λ2 . Then
Z b
q(t)φ1 (t)φ2 (t) dt = 0.
a
As a consequence, eigenfunctions corresponding to different eigenvalues are linearly inde-
pendent.
Proof. Since φ1 and φ2 are eigen functions, we have
(pφ01 )0 + λ1 qφ1 = 0 , (pφ02 )0 + λ2 qφ1 = 0 .
Multiplying the first equation by φ2 and then integrating by parts from a to b along with
boundary condition, we obtain
Z b Z b Z b
0 0
λ1 qφ1 φ2 dt = − (pφ1 ) φ2 dt = pφ01 φ02 dt .
a a a
Rb Rb
Similarly, we have a
λ2 qφ1 φ2 dt = a
pφ01 φ02 dt. Thus,
Z b
(λ1 − λ2 )qφ1 φ2 dt = 0.
a
Rb
Since λ1 6= λ2 , we have a q(t)φ1 (t)φ2 (t) dt = 0.
Suppose that φ1 and φ2 are linearly dependent. Then there exists a non-zero constant
α such that φ2 = αφ1 . Then we have
Z b
0=α q(t)φ21 (t) dt − − − a contradiction!.
a

In the previous example, we have noticed that there exists infinitely many eigenvalues
0 < λk ↑ ∞. This result holds for the equation (5.1) when q(t) > 0.
Theorem 5.3. Suppose that q(t) > 0. Then there exist infinitely many positive eigenval-
ues λk such that
0 < λ1 < λ2 < . . . < λk < . . . , and lim λk = ∞.
k→∞
ODES AND PDES 39

Variational characterization of the smallest eigenvalue λ1 : Denote by λk [q] the


eigenvalues of (5.1) and by φk (t) a corresponding eigenfunction. Let λ1 [q] is the smallest
eigen value. Then by multiplying the equation (pφ01 )0 +λ1 [q]qφ1 by φ1 and then integrating
by parts, we have
Rb
p(t)(φ01 )2 dt
Z b Z b
λ1 [q] q(t)φ21 (t) dt = p(t)(φ01 )2 dt =⇒ λ1 [q] = Rab ,
2
a a
a
q(t)φ1 (t) dt
where in the last last line we have used the fact that q(t) > 0. Let E denotes the class of
functions φ ∈ C 1 (a, b) such that φ(a) = 0 = φ(b). Then
Rb
a
p(t)(φ0 )2 dt
λ1 [q] ≤ R b , ∀φ ∈ E .
a
q(t)φ2 (t) dt
Moreover,
Rb
p(t)(φ0 )2 dt
λ1 [q] = min R(φ), where R(φ) = R ab , called Rayleigh Quotient.
φ∈E
a
q(t)φ2 (t) dt
Example 5.3. Consider the problem x00 + λx = 0 with boundary condition x(0) = 0 =
x(π). Then the eigen values are λk = k 2 , k = 1, 2, . . .. Hence by variational characteri-
zation of first eigenvalue, we have
Z π Z π
2
φ (t) dt ≤ (φ0 )2 dt , ∀φ ∈ E.
0 0
The above inequality is known as Poincaré inequality.
One can use variational characterization of the smallest eigenvalue to arrive at the
following theorem.
Theorem 5.4. Let λ1 [qi ], i = 1, 2 be the first eigenvalues of (px0 )0 + λqi x = 0, x(a) =
0 = x(b). If q1 (t) ≤ q2 (t) for all t ∈ [a, b], then
λ1 [q2 ] ≤ λ1 [q1 ] .
Let λ1 [q] resp. λ̃1 [q] be the first eigenvalue of (px0 )0 + λqx = 0 resp. (p̃x0 )0 + λqx = 0 with
boundary conditions x(a) = 0 = x(b). If p(t) ≤ p̃ for all t ∈ [a, b], then
λ1 [q] ≤ λ̃1 [q] .
Example 5.4. We would like to estimate the first eigenvalue λ1 [q] of
(p(t)x0 )0 + λq(t)x = 0, x(a) = 0 = x(b)
under the assumptions 0 < α ≤ p(t) ≤ β , and 0 < m ≤ q(t) ≤ M in [a, b]. Let us denote
λ̄1 [q] resp. λ̃1 [q] the first eigen value of (αx0 )0 + λq(t)x = 0 resp. (βx0 )0 + λq(t)x = 0 with
boundary conditions x(a) = 0 = x(b). Then in view of previous theorem, λ̄1 [M ] ≤ λ̄1 [q]
and λ̃1 [q] ≤ λ̃1 [m]. Again since 0 < α ≤ p(t) ≤ β, we have λ̄1 [q] ≤ λ1 [q] ≤ λ̃1 [q].
απ 2
Combining these two, we have λ̄1 [M ] ≤ λ1 [q] ≤ λ̃1 [m]. Note that, λ̄1 [M ] = M (b−a) 2 and
βπ 2
λ̃1 [m] = m(b−a)2
. Thus,
απ 2 βπ 2
≤ λ 1 [q] ≤ .
M (b − a)2 m(b − a)2
40 A. K. MAJEE

Example 5.5. Estimate the first eigen value of x00 + λ(1 + t)x = 0 x(0) = x(1) = 0. Note
here that q(t) = 1+t, and hence 1 ≤ q(t) ≤ 2 for all t ∈ [0, 1]. Thus, λ1 [2] ≤ λ1 [q] ≤ λ1 [1].
In other words,
π2
≤ λ1 [q] ≤ π 2 .
2
Remark 5.1. We have considered only the Dirichlet boundary conditions x(a) = 0 =
x(b). One could also consider the Neumann boundary conditions x0 (a) = 0 = x0 (b), or
the mixed boundary conditions
α1 x(a) + β1 x0 (a) = 0 , α2 x(b) + β2 x0 (b) = 0 ,
 
α1 β1
where the matrix is non-singular.
α2 β2
6. Phase-plane analysis:
Consider the nonlinear system
x0 = y ,
(6.1)
y 0 = f (x)
where f is a smooth function on R. We assume that solution of the above problem exists
for all t ∈ R. The plane (x, y) is called phase plane and study of the system (6.1) is called
phase plane analysis. Note that the system (6.1) can be written as
x0 = Hy (x, y); y 0 = −Hx (x, y)
where
1
H(x, y) = y 2 − F (x), with F 0 (x) = f (x).
2
Definition 6.1. Let H(x, y) be a differentiable function on R2 . The autonomous system
(
x0 = Hy (x, y)
(6.2)
y 0 = −Hx (x, y)
is called a Hamiltonian system and H is called Hamiltonian.
Lemma 6.1. If (x(t), y(t)) is a solution of (6.2), then there exists c ∈ R such that
H(x(t), y(t)) = c.
Proof. Let (x(t), y(t)) be a solution of (6.2). Then by chain rule
d
H(x(t), y(t)) = Hx (x, y)x0 + Hy (x, y)y 0 = Hx Hy − Hy Hx = 0 .
dt
=⇒ H(x(t), y(t)) = c .

Define
Λc = {(x, y) ∈ R2 : H(x, y) = c}.
Note that if (x(t), y(t)) solves (6.2), then (x(t), y(t)) ∈ Λc for all t.
Example 6.1. Let H(x, y) = Ax2 + Bxy + Cy 2 . Then (0, 0) is the only equilibrium point.
• If c 6= 0, then the curve Λc is a conic. Precisely
i) If B 2 − 4AC < 0, and c > 0, then Λc is an ellipse.
ODES AND PDES 41

ii) If B = 0, A = C and c > 0, then Λc is a circle.


iii) If B 2 − 4AC > 0 and c 6= 0, then Λc is a hyperbola.
• If c = 0 or B 2 = 4AC, then the conic can be a pair of straight lines or it reduces
to a point.
Definition 6.2. Let Ω be an open set in Rn , f~ : Ω → Rn be locally Lipschitz. Consider
the ODE
~u0 (t) = f~(~u(t)) . (6.3)

A point x0 ∈ Ω is called an equilibrium point or critical point of (6.3), if f~(x0 ) = ~0.


Example 6.2. Equilibrium point of the system x0 = x + 1, y 0 = x + 3y − 1 is (−1, 32 ).
Remark 6.1. The point (x∗ , y ∗ ) ∈ R2 such that Hx (x∗ , y ∗ ) = 0 = Hy (x∗ , y ∗ ) are precisely
the equilibria of the Hamiltonian system (6.2).
Remark 6.2. Λc does not contain equilibria of (6.2) if and only if Hx and Hy do NOT
vanish simultaneously on Λc .
Denote (xc (t), yc (t)), the solution of the Hamiltonian system (6.2) such that H(xc (t), yc (t)) =
c. We are interested in the periodicity of the solution of Hamiltonian system (6.2).
Theorem 6.2. Suppose that Λc 6= ∅ is compact curve (closed and bounded) that does not
contain equilibria of (6.2). Then (xc (t), yc (t)) is a periodic solution of (6.2).
Example 6.3. Consider the IVP
(
x0 = 2x + 3y , x(0) = 0 ,
y 0 = −3x − 2y , y(0) = 1 .

It is a Hamiltonian system with H(x, y) = 2xy + 23 y 2 + 32 x2 := Ax2 + Bxy + Cy 2 . The


curve Λc has equation 2xy + 23 y 2 + 32 x2 = c. Since x(0) = 0 and y(0) = 1, we get c = 23 .
Thus the curve λc is an ellipse as B 2 − 4AC < 0 and c = 32 > 0. Note that it does not
contain the equilibrium (0, 0). Hence the solution of the IVP is periodic.
Example 6.4. We would like to find C such that the system x0 = x + 5y , y 0 = −Cx − y
has no periodic solution but the equilibrium x(t) = y(t) ≡ 0. Note that the given system
is a Hamiltonian system with Hamiltonian H(x, y) = C2 x2 + xy + 52 y 2 . Consider the curve
Λc for some c 6= 0. If we show that Λc is hyperbola, then it has no periodic solution. For
hyperbola, we need to choose 1 − 5C > 0. In this choice of C, we see that the system
x0 = x + 5y , y 0 = −Cx − y has no periodic solution but the equilibrium x(t) = y(t) ≡ 0.
Example 6.5. Consider the IVP
(
x0 = x + y , x(0) = 1 ,
y 0 = −2x − y , y(0) = 0 .

This is a Hamiltonian system with H(x, y) = xy + 12 y 2 + x2 := Ax2 + Bxy + Cy 2 . The


curve Λc has equation xy + 21 y 2 + x2 = c. Using initial conditions, we get that c = 1.
Again, it is easy to see that Λc is an ellipse, and hence compact. Moreover, the equilibrium
(0, 0) ∈
/ Λc . Hence the solution of the IVP is periodic.
42 A. K. MAJEE

Example 6.6. Consider the IVP


(
x0 = x − 6y , x(0) = 1 ,
y 0 = −2x − y , y(0) = 0 .
This is a Hamiltonian system with H(x, y) = xy − 3y 2 + x2 := Ax2 + Bxy + Cy 2 . The
equation of the curve Λc is x2 + xy − 3y 2 = c. From initial conditions, c = 1. This is a
hyperbola (B 2 − 4AC = 13 > 0), and hence the solution is unbounded.
Let us come back to the equation (6.1). The equilibria of (6.1) are the points (x0 , 0) ∈
R2 such that f (x0 ) = 0, that correspond to the constant solutions x(t) = x0 , y(t) = 0 of
(6.1). In this case,
1
Λc = {(x, y) ∈ R2 : y 2 − F (x) = c}, where F 0 (x) = f (x).
2
One can have the followings:
i) (x, y) ∈ Λc if and only if (x, −y) ∈ Λc .
ii) (x, 0) ∈ Λc if and only if F (x) = −c. √
iii) (0, y) ∈ Λc if and only if c ≥ 0. In this case, y = 2c.
iv) If a point (x0 , y0 ) ∈ Λc , then c = 12 y02 − F (x0 ).
Example 6.7. Consider a second order autonomous ODE x00 = f (x). Then, it can be
re-written as system (6.1). Hence it is a Hamiltonian system with Hamiltonian
1
H(x, y) = y 2 − F (x), where F 0 (x) = f (x).
2
Thus, if H = c is compact curve, and does not contain any zeros of f , then it carries a
periodic solution of the equation x00 = f (x).
Example 6.8. Consider the IVP
1
x00 = −x + x3 , x(0) = 0 , x0 (0) = .
2
This is a Hamiltonian system with Hamiltonian
1 1 1
H(x, y) = y 2 + x2 − x4 .
2 2 4
Moreover, Λc has the equation 2 y + 2 x − 4 x = c. Since x(0) = 0 and x0 (0) := y(0) = 21 ,
1 2 1 2 1 4

we have c = 18 . Thus the curve is defined by 2y 2 + 2x2 − x4 = 12 . Note that the curve does
not contain any zeros of f , which are 0, 1, −1. It is a closed curve surrounding the origin,
and hence the corresponding solution is periodic.
Example 6.9. Consider the IVP
x00 + x + 6x5 = 0 , x(0) = 0 , x0 (0) = a 6= 0 .
Then this is Hamiltonian system with H(x, y) = 21 y 2 + 12 x2 + x6 . Hence equation of Λc is
1 2
2
y + 12 x2 + x6 = c. From the initial conditions, we obtain c = 12 a2 , and hence the equation
of the curve is given by
x2 + 2x6 + y 2 = a2 .
Note that it is a compact curve and does not contain the zeros of f , which is 0. Thus, the
solution is periodic.
ODES AND PDES 43

7. Stability Analysis:
Consider the initial value problem
(
~u0 = f~(t, ~u(t)) , t ∈ [t0 , ∞)
(7.1)
~u(t0 ) = ~x ,
where we assume that f~ : Ω → Rn is continuous and locally Lipschitz in second argument.
Moreover, we assume that (7.1) has a solution defined in [t0 , ∞). The unique solution is
denoted by ~u(t, t0 , ~x).
Definition 7.1. Let ~u(·, t0 , ~x) be a solution of (7.1).
i) It is said to be stable if for every ε > 0, there exists δ = δ(ε, t0 , ~x) > 0 such that
|~x − ~y | < δ =⇒ |~u(t, t0 , ~x) − ~u(t, t0 , ~y )| < ε .
ii) It is called asymptotically stable if it is stable and there exists δ > 0 such that
for all ~y ∈ B(~x, δ), there holds
lim |~u(t, t0 , ~x) − ~u(t, t0 , ~y )| = 0.
t→∞

iii) It is called unstable if it is NOT stable.


Example 7.1. Consider the IVP ~u0 = f~(t) , ~u(t0 ) = ~x , where f~ : [t0 , ∞) → Rn is
Rt
continuous. Then ~u(t, t0 , ~x) = ~x + t0 f~(s) ds. Thus,
|~u(t, t0 , ~x) − ~u(t, t0 , ~y )| = |~x − ~y | .
This shows that ~u(t, t0 , ~x) is stable but NOT asymptotically stable.
Example 7.2. Consider the IVP
u0 (t) = a(t)u(t) , u(t0 ) = x , with a ∈ C[t0 , ∞) .
Rt
a(s) ds
Then u(t, t0 , x) = xe t0
, and hence
Rt
a(s) ds
|u(t, t0 , x) − u(t, t0 , y)| = |x − y|e t0 .
Rt
i) It is stable if there exists M > 0 such that t0 a(s) ds ≤ M for all t > t0 .
Rt
ii) It is unstable if limt→∞ t0 a(s) ds = ∞.
Rt
iii) It is asymptotically stable if limt→∞ t0 a(s) ds = −∞.
Theorem 7.1. Consider the linear system ~u0 (t) = A~u(t), where A ∈ M(n, R): set of all
n × n matrices with real entries.
a) If the real parts of all multiple eigenvalues are negative and the real part of simple
eigen-values are non-positive, then all solutions of the system are stable.
b) If real part of any eigenvalue is negative, then every solution of the linear system
is asymptotically stable.
Example 7.3. Consider the linear system
u01 = −u1 + u2 , u02 = u1 − u2 .
This can be written as
 
0 −1 1
~u (t) = A~u(t) , where A = .
1 −1
44 A. K. MAJEE

Note that eigenvalues of A are 0 and −2. Hence all solutions of the linear system are
stable.
Example 7.4. Consider the linear system of equation
u01 = −u1 , u02 = u1 − 2u2 , u03 = u1 + 2u2 − 5u3 .
We rewrite the above system as
 
−1 0 0
~u0 (t) = A~u(t) , where A =  1 −2 0  .
1 2 −5
It is easy to show that the eigenvalues of A are −1, −2 and −5. Thus, the solution of the
given system is asymptotically stable.
Theorem 7.2. Let t → 7 A (t) be continuous function from [t0 , ∞) 7→ M(n, R). Then all
solutions ~u(·, t0 , ~x) of the linear system
~u0 = A (t)~u(t) , ~u(t0 ) = ~x
are stable if and only if all solutions are bounded.
Theorem 7.3. Let t 7→ A (t) be continuous function from [t0 , ∞) 7→ M(n, R), where
A (t) = A + B (t). Then
a) If the real part of all multiple eigenvaluesR of A are negative and the real part of

simple eigenvalues are non-negative, and t0 kB B (t)k dt < +∞, then any solution
0
of ~u = A (t)~u(t) is stable.
b) If real part of any eigenvalue of A is negative, and kB B (t)k → 0 as t → ∞, then
0
any solution of ~u = A (t)~u(t) is asymptotically stable.
Example 7.5. Consider the linear system of equation
1
u01 = −u1 − u2 + e−t u3 , u02 = −u2 + u4 ,
1+t
u03 = e−2t u2 − 3u3 − 2u4 , u04 = u3 − u4 .
This can be written as ~u0 = A (t)~u(t) with A (t) = A + B (t), where
0 0 e−t
   
−1 −1 0 0 0
 0 −1 0 1 
0 , and B (t) = 0 −2t 0 0

A= 1+t 
.
0 0 −3 −2   0 e 0 0 
0 0 1 −1 0 0 0 0
A − λII ) = 0, and solve it. It is
To find the eigen values of A , consider the equation det(A
easy to check that eigenvalues are −1, −1, −2 ± i. Moreover, kBB (t)k1 = e−t + e−2t + 1+t1
,
and hence kBB (t)k → 0 as t → ∞. Thus, in view of Theorem 7.3, any solution of the
given system is asymptotically stable.
Remark 7.1. For nonlinear systems, boundedness and stability are distinct concepts. for
example, consider the scalar first order ODE
(
y 0 (t) = tp , p ≥ 1 ,
y(t0 ) = y0 .
ODES AND PDES 45

1
tp+1 − tp+1

Then solution is given by y(t, t0 , y0 ) = y0 + p+1 0 . Hence
|y(t, t0 , y0 ) − y(t, t0 , y0 + δy0 )| = |∆y0 | < δ , if |∆y0 | < δ .
Thus, it is stable. But it is NOT bounded.
7.1. Critical points and their stability: Consider the system ~u0 (t) = f~(~u(t)). If
~x0 ∈ Ω is an equilibrium point,i.e., f~(~x0 ) = ~0, then ~u(t) = ~x0 is a solution of the ODE
~u0 (t) = f~(~u(t)), t > 0
~u(0) = ~x0 .
Conversely, if ~u(t) ≡ ~x0 is a constant solution, then f~(~x0 ) = ~0.
Definition 7.2. We say that ~x0 is stable/asymptotically stable/ unstable if this solution
is stable/asymptotically stable/unstable.
For any f~ ∈ C 1 (Ω), where Ω is an open subset of Rn , we denote by Df~(~x0 ) is the
matrix
 ∂f1 ∂f1 ∂f1 
∂u1
(~x0 ) ∂u 2
(~x0 ) . . . ∂u n
(~x0 )
 ∂f2 (~x0 ) ∂f2 (~x0 ) . . . ∂f2 (~x0 )
Df~(x0 ) =  1 .
∂u ∂u 2 ∂u n
.
 
. .
. . . .
.
 . . . . 
∂fn ∂fn ∂fn
∂u1
(~x0 ) ∂u2
(~x0 ) ... ∂un
(~x0 )
Definition 7.3. A critical point ~x0 ∈ Ω is said to be hyperbolic if none of the eigenvalues
of Df~(~x0 ) are purely imaginary.
Example 7.6. Consider the nonlinear system
u01 = −u1 , u02 = −u2 + u21 , u03 = u3 + u21 .
The only equilibrium point of this system is ~0. The matrix Df~(~0) is given by
 
−1 0 0
Df~(~0) =  0 −1 0 .
0 0 1
Eigenvalues of Df~(~0) are −1, −1 and 1. Hence the equilibrium point ~0 is hyperbolic.
Theorem 7.4. A hyperbolic equilibrium point ~x0 is asymptotically stable if and only if
all eigenvalues of Df~(~x0 ) have negative real part.
Example 7.7. Consider the nonlinear system
u01 = −u1 + u23 , u02 = u21 − 2u2 , u03 = u21 + u32 − 4u3 .
Note that ~0 is an equilibria of the given system. The matrix Df~(~0) is given by
 
−1 0 0
Df~(~0) =  0 −2 0  .
0 0 −4
Eigenvalues of Df~(~0) are −1, −2 and −4, none of them are purely imaginary. Hence ~0
is a hyperbolic equilibrium point. Since all eigenvalues of Df~(~x0 ) have negative real part,
we conclude that ~0 is asymptotically stable.
46 A. K. MAJEE

Theorem 7.5. If ~x0 is a stable equilibrium point of the system ~u0 (t) = f~(~u(t)), the no
eigenvalues of Df~(~x0 ) has positive real part.
Remark 7.2. Hyperbolic equilibrium points are either asymptotically stable or unstable.
The stability of non-hyperbolic equilibrium points is typically more difficult to de-
termine. A method, due to Liapunov, that is very useful for deciding the stability of
non-hyperbolic equilibrium points.
7.2. Liapunov functions and stability.
Definition 7.4. Let f~ ∈ C 1 (Ω), V ∈ C 1 (Ω; R), and φ ~ t (~x) is the flow of the system
~u0 (t) = f~(~u(t)), i.e., φ
~ t (~x) = ~u(t, ~x). Then, for any ~x ∈ Ω, the derivative of the function
V along the solution ~u(t, ~x) is given by
. d ~
V (~x) = V (φ t (~
x)) = ∇V (~x) · ~u0 (0, ~x) = ∇V (~x) · f~(~x).
dt t=0

Theorem 7.6. Let Ω be an open set in Rn and f~ : Ω → Rn be C 1 . Let ~x0 ∈ Ω be


an equilibrium point of the system ~u0 (t) = f~(~u(t)). Assume that there exists a function
V : Ω → R such that V is C 1 and
V (~x0 ) = 0 , V (~x) > 0 , ∀~x 6= x~0 .
.
i) If V (~x) ≤ 0 for all ~x ∈ Ω, then ~x0 is stable.
.
ii) If V (~x) < 0 for all ~x ∈ Ω \ {~x0 }, then ~x0 is asymptotically stable.
.
iii) If V (~x) > 0 for all ~x ∈ Ω \ {~x0 }, then ~x0 is unstable.
Remark 7.3. The function V satisfying the assumption of Theorem 7.6 is called the
Liapunov function.
Example 7.8. Consider the linear system
x01 = −4x1 − 2x2 , x02 = x1 .
Note that (0, 0) is only equilibrium point of the given system. Let V : R2 → R be a C 1
function defined by
V (x1 , x2 ) = c1 x21 + c2 x22 , c1 , c2 ∈ R+ .
Since v(0, 0) = 0 and V (x1 , x2 ) > 0 for (x1 , x2 ) 6= (0, 0), V is a Liapunov function. Now
.
V (x1 , x2 ) = ∇V (x1 , x2 ) · f~(x1 , x2 ) = −8c1 x21 + (2c2 − 4c1 )x1 x2 .
Choose c1 , c2 > 0 such that c2 = 2c1 . Taking c1 = 1, we have c2 = . 2, and hence the
Liapunov function takes of the form V (x1 , x2 ) = x21 + 2x22 . Note that V (x1 , x2 ) = −8x21 .
.
Hence V (~x) < 0 for all ~x ∈ R2 \ {~0}. Thus, (0, 0) is asymptotically stable equilibrium
point.
Example 7.9. Consider the nonlinear system
x01 = −x2 − x1 x2 , x02 = x1 + x21 .
Origin is an equilibrium point. Consider the Liapunov function V (x1 , x2 ) = x21 + x22 .
.
Then V (x1 , x2 ) = 0 for all (x1 , x2 ) ∈ R2 . Thus, (0, 0) is an stable equilibrium point.
.
Furthermore, since dtd V (x1 (t), x2 (t)) = 0, V (x1 (t), x2 (t)) = c1 for some constant. i.e., the
ODES AND PDES 47

trajectories of this system lie on the circle x21 +x22 = c2 . Hence (0, 0) is NOT asymptotically
stable equilibrium point.
Example 7.10. Consider the second order differential equation x00 + q(x) = 0, where
q : R → R is a continuous function such that xq(x) > 0 ∀x 6= 0. This can be written as a
system
x01 = x2 , x02 = −q(x1 ) where x1 = x .
The total energy of the system (sum of kinetic energy 21 (x01 )2 and the potential energy)
Z x1
x22
V (~x) = + q(s) ds .
2 0
Note that (0, 0) is an equilibrium point, and V (0, 0) = 0. Moreover, since xq(x) > 0 ∀x 6=
0, it is easy to check that V (x1 , x2 ) > 0 for all (x1 , x2 ) ∈ R2 \ {~0}. Therefore, V is a
Liapunov function. Now
.
V (x1 , x2 ) = (q(x1 ), x2 ) · (x2 , −q(x1 )) = 0 .
The solution curves are given by V (~x) = c, i.e., the energy is constant on the solution
curves or trajectories of this system. Hence the origin is a stable equilibrium point.
Example 7.11. Consider the nonlinear system
x01 = −x2 + x31 + x1 x22 , x02 = x1 + x32 + x2 x21 .
Note that (0, 0) is an equilibrium point. Consider the Liapunov function V (x1 , x2 ) =
.
x21 + x22 . Then V (x1 , x2 ) = (2x1 , 2x2 ) · (−x2 + x31 + x1 x22 , x1 + x32 + x2 x21 ) = 2(x21 + x22 )2 >
0 , ∀(x1 , x2 ) ∈ R2 \ {~0}. Thus, by Theorem 7.6, we conclude that (0, 0) is unstable
equilibrium point.
Example 7.12. Let f (x) resp. g(x) be an even polynomial resp. odd polynomial in x.
Consider the 2nd order ODE y 00 + f (y)y 0 + g(y) = 0. This can be written as
Z x
0 0
x1 = x2 − F (x1 ) , x2 = −g(x1 ) , where F (x) = f (s) ds .
0
To see this, let x1 = y. Then From the equation, we have
d d 0
x001 + F (x1 ) + g(x1 ) = 0 =⇒ [x + F (x1 )] = −g(x1 ) .
dt dt 1
Set x2 = x01 + F (x1 ). Then we have x01 = x2 − F (x1 ) , x02 = −g(x1 ).
Rx
Let G(x) = 0 g(s) ds, and suppose that G(x) > 0 and g(x)F (x) > 0 in a deleted
neighborhood of the origin. Then the origin is a asymptotically stable equilibrium point.
Indeed, consider the Liapunov function in a nbd. of (0, 0) as
Z x1
1
V (x1 , x2 ) = g(s) ds + x22 .
0 2
Note that V (0, 0) = 0 and V (x1 , x2 ) > 0 in a deleted nbd. of (0, 0). Moreover,
.
V (x1 , x2 ) = (g(x1 ), x2 ) · (x2 − F (x1 ), −g(x1 )) = −g(x1 )F (x1 ) < 0 .
Hence origin is a asymptotically stable equilibrium point. Note that if we assume that
G(x) > 0 and g(x)F (x) < 0 in a deleted neighborhood of the origin,
then origin will be a unstable equilibrium point.
48 A. K. MAJEE

8. Partial Differential equations:


A partial differential equation (PDE) is an equation involving an unknown function
u(~x) and its derivatives. PDEs occur naturally in applications; they model the rate of
change of a physical quantity with respect to both space variables and time variables.
Definition 8.1. A multi-index α is anP
n-tuple of non-negative integers, say α = (α1 , α2 , . . . , αn ).
For a multi-index α, we define |α| = ni=1 αi , and the differential operator
∂ |α|
Dα = .
∂xα11 ∂xα22 · · · ∂xαnn
∂f ∂2f
For α = (1, 0), Dα f = ∂x
, and for α = (1, 1), Dα f = ∂x∂y
where f is a function of x
and y.
Definition 8.2. A PDE is an equation involving an unknown function of two or more
variables and certain of its partial derivatives.
• The order of the PDE is the highest order of derivatives involved.
• An expression of the form
F (~x, Du(~x), · · · , Dk u(~x)) = 0, ~x ∈ Ω ⊂ Rn (8.1)
is called a k-th order PDE, where k ≥ 1 is an integer and F : Ω × R × Rn × · · · ×
k
Rn → R is given and u : Ω → R is the unknown.
Example 8.1. Examples of PDEs
i) ux (x, y) + uy (x, y) = 0 for (x, y) ∈ R2 .
ii) ut + uux = 0, t > 0, x ∈ R (Burger’s equation).
2
iii) ∆u = 0, where ∆u = ni=1 ∂∂xu2 (Laplace equation).
P
i
iv) ut − ∆u = 0, t > 0, ~x ∈ Rn (Heat equation).
v) utt − ∆u = 0 (Wave equation).
Definition 8.3. We say that u is a solution (classical solution) of the k-th order PDE
(8.1), if all partial derivatives involved exist and satisfies the PDE (8.1).
Remark 8.1. In general we look for solutions which satisfies certain boundary condition
or initial condition.
Definition 8.4. A k-th order PDE (8.1) is called
a) linear if it has the form |α|≤k aα (~x)Dα u = f (~x),
P

b) semi-linear if it is of the form |α|=k aα (~x)Dα u + a0 (~x, u, Du, . . . , Dk−1 u) = 0,


P

c) quasilinear if it is of the form


X
aα (~x, u, Du, . . . , Dk−1 u)Dα u + a0 (~x, u, Du, . . . , Dk−1 u) = 0,
|α|=k

d) fully nonlinear if it depends nonlinearly upon the highest order derivatives.


Example 8.2. Consider the following PDEs in two independent variables:
a) Linear PDEs: ux + uy = 0; xux + yuy = 0; ∆u = f (x); ut − ∆u = f (x).
b) Semi-linear PDEs: ux + uy = u2 ; a(x, y)ux + b(x, y)uy = f (x, y, u).
c) Quasilinear PDEs: a(x, y, u)ux + b(x, y, u)uy = c(x, y, u); ut + uux = 0.
ODES AND PDES 49

d) Fully nonlinear PDE: u2x + u2y = 1 (Eikonal equation).


Definition 8.5. We say that a PDE is well-posed if
i) it has a solution
ii) solution is unique
iii) solution depends continuously on the given data, i.e., if the given data varies, then
the solution may not varies widely.
8.1. First Order PDEs & Method of Characteristics: For simplicity, we consider
the PDEs in two independent variables. We begin with a simple example, called transport
equation.
Example 8.3. Consider the problem of finding u(t, x) satisfying
ut + aux = 0 t > 0, x ∈ R
(8.2)
u(0, x) = h(x) , x ∈ R , where a ∈ R is a constant .
Case I: a = 0. Then the equation is ut = 0. Hence u(t, x) = h(x) ∀x ∈ R. So, to get a
solution at (t, x), we project this on to x-axis and take the initial value at this point, i.e.,
we are traveling back along the lines parallel to t-axis to the initial curve to identify the
solution.
Case II: a 6= 0: Note that the left hand side of (8.2) is the directional derivative of u(t, x)
in the direction (1, a). Find a curve in R2 such that it passes through the point (t, x) with
slope 1 and a. Consider the function g : R → R as g(s) = (t + s, x + as). This line hits
the plane Γ := {t = 0} × R when s = −t at the point (0, x − at). Then F (s) = u(g(s))
satisfies
F 0 (s) = ut + aux = 0.
Therefore, F (·) is a constant function of s, and consequently for each point (t, x), u is
constant on the line through (t, x) with the direction (1, a). Hence u(t, x) = h(x − at).
Conversely, every u of the form u(t, x) = h(x − at) is a solution of (8.2) with initial
values h provided h is of class C 1 (R).
Non-homogeneous transport problem: Consider the non-hommogeneous transport
problem as
ut + aux = f (t, x); u(0, x) = h(x) , x ∈ R.
For fixed (t, x), define F (s) = u(t + s, x + as). Then we have
F 0 (s) = ut + aux = f (t + s, x + as) .
Integrating, we have
Z −t Z t
F (0) − F (−t) = f (t + s, x + as) ds = f (s, x + a(s − t)) ds.
0 0

On the other hand, F (0) − F (−t) = u(t, x) − u(0, x − at) = u(t, x) − h(x − at). Therefore
Z t
u(t, x) = h(x − at) + f (s, x + a(s − t)) ds .
0
50 A. K. MAJEE

8.1.1. Method of characteristics: Consider the quasilinear equation in two indepen-


dent variables x, y
aux + buy = c , (8.3)
where a, b and c are continuous functions in x, y and u. Let u(x, y) be a solution and
z = u(x, y) be the graph of u. Let z0 = u(x0 , y0 ). Then the normal to the surface

S := (x, y, z) : z = u(x, y)
at any point (x0 , y0 , z0 ) is N0 = (ux (x0 , y0 ), uy (x0 , y0 ), −1). Observe that the vector
V0 = (a(x0 , y0 , z0 ), b(x0 , y0 , z0 ), c(x0 , y0 , z0 ))
is perpendicular to the normal N0 , and hence V0 must lie on the tangent plane to the
surface S at (x0 , y0 , z0 ).
Aim: Find the surface z = u(x, y) knowing that the vector field V (x, y, z) = (a, b, c) lies
on the tangent plane of the surface S at the point (x, y, z). Such a surface is called the
integral surface. Thus, to find a solution of (8.3), we should try to find an integral
surface.
Cauchy problem: Given a space curve Γ in R3 , find a function u(x, y) satisfying (8.3)
such that the level surface z = u(x, y) contains Γ. In other words, find u(x, y) satisfying
aux + buy = c in U ; u(x, y) = h(x, y) on Γ (8.4)
where U is an open domain that contains the curve Γ.
Let the initial curve Γ is parameterized by (f (s), g(s), h(s)). For each s, we construct
an integral surface S parameterized by s and t as S = {(x(s, t), y(s, t), z(s, t)) : t ≥ 0}
so that at t = 0, it coincides the initial parameterization. Since (a, b, c) is perpendicular
to the normal to the surface, it is natural that it satisfies the system of equations with
initial conditions:
dx


 = a(x(s, t), y(s, t), z(s, t)) , x(s, 0) = f (s) ,
 dt


dy
= b(x(s, t), y(s, t), z(s, t)) , x(s, 0) = g(s) , (8.5)
 dt
 dz = c(x(s, t), y(s, t), z(s, t)) , x(s, 0) = h(s) .



dt
We can solve this system of equations uniquely (via Picard’s theorem) for all small t
under the assumption that a, b and c are C 1 function. The curves so obtained are called
characteristic curves. To obtain the surface in the variable x, y, we need to invert the
map (x, y) 7→ (s, t). For this we use inverse function theorem. The above map is
invertible near t = 0 if the Jacobian
x y
J = s s 6= 0 at t = 0 i.e., bf 0 (s) − ag 0 (s) 6= 0 .
xt y t
Definition 8.6 (Non-Characteristic Curve). A curve Γ is called non-characteristic if
bf 0 (s) − ag 0 (s) 6= 0.
Geometrically, this means that the tangent to Γ and the vector field (a, b, c) along Γ
project to vectors in the xy-plane are nowhere parallel.
In view of the above discussions, we arrive at the following theorem.
ODES AND PDES 51

Theorem 8.1. Let Γ be a non-characteristic curve, and a, b, c are C 1 functions. Then


the Cauchy problem (8.4) has a solution in a nbd. of the initial curve Γ.
Example 8.4. Solve the IVP
ux + 2uy = u2 , x ∈ R, y > 0 ; u(x, 0) = h(x) , x ∈ R .
Solution: A parametrization of the initial curve Γ is {(s, 0, h(s)) : s ∈ R}. The charac-
teristic equations are
xt = 1, x(s, 0) = s; yt = 2, y(s, 0) = 0; zt = z 2 , z(s, 0) = h(s) .
Note that the Jacobian J 6= 0. Therefore, Γ is non-characteristic. Solving the character-
istic equations , we have
1 1
y = 2t, x = t + s, − = t − .
z h(s)
Inverting the variables, we get
h(s)
t = y/2, s = x − y/2, z = .
1 − th(s)
The characteristic lines are y = 2x − 2s and the solution is
h(x − y/2)
u(x, y) =
1 − y2 h(x − y/2)
which is defined upto 1 − y2 h(x − y/2) 6= 0.
Example 8.5 (Characteristic problem). Consider the problem
ux + uy = 0 , (x, y) ∈ R2 ; u = h on Γ .
Show that
a) Solution exists uniquely if Γ is not parallel to y = x.
b) If Γ is parallel to y = x, the solution exists if and only if g is constant.
Solution: If Γ is parallel to y = x, the initial parametrization of Γ is {(s, s, h(s)) : s ∈ R}
and hence f 0 (s) = 1 = g 0 (s). Thus the Jacobian J = 0. Therefore, solution exists uniquely
if Γ is NOT parallel to y = x.
b) Let us solve the problem with u(x, 0) = h̄(x). Then solving the characteristic equa-
tions, we have
x = t + s , y = t , z = h̄(s) hence u(x, y) = h̄(x − y) .
Therefore, on y = x, u(x, x) = h̄(0). If u is a solution of the underlying problem, then
it is necessary and sufficient that h(x) = h̄(0) i.e., h is constant. In this case, there are
infinitely many solutions exist, a family of them is given by
u(x, y) = c + h̃(x − y) with h̃(0) = 0 .
Example 8.6. Solve the IVP:
uy + uux = 0 , x ∈ R, y > 0; u(x, 0) = h(x) . (8.6)
Solution: The initial parametrization of the initial space curve is (s, 0, h(s)). The char-
acteristic equations are
xt = z, x(s, 0) = s; yt = 1, y(s, 0) = 0; zt = 0, z(s, 0) = h(s) .
52 A. K. MAJEE

Therefore, z = h(s), x = h(s)t + s and y = t. Hence the characteristic lines are x =


h(s)y + s and the speed of the curve is dx
dy
= h(s). The solution can be defined implicitly
as

u(x, y) = h(x − u(x, y)y) .

Observation: let γ1 , γ2 be characteristic curve of (8.6) at s1 resp. at s2 with speed h(s1 )


resp. h(s2 ). If they intersect, then
s1 − s2
s1 + h(s1 )y = s2 + h(s2 )y =⇒ y = .
h(s2 ) − h(s1 )
Thus, if h is decreasing function, then the characteristic lines intersect at a point y > 0.
This phenomena is called Gradient Catastrope or blow up. Now from the solution
h0 (s) 0
u = h(x − uy), we see that ux = 1+yh 0 (s) . Hence if h (s) < 0, we find that ux becomes

infinite at the positive time y = − h01(s) . Thus, if h0 (s0 ) < 0, at any point s0 , then the
solution does not exist globally. We can interpret the above as follows:
• If the initial velocity u(x, 0) of the fluid flow form a non-decreasing function of
position, then the fluid moves out in a smooth fashion.
• If the initial velocity is decreasing function, then the fluid flow undergo a shock
that correspond to collision of particles i.e., the integral surface folds itself.

Example 8.7. Consider the Burger’s equation as in Example 8.6 with initial condition
h(x) given by

1 , x < 0

h(x) = 1 − x , x ∈ [0, 1)

0 , x > 1

In this case, the characteristic lines are z = h(s), x = h(s)t + s and y = t. So,

s + t , s < 0

x(s, t) = s + t(1 − s) , s ∈ [0, 1]

s , s > 1

For y < 1, the characteristic lines do not intersect. So, given a point (x, y) with y < 1,
we can draw the backward through characteristics

x − y , x < y < 1 ,

s = x−y1−y
, y ≤ x ≤ 1,

x, x > 1.

Therefore, the solution for y < 1 is given by



1 , x < y < 1 ,

u(x, y) = 1 − x−y
1−y
, y ≤ x ≤ 1,

0, x > 1.

ODES AND PDES 53

8.2. General solution of quasilinear 1st order PDE:. In ODEs, an IVP is often
solved by finding a general solution that depends on an arbitrary constant and then using
the initial condition to evaluate the constant. For 1st order quasilinear PDE, a similar
process may be achieved by the method of Lagrange.
Definition 8.7 (General solution). F (φ, ψ) = 0, where φ = φ(x, y, z), ψ = ψ(x, y, z) and
F is an arbitrary smooth function, is called a general solution of f (x, y, z, p, q) = 0 if z, p
and q as determined by the relation F (φ, ψ) = 0 satisfies the PDE f (x, y, z, p, q) = 0.
Theorem 8.2. Suppose there exist two functions φ and ψ such that they are constant along
the characteristic equations of the quasilinear PDE aux + buy = c. Then F (φ, ψ) = 0 is a
general solution of the PDE, where F is such that Fφ2 + Fψ2 6= 0.
Remark 8.2. Since F should satisfy only condition, Fφ2 + Fψ2 6= 0, one may choose F of
the form:
F (φ, ψ) = φ + g(ψ),
where g is a smooth arbitrary function.
Example 8.8. Find a general solution of uux + yuy = x.
Solution: The characteristic equation in the non-parametric form can be written as
dx dy dz
= = (8.7)
z y x
From (8.7), we have xdx−zdz = 0 i.e., d(x2 −z 2 ) = 0. Therefore, take φ(x, y, z) = x2 −z 2 .
Then φ(x, y, z) is constant along (8.7). Now by using (8.7), we see that
y
xdy − ydx = ydz − zdy =⇒ (x + z)dy − yd(x + z) = 0 =⇒ d( ) = 0.
x+z
y
Therefore, take φ(x, y, z) = x+z . Then ψ(x, y, z) is constant along the characteristic
equations. Thus, the general solution is F (φ, ψ) = φ + g(ψ) = 0, i.e.,
y
u2 = x2 + g( ).
x+u
Remark 8.3. For nonlinear equations, the term general solution need not mean that all
solutions are of this form. This phenomenon should be familiar from ODEs. For example,
√ 2
the general solution of ux + uy = u is given by u(x, y) = (x+f (x−y))
4
for arbitrary smooth
function f . But the trivial solution u ≡ 0 is not covered by the general solution.
8.3. Nonlinear equation: A general nonlinear 1st order PDE in x, y takes of the form
F (x, y, u, ux , uy ) = 0. Let p = ux and q = uy . Suppose F has a quasilinear form
F ≡ a(x, y, z)p + b(x, y, z)q − c(x, y, z) = 0.
Then the characteristic equations are
dx dy dz
Fp = a = , Fq = b = , pFp + qFq = ap + bq = c = .
dt dt dt
Taking this as motivation, we write three equations , for general nonlinear 1st order PDE
dx dy dz
= Fp , = Fq , = pFp + qFq .
dt dt dt
54 A. K. MAJEE

We need equations satisfies by p and q as well. Observe that


dp d dx dy
= ux = uxx + uxy = F p px + F q qx
dt dt dt dt
We do not want px and qx . To do so, we differentiate F = 0, with respect to x, we get
Fx + Fz Zx + Fp px + Fq qx = 0 =⇒ Fp px + Fq qx = −Fx − Fz zx = −Fx − Fz p .
dp dq
Thus, dt
= −Fx − Fz p. Similarly, we have dt
= −Fy − Fz q. Thus, the five equations
dx dy dz dp dq
= Fp , = Fq , = pFp + qFq , = −Fx − Fz p , = −Fy − Fz q
dt dt dt dt dt
form the characteristic strip. The initial parametrization of the given initial curve Γ gives
the initial conditions for x, y and z. To solve the above system of ODEs, we need to find
initial values for p and q. Observe that, on Γ, h(s) = u(f (s), g(s)). Thus, if p0 (s) and
q0 (s) be the initial values for p and q resp. then it should satisfy the strip condition
p0 (s)f 0 (s) + q0 (s)g 0 (s) = h0 (s) . (8.8)
and the admissible condition, i.e., F (f (s), g(s), h(s), p0 (s), q0 (s)) = 0 on the initial curve.
So, p0 and q0 are such that
(
p0 (s)f 0 (s) + q0 (s)g 0 (s) = h0 (s)
(8.9)
F (f (s), g(s), h(s), p0 (s), q0 (s)) = 0 .
Observe that, in order to construct the integral surface S, we are interested in the support
of the strip, namely the curve (x(t), y(t), z(t)), but to find it, we need to find the functions
p(t) and q(t). Suppose Γ is non-characteristic, i.e.,
f 0 (s)Fp (f (s), g(s), h(s), p0 (s), q0 (s)) − g 0 (s)Fq (f (s), g(s), h(s), p0 (s), q0 (s)) 6= 0 .
As a consequence of the existence and uniqueness of solutions of ODEs, we have
Theorem 8.3. If Γ is non-characteristic for nonlinear problem F (x, y, z, p, q) = 0 and
functions p0 (s) and q0 (s) exist and satisfying (8.9), then there is an integral surface S
containing Γ (which is unique for the choice of p0 and q0 ).
Example 8.9. Solve the IVP
ux uy = u , x, y ∈ R; u(0, y) = y 2 .
Solution: The problem can be written as F (x, y, u, ux , uy ) = 0 where F (x, y, z, p, q) =
pq − z. Parametrization of initial curve is {(0, s, s2 ) : s ∈ R}. Initial functions p0 and q0
must satisfy the conditions (8.9), i.e., q0 (s) = 2s and p0 (s) = 2s . The characteristic strip
satisfy
dx dy dz
= q , x(s, 0) = 0; = p , y(s, 0) = s; = 2pq = 2z , z(s, 0) = s2 ;
dt dt dt
dp s dq
= p , p(s, 0) = ; = q , q(s, 0) = 2s .
dt 2 dt
Solving the above equations, we have
s s
q(s, t) = 2set , p(s, t) = et , z(s, t) = s2 e2t , x(s, t) = 2s(et − 1) , y(s, t) = (1 + et ).
2 2
ODES AND PDES 55

Notice that set = 21 ( x2 + 2y). Hence the solution is given by


x
u(x, y) = z = (set )2 = (y + )2 .
4
Example 8.10. Solve the IVP
3
uy = u3x , x, y ∈ R; u(x, 0) = 2x 2 , x ∈ R .
3
Solution: Here F (x, y, z, p, q) = p3 −q. Parametrization of the initial curve is {(s, 0, 2s 2 ) :
3
s ∈ R}. Therefore, f (s) = s, g(s) = 0 and h(s) = 2s 2 . From the strip condition, we have
1 3
p0 (s) = 3s 2 . Again from the admissible condition, we get q0 (s) = 27s 2 . The characteristic
strip are
dx dy dz 3
= 3p2 , x(s, 0) = s; = −1 , x(s, 0) = 0; = 3p3 − q , z(s, 0) = 2s 2 ,
dt dt dt
dp 1 dq 3
= 0 , p(s, 0) = 3s 2 ; = 0 , q(s, 0) = 27s 2 .
dt dt
3 1
Therefore q(s, t) = 27s 2 , p(s, t) = 3s 2 , y(s, t) = −t, x(s, t) = 27st + s, and z(s, t) =
3 3 x
2s 2 + 54s 2 t. Note that from x(s, t) and y(s, t), we have s = 1−27y . Thus, the solution
u(x, y) is given by
3
3 x2 3 1
u(x, y) = z = 2s (1 − 27y) = 2
2
3 (1 − 27y) = 2x 2 (1 − 27y)− 2 .
(1 − 27y) 2

8.4. Complete integral and general solutions: We have considered general solution
solution for quasilinear problem. Do such general solutions exist for fully nonlinear equa-
tions? The answer is yes but the process is more complicated than the quasilinear case.
Let us first consider so called complete integrals. Let A ⊂ R2 be an open set which is the
parameter set. For any C 2 -function u, we denote
 
2 ua1 uxa1 uya1
(Da u, Dxa u) = .
ua2 uxa2 uya2
Definition 8.8 (Complete integral). A C 2 - function u(x, a) is said to be a complete
integral in U × A of the equation F (x, y, u, ux , uy ) = 0 in U if u(x, a) solves the PDE
2
F (x, y, u, ux , uy ) = 0 and rank of (Da u, Dxa u) is equal to 2.
Example 8.11. Find a complete integral of u_x u_y = u.
Solution: From the given equation, we have F(x, y, z, p, q) = pq − z. The characteristic
equations are
$$\frac{dx}{dt} = q, \quad \frac{dy}{dt} = p, \quad \frac{dz}{dt} = 2pq = 2z, \quad \frac{dp}{dt} = p, \quad \frac{dq}{dt} = q.$$
From the last two equations, we have p = c_1e^t and q = c_2e^t. Thus, from the third equation,
we have z = c_1c_2e^{2t} + c_3. From the first equation, we have x = c_2e^t + a and from
the second equation, we have y = c_1e^t + b. Thus, (x − a)(y − b) = c_1c_2e^{2t}, and hence
u(x, y, a, b) = z = (x − a)(y − b) + c_3. So, u(x, y, a, b) will be a solution if c_3 = 0. Then
we get u(x, y, a, b) = (x − a)(y − b). It is easy to check that
$$(D_a u, D^2_{xa} u) = \begin{pmatrix} b - y & 0 & -1 \\ a - x & -1 & 0 \end{pmatrix},$$
whose rank is 2. Therefore, u(x, y, a, b) = (x − a)(y − b) is a complete integral.
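Both requirements of Definition 8.8 can be verified mechanically. The following SymPy fragment (added for illustration) checks the PDE and the rank condition:

    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    u = (x - a)*(y - b)

    # u solves u_x*u_y = u for every choice of the parameters (a, b)
    print(sp.simplify(sp.diff(u, x)*sp.diff(u, y) - u))  # 0

    # the 2x3 matrix (D_a u, D^2_{xa} u) and its rank
    M = sp.Matrix([[sp.diff(u, a), sp.diff(u, x, a), sp.diff(u, y, a)],
                   [sp.diff(u, b), sp.diff(u, x, b), sp.diff(u, y, b)]])
    print(M, M.rank())  # rank 2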
Example 8.12. Find a complete integral of u_y + H(u_x) = 0.
Solution: The equation can be written as F(x, y, z, p, q) := q + H(p) = 0. The characteristic
equations are
$$\frac{dp}{dt} = 0, \quad \frac{dq}{dt} = 0, \quad \frac{dy}{dt} = 1, \quad \frac{dx}{dt} = F_p, \quad \frac{dz}{dt} = p\frac{dx}{dt} + q\frac{dy}{dt}.$$
Now, dp/dt = 0 gives p = a, and using this, we obtain from the equation that q = −H(a). Again,
from dy/dt = 1, we have y = t, and hence the solution is given by z = u(x, y, a, b) =
ax − H(a)y + b. Note that
$$(D_a u, D^2_{xa} u) = \begin{pmatrix} x - H'(a)y & 1 & -H'(a) \\ 1 & 0 & 0 \end{pmatrix},$$
whose rank is 2. Therefore, u(x, y, a, b) = ax − H(a)y + b is a complete integral of the
given Hamilton-Jacobi equation.
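Since H enters only through the parameter a, the verification goes through even for an unspecified Hamiltonian. Here is a short SymPy check, with H left as an arbitrary symbolic function (an added illustration):

    import sympy as sp

    x, y, a, b = sp.symbols('x y a b')
    H = sp.Function('H')          # arbitrary (smooth) Hamiltonian
    u = a*x - H(a)*y + b

    # u_y + H(u_x) = -H(a) + H(a) = 0, since u_x = a
    print(sp.simplify(sp.diff(u, y) + H(sp.diff(u, x))))  # 0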
Remark 8.4. In general, a complete integral is not unique. Moreover, not all solutions
can be recovered from a complete integral.
Next we study how to build more complicated solutions for nonlinear 1st order PDEs.
We construct these new solutions as envelopes of complete integrals.
Definition 8.9 (Envelope of a family). Suppose u(x, y, a_1, a_2) is a C¹ function on U × A,
where U is an open subset of R² and A is the parameter set. If the equations
$$\frac{\partial u}{\partial a_i}(x, y, a_1, a_2) = 0, \quad (x, y) \in U, \ (a_1, a_2) \in A \quad (i = 1, 2)$$
can be solved for the parameters, with solution a_1 = φ(x, y), a_2 = ψ(x, y), i.e.,
$$\frac{\partial u}{\partial a_i}(x, y, \phi(x, y), \psi(x, y)) = 0, \quad (x, y) \in U \quad (i = 1, 2),$$
then we call the function v(x, y) = u(x, y, φ(x, y), ψ(x, y)) the envelope of the functions
{u(·, a)}_{a∈A}.
Theorem 8.4. Suppose that for each (a_1, a_2) ∈ A, u = u(·, a_1, a_2) solves the PDE
F(x, y, u, u_x, u_y) = 0.
Assume that the envelope v defined as above exists and is a C¹ function. Then v(x, y)
also solves F(x, y, u, u_x, u_y) = 0.
Proof. v(x, y) is an envelope of the family {u(·, a)}_{a∈A} such that
$$v(x, y) = u(x, y, \phi(x, y), \psi(x, y)).$$
In view of the assumption, we see that
$$F(x, y, u(x, y, \phi, \psi), u_x(x, y, \phi, \psi), u_y(x, y, \phi, \psi)) = 0, \quad \text{where } \phi = \phi(x, y), \ \psi = \psi(x, y).$$
Suppose v is C¹. Then
$$v_x(x, y) = u_x(x, y, \phi, \psi) + \frac{\partial u}{\partial a_1}(x, y, \phi, \psi)\phi_x(x, y) + \frac{\partial u}{\partial a_2}(x, y, \phi, \psi)\psi_x(x, y) = u_x(x, y, \phi, \psi),$$
since ∂u/∂a_i(x, y, φ(x, y), ψ(x, y)) = 0.
Similarly, v_y(x, y) = u_y(x, y, φ(x, y), ψ(x, y)). Thus, F(x, y, v(x, y), v_x(x, y), v_y(x, y)) = 0.
This completes the proof. □
Definition 8.10 (Singular solution). The solution v described in Theorem 8.4 is called a
singular solution of the nonlinear 1st order PDE F(x, y, u, u_x, u_y) = 0.
Example 8.13. Find the singular solution of u_x² + u_y² = 1 + 2u.
Solution: To find a singular solution, we first need to find a complete integral of the given
PDE. Here F(x, y, z, p, q) = p² + q² − 1 − 2z. The characteristic equations are
$$\frac{dx}{dt} = 2p, \quad \frac{dy}{dt} = 2q, \quad \frac{dp}{dt} = 2p, \quad \frac{dq}{dt} = 2q.$$
Solving these, we have p = c_1e^{2t}, q = c_2e^{2t}, and hence p/q = a, say. Since p² + q² − 1 − 2z = 0, we
have
$$p = \pm a\sqrt{\frac{1 + 2z}{1 + a^2}}, \qquad q = \pm\sqrt{\frac{1 + 2z}{1 + a^2}}.$$
Now from the strip condition,
$$\frac{dz}{dt} = p\frac{dx}{dt} + q\frac{dy}{dt} = \pm\sqrt{\frac{1 + 2z}{1 + a^2}}\Big(a\frac{dx}{dt} + \frac{dy}{dt}\Big).$$
Integrating, we have $\sqrt{1 + 2z} = \pm\frac{ax + y}{\sqrt{1 + a^2}} + b$, and hence
$$u(x, y, a, b) = \frac{1}{2}\Big(\frac{ax + y}{\sqrt{1 + a^2}} + b\Big)^2 - \frac{1}{2}.$$
One can check that the rank of (D_a u, D²_{xa} u) is 2. Hence the above u(x, y, a, b) is
a complete integral. Now u_a = 0 and u_b = 0 give $\frac{ax + y}{\sqrt{1 + a^2}} + b = 0$. Thus, v(x, y) = −1/2 is a
singular solution.
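Both claims, namely that u(x, y, a, b) is a complete integral and that the envelope v = −1/2 still solves the PDE, can be checked with SymPy (added verification, not part of the original solution):

    import sympy as sp

    x, y, a, b = sp.symbols('x y a b', real=True)
    u = sp.Rational(1, 2)*((a*x + y)/sp.sqrt(1 + a**2) + b)**2 - sp.Rational(1, 2)

    # the complete integral satisfies u_x^2 + u_y^2 - 1 - 2u = 0 for all (a, b)
    print(sp.simplify(sp.diff(u, x)**2 + sp.diff(u, y)**2 - 1 - 2*u))  # 0

    # the singular solution v = -1/2: both derivatives vanish and 1 + 2v = 0
    v = -sp.Rational(1, 2)
    print(sp.diff(v, x)**2 + sp.diff(v, y)**2 - 1 - 2*v)  # 0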
To generate more solutions from the complete integrals, we vary the above construction.
Choose an open set Ã ⊂ R and a C¹ function h : Ã → R so that the graph (ã, h(ã)) lies
within A ⊂ R².
Definition 8.11 (General integral). The general integral (depending on h) is the envelope
ṽ(x, y) of the functions {u(·, ã)}_{ã∈Ã}, provided this envelope exists and is C¹, where
u(x, y, ã) = u(x, y, ã, h(ã)).
Example 8.14. Find a general integral of u_x u_y = u.
Solution: We have shown that a complete integral of the above PDE is given by
u(x, y, a, b) = (x − a)(y − b).
Let h : R → R be the function h(a) = a. Then u(x, y, a) = (x − a)(y − a), and
hence u_a = 0 gives a = (x + y)/2. Therefore, the general integral is
$$v(x, y) = -\Big(\frac{x - y}{2}\Big)^2.$$
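A one-line SymPy check (added here) confirms that this envelope is again a solution of u_x u_y = u:

    import sympy as sp

    x, y = sp.symbols('x y')
    v = -((x - y)/2)**2  # general integral obtained from h(a) = a

    print(sp.simplify(sp.diff(v, x)*sp.diff(v, y) - v))  # 0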

Example 8.15. Find a general integral of u_t + u_x² = 0.
Solution: The given problem is a Hamilton-Jacobi equation with H(p) = p². Hence a complete
integral is given by u(x, t, a, b) = ax − ta² + b. Let h : R → R be the function
h(a) = a². Then u(x, t, a) = ax − ta² + a², and hence u_a = 0 gives a = x/(2(t − 1)). Therefore,
the general integral is
$$v(x, t) = \frac{x^2}{4(t - 1)}.$$
9. Second order PDEs in two independent variables


A general second order quasilinear PDE in two independent variables x, y takes the
form
$$a u_{xx} + 2b u_{xy} + c u_{yy} = d, \tag{9.1}$$
where a, b, c and d are functions of x, y, u, u_x and u_y.
Cauchy Problem: Find u(x, y) satisfying (9.1) with given (compatible) values of u, u_x
and u_y on a given curve γ in the xy-plane. If γ has a parametrization (f(s), g(s)), then
we prescribe, on γ, the Cauchy data
$$u = h(s), \quad u_x = \phi(s), \quad u_y = \psi(s). \tag{9.2}$$
In general, the Cauchy data along γ is $u\big|_\gamma = h$, $\frac{\partial u}{\partial \eta}\big|_\gamma = h_1$, where η denotes a unit normal
vector along γ, and $\frac{\partial u}{\partial \eta} = \nabla u \cdot \eta$.
Note that the values of any function v(x, y) and its first partial derivatives v_x and v_y
along the curve γ are connected by the compatibility condition (strip condition)
$$\frac{dv}{ds} = v_x f'(s) + v_y g'(s),$$
which follows by differentiating v(f(s), g(s)) with respect to s. Thus, we have
$$h'(s) = \phi(s)f'(s) + \psi(s)g'(s).$$
Compatibility conditions also hold for the higher partial derivatives of any function on γ.
Thus, taking v = u_x and v = u_y, we find that
$$\frac{du_x}{ds} = u_{xx}f'(s) + u_{xy}g'(s), \qquad \frac{du_y}{ds} = u_{xy}f'(s) + u_{yy}g'(s).$$
Therefore, if u(x, y) is a solution of (9.1) and (9.2), then
$$a\, u_{xx} + 2b\, u_{xy} + c\, u_{yy} = d$$
$$f'\, u_{xx} + g'\, u_{xy} + 0 \cdot u_{yy} = \phi'$$
$$0 \cdot u_{xx} + f'\, u_{xy} + g'\, u_{yy} = \psi'.$$
This determines u_{xx}, u_{xy} and u_{yy} uniquely if
$$\Delta = \begin{vmatrix} f' & g' & 0 \\ 0 & f' & g' \\ a & 2b & c \end{vmatrix} \neq 0, \quad \text{i.e.,} \quad a{g'}^2 - 2bf'g' + c{f'}^2 \neq 0.$$
Definition 9.1. The initial curve γ is called a characteristic curve (with respect to
the differential equation and data) if Δ = 0. It is called non-characteristic if Δ ≠ 0 along
γ.
Along a non-characteristic curve, the Cauchy data uniquely determine the second
derivatives of u on γ. Moreover, we can obtain higher derivatives of u along γ. The
Cauchy problem with Cauchy data prescribed on a characteristic curve generally has no
solution.
It is useful to express the characteristic condition in more geometrical and algebraic
terms. The principal symbol σ(ξ), associated to the principal part of (9.1), is defined by
$$\sigma(\xi) = \sigma(x, y; \xi) = a(x, y)\xi_1^2 + 2b(x, y)\xi_1\xi_2 + c(x, y)\xi_2^2, \qquad \xi = (\xi_1, \xi_2).$$
Since γ has a parametrization (x = f(s), y = g(s)), we see that dx/ds = f'(s) and dy/ds = g'(s),
and hence the vector ξ = (g', −f') is normal to γ. Therefore, the curve γ is characteristic
at (x, y) if and only if the principal symbol vanishes on its normal vector ξ.
For further investigation, let us remove the parameter s by writing the characteristic
condition as
$$a\,dy^2 - 2b\,dx\,dy + c\,dx^2 = 0.$$
We can solve the above equation for dy/dx in the form
$$\frac{dy}{dx} = \frac{b \pm \sqrt{b^2 - ac}}{a}. \tag{9.3}$$
Note that (9.3) is an ordinary differential equation for γ provided a, b and c are known
functions of x and y, i.e., the equation (9.1) is principally linear.
Definition 9.2. We say that the quasilinear PDE (9.1) is
i) Elliptic if ac − b² > 0 (there is no real characteristic curve).
ii) Hyperbolic if ac − b² < 0. In this case, there are two families of characteristic
curves.
iii) Parabolic if ac = b². In this case, only one characteristic curve is possible.
Remark 9.1. In the nonlinear case, the type (elliptic, parabolic, hyperbolic) is not determined
by the differential equation alone but can depend on the individual solution. The type may
also change with the point of the plane.
Example 9.1. Consider the one-dimensional heat equation u_{xx} − u_y = 0. The equation
is everywhere parabolic as ac = b². The characteristic curves are given by y = c (solving
the equation dy/dx = 0).
Example 9.2. Consider the PDE u_{xx} − u_{yy} = 0. Here, b = 0, a = 1, and c = −1, so
b² > ac. Thus, the equation is of hyperbolic type in the xy-plane. The characteristic
equation is dy/dx = ±1. Hence the characteristic curves are given by y = ±x + c. The above
equation is known as the one-dimensional wave equation.
Example 9.3. Consider now the 2nd order PDE u_{xx} + u_{yy} = 0. Here, a = 1 = c and
b = 0. The equation is everywhere elliptic; there are no real characteristics. This equation is
called the Laplace equation.
Example 9.4 (Tricomi equation). Consider the linear PDE u_{yy} − yu_{xx} = 0. Here,
b² − ac = y.
a) The equation is of hyperbolic type in the upper half plane (i.e., y > 0). The
characteristic equation is given by $\frac{dy}{dx} = \pm\frac{1}{\sqrt{y}}$, and hence the equations of the characteristic
curves are $3x \pm 2y^{3/2} = c$.
b) It is parabolic on the x-axis. The characteristic curve is given by y = c.
c) It is of elliptic type in the lower half plane (i.e., y < 0). There is no real charac-
teristic.
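The classification in Examples 9.1-9.4 is purely a sign computation on ac − b², so it is easy to automate. The helper below is an illustrative sketch; the function name and interface are ours, not from the notes:

    import sympy as sp

    x, y = sp.symbols('x y')

    def discriminant(a, b, c):
        """Return ac - b**2 for a*u_xx + 2*b*u_xy + c*u_yy = d.
        Positive: elliptic; negative: hyperbolic; zero: parabolic."""
        return sp.simplify(a*c - b**2)

    print(discriminant(1, 0, 0))    # heat equation u_xx - u_y = 0: 0, parabolic
    print(discriminant(1, 0, -1))   # wave equation: -1 < 0, hyperbolic
    print(discriminant(1, 0, 1))    # Laplace equation: 1 > 0, elliptic
    print(discriminant(-y, 0, 1))   # Tricomi equation: -y, sign depends on y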
Remark 9.2. Consider a general 2nd order PDE in two independent variables x, y of the form
F(x, y, u, u_x, u_y, u_{xx}, u_{xy}, u_{yy}) = 0. Let
$$a = \frac{\partial F}{\partial u_{xx}}, \qquad b = \frac{1}{2}\frac{\partial F}{\partial u_{xy}}, \qquad c = \frac{\partial F}{\partial u_{yy}}.$$
Then the PDE F(x, y, u, u_x, u_y, u_{xx}, u_{xy}, u_{yy}) = 0 is hyperbolic, elliptic, or parabolic, if
$$ac - b^2 < 0, \qquad ac - b^2 > 0, \qquad ac - b^2 = 0,$$
respectively.
Example 9.5. Consider the Monge-Ampère equation
$$u_{xx}u_{yy} - u_{xy}^2 = f(x).$$
Here, a = u_{yy}, b = −u_{xy} and c = u_{xx}. Thus,
i) equation is elliptic for a solution u exactly when f (x) > 0.
ii) equation is hyperbolic for a solution u exactly when f (x) < 0.
iii) equation is parabolic for a solution u exactly when f (x) = 0.
9.1. Canonical form of 2nd order principally linear PDE. Consider a 2nd order
principally linear PDE
$$a(x, y)u_{xx} + 2b(x, y)u_{xy} + c(x, y)u_{yy} = d(x, y, u_x, u_y). \tag{9.4}$$
We use a change of coordinates from (x, y) to (ξ, η) so that equation (9.4) is transformed
into an equation whose principal part takes the form of the wave, heat, or Laplace equation.
The canonical forms may sometimes be used to find the general solution.
Theorem 9.1 (For hyperbolic equation). Let the equation (9.4) be hyperbolic in a region
Ω of the xy-plane. Let (x_0, y_0) ∈ Ω. Then there exists a change of coordinates (x, y) ↦
(ξ, η) in a nbd. of (x_0, y_0) such that (9.4) reduces to a 2nd order hyperbolic PDE of the
form
$$u_{\xi\eta} = \tilde{D}(\xi, \eta, u, u_\xi, u_\eta). \tag{9.5}$$
ξ resp. η is constant along the characteristic curves determined by the ODEs
$$\frac{dy}{dx} = \frac{b(x, y) - \sqrt{b^2 - ac}}{a(x, y)}, \quad \text{resp.} \quad \frac{dy}{dx} = \frac{b(x, y) + \sqrt{b^2 - ac}}{a(x, y)}.$$
Remark 9.3. Making the further change of variables x̃ = ξ + η and ỹ = ξ − η, the equation
(9.5) is transformed into the PDE
$$u_{\tilde{y}\tilde{y}} - u_{\tilde{x}\tilde{x}} = \tilde{D}(\tilde{x}, \tilde{y}, u, u_{\tilde{x}}, u_{\tilde{y}}).$$
Example 9.6. Find the canonical form of the PDE: x²u_{xx} − 2xyu_{xy} − 3y²u_{yy} + u_y = 0.
Solution: In this case, a = x², b = −xy and c = −3y². Thus, b² − ac = 4x²y².
Therefore, the equation is hyperbolic at every point (x, y) such that xy ≠ 0. At the
points on the coordinate axes, the equation is of parabolic type. Let us consider the case
x > 0, y > 0. Then the equation is of hyperbolic type there. The equation of the characteristic
curves is
$$\frac{dy}{dx} = \frac{-y \pm 2y}{x}.$$
Thus, the solutions are x^{-1}y = c and x³y = c. Define
ξ(x, y) = x^{-1}y and η(x, y) = x³y. Then we have
$$-16x^2y^2u_{\xi\eta} + (4y + 1)x^{-1}u_\xi + x^3u_\eta = 0.$$
The above equation can be written in the variables ξ and η completely. We see that x = (η/ξ)^{1/4}
and y = (ξ³η)^{1/4}. Substituting the values of x and y, we have
$$u_{\xi\eta} - \frac{4(\xi^3\eta)^{1/4} + 1}{16(\xi^3\eta^5)^{1/4}}\, u_\xi - \frac{1}{16(\xi^7\eta)^{1/4}}\, u_\eta = 0.$$
This is the required canonical form.
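The two families of characteristics can be reproduced with SymPy's ODE solver; this added check returns the curves y = C₁x and y = C₁x^{-3}, i.e., x^{-1}y = const and x³y = const:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    y = sp.Function('y')

    # dy/dx = (b ± sqrt(b^2 - ac))/a = (-y ± 2y)/x for a = x^2, b = -x*y, c = -3*y^2
    print(sp.dsolve(y(x).diff(x) - y(x)/x, y(x)))     # y(x) = C1*x
    print(sp.dsolve(y(x).diff(x) + 3*y(x)/x, y(x)))   # y(x) = C1/x**3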
Example 9.7. Find the general solution of the 2nd order PDE: xu_{xx} + 2x²u_{xy} = u_x − 1.
Solution: Here a = x, b = x² and c = 0. So, b² − ac = x⁴ > 0 for x ≠ 0. Hence the
equation is hyperbolic provided x ≠ 0. The characteristic curves are found by solving
$$\frac{dy}{dx} = \frac{x^2 \pm \sqrt{x^4}}{x} = \begin{cases} 2x, \\ 0. \end{cases}$$
Hence we have y = x² + c and y = c. Therefore, ξ(x, y) = x² − y and η(x, y) = y. Then the
equation reduces to 4x³u_{ξη} = −1, and hence
$$u_{\xi\eta} = -\frac{1}{4}(\xi + \eta)^{-3/2}.$$
This is the desired canonical form. Now, integrating with respect to η, we have $u_\xi = \frac{1}{2}(\xi + \eta)^{-1/2} + f(\xi)$. Integrating
again with respect to ξ, we obtain
$$u = (\xi + \eta)^{1/2} + F(\xi) + G(\eta).$$
Reverting to the variables x and y, we obtain our general solution as
$$u(x, y) = x + F(x^2 - y) + G(y),$$
where F and G are arbitrary functions.
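Since F and G are arbitrary, the general solution can be verified with SymPy's undetermined functions (an added check, not part of the original derivation):

    import sympy as sp

    x, y = sp.symbols('x y')
    F, G = sp.Function('F'), sp.Function('G')
    u = x + F(x**2 - y) + G(y)

    residual = x*sp.diff(u, x, 2) + 2*x**2*sp.diff(u, x, y) - (sp.diff(u, x) - 1)
    print(sp.simplify(residual))  # 0 for arbitrary F and G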
Theorem 9.2 (For parabolic equation). Let the equation (9.4) be parabolic in a region Ω
of the xy-plane. Let (x_0, y_0) ∈ Ω. Then there exists a change of coordinates (x, y) ↦ (ξ, η)
in a nbd. of (x_0, y_0) such that (9.4) reduces to a 2nd order parabolic PDE of the form
$$u_{\eta\eta} = \tilde{D}(\xi, \eta, u, u_\xi, u_\eta). \tag{9.6}$$
ξ is constant along the characteristic curve determined by the ODE $\frac{dy}{dx} = \frac{b(x,y)}{a(x,y)}$, and η is
chosen in such a way that (ξ, η) defines a new coordinate system near (x_0, y_0).
Example 9.8. Find the canonical form of the PDE
$$x^2u_{xx} - 2xyu_{xy} + y^2u_{yy} = 0.$$
Solution: Observe that the PDE is of parabolic type at every point (x, y) ∈ R². Note that
at (0, 0) the equation reduces to 0 = 0, and thus we can determine a canonical form in
any domain NOT containing the origin. In order to find the new coordinate system (ξ, η),
we need to solve the ODE dy/dx = −y/x to find ξ, and then choose η so that (ξ, η) represents
a coordinate system. Note here that ξ(x, y) = xy. Take η(x, y) = x, so that the Jacobian
J = −x ≠ 0. For this coordinate system, we have
$$u_x = yu_\xi + u_\eta; \qquad u_{xx} = y^2u_{\xi\xi} + 2yu_{\xi\eta} + u_{\eta\eta};$$
$$u_y = xu_\xi; \qquad u_{xy} = xyu_{\xi\xi} + xu_{\xi\eta} + u_\xi; \qquad u_{yy} = x^2u_{\xi\xi}.$$
Thus, the PDE reduces to
$$x^2u_{\eta\eta} - 2xyu_\xi = 0, \quad \text{i.e.,} \quad u_{\eta\eta} = \frac{2\xi}{\eta^2}u_\xi.$$
Theorem 9.3 (For elliptic equation). Let the equation (9.4) be elliptic in a region Ω of
the xy-plane. Let (x_0, y_0) ∈ Ω. Then there exists a change of coordinates (x, y) ↦ (ξ, η)
in a nbd. of (x_0, y_0) such that (9.4) reduces to a 2nd order elliptic PDE of the form
$$u_{\eta\eta} + u_{\xi\xi} = \tilde{D}(\xi, \eta, u, u_\xi, u_\eta). \tag{9.7}$$
In order to find the new coordinates (ξ, η), one needs to solve the characteristic ODE
$$\frac{dy}{dx} = \frac{b(x, y) - \sqrt{b^2 - ac}}{a(x, y)}$$
in the complex plane. Let Φ be constant along the characteristics. Then
(ξ, η) is given by ξ(x, y) = Re Φ(x, y) and η(x, y) = Im Φ(x, y).
Example 9.9. Find the canonical form of the PDE u_{xx} + x²u_{yy} = 0.
Solution: Observe that the equation is of elliptic type at every point except on the y-axis.
Let us solve the characteristic equation dy/dx = ±ix in the complex plane. Note that
$\Phi = y + i\frac{x^2}{2}$ is constant along the characteristics. Set ξ(x, y) = Re Φ(x, y) = y and
η(x, y) = Im Φ(x, y) = x²/2. For this coordinate system, we have
$$u_{xx} = u_\eta + x^2u_{\eta\eta}, \qquad u_{yy} = u_{\xi\xi}.$$
Therefore, the required canonical form is
$$u_{\xi\xi} + u_{\eta\eta} + \frac{1}{2\eta}u_\eta = 0.$$
Here the ξ = const. lines represent a family of straight lines parallel to the x-axis, and the
η = const. lines are straight lines parallel to the y-axis (x = const.).
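As in the parabolic example, the reduction can be spot-checked on a concrete test function, say w(ξ, η) = sin(ξ)cos(η) (our own choice, for illustration only):

    import sympy as sp

    x, y, xi, eta = sp.symbols('x y xi eta', positive=True)
    W = sp.sin(xi)*sp.cos(eta)          # concrete test function w(xi, eta)
    u = W.subs({xi: y, eta: x**2/2})    # composed with xi = y, eta = x**2/2

    lhs = sp.diff(u, x, 2) + x**2*sp.diff(u, y, 2)
    # canonical form: u_xx + x^2*u_yy = x^2*(w_xixi + w_etaeta + w_eta/(2*eta))
    canon = sp.diff(W, xi, 2) + sp.diff(W, eta, 2) + sp.diff(W, eta)/(2*eta)
    rhs = x**2*canon.subs({xi: y, eta: x**2/2})
    print(sp.simplify(lhs - rhs))  # 0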
