Diff. Eqn (MTL102) Complete Notes
the pivot. The tangential component of the force acting on P is −mg sin(θ), while
the tangential component of the acceleration is l θ″. Thus, by Newton's law,
θ″(t) = −(g/l) sin(θ(t)).
Example 1.2. Find the general solution of x′(t) + 4t x(t) = 8t.
Here α(t) = 4t, and hence we have A(t) = 2t². Therefore, using the method of variation
of parameters, the general solution of the given ODE is given by
x(t) = e^{−2t²} ( C + ∫ 8t e^{2t²} dt ) = 2 + C e^{−2t²}.
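As a quick sanity check, the claimed general solution can be substituted back into the ODE symbolically. This is a sketch using sympy; the symbol names are our own.

```python
import sympy as sp

t, C = sp.symbols('t C')
x = 2 + C*sp.exp(-2*t**2)                            # claimed general solution
residual = sp.simplify(sp.diff(x, t) + 4*t*x - 8*t)  # substitute into x'(t) + 4t x(t) - 8t
print(residual)  # 0
```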
solutions approach the equilibrium solution y(x) = α/β as x → ∞, some from above the line
y = α/β and others from below.
Exercise 1.3. Find p such that the non-trivial solutions of
y′ = −(p + 1) x^p y²
tend to 0 as x → ∞.
6 A. K. MAJEE
Exact equations: Suppose that the first order equation y′ = f̄(x, y) is written in the
form
M(x, y) + N(x, y) y′ = 0, (1.7)
where M , N are real-valued functions defined for real x, y on some rectangle R.
Definition 1.4. We say that the equation (1.7) is exact in R if there exists a function F
having continuous first partial derivatives such that
∂F/∂x = M,  ∂F/∂y = N. (1.8)
Theorem 1.4. Suppose the equation (1.7) is exact in a rectangle R, and F is a real-valued
function such that ∂F/∂x = M, ∂F/∂y = N in R. Then every differentiable function φ
defined implicitly by a relation
F(x, y) = c (c = constant)
is a solution of (1.7), and every solution of (1.7) whose graph lies in R arises in this
way.
Proof. Under the assumptions of the theorem, equation (1.7) becomes
(∂F/∂x)(x, y) + (∂F/∂y)(x, y) y′ = 0.
If φ is any solution on some interval I, then
(∂F/∂x)(x, φ(x)) + (∂F/∂y)(x, φ(x)) φ′(x) = 0, ∀x ∈ I. (1.9)
If Φ(x) = F(x, φ(x)), then from the above equation, we see that Φ′(x) = 0, and hence
F(x, φ(x)) = c, where c is some constant. Thus the solution φ must be a function which is
given implicitly by the relation F (x, φ(x)) = c. Conversely, if φ is a differentiable function
on some interval I defined implicitly by the relation F (x, y) = c, then
F (x, φ(x)) = c , ∀x ∈ I .
Differentiating this relation, together with the properties ∂F/∂x = M, ∂F/∂y = N, yields that φ is a solution of
(1.7). This completes the proof.
Example 1.7. Consider the equation
x − (y⁴ − 1) y′(x) = 0.
Here M = x and N = 1 − y⁴. Define F(x, y) = (1/2)x² + y − (1/5)y⁵. Then the above equation is
exact. Hence the solution is given by
F(x, y) = c ⟹ 2y⁵ − 10y = 5x² + c.
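The exactness criterion and the potential F can be replayed symbolically; a sketch with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
M, N = x, 1 - y**4
F = x**2/2 + y - y**5/5
exactness = sp.simplify(sp.diff(M, y) - sp.diff(N, x))  # 0 iff the equation is exact
rx = sp.simplify(sp.diff(F, x) - M)                     # F_x - M
ry = sp.simplify(sp.diff(F, y) - N)                     # F_y - N
print(exactness, rx, ry)  # 0 0 0
```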
How do we recognize when an equation is exact? The following theorem gives a necessary
and sufficient condition.
Theorem 1.5. Let M, N be two real-valued functions which have continuous first partial
derivatives on some rectangle
R := {(x, y) ∈ ℝ² : |x − x0| ≤ a, |y − y0| ≤ b}.
Then the equation (1.7) is exact in R if and only if ∂M/∂y = ∂N/∂x in R.
ODES AND PDES 7
The integrating factor: Sometimes, if the equation (1.7) is NOT exact, one can find a
function u, nowhere zero, such that the equation
u(x, y)M(x, y) dx + u(x, y)N(x, y) dy = 0
is exact. Such a function is called an integrating factor. For example, y dx − x dy = 0
(x > 0, y > 0) is not exact, but multiplying the equation by u(x, y) = 1/y² makes it exact.
Note that all three functions
1/(xy),  1/x²,  1/y²
are integrating factors of the above ODE. Thus, integrating factors need not be unique.
Remark 1.1. In view of Theorem 1.5, we see that a function u on a rectangle R, having
continuous first partial derivatives, is an integrating factor of the equation (1.7) if and only
if
u (∂M/∂y − ∂N/∂x) = N ∂u/∂x − M ∂u/∂y. (1.14)
i) If u is an integrating factor which is a function of x only, then
p = (1/N)(∂M/∂y − ∂N/∂x)
is a continuous function of x alone, provided N(x, y) ≠ 0 in R.
ii) If u is an integrating factor which is a function of y only, then
q = (1/M)(∂N/∂x − ∂M/∂y)
is a continuous function of y alone, provided M(x, y) ≠ 0 in R.
Example 1.9. Find an integrating factor of
(2y 3 + 2) dx + 3xy 2 dy = 0 , x 6= 0 , y 6= 0
and solve the ODE.
Here M(x, y) = 2y³ + 2 and N(x, y) = 3xy². Note that the equation is not exact. Now
∂M/∂y − ∂N/∂x = 3y², and hence (1/N)(∂M/∂y − ∂N/∂x) = 1/x is a continuous function
of x alone. Thus the integrating factor should be a function of x only. Note that u(x) = x
satisfies the relation (1.14). After multiplication by the integrating factor, the equation becomes
M̃(x, y) dx + Ñ(x, y) dy = 0, where M̃(x, y) = 2xy³ + 2x, Ñ(x, y) = 3x²y².
To find F̃, we know that ∂F̃/∂x = M̃ = 2xy³ + 2x, and hence
F̃(x, y) = x²y³ + x² + f(y),
where f is independent of x. Again, ∂F̃/∂y = Ñ gives
3x²y² + f′(y) = 3x²y² ⟹ f′(y) = 0 ⟹ f(y) = c.
Thus, the general solution is given implicitly by
x²(y³ + 1) = c, c ∈ ℝ.
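The whole computation of Example 1.9 can be checked mechanically; a sketch with sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
M, N = 2*y**3 + 2, 3*x*y**2
not_exact = sp.simplify(sp.diff(M, y) - sp.diff(N, x))    # 3*y**2, nonzero: not exact
Mt, Nt = x*M, x*N                                         # multiply by u(x) = x
now_exact = sp.simplify(sp.diff(Mt, y) - sp.diff(Nt, x))  # 0: the new equation is exact
F = x**2*(y**3 + 1)                                       # the potential found above
rx = sp.simplify(sp.diff(F, x) - Mt)
ry = sp.simplify(sp.diff(F, y) - Nt)
print(not_exact, now_exact, rx, ry)  # 3*y**2 0 0 0
```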
Note that for i = 0, 1, …, m − 1, u⁽ⁱ⁾(t) = u⁽ⁱ⁾(t0) + ∫_{t0}^{t} u⁽ⁱ⁺¹⁾(s) ds = ∫_{t0}^{t} u⁽ⁱ⁺¹⁾(s) ds. Thus,
if i ≤ m − 2,
|u⁽ⁱ⁾(t)| = |∫_{t0}^{t} u⁽ⁱ⁺¹⁾(s) ds| ≤ ∫_{t0}^{t} |u⁽ⁱ⁺¹⁾(s)| ds ≤ ∫_{t0}^{t} g(s) ds.
Thus,
g(t) ≤ Σ_{i=0}^{m−2} |u⁽ⁱ⁾(t)| + |u⁽ᵐ⁻¹⁾(t)| ≤ (m − 1 + M) ∫_{t0}^{t} g(s) ds.
By Gronwall’s inequality, g(t) = 0 for all t ∈ [t0 , T ]. Since T > t0 is arbitrary, g(t) = 0
for all t > t0 .
Let Ĩ := t0 − I = {t0 − s : s ∈ I}. Define v(t) := u(t0 − t), t ∈ Ĩ. Then v(0) = u(t0)
and v⁽ⁱ⁾(t) = (−1)ⁱ u⁽ⁱ⁾(t0 − t). Since u satisfies (1.15), we have
(−1)ᵐ v⁽ᵐ⁾(t) + (−1)ᵐ⁻¹ a_{m−1}(t0 − t) v⁽ᵐ⁻¹⁾(t) + … + a0(t0 − t) v(t) = 0,
v(0) = v′(0) = … = v⁽ᵐ⁻¹⁾(0) = 0.
It follows from the previous case that v(t) = 0 for all t > 0 with t ∈ Ĩ. Hence u(t0 − t) = 0
for all t > 0 with t ∈ t0 − I. This implies that u(t) = 0 for all t ∈ I with t < t0. This
completes the proof.
where in the middle equation, we used the fact that f (t, φ(t)) is continuous on I. Hence
φ is a solution of (2.2). Conversely, suppose that φ is a solution to (2.2), i.e.,
φ(x) = y0 + ∫_{x0}^{x} f(s, φ(s)) ds, ∀x ∈ I.
Then φ(x0) = y0 and φ′(x) = f(x, φ(x)) for all x ∈ I. Thus φ is a solution to the initial
value problem (2.1).
We now want to solve the integral equation via approximation. Define Picard's approximations by
φ0(x) = y0,
φ_{k+1}(x) = y0 + ∫_{x0}^{x} f(s, φ_k(s)) ds (k = 0, 1, …). (2.3)
First we show that all the functions φk , k = 0, 1, 2, . . . exist on some interval.
Theorem 2.2. The approximate functions φ_k exist as continuous functions on
I := {x ∈ ℝ : |x − x0| ≤ α := min{a, b/M}},
and (x, φ_k(x)) ∈ R for all x ∈ I, where M > 0 is such that |f(x, y)| ≤ M for all (x, y) in
R. Indeed,
|φ_k(x) − y0| ≤ M |x − x0|, ∀x ∈ I. (2.4)
Note that for x ∈ I, |x − x0| ≤ b/M, and hence the points (x, φ_k(x)) are in R for all x ∈ I.
Proof. We will prove it by induction. Clearly φ0 exists on I and satisfies (2.4). Now
|φ1(x) − y0| = |∫_{x0}^{x} f(s, y0) ds| ≤ M |x − x0|
converges. Let us estimate the terms φ_i(x) − φ_{i−1}(x). Observe that, since f satisfies a
Lipschitz condition in R,
|φ2(x) − φ1(x)| = |∫_{x0}^{x} [f(t, φ1(t)) − f(t, φ0(t))] dt| ≤ K |∫_{x0}^{x} |φ1(t) − φ0(t)| dt|
≤ KM |∫_{x0}^{x} |t − x0| dt| ≤ KM |x − x0|²/2.
Claim:
|φ_i(x) − φ_{i−1}(x)| ≤ (M K^{i−1}/i!) |x − x0|^i = (M/K) (K^i |x − x0|^i)/i!,  i = 1, 2, … (2.5)
We shall prove (2.5) via induction. Note that (2.5) is true for i = 1 and i = 2. Assume
now that (2.5) holds for i = m. Let us assume that x ≥ x0 (similar proof for x ≤ x0 ). By
using Lipschitz condition, and the induction hypothesis, we have
|φ_{m+1}(x) − φ_m(x)| ≤ K |∫_{x0}^{x} |φ_m(t) − φ_{m−1}(t)| dt| ≤ K (M K^{m−1}/m!) |∫_{x0}^{x} |t − x0|^m dt|
= (M K^m/(m + 1)!) |x − x0|^{m+1}.
Hence (2.5) holds for i = 1, 2, …. It follows that the i-th term of the series
|φ0(x)| + Σ_{i=1}^{∞} |φ_i(x) − φ_{i−1}(x)|
is less than or equal to M/K times the i-th term of the power series for e^{K|x−x0|}. Hence
the series φ0(x) + Σ_{i=1}^{∞} [φ_i(x) − φ_{i−1}(x)] is convergent for all x ∈ I,
and therefore the sequence {φ_k} converges to a limit φ(x) as k → ∞.
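The construction is effective: the Picard iterates (2.3) can be computed symbolically. As an illustration not taken from the notes, for the IVP y′ = y, y(0) = 1 (whose solution is eˣ) the iterates are exactly the Taylor partial sums:

```python
import sympy as sp

x, s = sp.symbols('x s')
phi = sp.Integer(1)                      # phi_0(x) = y0 = 1
for _ in range(4):
    # phi_{k+1}(x) = y0 + integral_0^x f(s, phi_k(s)) ds, with f(x, y) = y
    phi = 1 + sp.integrate(phi.subs(x, s), (s, 0, x))
print(sp.expand(phi))  # x**4/24 + x**3/6 + x**2/2 + x + 1
```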
Properties of the limit function φ: We first show that φ is continuous on I. Indeed,
for any x, x̃ ∈ I, we have, by using the boundedness of f,
|φ_{k+1}(x) − φ_{k+1}(x̃)| = |∫_{x̃}^{x} f(t, φ_k(t)) dt| ≤ M |x − x̃|.
Next we estimate |φ_k(x) − y0|. Observe that, thanks to (2.5) (which holds in this
case),
|φ_k(x) − y0| = |Σ_{i=1}^{k} [φ_i(x) − φ_{i−1}(x)]| ≤ Σ_{i=1}^{k} |φ_i(x) − φ_{i−1}(x)| ≤ (M/K) Σ_{i=1}^{k} (K^i |x − x0|^i)/i!
≤ (M/K) Σ_{i=1}^{∞} (K^i a^i)/i! = (M/K)(e^{Ka} − 1) := b.
Taking the limit as k → ∞, we obtain
|φ(x) − y0| ≤ b, (|x − x0| ≤ a).
Note that f is continuous on R, where the rectangle R is given by
R := {(x, y) ∈ ℝ² : |x − x0| ≤ a, |y − y0| ≤ b},
and hence there exists a constant N > 0 such that |f(x, y)| ≤ N for (x, y) ∈ R. Let x, x̃
be two points in the interval |x − x0| ≤ a. Then
|φ_{k+1}(x) − φ_{k+1}(x̃)| = |∫_{x̃}^{x} f(t, φ_k(t)) dt| ≤ N |x − x̃|
⟹ |φ(x) − φ(x̃)| ≤ N |x − x̃|.
The rest of the proof is a repetition of the analogous parts of the proof of Theorem 2.3, with
α replaced by a everywhere.
Example 2.2. Consider the IVP y′ = y + λx² sin(y), y(0) = 1, where λ is a real
constant such that |λ| ≤ 1. Then the solution of the IVP exists for |x| ≤ 1.
Here f (x, y) = y + λx2 sin(y). Consider the strip S = {|x| ≤ 1, |y| < ∞}. Then f
is continuous on S and Lipschitz continuous on S as |∂y f (x, y)| ≤ 2 on S. Thus, by
Theorem 2.4, the solution of the given problem exists on the entire interval |x| ≤ 1.
In view of Theorem 2.4, we arrive at the following corollary.
Corollary 2.5. Let f be a real-valued continuous function on the plane |x| < ∞, |y| < ∞,
which satisfies a Lipschitz condition on each strip Sa defined by
Sa := {|x| ≤ a, |y| < ∞}, (a > 0).
Then every initial value problem y′ = f(x, y), y(x0) = y0 has a solution which exists for
all real x.
Let us check that f is Lipschitz continuous on the strip Sa . Indeed, for any (x, y1 ), (x, y2 ) ∈
Sa ,
|f(x, y1) − f(x, y2)| ≤ |h1(x)||p′(ξ1)|| cos(y1) − cos(y2)| + |h2(x)||q′(ξ2)|| sin(y1) − sin(y2)|
≤ 2 N_a C |y1 − y2|.
Thus, thanks to Corollary 2.5, every initial value problem for this equation has a solution
which exists for all real x.
Continuous dependence estimate: Suppose we have two IVPs
y 0 = f (x, y) , y(x0 ) = y1 , (2.7)
y 0 = g(x, y) , y(x0 ) = y2 , (2.8)
where f and g both are real-valued continuous function on the rectangle R, and (x0 , y1 ) , (x0 , y2 )
are points in R.
Theorem 2.6. Let f, g be continuous functions on R, and suppose f satisfies a Lipschitz
condition there with Lipschitz constant K. Let φ, ψ be solutions of (2.7), (2.8) respectively
on an interval I containing x0 , with graphs contained in R. Suppose that the following
inequalities are valid
|f (x, y) − g(x, y)| ≤ ε , (x, y) ∈ R , (2.9)
|y1 − y2 | ≤ δ , (2.10)
for some non-negative constants ε , δ. Then
|φ(x) − ψ(x)| ≤ δ e^{K|x−x0|} + (ε/K)(e^{K|x−x0|} − 1), ∀x ∈ I. (2.11)
Assume that x ≥ x0 . Then, in view of (2.9), (2.10), and the Lipschitz condition of f with
Lipschitz constant K, we obtain from the above expression
|φ(x) − ψ(x)| ≤ δ + K ∫_{x0}^{x} |φ(s) − ψ(s)| ds + ε(x − x0). (2.12)
Define E(x) = ∫_{x0}^{x} |φ(s) − ψ(s)| ds. Then E′(x) = |φ(x) − ψ(x)| and E(x0) = 0. Therefore,
(2.12) becomes
E 0 (x) − KE(x) ≤ δ + ε(x − x0 ) .
Multiplying this inequality by e−K(x−x0 ) , and then integrating from x0 to x, we have
E(x) e^{−K(x−x0)} ≤ δ ∫_{x0}^{x} e^{−K(t−x0)} dt + ε ∫_{x0}^{x} (t − x0) e^{−K(t−x0)} dt
= (δ/K)(1 − e^{−K(x−x0)}) + ε/K² − (ε/K²)(K(x − x0) + 1) e^{−K(x−x0)}.
Multiplying both sides of this inequality by e^{K(x−x0)}, we have
E(x) ≤ (δ/K)(e^{K(x−x0)} − 1) + (ε/K²)(e^{K(x−x0)} − K(x − x0) − 1).
We now use this estimate in (2.12) to arrive at the required result for x ≥ x0 . A similar
proof holds in case x ≤ x0 . This completes the proof.
As a consequence of Theorem 2.6, we have
i) Uniqueness Theorem: Let f be continuous and satisfy a Lipschitz condition
on R. If φ and ψ are two solutions of the IVP (2.1) on an interval I containing
x0, then φ(x) = ψ(x) for all x ∈ I.
ii) Let f be continuous and satisfy a Lipschitz condition on R, and let gk, k = 1, 2, …,
be continuous on R such that
|f (x, y) − gk (x, y)| ≤ εk , (x, y) ∈ R
with εk → 0 as k → ∞. Let yk → y0 as k → ∞. Let ψk be a solution to the IVP
y 0 = gk (x, y) , y(x0 ) = yk ,
and φ is a solution to the IVP (2.1) on some interval I containing x0 . Then
ψk (x) → φ(x) on I.
Remark 2.1. The Lipschitz condition on f on the rectangle R cannot simply be dropped
if we want uniqueness of the solution of the IVP (2.1). To see this, consider the IVP
y′ = 3y^{2/3}, y(0) = 0.
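This IVP is indeed non-unique: both the trivial solution and y = x³ satisfy it. A symbolic check on x > 0 (a sketch with sympy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
rhs = lambda y: 3*y**sp.Rational(2, 3)   # f(y) = 3 y^(2/3), continuous but not Lipschitz at 0
y_trivial = sp.Integer(0)
y_cubic = x**3                           # a second solution through (0, 0)
r1 = sp.simplify(sp.diff(y_trivial, x) - rhs(y_trivial))
r2 = sp.simplify(sp.diff(y_cubic, x) - rhs(y_cubic))
print(r1, r2)  # 0 0
```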
Writing f⃗(x, y⃗) = (f1(x, y⃗), …, fn(x, y⃗)), we have f⃗ : Ω → ℝⁿ, and the system of equations (3.1) can be written in the compact form
y⃗′ = f⃗(x, y⃗).
An equation of the n-th order, y⁽ⁿ⁾ = f(x, y, y′, …, y⁽ⁿ⁻¹⁾), may also be treated as a
system of the type (3.1). To see this, let y1 = y, y2 = y′, …, yn = y⁽ⁿ⁻¹⁾. Then from the
ODE y⁽ⁿ⁾ = f(x, y, y′, …, y⁽ⁿ⁻¹⁾), we have
y1′ = y2, y2′ = y3, …, y_{n−1}′ = yn, yn′ = f(x, y1, y2, …, yn).
A solution of the system on an interval I is a differentiable function φ⃗ such that
i) (x, φ⃗(x)) ∈ Ω,
ii) φ⃗′(x) = f⃗(x, φ⃗(x)) for all x ∈ I.
Theorem 3.1 (Local existence). Let f⃗ be a continuous vector-valued function defined on
R = {|x − x0| ≤ a, |y⃗ − y⃗0| ≤ b}, a, b > 0,
and suppose f⃗ satisfies a Lipschitz condition on R. Then the successive approximations {φ⃗_k}_{k=0}^{∞},
φ⃗0(x) = y⃗0,
φ⃗_{k+1}(x) = y⃗0 + ∫_{x0}^{x} f⃗(s, φ⃗_k(s)) ds, k = 0, 1, 2, …,
converge to a solution φ⃗ of the IVP
y⃗′ = f⃗(x, y⃗); y⃗(x0) = y⃗0 on I_con := {|x − x0| ≤ α := min{a, b/M}}, where M is a positive constant such that |f⃗(x, y⃗)| ≤ M
for all (x, y⃗) ∈ R. Moreover,
|φ⃗_k(x) − φ⃗(x)| ≤ (M/K) ((Kα)^{k+1}/(k + 1)!) e^{Kα}, ∀x ∈ I_con,
where K is a Lipschitz constant of f⃗ on R.
Example 3.2. Consider the problem
y1′ = y2, y1(0) = 0,
y2′ = −y1, y2(0) = 1.
This can be written in the compact form y⃗′ = f⃗(x, y⃗), y⃗(0) = y⃗0 = (0, 1), where f⃗(x, y⃗) =
(y2, −y1). Let us calculate the successive approximations φ⃗_k(x):
φ⃗0(x) = y⃗0 = (0, 1),
φ⃗1(x) = (0, 1) + ∫_0^x (1, 0) ds = (x, 1),
φ⃗2(x) = (0, 1) + ∫_0^x f⃗(s, φ⃗1(s)) ds = (0, 1) + ∫_0^x (1, −s) ds = (0, 1) + (x, −x²/2) = (x, 1 − x²/2),
φ⃗3(x) = (0, 1) + ∫_0^x (1 − s²/2, −s) ds = (x − x³/3!, 1 − x²/2),
φ⃗4(x) = (0, 1) + ∫_0^x (1 − s²/2, −s + s³/3!) ds = (x − x³/3!, 1 − x²/2 + x⁴/4!).
It is not difficult to show that all the φ⃗_k exist for all real x and φ⃗_k(x) → (sin(x), cos(x)).
Thus, the unique solution of the given IVP is φ⃗(x) = (sin(x), cos(x)).
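These iterates can be generated mechanically; a sketch with sympy (the variable names are ours):

```python
import sympy as sp

x, s = sp.symbols('x s')
phi = (sp.Integer(0), sp.Integer(1))      # phi_0 = y0 = (0, 1)
for _ in range(4):
    f1, f2 = phi[1], -phi[0]              # f(x, y) = (y2, -y1)
    phi = (sp.integrate(f1.subs(x, s), (s, 0, x)),
           1 + sp.integrate(f2.subs(x, s), (s, 0, x)))
print(phi)  # phi_4 = (x - x**3/6, 1 - x**2/2 + x**4/24), the partial sums of (sin x, cos x)
```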
Theorem 3.2 (Non-local existence). Let f⃗ be a continuous vector-valued function defined
on
S = {|x − x0| ≤ a, |y⃗| < ∞}, a > 0,
and satisfy there a Lipschitz condition. Then the successive approximations {φ⃗_k}_{k=0}^{∞} for
the IVP y⃗′ = f⃗(x, y⃗); y⃗(x0) = y⃗0 exist on |x − x0| ≤ a, and converge there to a solution φ⃗
of the IVP.
Corollary 3.3. Let f⃗ be a continuous vector-valued function defined on |x| < ∞, |y⃗| < ∞,
which satisfies a Lipschitz condition on each strip
Sa = {|x| ≤ a, |y⃗| < ∞}, a > 0.
Then every initial value problem y⃗′ = f⃗(x, y⃗); y⃗(x0) = y⃗0 has a solution which exists
for all x ∈ ℝ.
Example 3.3. Consider the system
y1′ = 3y1 + x y3, y2′ = y2 + x³ y3, y3′ = 2x y1 − y2 + eˣ y3.
This system of equations can be written in the compact form y⃗′ = f⃗(x, y⃗), where
y⃗ = (y1, y2, y3)ᵀ and f⃗(x, y⃗) = (3y1 + x y3, y2 + x³ y3, 2x y1 − y2 + eˣ y3)ᵀ.
Note that f⃗ is a continuous vector-valued function defined on |x| < ∞, |y⃗| < ∞. It
is Lipschitz continuous on the strip Sa = {|x| ≤ a, |y⃗| < ∞}, a > 0, since for
(x, y⃗), (x, ỹ⃗) ∈ Sa,
|f⃗(x, y⃗) − f⃗(x, ỹ⃗)| = |3(y1 − ỹ1) + x(y3 − ỹ3)| + |(y2 − ỹ2) + x³(y3 − ỹ3)|
+ |2x(y1 − ỹ1) − (y2 − ỹ2) + eˣ(y3 − ỹ3)|
≤ (3 + 2|x|)|y1 − ỹ1| + 2|y2 − ỹ2| + (|x| + eˣ + |x|³)|y3 − ỹ3|
≤ (5 + 3a + eᵃ + a³)|y⃗ − ỹ⃗|.
Therefore, every initial value problem for this system has a solution which exists for all
real x. Moreover, the solution is unique.
Example 3.4. For any Lipschitz continuous function f on ℝ, consider the IVP
y″(x) = f(y), y(0) = 0, y′(0) = 0.
Then the solution of the above IVP is even. Since f is Lipschitz continuous, by writing the
above IVP in vector form, one can show that the above problem has a solution y(x) defined
on the whole real line. Let z(x) = y(−x). Then z″(x) = y″(−x) = f(y(−x)) = f(z(x)),
z(0) = y(0) = 0 and z′(0) = −y′(0) = 0, so z(x) satisfies the above IVP. Hence by
uniqueness, y(−x) = y(x) for all x ∈ ℝ. In other words, y(x) is even in x.
Exercise 3.1. Let f be a Lipschitz continuous function on ℝ such that it is an odd function.
Then the solution of the IVP
y‴(x) = f(y), y(0) = 0, y‴(0) = 0
is odd in x.
Like in the 1st order ODE (scalar valued), we have the following continuous dependence
estimate and uniqueness theorem.
Theorem 3.4 (Continuous dependence estimate). Let f⃗, g⃗ be two continuous vector-valued
functions defined on a rectangle
R = {|x − x0| ≤ a, |y⃗ − y⃗0| ≤ b}, a, b > 0.
3.1. Existence and uniqueness for linear systems: Consider the linear system ~y 0 =
f⃗(x, y⃗), where the components of f⃗ are given by
fj(x, y⃗) = Σ_{k=1}^{n} a_{jk}(x) y_k + b_j(x), j = 1, 2, …, n,
and the functions ajk , bj are continuous on an interval I containing x0 . Now consider the
strip Sa = {|x − x0 | ≤ a, |~y | < ∞}. Suppose ajk , bj are continuous on |x − x0 | ≤ a. Then
there exists a constant K > 0 such that Σ_{j=1}^{n} |a_{jk}(x)| ≤ K for all k = 1, 2, …, n and
all |x − x0| ≤ a.
where a_{jk} are continuous on some interval I. Then it is easy to see that ψ⃗ = 0⃗ is a solution.
This is called a trivial solution. Let K be such that Σ_{j=1}^{n} |a_{jk}(x)| ≤ K. Let x0 ∈ I, and
let φ⃗ be any solution of the linear homogeneous system. Now consider the two IVPs
y⃗′ = f⃗(x, y⃗), y⃗(x0) = 0⃗;  y⃗′ = f⃗(x, y⃗), y⃗(x0) = φ⃗(x0),
where the components of f⃗ are fj(x, y⃗) = Σ_{k=1}^{n} a_{jk}(x) y_k. Then, according to the continuous
dependence estimate theorem (applied with ε = 0), we have
|φ⃗(x)| = |φ⃗(x) − 0⃗| ≤ |φ⃗(x0) − 0⃗| e^{K|x−x0|} + (ε/K)(e^{K|x−x0|} − 1) = |φ⃗(x0)| e^{K|x−x0|}, ∀x ∈ I.
For linear equations of order n, we have non-local existence.
Theorem 3.6. Let a0, a1, …, a_{n−1} and b be continuous real-valued functions on an interval
I containing x0. If α0, α1, …, α_{n−1} are any n constants, there exists one and only one
solution φ of the ODE
y⁽ⁿ⁾ + a_{n−1}(x) y⁽ⁿ⁻¹⁾(x) + … + a0(x) y = b(x) on I,
satisfying φ(x0) = α0, φ′(x0) = α1, …, φ⁽ⁿ⁻¹⁾(x0) = α_{n−1}.
Proof. Let y⃗0 = (α0, α1, …, α_{n−1}). The given ODE can be written as a system of linear
equations
yj′ = y_{j+1} (j = 1, 2, …, n − 1); yn′ = b(x) − a_{n−1}(x) yn − … − a0(x) y1.
Then, according to Theorem 3.5, the above problem has a unique solution φ⃗ = (φ1, φ2, …, φn)
on I satisfying φ1(x0) = α0, φ2(x0) = α1, …, φn(x0) = α_{n−1}. But, since
φ2 = φ1′, φ3 = φ2′ = φ1″, …, φn = φ1⁽ⁿ⁻¹⁾,
the function φ1 is the required solution on I.
Theorem 4.1. Suppose that W [u1 (t0 ), u2 (t0 ), . . . , um (t0 )] 6= 0 for some t0 ∈ I. Then
u1 , u2 , . . . , um are linearly independent.
Proof. Suppose that u1, u2, …, um are linearly dependent. Then there exist constants
c1, c2, …, cm, not all zero, such that Σ_{i=1}^{m} c_i u_i(t) = 0 ∀ t ∈ I. Differentiating
(m − 1) times and evaluating at t0, we have
c1 u1(t0) + c2 u2(t0) + … + cm um(t0) = 0,
c1 u1′(t0) + c2 u2′(t0) + … + cm um′(t0) = 0,
⋮
c1 u1⁽ᵐ⁻¹⁾(t0) + c2 u2⁽ᵐ⁻¹⁾(t0) + … + cm um⁽ᵐ⁻¹⁾(t0) = 0.
In matrix form, this reads A c⃗ = 0⃗, where c⃗ = (c1, c2, …, cm)ᵀ and
A :=
| u1(t0)        u2(t0)        …  um(t0)        |
| u1′(t0)       u2′(t0)       …  um′(t0)       |
| ⋮             ⋮                 ⋮             |
| u1⁽ᵐ⁻¹⁾(t0)   u2⁽ᵐ⁻¹⁾(t0)   …  um⁽ᵐ⁻¹⁾(t0)   |
Since c⃗ ≠ 0⃗, the matrix A must be singular, i.e., det A = W[u1(t0), u2(t0), …, um(t0)] = 0,
a contradiction.
Now, define v(t) = Σ_{i=1}^{m} c_i u_i(t), t ∈ I. Then
v(t0) = c1 u1(t0) + c2 u2(t0) + … + cm um(t0) = 0,
v′(t0) = c1 u1′(t0) + c2 u2′(t0) + … + cm um′(t0) = 0,
⋮
v⁽ᵐ⁻¹⁾(t0) = c1 u1⁽ᵐ⁻¹⁾(t0) + c2 u2⁽ᵐ⁻¹⁾(t0) + … + cm um⁽ᵐ⁻¹⁾(t0) = 0.
Since each u_i, i = 1, 2, …, m, solves the linear homogeneous ODE (1.2), v satisfies the
following initial value problem:
v⁽ᵐ⁾(t) + a_{m−1}(t) v⁽ᵐ⁻¹⁾(t) + … + a1(t) v′(t) + a0(t) v(t) = 0, t ∈ I,
v(t0) = v′(t0) = … = v⁽ᵐ⁻¹⁾(t0) = 0.
Thus, v(t) = 0 for all t ∈ I. In other words, Σ_{i=1}^{m} c_i u_i(t) = 0 for all t ∈ I with the above
choice of the constants c1, c2, …, cm, which is a contradiction. This finishes the proof.
Corollary 4.3. Let u1 , u2 , . . . , um be m-solutions of the linear homogeneous ODE (1.2).
Then u1 , u2 , . . . , um are linearly independent if and only if
W [u1 (t0 ), u2 (t0 ), . . . , um (t0 )] 6= 0 for some t0 ∈ I .
Example 4.4. sin(x) and cos(x) are two linearly independent solutions of the homogeneous
ODE y″(x) + y(x) = 0. Note that sin(x) and cos(x) solve the ODE. Moreover,
W[sin(x), cos(x)] = −1 ∀x ∈ ℝ.
Therefore, they are two linearly independent solutions of the homogeneous ODE y″(x) +
y(x) = 0.
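The Wronskian computation is immediate; a sketch with sympy:

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.sin(x), sp.cos(x)
W = sp.simplify(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)  # W[y1, y2] = y1 y2' - y1' y2
print(W)  # -1
```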
Theorem 4.4. The real vector space X defined in Theorem 1.1 is finite dimensional,
and dim X = m.
Corollary 4.5. If u1, u2, …, um are any m linearly independent solutions of the linear
homogeneous ODE (1.2), then any solution u of (1.2) can be written as
u(t) = c1 u1(t) + c2 u2(t) + … + cm um(t), t ∈ I.
Example 4.5. General solution of the ODE y 00 (x) + y(x) = 0 is given by
y(x) = c1 sin(x) + c2 cos(x), c1 , c2 ∈ R.
Theorem 4.6 (Abel's Theorem). Let u1, u2, …, um be m solutions of the linear homogeneous
ODE (1.2) on an interval I, and let t0 be any point in I. Then
W[u1, u2, …, um](t) = exp(−∫_{t0}^{t} a_{m−1}(s) ds) W[u1, u2, …, um](t0).
Thus
W′[u1, u2] = −a1 (u1 u2′ − u1′ u2) = −a1 W[u1, u2].
We see that W[u1, u2] satisfies the first order linear homogeneous equation u′(t) + a1(t)u(t) = 0,
and hence
W[u1(t), u2(t)] = C exp(−∫_{t0}^{t} a1(s) ds).
Putting t = t0 in the above expression, we obtain C = W[u1(t0), u2(t0)], and hence
W[u1(t), u2(t)] = exp(−∫_{t0}^{t} a1(s) ds) W[u1(t0), u2(t0)].
For general m, one needs to make use of some general properties of the determinant. From
the definition of W = W[u1, u2, …, um] as a determinant, it follows that its derivative W′
is a sum of m determinants
W′ = V1 + V2 + … + Vm,
where Vk differs from W only in its k-th row, and the k-th row of Vk is obtained by differentiating
the k-th row of W. The first m − 1 determinants are all zero, as they each have
two identical rows. Hence
W′ = det
| u1       u2       …  um       |
| u1′      u2′      …  um′      |
| ⋮        ⋮            ⋮       |
| u1⁽ᵐ⁾    u2⁽ᵐ⁾    …  um⁽ᵐ⁾    |
= det
| u1                           u2                           …  um                           |
| u1′                          u2′                          …  um′                          |
| ⋮                            ⋮                                ⋮                            |
| −Σ_{j=0}^{m−1} a_j u1⁽ʲ⁾     −Σ_{j=0}^{m−1} a_j u2⁽ʲ⁾     …  −Σ_{j=0}^{m−1} a_j um⁽ʲ⁾     |
The value of the determinant is unchanged if we multiply any row by a number and add it
to the last row. If we multiply the first row by a0, the second row by a1, …, the (m − 1)-th
row by a_{m−2}, and add these to the last row, we have
W′ = det
| u1                   u2                   …  um                   |
| u1′                  u2′                  …  um′                  |
| ⋮                    ⋮                        ⋮                   |
| −a_{m−1} u1⁽ᵐ⁻¹⁾     −a_{m−1} u2⁽ᵐ⁻¹⁾     …  −a_{m−1} um⁽ᵐ⁻¹⁾    |
= −a_{m−1} W.
Thus, W satisfies the linear first order equation u′(t) + a_{m−1}(t)u(t) = 0, and hence
W[u1, u2, …, um](t) = exp(−∫_{t0}^{t} a_{m−1}(s) ds) W[u1, u2, …, um](t0).
Theorem 4.7. Let up be a particular solution of the non-homogeneous ODE. Then any
solution u of the non-homogeneous ODE has the form u = up + v, where v is a solution of
the associated homogeneous ODE (1.2).
Proof. Let u be any solution of the non-homogeneous ODE. Then u − up is a solution of
the homogeneous ODE. Hence u − up = c1 v1(t) + c2 v2(t) + … + cm vm(t), where v1, v2, …, vm
are any linearly independent solutions of the linear homogeneous ODE. Since any solution
v of the homogeneous ODE can be written in the form c1 v1(t) + c2 v2(t) + … + cm vm(t),
the result follows easily.
Later, we use variation of parameters method to find the particular solution up .
Example 4.6. Let x1, x2, x3 and x4 be solutions of the linear homogeneous ODE x⁽⁴⁾(t) −
3x⁽³⁾(t) + 2x′(t) − 5x(t) = 0 such that W[x1, x2, x3, x4](0) = 5. Then, by Abel's theorem,
W[x1, x2, x3, x4](6) = exp(−∫_0^6 (−3) ds) W[x1, x2, x3, x4](0) = 5 e^{18}.
Example 4.7. The functions u1(t) = sin t and u2(t) = t² cannot both be solutions of a differential
equation u″(t) + a1(t)u′(t) + a0(t)u(t) = 0, where a0, a1 are continuous functions.
To see this, we first consider the Wronskian of u1 and u2. Note that W[u1(t), u2(t)] =
2t sin t − t² cos t. Thus W[u1(π/2), u2(π/2)] = π ≠ 0, while W[u1(0), u2(0)] = 0. Thus, in view
of the previous theorem, u1 and u2 cannot both be solutions.
Example 4.8. Let us explain why et , sin(t) and t cannot be solutions of a third order
homogeneous equation with continuous coefficients. Notice that
W[eᵗ, sin(t), t](0) = 0;  W[eᵗ, sin(t), t](π/2) = (2 − π/2) e^{π/2} ≠ 0.
If they were solutions of a third order homogeneous equation with continuous coefficients,
then by Abel's theorem the Wronskian would be either identically zero or nowhere zero. Therefore,
eᵗ, sin(t) and t cannot be solutions of a third order homogeneous equation with continuous
coefficients.
4.1. Linear homogeneous equation with constant coefficients. We are interested
in the ODE
u⁽ᵐ⁾(t) + a_{m−1} u⁽ᵐ⁻¹⁾(t) + … + a0 u(t) = 0, where a_i ∈ ℝ. (4.1)
Define the differential operator L with constant coefficients as
L ≡ Σ_{i=0}^{m} a_i dⁱ/dtⁱ, a_i ∈ ℝ with a_m = 1.
For u : ℝ → ℝ which is m-times differentiable, we define
Lu(t) = Σ_{i=0}^{m} a_i dⁱu(t)/dtⁱ.
In this notation, we are interested in finding u such that Lu(t) = 0 for all t ∈ ℝ. Define
a polynomial
p(λ) = λᵐ + a_{m−1} λᵐ⁻¹ + … + a1 λ + a0. (4.2)
The polynomial p is called the characteristic polynomial of the operator L.
Remark 4.2. We observe the following:
a) For a given polynomial p of degree m, we can associate a differential operator Lp
such that p is the characteristic polynomial of Lp .
L_{q_j} u = 0 gives that its real part and imaginary part are zero. Again, if u = u1 + iu2, then
Lu = L(u1) + iL(u2), and hence L(u) = 0 iff L(u1) = 0 and L(u2) = 0. This implies that
e^{x_j t} cos(y_j t) and e^{x_j t} sin(y_j t) are solutions of L_{q_j} u = 0, and they are linearly independent.
Note that
y(0) = 0 ⟹ c1 + c2 = 0,
y′(0) = 1 ⟹ −2c1 + c2 + √3 c3 = 2,
y″(0) = 0 ⟹ c1 − c2 + √3 c3 = 0.
Solving the above equations, we get c1 = −2/5, c2 = 2/5 and c3 = 4/(5√3). Thus, the solution is
y(x) = −(2/5) e^{−x} + e^{x/2} ( (2/5) cos(√3 x/2) + (4/(5√3)) sin(√3 x/2) ).
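The 3×3 linear system for the constants can be solved mechanically; a sketch with sympy:

```python
import sympy as sp

c1, c2, c3 = sp.symbols('c1 c2 c3')
eqs = [c1 + c2,                          # from y(0)  = 0
       -2*c1 + c2 + sp.sqrt(3)*c3 - 2,   # from y'(0) = 1
       c1 - c2 + sp.sqrt(3)*c3]          # from y''(0) = 0
sol = sp.solve(eqs, [c1, c2, c3])
print(sol)  # c1 = -2/5, c2 = 2/5, c3 = 4/(5*sqrt(3)) = 4*sqrt(3)/15
```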
4.2. Finding particular solution to non-homogeneous ODE (Method of Variation
of Parameters). Let u_i (1 ≤ i ≤ m) be m linearly independent solutions to the
linear homogeneous ODE (1.2). We want to find functions c_i(t) such that u_p(t) =
Σ_{i=1}^{m} c_i(t) u_i(t) is a solution to the non-homogeneous ODE. Let u(t) = Σ_{i=1}^{m} c_i(t) u_i(t).
Then u′(t) = Σ_{i=1}^{m} [c_i′(t) u_i(t) + c_i(t) u_i′(t)]. Assume that Σ_{i=1}^{m} c_i′(t) u_i(t) = 0. Then
u′(t) = Σ_{i=1}^{m} c_i(t) u_i′(t), and hence u″(t) = Σ_{i=1}^{m} [c_i′(t) u_i′(t) + c_i(t) u_i″(t)].
Again assume that Σ_{i=1}^{m} c_i′(t) u_i′(t) = 0. Then u″(t) = Σ_{i=1}^{m} c_i(t) u_i″(t). Therefore, by
assuming
Σ_{i=1}^{m} c_i′(t) u_i(t) = 0, Σ_{i=1}^{m} c_i′(t) u_i′(t) = 0, …, Σ_{i=1}^{m} c_i′(t) u_i⁽ᵐ⁻²⁾(t) = 0,
we get
u⁽ʲ⁾(t) = Σ_{i=1}^{m} c_i(t) u_i⁽ʲ⁾(t), j = 0, 1, …, m − 1.
Then u⁽ᵐ⁾(t) = Σ_{i=1}^{m} [c_i′(t) u_i⁽ᵐ⁻¹⁾(t) + c_i(t) u_i⁽ᵐ⁾(t)]. Now u satisfies the non-homogeneous
equation iff
Σ_{i=1}^{m} c_i′(t) u_i⁽ᵐ⁻¹⁾(t) + Σ_{i=1}^{m} c_i(t) u_i⁽ᵐ⁾(t) + a_{m−1}(t) Σ_{i=1}^{m} c_i(t) u_i⁽ᵐ⁻¹⁾(t) + … + a0(t) Σ_{i=1}^{m} c_i(t) u_i(t) = b(t)
⇔ Σ_{i=1}^{m} c_i′(t) u_i⁽ᵐ⁻¹⁾(t) + Σ_{i=1}^{m} c_i(t) [u_i⁽ᵐ⁾(t) + a_{m−1}(t) u_i⁽ᵐ⁻¹⁾(t) + … + a0(t) u_i(t)] = b(t)
⇔ Σ_{i=1}^{m} c_i′(t) u_i⁽ᵐ⁻¹⁾(t) = b(t).
Thus,
c1(t) = −∫_0^t s ds = −t²/2,  c2(t) = (1/2) ∫_0^t s e^{−s} ds = (1/2)(1 − e^{−t} − t e^{−t}),
c3(t) = (1/2) ∫_0^t s eˢ ds = (1/2)(1 − eᵗ + t eᵗ), and hence up = −t²/2 − 1 + (1/2)(eᵗ + e^{−t}).
Therefore, a general solution to the non-homogeneous ODE is given by
u(t) = c1 + c2 eᵗ + c3 e^{−t} − t²/2,
where c1, c2 and c3 are arbitrary constants (the constant and the term (eᵗ + e^{−t})/2 in up
are absorbed into the arbitrary constants).
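For m = 2 the two conditions c1′u1 + c2′u2 = 0 and c1′u1′ + c2′u2′ = b form a linear system that Cramer's rule solves. A sketch on the illustrative equation u″ + u = t (our own example, not from the notes):

```python
import sympy as sp

t = sp.symbols('t')
u1, u2 = sp.cos(t), sp.sin(t)                           # independent solutions of u'' + u = 0
b = t                                                   # right-hand side of u'' + u = t
W = sp.simplify(u1*sp.diff(u2, t) - sp.diff(u1, t)*u2)  # Wronskian (= 1 here)
c1p, c2p = -u2*b/W, u1*b/W                              # Cramer's rule for c1', c2'
up = sp.simplify(sp.integrate(c1p, t)*u1 + sp.integrate(c2p, t)*u2)
print(up)                                               # t
print(sp.simplify(sp.diff(up, t, 2) + up - b))          # 0
```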
4.2.1. Euler's Equation: Consider the equation
tᵐ u⁽ᵐ⁾(t) + a_{m−1} tᵐ⁻¹ u⁽ᵐ⁻¹⁾(t) + … + a1 t u′(t) + a0 u(t) = b(t), (4.3)
where a0, a1, …, a_{m−1} are constants. Consider t > 0. Let s = log(t) (for t < 0, we must
use s = log(−t)). Then
du/dt = (du/ds)(1/t),
d²u/dt² = (d²u/ds² − du/ds)(1/t²),
d³u/dt³ = (d³u/ds³ − 3 d²u/ds² + 2 du/ds)(1/t³),
⋮
dᵐu/dtᵐ = (dᵐu/dsᵐ + C_{m−1} d^{m−1}u/ds^{m−1} + … + C1 du/ds)(1/tᵐ),
for some constants C1 , C2 , . . . , Cm−1 . Substituting these in (4.3), we obtain a non-homogeneous
ODE with constant coefficients
dᵐu/dsᵐ + B_{m−1} d^{m−1}u/ds^{m−1} + … + B1 du/ds + B0 u = b(eˢ),
for some constants B0 , B1 , . . . , Bm−1 . One can solve this ODE and then substitute s =
log(t) to get the solution of the Euler’s equation (4.3).
Example 4.14. Solve
x² y″ + x y′ − y = x³, x > 0.
Let x = eᵘ. Then x y′ = dy/du and x² y″ = d²y/du² − dy/du. Therefore, we have
d²y/du² − y = e^{3u}, u ∈ ℝ.
The characteristic polynomial corresponding to the homogeneous ODE is given by
p(λ) = λ2 − 1 = (λ + 1)(λ − 1) .
Therefore, y1 = eᵘ and y2 = e^{−u} are two linearly independent solutions. Note that
W[y1, y2](u) = −2. Moreover,
W1(u) = det | 0  e^{−u} ; 1  −e^{−u} | = −e^{−u},  W2(u) = det | eᵘ  0 ; eᵘ  1 | = eᵘ.
Hence
c1(u) = (1/2) ∫_0^u e^{2r} dr = (1/4)(e^{2u} − 1),  c2(u) = −(1/2) ∫_0^u e^{4r} dr = (1/8)(1 − e^{4u}),
and therefore the particular solution yp is given by
yp = (1/8) e^{3u} − (1/4) eᵘ + (1/8) e^{−u}.
Therefore the general solution is
y = C1 eᵘ + C2 e^{−u} + (1/8) e^{3u} = C1 x + C2/x + x³/8,
where C1, C2 are arbitrary constants.
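A symbolic check that this general solution solves the Euler equation (a sketch with sympy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C1, C2 = sp.symbols('C1 C2')
y = C1*x + C2/x + x**3/8
residual = sp.simplify(x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) - y - x**3)
print(residual)  # 0
```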
4.3. On comparison theorems of Sturm: Consider a general 2nd order linear homogeneous
ODE
y″(x) + p(x) y′(x) + q(x) y(x) = 0, p, q ∈ C(I). (4.4)
We know that, if y1(x) is a solution of (4.4), then c y1(x) is also a solution, where c is a
constant. Is c(x) y1(x) a solution for a suitable non-constant c(x)? The answer is yes, and it is given by the following theorem.
Theorem 4.11. Let y1(x) be a solution of (4.4) with y1(x) ≠ 0 on I. Then
y2(x) = y1(x) ∫ (e^{−∫ p(x) dx} / y1²(x)) dx
is a solution of (4.4). Moreover, y1 and y2 are linearly independent.
Proof. Let y2(x) = v(x) y1(x). We would like to find v(x) such that y2(x) satisfies (4.4).
To do so, suppose y2(x) satisfies (4.4). Then, by calculating y2″(x) + p(x) y2′(x) + q(x) y2(x),
we see that
v′(x)[2y1′(x) + p(x) y1(x)] + y1(x) v″(x) = 0,
where we have used the fact that y1 is a solution of (4.4). Let w = v′. Then w satisfies a
first order ODE given by
w′(x) + [2y1′(x)/y1(x) + p(x)] w(x) = 0.
Hence, by the method of variation of parameters, we obtain
w(x) = c e^{−∫ p(x) dx} / y1²(x).
Since we only need one function v(x) so that v(x) y1(x) is a solution, we can let c = 1 and
hence v(x) = ∫ (e^{−∫ p(x) dx} / y1²(x)) dx. Thus, y2(x) = y1(x) ∫ (e^{−∫ p(x) dx} / y1²(x)) dx is a solution of (4.4).
Let us calculate the Wronskian of y1 and y2. It is easy to see that
W[y1, y2](x) = e^{−∫ p(x) dx} ≠ 0.
Therefore, y1 and y2 are linearly independent.
Example 4.15. Find a general solution of
y″ − y′/x + y/x² = 0, x > 0.
Note that y1 = x is a solution and y1(x) ≠ 0. To find another independent solution, using
the above theorem, we obtain
y2(x) = x ∫ (e^{−∫ (−1/x) dx} / x²) dx = x ∫ (1/x) dx = x ln(x).
Therefore, the general solution is given by
y(x) = c1 x + c2 x ln(x).
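That y2(x) = x ln(x) really is a second solution can be verified directly (a sketch with sympy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y2 = x*sp.log(x)
residual = sp.simplify(sp.diff(y2, x, 2) - sp.diff(y2, x)/x + y2/x**2)
print(residual)  # 0
```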
Example 4.16. Find the general solution of the ODE
(3t − 1)² y″(t) + 3(3t − 1) y′(t) − 9 y(t) = 0, t > 1/3.
Note that y1(t) = 3t − 1 is a solution, and y1(t) ≠ 0. To find another independent
solution, using Theorem 4.11, we obtain
y2(t) = (3t − 1) ∫ (e^{−∫ 3/(3t−1) dt} / (3t − 1)²) dt = −1/(6(3t − 1)).
Therefore, the general solution is given by
y(t) = c1 (3t − 1) − c2/(6(3t − 1)), t > 1/3.
One can easily check that the substitution x(t) = y(t) e^{−(1/2)∫ p(t) dt} transforms the equation
x″ + p(t) x′ + q(t) x = 0 into the form y″ + P(t) y = 0, where p, q are continuous
functions such that p′ is continuous. Therefore, instead of studying equations of the
form x″ + p(t) x′ + q(t) x = 0, we will study equations of the form
y″ + α(x) y(x) = 0.
Oscillatory behavior of solutions: Consider the second order linear homogeneous
equation
y 00 + p(x)y(x) = 0 . (4.5)
For simplicity, we assume that p(x) is continuous everywhere.
Definition 4.3. We say that a nontrivial solution y(x) of (4.5) is oscillatory (or it os-
cillates) if for any number T , y(x) has infinitely many zeros in the interval (T, ∞); or
equivalently, for any number τ , there exists a number ξ > τ such that y(ξ) = 0. We also
call the equation (4.5) oscillatory if it has an oscillatory solution.
Consider the equation y″ + 4y = 0. Two independent solutions are y1(x) = sin(2x)
and y2(x) = cos(2x). Note that y1(x) has three zeros on (0, 2π). Moreover, between two
consecutive zeros of y1(x), there is only one zero of y2(x). We have the following general
result.
Theorem 4.12 (Sturm Separation Theorem). Let y1(x) and y2(x) be two linearly independent
solutions of (4.5), and suppose a and b are two consecutive zeros of y1(x) with a < b.
Then y2(x) has exactly one zero in the interval (a, b).
Proof. Notice that y2(a) ≠ 0 ≠ y2(b) (otherwise y1 and y2 would have a common zero, and hence their Wronskian would vanish there, contradicting the fact that they are linearly independent). Suppose y2(x) ≠ 0 on (a, b). Then y2(x) ≠ 0 on [a, b]. Define h(x) = y1(x)/y2(x). Then h satisfies all the conditions of Rolle's theorem on [a, b], since h(a) = h(b) = 0. Hence there exists c ∈ (a, b) such that h'(c) = 0. In other words, W[y1, y2](c)/y2(c)² = 0. Since y2(c) ≠ 0, W[y1, y2](c) = 0, a contradiction as y1 and y2 are linearly independent. Thus, there exists c ∈ (a, b) such that y2(c) = 0.
We now show the uniqueness. Suppose there exist c1 , c2 ∈ (a, b) such that y2 (c1 ) =
y2 (c2 ) = 0. Then, by what we have just proved, there would exist a number d between c1
and c2 such that y1 (d) = 0, contradicting the fact that a and b are consecutive zeros of
y1 (x).
Example 4.17. Show that between any two consecutive zeros of sin(t), there exists only one zero of a1 sin(t) + a2 cos(t), where a1, a2 ∈ R with a2 ≠ 0. To see this, we apply the Sturm Separation Theorem. Note that y1(t) := sin(t) and y2(t) := a1 sin(t) + a2 cos(t) are two solutions of the ODE y''(t) + y(t) = 0. Since W[y1, y2](t) = −a2 ≠ 0, y1 and y2 are linearly independent. Therefore, by Theorem 4.12, between any two consecutive zeros of sin(t), there exists only one zero of a1 sin(t) + a2 cos(t).
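A numerical illustration (a sketch, with a1 = 2, a2 = 3 chosen arbitrarily): counting sign changes on a fine grid shows exactly one zero of a1 sin(t) + a2 cos(t) between consecutive zeros of sin(t):

```python
import numpy as np

a1, a2 = 2.0, 3.0                       # any a2 != 0 works
t = np.linspace(0.01, 6*np.pi, 200001)
y1 = np.sin(t)
y2 = a1*np.sin(t) + a2*np.cos(t)

# zeros of y1 located via sign changes on the grid
zeros1 = t[:-1][np.sign(y1[:-1])*np.sign(y1[1:]) < 0]

# count zeros of y2 strictly between consecutive zeros of y1
counts = []
for a, b in zip(zeros1[:-1], zeros1[1:]):
    seg = np.sign(y2[(t > a) & (t < b)])
    counts.append(int(np.sum(seg[:-1]*seg[1:] < 0)))
print(counts)  # each entry is 1
```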
In view of Theorem 4.12, we arrive at the following corollary.
Corollary 4.13. If (4.5) has one oscillatory solution, then all of its solutions are oscillatory.
Theorem 4.14 (Sturm Comparison Theorem). Consider the equations
y''(x) + α(x)y(x) = 0 ,   (4.6)
y''(x) + β(x)y(x) = 0 .   (4.7)
Suppose that yα(x) is a nontrivial solution of (4.6) with consecutive zeros at x = a and x = b. Assume further that α, β ∈ C[a, b] and α(x) ≤ β(x), with strict inequality holding at least at one point in the interval [a, b]. If yβ(x) is any nontrivial solution of (4.7) such that yβ(a) = 0, then there exists a number c with a < c < b such that yβ(c) = 0.
Proof. Suppose that yβ (x) 6= 0 on (a, b). W.L.O.G, we assume that yβ (x) > 0 and
yα (x) > 0 on the interval (a, b). Multiplying (4.6) by yβ (x) and (4.7) by yα (x), and then
subtracting the resulting equations, we obtain
yβ(x) yα'' − yα(x) yβ'' + (α(x) − β(x)) yα(x) yβ(x) = 0
⇒ (yβ yα' − yα yβ')' = (β(x) − α(x)) yα(x) yβ(x)
⇒ ∫_a^b (yβ yα' − yα yβ')' dx = ∫_a^b (β(x) − α(x)) yα(x) yβ(x) dx
⇒ yβ(b) yα'(b) = ∫_a^b (β(x) − α(x)) yα(x) yβ(x) dx ,
where we used yα(a) = yα(b) = 0 and yβ(a) = 0. Note that, since α, β ∈ C[a, b] and α(x0) < β(x0) for some x0 ∈ [a, b], we have β(x) − α(x) > 0 in a nbd. of x0, and hence by positivity of yα and yβ on (a, b), we see that
∫_a^b (β(x) − α(x)) yα(x) yβ(x) dx > 0 .
On the other hand, since yα > 0 on (a, b) and yα(b) = 0, we must have yα'(b) ≤ 0, and yβ(b) ≥ 0. Therefore,
0 ≥ yβ(b) yα'(b) = ∫_a^b (β(x) − α(x)) yα(x) yβ(x) dx > 0 , a contradiction!
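A concrete check of the comparison theorem with α ≡ 1 and β ≡ 4 (a sketch, not from the notes): yα = sin(x) has consecutive zeros a = 0, b = π, and the solution yβ = sin(2x) of y'' + 4y = 0 with yβ(0) = 0 must vanish inside (0, π):

```python
import numpy as np

a, b = 0.0, np.pi
x = np.linspace(a, b, 10001)[1:-1]          # open interval (a, b)
ybeta = np.sin(2*x)                          # solves y'' + 4y = 0, y(0) = 0
sign_changes = int(np.sum(np.sign(ybeta[:-1])*np.sign(ybeta[1:]) < 0))
print(sign_changes)  # an interior zero is detected (at x = pi/2)
```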
Example 5.5. Estimate the first eigenvalue of x'' + λ(1 + t)x = 0, x(0) = x(1) = 0. Note here that q(t) = 1 + t, and hence 1 ≤ q(t) ≤ 2 for all t ∈ [0, 1]. Thus, λ1[2] ≤ λ1[q] ≤ λ1[1]. In other words,
π²/2 ≤ λ1[q] ≤ π² .
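The bound can be checked with a small finite-difference eigenvalue computation (a sketch; the grid size is chosen arbitrarily):

```python
import numpy as np

# approximate the smallest eigenvalue of -x'' = lam*(1+t)*x, x(0)=x(1)=0
n = 200
h = 1.0/n
t = np.linspace(0, 1, n+1)[1:-1]                      # interior nodes
A = (2*np.eye(n-1) - np.eye(n-1, k=1) - np.eye(n-1, k=-1))/h**2
Q = np.diag(1 + t)
lam1 = np.linalg.eigvals(np.linalg.solve(Q, A)).real.min()
print(lam1)  # lies between pi^2/2 ~ 4.93 and pi^2 ~ 9.87
```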
Remark 5.1. We have considered only the Dirichlet boundary conditions x(a) = 0 = x(b). One could also consider the Neumann boundary conditions x'(a) = 0 = x'(b), or the mixed boundary conditions
α1 x(a) + β1 x'(a) = 0 , α2 x(b) + β2 x'(b) = 0 ,
where the matrix
( α1  β1 )
( α2  β2 )
is non-singular.
6. Phase-plane analysis:
Consider the nonlinear system
x' = y , y' = f(x) ,   (6.1)
where f is a smooth function on R. We assume that solutions of the above problem exist for all t ∈ R. The (x, y)-plane is called the phase plane, and the study of the system (6.1) is called phase plane analysis. Note that the system (6.1) can be written as
x' = H_y(x, y) ; y' = −H_x(x, y) ,
where
H(x, y) = y²/2 − F(x) , with F'(x) = f(x).
Definition 6.1. Let H(x, y) be a differentiable function on R2 . The autonomous system
x' = H_y(x, y) , y' = −H_x(x, y)   (6.2)
is called a Hamiltonian system and H is called Hamiltonian.
Lemma 6.1. If (x(t), y(t)) is a solution of (6.2), then there exists c ∈ R such that
H(x(t), y(t)) = c.
Proof. Let (x(t), y(t)) be a solution of (6.2). Then by the chain rule,
d/dt H(x(t), y(t)) = H_x(x, y)x' + H_y(x, y)y' = H_x H_y − H_y H_x = 0 ,
and hence H(x(t), y(t)) = c for some constant c.
Define
Λc = {(x, y) ∈ R2 : H(x, y) = c}.
Note that if (x(t), y(t)) solves (6.2), then (x(t), y(t)) ∈ Λc for all t.
Example 6.1. Let H(x, y) = Ax² + Bxy + Cy². Then (0, 0) is the only equilibrium point.
• If c ≠ 0, then the curve Λc is a conic. Precisely,
i) If B² − 4AC < 0 and c > 0, then Λc is an ellipse.
we have c = 1/8. Thus the curve is defined by 2y² + 2x² − x⁴ = 1/2. Note that the curve does not contain any zeros of f, which are 0, 1, −1. It is a closed curve surrounding the origin, and hence the corresponding solution is periodic.
Example 6.9. Consider the IVP
x'' + x + 6x⁵ = 0 , x(0) = 0 , x'(0) = a ≠ 0 .
This is a Hamiltonian system with H(x, y) = y²/2 + x²/2 + x⁶. Hence the equation of Λc is y²/2 + x²/2 + x⁶ = c. From the initial conditions, we obtain c = a²/2, and hence the equation of the curve is given by
x² + 2x⁶ + y² = a² .
Note that it is a compact curve and does not contain the zero of f, which is 0. Thus, the solution is periodic.
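A numerical sketch (RK4 with an arbitrarily chosen step and a = 1; not part of the notes) confirms that the trajectory stays on the curve x² + 2x⁶ + y² = a²:

```python
import numpy as np

def rhs(u):
    x, y = u
    return np.array([y, -x - 6*x**5])     # x' = y, y' = f(x) = -(x + 6x^5)

def rk4_step(u, h):
    k1 = rhs(u); k2 = rhs(u + h/2*k1); k3 = rhs(u + h/2*k2); k4 = rhs(u + h*k3)
    return u + h/6*(k1 + 2*k2 + 2*k3 + k4)

a = 1.0
u = np.array([0.0, a])                    # initial conditions x(0)=0, x'(0)=a
curve = lambda x, y: x**2 + 2*x**6 + y**2
drift = 0.0
for _ in range(20000):                    # integrate to t = 20 with h = 0.001
    u = rk4_step(u, 0.001)
    drift = max(drift, abs(curve(*u) - a**2))
print(drift)  # tiny: the trajectory remains on the level curve
```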
7. Stability Analysis:
Consider the initial value problem
~u' = f~(t, ~u(t)) , t ∈ [t0, ∞) , ~u(t0) = ~x ,   (7.1)
where we assume that f~ : Ω → Rⁿ is continuous and locally Lipschitz in the second argument. Moreover, we assume that (7.1) has a solution defined on [t0, ∞). The unique solution is denoted by ~u(t, t0, ~x).
Definition 7.1. Let ~u(·, t0 , ~x) be a solution of (7.1).
i) It is said to be stable if for every ε > 0, there exists δ = δ(ε, t0, ~x) > 0 such that
|~x − ~y| < δ ⇒ |~u(t, t0, ~x) − ~u(t, t0, ~y)| < ε for all t ≥ t0 .
ii) It is called asymptotically stable if it is stable and there exists δ > 0 such that for all ~y ∈ B(~x, δ), there holds
lim_{t→∞} |~u(t, t0, ~x) − ~u(t, t0, ~y)| = 0.
Note that the eigenvalues of A are 0 and −2. Hence all solutions of the linear system are stable.
Example 7.4. Consider the linear system of equations
u1' = −u1 , u2' = u1 − 2u2 , u3' = u1 + 2u2 − 5u3 .
We rewrite the above system as ~u'(t) = A~u(t), where
A = ( −1   0   0
       1  −2   0
       1   2  −5 ) .
It is easy to show that the eigenvalues of A are −1, −2 and −5. Thus, the solution of the
given system is asymptotically stable.
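Since A is lower triangular, its eigenvalues are just the diagonal entries; a quick numerical check (a sketch, not part of the notes):

```python
import numpy as np

A = np.array([[-1.0,  0.0,  0.0],
              [ 1.0, -2.0,  0.0],
              [ 1.0,  2.0, -5.0]])
eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)  # [-5. -2. -1.]: all negative, so asymptotically stable
```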
Theorem 7.2. Let t ↦ A(t) be a continuous function from [t0, ∞) to M(n, R). Then all solutions ~u(·, t0, ~x) of the linear system
~u' = A(t)~u(t) , ~u(t0) = ~x
are stable if and only if all solutions are bounded.
Theorem 7.3. Let t ↦ A(t) be a continuous function from [t0, ∞) to M(n, R), where A(t) = A + B(t). Then
a) If the real parts of all multiple eigenvalues of A are negative, the real parts of the simple eigenvalues are non-positive, and ∫_{t0}^{∞} ||B(t)|| dt < +∞, then any solution of ~u' = A(t)~u(t) is stable.
b) If the real part of every eigenvalue of A is negative, and ||B(t)|| → 0 as t → ∞, then any solution of ~u' = A(t)~u(t) is asymptotically stable.
Example 7.5. Consider the linear system of equations
u1' = −u1 − u2 + e^{−t} u3 , u2' = −u2 + (1/(1+t)) u4 ,
u3' = e^{−2t} u2 − 3u3 − 2u4 , u4' = u3 − u4 .
This can be written as ~u' = A(t)~u(t) with A(t) = A + B(t), where
A = ( −1  −1   0   0
       0  −1   0   0
       0   0  −3  −2
       0   0   1  −1 )
and
B(t) = ( 0   0        e^{−t}   0
         0   0        0        1/(1+t)
         0   e^{−2t}  0        0
         0   0        0        0 ) .
To find the eigenvalues of A, consider the equation det(A − λI) = 0 and solve it. It is easy to check that the eigenvalues are −1, −1, −2 ± i. Moreover, ||B(t)||₁ = e^{−t} + e^{−2t} + 1/(1+t), and hence ||B(t)|| → 0 as t → ∞. Thus, in view of Theorem 7.3, any solution of the given system is asymptotically stable.
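Both hypotheses of Theorem 7.3(b) can be verified numerically (a sketch, not part of the notes):

```python
import numpy as np

A = np.array([[-1.0, -1.0,  0.0,  0.0],
              [ 0.0, -1.0,  0.0,  0.0],
              [ 0.0,  0.0, -3.0, -2.0],
              [ 0.0,  0.0,  1.0, -1.0]])
eigs = np.linalg.eigvals(A)
print(eigs.real.max())                       # negative: all real parts < 0

# ||B(t)||_1 = e^{-t} + e^{-2t} + 1/(1+t) -> 0 as t -> infinity
norm_B = lambda t: np.exp(-t) + np.exp(-2*t) + 1/(1+t)
print(norm_B(1e6))                           # essentially 0
```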
Remark 7.1. For nonlinear systems, boundedness and stability are distinct concepts. For example, consider the scalar first order ODE
y'(t) = t^p , p ≥ 1 , y(t0) = y0 .
Then the solution is given by y(t, t0, y0) = y0 + (t^{p+1} − t0^{p+1})/(p + 1). Hence
|y(t, t0, y0) − y(t, t0, y0 + Δy0)| = |Δy0| < δ , if |Δy0| < δ .
Thus, it is stable. But it is NOT bounded.
7.1. Critical points and their stability: Consider the system ~u'(t) = f~(~u(t)). If ~x0 ∈ Ω is an equilibrium point, i.e., f~(~x0) = ~0, then ~u(t) = ~x0 is a solution of the ODE
~u'(t) = f~(~u(t)) , t > 0 , ~u(0) = ~x0 .
Conversely, if ~u(t) ≡ ~x0 is a constant solution, then f~(~x0) = ~0.
Definition 7.2. We say that ~x0 is stable/asymptotically stable/ unstable if this solution
is stable/asymptotically stable/unstable.
For any f~ ∈ C¹(Ω), where Ω is an open subset of Rⁿ, we denote by Df~(~x0) the matrix
Df~(~x0) = ( ∂f1/∂u1(~x0)  ∂f1/∂u2(~x0)  ...  ∂f1/∂un(~x0)
             ∂f2/∂u1(~x0)  ∂f2/∂u2(~x0)  ...  ∂f2/∂un(~x0)
             ...           ...           ...  ...
             ∂fn/∂u1(~x0)  ∂fn/∂u2(~x0)  ...  ∂fn/∂un(~x0) ) .
Definition 7.3. A critical point ~x0 ∈ Ω is said to be hyperbolic if none of the eigenvalues
of Df~(~x0 ) are purely imaginary.
Example 7.6. Consider the nonlinear system
u1' = −u1 , u2' = −u2 + u1² , u3' = u3 + u1² .
The only equilibrium point of this system is ~0. The matrix Df~(~0) is given by
Df~(~0) = ( −1   0   0
             0  −1   0
             0   0   1 ) .
The eigenvalues of Df~(~0) are −1, −1 and 1. Hence the equilibrium point ~0 is hyperbolic.
Theorem 7.4. A hyperbolic equilibrium point ~x0 is asymptotically stable if and only if
all eigenvalues of Df~(~x0 ) have negative real part.
Example 7.7. Consider the nonlinear system
u1' = −u1 + u3² , u2' = u1² − 2u2 , u3' = u1² + u2³ − 4u3 .
Note that ~0 is an equilibrium point of the given system. The matrix Df~(~0) is given by
Df~(~0) = ( −1   0   0
             0  −2   0
             0   0  −4 ) .
The eigenvalues of Df~(~0) are −1, −2 and −4; none of them are purely imaginary. Hence ~0 is a hyperbolic equilibrium point. Since all eigenvalues of Df~(~0) have negative real part, we conclude that ~0 is asymptotically stable.
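The Jacobian computation can be reproduced with sympy (a sketch, not part of the notes):

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3')
f = sp.Matrix([-u1 + u3**2, u1**2 - 2*u2, u1**2 + u2**3 - 4*u3])

J = f.jacobian([u1, u2, u3]).subs({u1: 0, u2: 0, u3: 0})
eigs = sorted(J.eigenvals())
print(J)      # diagonal matrix diag(-1, -2, -4)
print(eigs)   # [-4, -2, -1]: all negative, none purely imaginary
```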
Theorem 7.5. If ~x0 is a stable equilibrium point of the system ~u'(t) = f~(~u(t)), then no eigenvalue of Df~(~x0) has positive real part.
Remark 7.2. Hyperbolic equilibrium points are either asymptotically stable or unstable. The stability of non-hyperbolic equilibrium points is typically more difficult to determine. We now describe a method, due to Liapunov, that is very useful for deciding the stability of non-hyperbolic equilibrium points.
7.2. Liapunov functions and stability.
Definition 7.4. Let f~ ∈ C¹(Ω), V ∈ C¹(Ω; R), and let φ~t(~x) be the flow of the system ~u'(t) = f~(~u(t)), i.e., φ~t(~x) = ~u(t, ~x). Then, for any ~x ∈ Ω, the derivative of the function V along the solution ~u(t, ~x) is given by
V̇(~x) = d/dt V(φ~t(~x))|_{t=0} = ∇V(~x) · ~u'(0, ~x) = ∇V(~x) · f~(~x).
trajectories of this system lie on the circle x1² + x2² = c². Hence (0, 0) is NOT an asymptotically stable equilibrium point.
Example 7.10. Consider the second order differential equation x'' + q(x) = 0, where q : R → R is a continuous function such that xq(x) > 0 ∀x ≠ 0. This can be written as the system
x1' = x2 , x2' = −q(x1) , where x1 = x .
The total energy of the system (the sum of the kinetic energy (x1')²/2 and the potential energy) is
V(~x) = x2²/2 + ∫_0^{x1} q(s) ds .
Note that (0, 0) is an equilibrium point, and V(0, 0) = 0. Moreover, since xq(x) > 0 ∀x ≠ 0, it is easy to check that V(x1, x2) > 0 for all (x1, x2) ∈ R² \ {~0}. Therefore, V is a
Liapunov function. Now
V̇(x1, x2) = (q(x1), x2) · (x2, −q(x1)) = 0 .
The solution curves are given by V (~x) = c, i.e., the energy is constant on the solution
curves or trajectories of this system. Hence the origin is a stable equilibrium point.
Example 7.11. Consider the nonlinear system
x1' = −x2 + x1³ + x1x2² , x2' = x1 + x2³ + x2x1² .
Note that (0, 0) is an equilibrium point. Consider the Liapunov function V(x1, x2) = x1² + x2². Then
V̇(x1, x2) = (2x1, 2x2) · (−x2 + x1³ + x1x2², x1 + x2³ + x2x1²) = 2(x1² + x2²)² > 0 , ∀(x1, x2) ∈ R² \ {~0}.
Thus, by Theorem 7.6, we conclude that (0, 0) is an unstable equilibrium point.
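The computation of V̇ can be verified with sympy (a sketch, not part of the notes):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
V = x1**2 + x2**2
f = sp.Matrix([-x2 + x1**3 + x1*x2**2, x1 + x2**3 + x2*x1**2])

# Vdot = grad(V) . f
Vdot = (sp.Matrix([[sp.diff(V, x1), sp.diff(V, x2)]]) * f)[0]
print(sp.factor(Vdot))  # 2*(x1**2 + x2**2)**2
```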
Example 7.12. Let f(x) be an even polynomial and g(x) an odd polynomial in x. Consider the 2nd order ODE y'' + f(y)y' + g(y) = 0. This can be written as
x1' = x2 − F(x1) , x2' = −g(x1) , where F(x) = ∫_0^x f(s) ds .
To see this, let x1 = y. Then from the equation, we have
x1'' + (d/dt)F(x1) + g(x1) = 0 ⇒ (d/dt)[x1' + F(x1)] = −g(x1) .
Set x2 = x1' + F(x1). Then we have x1' = x2 − F(x1), x2' = −g(x1).
Let G(x) = ∫_0^x g(s) ds, and suppose that G(x) > 0 and g(x)F(x) > 0 in a deleted neighborhood of the origin. Then the origin is an asymptotically stable equilibrium point. Indeed, consider the Liapunov function in a nbd. of (0, 0) given by
V(x1, x2) = ∫_0^{x1} g(s) ds + x2²/2 .
Note that V(0, 0) = 0 and V(x1, x2) > 0 in a deleted nbd. of (0, 0). Moreover,
V̇(x1, x2) = (g(x1), x2) · (x2 − F(x1), −g(x1)) = −g(x1)F(x1) < 0 .
Hence the origin is an asymptotically stable equilibrium point. Note that if we assume instead that G(x) > 0 and g(x)F(x) < 0 in a deleted neighborhood of the origin, then the origin is an unstable equilibrium point.
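A concrete instance (the sample choices f(x) = 1 + x² and g(x) = x are assumptions for illustration) checks the identity V̇ = −g(x1)F(x1) with sympy:

```python
import sympy as sp

x1, x2, s = sp.symbols('x1 x2 s', real=True)
f = 1 + s**2                         # even polynomial (sample choice)
g = s                                # odd polynomial (sample choice)
F = sp.integrate(f, (s, 0, x1))      # F(x1) = x1 + x1**3/3
G = sp.integrate(g, (s, 0, x1))      # G(x1) = x1**2/2 > 0 away from 0

V = G + x2**2/2
field = sp.Matrix([x2 - F, -g.subs(s, x1)])
Vdot = (sp.Matrix([[sp.diff(V, x1), sp.diff(V, x2)]]) * field)[0]
residual = sp.simplify(Vdot + g.subs(s, x1)*F)
print(residual)  # 0, i.e. Vdot = -g(x1)*F(x1)
```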
On the other hand, F(0) − F(−t) = u(t, x) − u(0, x − at) = u(t, x) − h(x − at). Therefore
u(t, x) = h(x − at) + ∫_0^t f(s, x + a(s − t)) ds .
infinite at the positive time y = −1/h'(s). Thus, if h'(s0) < 0 at any point s0, then the solution does not exist globally. We can interpret the above as follows:
• If the initial velocity u(x, 0) of the fluid flow forms a non-decreasing function of position, then the fluid moves out in a smooth fashion.
• If the initial velocity is a decreasing function, then the fluid flow undergoes a shock that corresponds to a collision of particles, i.e., the integral surface folds over itself.
Example 8.7. Consider the Burgers' equation as in Example 8.6 with initial condition h(x) given by
h(x) = 1 for x < 0 ; 1 − x for x ∈ [0, 1) ; 0 for x > 1 .
In this case, the characteristic lines are z = h(s), x = h(s)t + s and y = t. So,
x(s, t) = s + t for s < 0 ; s + t(1 − s) for s ∈ [0, 1] ; s for s > 1 .
For y < 1, the characteristic lines do not intersect. So, given a point (x, y) with y < 1, we can trace backward along the characteristics:
s = x − y for x < y < 1 ; (x − y)/(1 − y) for y ≤ x ≤ 1 ; x for x > 1 .
8.2. General solution of quasilinear 1st order PDE: In ODEs, an IVP is often
solved by finding a general solution that depends on an arbitrary constant and then using
the initial condition to evaluate the constant. For 1st order quasilinear PDE, a similar
process may be achieved by the method of Lagrange.
Definition 8.7 (General solution). F (φ, ψ) = 0, where φ = φ(x, y, z), ψ = ψ(x, y, z) and
F is an arbitrary smooth function, is called a general solution of f (x, y, z, p, q) = 0 if z, p
and q as determined by the relation F (φ, ψ) = 0 satisfies the PDE f (x, y, z, p, q) = 0.
Theorem 8.2. Suppose there exist two functions φ and ψ such that they are constant along
the characteristic equations of the quasilinear PDE aux + buy = c. Then F (φ, ψ) = 0 is a
general solution of the PDE, where F is such that Fφ2 + Fψ2 6= 0.
Remark 8.2. Since F need only satisfy the condition Fφ² + Fψ² ≠ 0, one may choose F of the form
F(φ, ψ) = φ + g(ψ) ,
where g is an arbitrary smooth function.
Example 8.8. Find a general solution of uux + yuy = x.
Solution: The characteristic equations in non-parametric form can be written as
dx/z = dy/y = dz/x .   (8.7)
From (8.7), we have x dx − z dz = 0, i.e., d(x² − z²) = 0. Therefore, take φ(x, y, z) = x² − z². Then φ(x, y, z) is constant along (8.7). Now by using (8.7), we see that
x dy − y dx = y dz − z dy ⇒ (x + z) dy − y d(x + z) = 0 ⇒ d(y/(x + z)) = 0 .
Therefore, take ψ(x, y, z) = y/(x + z). Then ψ(x, y, z) is constant along the characteristic equations. Thus, the general solution is F(φ, ψ) = φ + g(ψ) = 0, i.e.,
u² = x² + g(y/(x + u)) .
Remark 8.3. For nonlinear equations, the term general solution need not mean that all
solutions are of this form. This phenomenon should be familiar from ODEs. For example,
the general solution of ux + uy = √u is given by u(x, y) = (x + f(x − y))²/4 for an arbitrary smooth function f. But the trivial solution u ≡ 0 is not covered by the general solution.
8.3. Nonlinear equation: A general nonlinear 1st order PDE in x, y takes the form F(x, y, u, ux, uy) = 0. Let p = ux and q = uy. Suppose F has a quasilinear form
F ≡ a(x, y, z)p + b(x, y, z)q − c(x, y, z) = 0 .
Then the characteristic equations are
dx/dt = Fp = a , dy/dt = Fq = b , dz/dt = pFp + qFq = ap + bq = c .
Taking this as motivation, for a general nonlinear 1st order PDE we write the three equations
dx/dt = Fp , dy/dt = Fq , dz/dt = pFp + qFq .
8.4. Complete integral and general solutions: We have considered general solutions for the quasilinear problem. Do such general solutions exist for fully nonlinear equations? The answer is yes, but the process is more complicated than in the quasilinear case. Let us first consider so-called complete integrals. Let A ⊂ R² be an open set, which is the parameter set. For any C²-function u, we denote
(Da u, D²xa u) = ( u_{a1}  u_{x a1}  u_{y a1}
                   u_{a2}  u_{x a2}  u_{y a2} ) .
Definition 8.8 (Complete integral). A C²-function u(x, a) is said to be a complete integral in U × A of the equation F(x, y, u, ux, uy) = 0 in U if u(x, a) solves the PDE F(x, y, u, ux, uy) = 0 and the rank of (Da u, D²xa u) is equal to 2.
Example 8.11. Find a complete integral of ux uy = u.
Solution: From the given equation, we have F(x, y, z, p, q) = pq − z. The characteristic equations are
dx/dt = q , dy/dt = p , dz/dt = 2pq = 2z , dp/dt = p , dq/dt = q .
From the last two equations, we have p = c1 e^t and q = c2 e^t. Thus, from the third equation, we have z = c1c2 e^{2t} + c3. From the first equation, we have x = c2 e^t + a and from the second equation, we have y = c1 e^t + b. Thus, (x − a)(y − b) = c1c2 e^{2t}, and hence u(x, y, a, b) = z = (x − a)(y − b) + c3. So, u(x, y, a, b) will be a solution if c3 = 0. Then we get u(x, y, a, b) = (x − a)(y − b). It is easy to check that
(Da u, D²xa u) = ( b − y    0  −1
                   −x + a  −1   0 ) ,
whose rank is 2. Therefore, u(x, y, a, b) = (x − a)(y − b) is a complete integral.
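Both the PDE and the rank condition can be checked with sympy (a sketch, not part of the notes):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
u = (x - a)*(y - b)

# PDE check: u_x * u_y - u should vanish identically
pde_residual = sp.simplify(sp.diff(u, x)*sp.diff(u, y) - u)

# the 2x3 matrix (Da u, D^2_{xa} u) and its rank
M = sp.Matrix([[sp.diff(u, a), sp.diff(u, x, a), sp.diff(u, y, a)],
               [sp.diff(u, b), sp.diff(u, x, b), sp.diff(u, y, b)]])
print(pde_residual)   # 0
print(M.rank())       # 2
```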
vx(x, y) = ux(x, y, φ(x, y), ψ(x, y)) , since u_{ai}(x, y, φ(x, y), ψ(x, y)) = 0 for i = 1, 2 .
Similarly, vy (x, y) = uy (x, y, φ(x, y), ψ(x, y)). Thus, F (x, y, v(x, y), vx (x, y), vy (x, y)) = 0.
This completes the proof.
Definition 8.10 (Singular solution). The solution v described in Theorem 8.4 is called a singular solution of the nonlinear 1st order PDE F(x, y, u, ux, uy) = 0.
Example 8.13. Find the singular solution of ux² + uy² = 1 + 2u.
Solution: To find the singular solution, we first need to find a complete integral of the given PDE. Here F(x, y, z, p, q) = p² + q² − 1 − 2z. The characteristic equations are
dx/dt = 2p , dy/dt = 2q , dp/dt = 2p , dq/dt = 2q .
Solving these, we have p = c1 e^{2t}, q = c2 e^{2t}. Hence p/q = a for some constant a. Since p² + q² − 1 − 2z = 0, we have
p = ±a √((1 + 2z)/(1 + a²)) , q = ±√((1 + 2z)/(1 + a²)) .
Now from the strip condition,
dz/dt = p dx/dt + q dy/dt = ±√((1 + 2z)/(1 + a²)) (a dx/dt + dy/dt) .
Integrating, we have √(1 + 2z) = ±(ax + y)/√(1 + a²) + b, and hence
u(x, y, a, b) = (1/2)((ax + y)/√(1 + a²) + b)² − 1/2 .
One can check that the rank of (Da u, D²xa u) is 2. Hence u(x, y, a, b) is a complete integral. Now ua = 0 and ub = 0 give (ax + y)/√(1 + a²) + b = 0. Thus, v(x, y) = −1/2 is a singular solution.
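Both the complete integral and the singular solution can be checked with sympy (a sketch, not part of the notes):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)
u = sp.Rational(1, 2)*((a*x + y)/sp.sqrt(1 + a**2) + b)**2 - sp.Rational(1, 2)

# residual of u_x^2 + u_y^2 - 1 - 2u for the complete integral
pde = sp.simplify(sp.diff(u, x)**2 + sp.diff(u, y)**2 - 1 - 2*u)
print(pde)  # 0

# the constant v = -1/2 also solves the PDE: 0 + 0 = 1 + 2*(-1/2)
v = -sp.Rational(1, 2)
print(1 + 2*v)  # 0
```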
To generate more solutions from a complete integral, we vary the above construction. Choose an open set Ã ⊂ R and a C¹-function h : Ã → R so that the graph (ã, h(ã)) lies within A ⊂ R².
Definition 8.11 (General integral). The general integral (depending on h) is the envelope ṽ(x, y) of the functions {u(·, ã)}ã∈Ã, provided this envelope exists and is C¹, where u(x, y, ã) = u(x, y, ã, h(ã)).
Example 8.14. Find a general integral of ux uy = u.
Solution: We have shown that a complete integral of the above PDE is given by u(x, y, a, b) = (x − a)(y − b). Let h : R → R be the function h(a) = a. Then u(x, y, a) = (x − a)(y − a), and hence ua = 0 gives a = (x + y)/2. Therefore, the general integral is v(x, y) = −(x − y)²/4.
Remark 9.2. Consider a general 2nd order PDE in two independent variables x, y:
F(x, y, u, ux, uy, uxx, uxy, uyy) = 0 .
Let
a = ∂F/∂uxx , b = (1/2) ∂F/∂uxy , c = ∂F/∂uyy .
Then the PDE F(x, y, u, ux, uy, uxx, uxy, uyy) = 0 is hyperbolic, elliptic, or parabolic according as
ac − b² < 0 , ac − b² > 0 , ac − b² = 0 ,
respectively.
Example 9.5. Consider the Monge-Ampère equation
uxx uyy − u2xy = f (x).
Here, a = uyy , b = −uxy and c = uxx . Thus,
i) equation is elliptic for a solution u exactly when f (x) > 0.
ii) equation is hyperbolic for a solution u exactly when f (x) < 0.
iii) equation is parabolic for a solution u exactly when f (x) = 0.
9.1. Canonical form of 2nd order principally linear PDE. Consider a 2nd order principally linear PDE
a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy = d(x, y, ux, uy) .   (9.4)
We use a change of coordinates from (x, y) to (ξ, η) so that equation (9.4) may be transformed into an equation whose principal part takes the form of the wave, heat or Laplace equation. The canonical forms may sometimes be used to find the general solution.
Theorem 9.1 (For hyperbolic equations). Let the equation (9.4) be hyperbolic in a region Ω of the xy-plane. Let (x0, y0) ∈ Ω. Then there exists a change of coordinates (x, y) ↦ (ξ, η) in a nbd. of (x0, y0) such that (9.4) reduces to a 2nd order hyperbolic PDE of the form
uξη = D̃(ξ, η, u, uξ, uη) .   (9.5)
ξ (resp. η) is constant along the characteristic curves determined by the ODE
dy/dx = (b(x, y) − √(b² − ac))/a(x, y) , resp. dy/dx = (b(x, y) + √(b² − ac))/a(x, y) .
Remark 9.3. Taking the further change of variables x̃ = ξ + η and ỹ = ξ − η, the equation (9.5) is transformed into the PDE
uỹỹ − ux̃x̃ = D̃(x̃, ỹ, u, ux̃, uỹ).
Example 9.6. Find the canonical form of the PDE: x²uxx − 2xyuxy − 3y²uyy + uy = 0.
Solution: In this case, a = x², b = −xy and c = −3y². Thus, b² − ac = 4x²y². Therefore, the equation is hyperbolic at every point (x, y) such that xy ≠ 0. At the points on the coordinate axes, the equation is of parabolic type. Let us consider the case x > 0, y > 0. Then the equation is of hyperbolic type there. The equations of the characteristic curves are dy/dx = (−y ± 2y)/x. Thus, the solutions are x⁻¹y = c and x³y = c. Define ξ(x, y) = x⁻¹y and η(x, y) = x³y. Then we have
−16x²y² uξη + ((4y + 1)/x) uξ + x³ uη = 0 .
The above equation can be written in the variables ξ and η completely. We see that x = (η/ξ)^{1/4} and y = (ξ³η)^{1/4}. Substituting the values of x and y, we have
uξη − ((4(ξ³η)^{1/4} + 1)/(16(ξ³η⁵)^{1/4})) uξ − (1/(16(ξ⁷η)^{1/4})) uη = 0 .
This is the required canonical form.
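The transformed coefficients can be computed mechanically via the chain rule (a sympy sketch, not part of the notes, in which the partial derivatives of u with respect to ξ, η are treated as free symbols); in particular the coefficient of uξ comes out as (4y + 1)/x:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
Uxi, Ueta, Uxixi, Uxieta, Uetaeta = sp.symbols('Uxi Ueta Uxixi Uxieta Uetaeta')

xi, eta = y/x, x**3*y
xix, xiy = sp.diff(xi, x), sp.diff(xi, y)
ex, ey = sp.diff(eta, x), sp.diff(eta, y)

# chain rule for first and second derivatives of u(x, y) = U(xi, eta)
ux = Uxi*xix + Ueta*ex
uy = Uxi*xiy + Ueta*ey
uxx = Uxixi*xix**2 + 2*Uxieta*xix*ex + Uetaeta*ex**2 + Uxi*sp.diff(xi, x, 2) + Ueta*sp.diff(eta, x, 2)
uxy = Uxixi*xix*xiy + Uxieta*(xix*ey + xiy*ex) + Uetaeta*ex*ey + Uxi*sp.diff(xi, x, y) + Ueta*sp.diff(eta, x, y)
uyy = Uxixi*xiy**2 + 2*Uxieta*xiy*ey + Uetaeta*ey**2 + Uxi*sp.diff(xi, y, 2) + Ueta*sp.diff(eta, y, 2)

expr = sp.expand(x**2*uxx - 2*x*y*uxy - 3*y**2*uyy + uy)
print(sp.simplify(expr.coeff(Uxixi)), sp.simplify(expr.coeff(Uetaeta)))  # 0 0
print(sp.simplify(expr.coeff(Uxieta)))                                   # -16*x**2*y**2
print(sp.simplify(expr.coeff(Uxi)), sp.simplify(expr.coeff(Ueta)))       # (4*y + 1)/x and x**3
```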
Example 9.7. Find the general solution of the 2nd order PDE: xuxx + 2x²uxy = ux − 1.
Solution: Here a = x, b = x² and c = 0. So b² − ac = x⁴ > 0 for x ≠ 0. Hence the equation is hyperbolic provided x ≠ 0. The characteristic curves are found by solving
dy/dx = (x² ± √(x⁴))/x = 2x or 0 .
Hence we have y = x² + c and y = c. Therefore, ξ(x, y) = x² − y and η(x, y) = y. Then the equation reduces to 4x³uξη = −1, and hence uξη = −(1/4)(ξ + η)^{−3/2}. This is the desired canonical form. Now integrating with respect to η, we have uξ = (1/2)(ξ + η)^{−1/2} + f(ξ). Integrating again with respect to ξ, we obtain
u = (ξ + η)^{1/2} + F(ξ) + G(η).
Reverting to the variables x and y, we obtain our general solution as
u(x, y) = x + F(x² − y) + G(y) ,
where F and G are arbitrary functions.
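The general solution can be verified directly (a sympy sketch with arbitrary F, G; not part of the notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
F, G = sp.Function('F'), sp.Function('G')

u = x + F(x**2 - y) + G(y)
# residual of x*u_xx + 2*x^2*u_xy - u_x + 1
residual = sp.simplify(x*sp.diff(u, x, 2) + 2*x**2*sp.diff(u, x, y) - sp.diff(u, x) + 1)
print(residual)  # 0
```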
Theorem 9.2 (For parabolic equations). Let the equation (9.4) be parabolic in a region Ω of the xy-plane. Let (x0, y0) ∈ Ω. Then there exists a change of coordinates (x, y) ↦ (ξ, η) in a nbd. of (x0, y0) such that (9.4) reduces to a 2nd order parabolic PDE of the form
uηη = D̃(ξ, η, u, uξ, uη) .   (9.6)
ξ is constant along the characteristic curve determined by the ODE dy/dx = b(x, y)/a(x, y), and η is chosen in such a way that (ξ, η) defines a new coordinate system near (x0, y0).
Example 9.8. Find the canonical form of the PDE
x²uxx − 2xyuxy + y²uyy = 0.
Solution: Observe that the PDE is of parabolic type at every point (x, y) ∈ R². Note that at (0, 0) the equation reduces to 0 = 0, and thus we can determine the canonical form in any domain NOT containing the origin. In order to find the new coordinate system (ξ, η), we need to solve the ODE dy/dx = −y/x to find ξ, and then choose η so that (ξ, η) represents a coordinate system. Note here that ξ(x, y) = xy. Take η(x, y) = x so that the Jacobian J = −x ≠ 0. For this coordinate system, we have
ux = yuξ + uη ; uxx = y²uξξ + 2yuξη + uηη ;
uy = xuξ ; uxy = xyuξξ + xuξη + uξ ; uyy = x²uξξ .
Thus, the PDE reduces to
x²uηη − 2xyuξ = 0 , i.e., uηη = (2ξ/η²) uξ .
Theorem 9.3 (For elliptic equations). Let the equation (9.4) be elliptic in a region Ω of the xy-plane. Let (x0, y0) ∈ Ω. Then there exists a change of coordinates (x, y) ↦ (ξ, η) in a nbd. of (x0, y0) such that (9.4) reduces to a 2nd order elliptic PDE of the form
uηη + uξξ = D̃(ξ, η, u, uξ, uη) .   (9.7)
In order to find the new coordinates (ξ, η), one needs to solve the characteristic ODE
dy/dx = (b(x, y) − √(b² − ac))/a(x, y)
in the complex plane. Let Φ be constant along the characteristics. Then (ξ, η) is given by ξ(x, y) = Re Φ(x, y) and η(x, y) = Im Φ(x, y).
Example 9.9. Find the canonical form of the PDE uxx + x²uyy = 0.
Solution: Observe that the equation is of elliptic type at every point except on the y-axis. Let us solve the characteristic equation dy/dx = ±ix in the complex plane. Note that Φ = y + i x²/2 is constant along the characteristics. Set ξ(x, y) = Re Φ(x, y) = y and η(x, y) = Im Φ(x, y) = x²/2. For this coordinate system, we have
uxx = uη + x²uηη , uyy = uξξ .
Therefore, the required canonical form is
uξξ + uηη + (1/(2η)) uη = 0 .
Here the ξ = const. lines form a family of straight lines parallel to the x-axis, and the η = const. lines form a family of straight lines parallel to the y-axis.