Eigenvalue Problems for ODEs
Initializations
> restart
> with(LinearAlgebra):
> with(Student[Calculus1]):
> with(RootFinding):
Introduction
Some boundary value problems for partial differential equations are amenable to analytic techniques.
For example, the constant-coefficient, second-order linear equations called the heat, wave, and
potential equations are solved with some type of Fourier series representation obtained from the
Sturm-Liouville eigenvalue problem that arises upon separating variables.
In this, and the next two articles in this series, we examine the role of Maple in the solution of such
boundary value problems. We will show efficient techniques for separating variables, then show
how to guide Maple through the solution of the resulting Sturm-Liouville eigenvalue problems. In
four previous columns we have demonstrated how to implement the Fourier series calculations in
Maple. Together, these articles represent the essence of the typical undergraduate course in boundary
value problems. In fact, the author's Advanced Engineering Mathematics electronic book includes a
complete course in the subject area.
In subsequent articles, we will explore separation of variables for Laplace's equation in cylindrical
coordinates, a process that leads to Bessel's equation as the ODE in the Sturm-Liouville eigenvalue
problem, which will now be singular and not regular as in this article. Finally, we will separate
variables for Laplace's equation in spherical coordinates so that the resulting ODE in the Sturm-Liouville eigenvalue problem will be Legendre's equation, again a singular problem. The difference
between these two problems is that for Bessel's equation, the resulting Bessel functions themselves
become eigenfunctions, whereas for Legendre's equation, it is far more difficult to pass from
Legendre functions in a general solution to the Legendre polynomials that are the eigenfunctions.
The wave equation

∂²/∂t² u(x, t) = c² ∂²/∂x² u(x, t)   (3.1)

separates under the assumption that u(x, t) can be written as the product
> U := X(x)*T(t)

U := X(x) T(t)   (3.2)
Substitution leads to

> eval((3.1), u(x, t) = U)

X(x) (d²/dt² T(t)) = c² (d²/dx² X(x)) T(t)   (3.3)

whereupon division by u(x, t) in its separated form yields

> simplify((3.3)/(U*c^2))

(d²/dt² T(t))/(T(t) c²) = (d²/dx² X(x))/X(x)   (3.4)
Note that we have also divided through by c², the square of the wave speed. The Bernoulli separation constant λ is now introduced, resulting in the two ordinary differential equations

> q1 := lhs((3.4)) = λ;
> q2 := rhs((3.4)) = λ;

q1 := (d²/dt² T(t))/(T(t) c²) = λ

q2 := (d²/dx² X(x))/X(x) = λ
more commonly written as

> DE1 := numer(normal(lhs(q2) - rhs(q2))) = 0;
> DE2 := numer(normal(lhs(q1) - rhs(q1))) = 0;

DE1 := d²/dx² X(x) − λ X(x) = 0   (3.5)

DE2 := d²/dt² T(t) − λ T(t) c² = 0   (3.6)

The first equation, governing X(x), is a formally self-adjoint ODE that will become part of the Sturm-Liouville eigenvalue problems considered in the following section.
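As a numeric sanity check of the separation (a Python sketch, not part of the Maple worksheet; the wave speed c, mode number n, and sample point below are arbitrary illustrative choices), a product of solutions of (3.5) and (3.6) should satisfy the wave equation (3.1):

```python
import math

c = 1.3    # arbitrary wave speed (illustrative choice)
n = 2      # with lambda = -n**2, X(x) = sin(n x) solves (3.5)
h = 1e-4   # finite-difference step

def u(x, t):
    # separated product X(x)*T(t); T(t) = cos(n c t) solves (3.6)
    return math.sin(n * x) * math.cos(n * c * t)

def second_diff(f, s):
    # central second-difference approximation to f''(s)
    return (f(s + h) - 2.0 * f(s) + f(s - h)) / h**2

x0, t0 = 0.7, 0.4
u_tt = second_diff(lambda t: u(x0, t), t0)
u_xx = second_diff(lambda x: u(x, t0), x0)
residual = u_tt - c**2 * u_xx   # should vanish up to O(h^2) truncation error
```

The residual is at the level of the finite-difference error, confirming that the separated pieces reassemble into a solution of the PDE.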
Handing the Dirichlet problem for DE1 directly to dsolve gives

> dsolve({DE1, X(0) = 0, X(π) = 0}, X(x))

X(x) = 0   (4.1.1)
Unfortunately, Maple returns the zero solution and does not even begin to solve the eigenvalue problem. That is why we are writing this series of articles. Maple can actually be of great help in
solving such problems, but it must be guided by the user.
The characteristic equation for (4.1.1) is obtained by making the exponential guess

> EG := exp(m*x)

EG := e^(m x)

Substitution and the obvious algebra then give

> collect(eval(DE1, X(x) = EG)/EG, exp)

m² − λ = 0

It should be obvious that the characteristic roots are m = ±√λ, and that λ = 0 is the "repeated root" case. Since the form of the general solution of (4.1.1) changes for this case, we consider it first as a special case. The differential equation is then

> DE1a := eval(DE1, λ = 0)

DE1a := d²/dx² X(x) = 0
and a solution of this equation with just the left-hand boundary condition is

> dsolve({DE1a, X(0) = 0}, X(x))

X(x) = _C1 x   (4.1.2)

Applying the right-hand boundary condition to this solution leads to the equation

> eval(rhs((4.1.2)), x = π) = 0

_C1 π = 0

from which it is clear that the multiplicative constant has to be zero. Since this leads to the trivial solution X(x) = 0, we conclude that λ = 0 is not an eigenvalue.
We next consider the case λ ≠ 0, and again use the strategy of applying just the left-hand boundary condition. The resulting solution is

> X1 := rhs(dsolve({DE1, X(0) = 0}, X(x)))

X1 := −_C2 e^(−√λ x) + _C2 e^(√λ x)

Because Maple could return this solution with _C1 as the arbitrary constant, we use the following device to rewrite the solution in terms of the constant c. No matter what name Maple returns for the arbitrary constant, our construction will always be in terms of c.

> param := remove(has, indets(X1, name), {λ, x})[1]:
> X2 := eval(X1, param = c)

X2 := −c e^(−√λ x) + c e^(√λ x)

Applying the right-hand boundary condition then gives

> eval(X2, x = π) = 0

−c e^(−√λ π) + c e^(√λ π) = 0   (4.1.3)
Without intervention, Maple will provide just the zero solution for λ, as we confirm with

> solve((4.1.3), λ)

0

The requisite tool is the environment variable

> _EnvAllSolutions := true

_EnvAllSolutions := true

which, if set to the value true, enables Maple to return the solution

> solve((4.1.3), λ)

−_Z1²   (4.1.4)

Maple uses the notation _Zk, k = 1, 2, …, to indicate an integer. Unfortunately, each time this solve command is executed, the index k will increment by 1, so again, we cannot reference this symbol in a fail-safe way. Thus, we resort to the following device

> temp := indets((4.1.4), name)[1]:
> eval((4.1.4), temp = n)

−n²

to write the eigenvalues with a unique notation. The eigenvalues are the negative numbers λ = −n², n = 1, 2, ….
It is also possible to lead Maple through the steps resulting in this solution for the eigenvalues. We begin by writing (4.1.3) in the form

> factor((4.1.3))

−c (e^(−√λ π) − e^(√λ π)) = 0   (4.1.5)

Since c = 0 would result in the trivial solution X(x) = 0, we must have c ≠ 0, so that

e^(−√λ π) − e^(√λ π) = 0   (4.1.6)

from which we realize that if λ is real, the only solution of this equation will be λ = 0, again leading to the trivial solution. This inspires the substitution λ = −μ², under which (4.1.6) becomes

e^(−I μ π) − e^(I μ π) = 0   (4.1.7)

> simplify(evalc((4.1.7))) assuming μ > 0

−2 I sin(μ π) = 0   (4.1.8)

whose solutions are μ = n, n = 1, 2, …, so that once again the eigenvalues are λ = −n².
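The condition (4.1.6) can be verified numerically (a Python sketch using complex arithmetic; the probe value −2.5 is an arbitrary non-eigenvalue):

```python
import cmath
import math

def bc_det(lam):
    # left side of e^(-sqrt(lam)*Pi) - e^(sqrt(lam)*Pi) = 0 from (4.1.6)
    r = cmath.sqrt(complex(lam))
    return cmath.exp(-r * math.pi) - cmath.exp(r * math.pi)

# at the eigenvalues lambda = -n^2 the combination vanishes ...
residuals = [abs(bc_det(-n**2)) for n in range(1, 6)]
# ... while a value that is not an eigenvalue leaves it far from zero
off = abs(bc_det(-2.5))
```

For λ = −μ² the combination reduces to −2 I sin(μπ), so the residuals above are zero precisely when μ is an integer, in agreement with (4.1.8).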
The same differential equation,

> DE1

d²/dx² X(x) − λ X(x) = 0

but now subject to the Neumann conditions X′(0) = 0 and X′(π) = 0, will have a different set of eigenvalues and eigenfunctions than we obtained with the Dirichlet conditions. Since we already know that λ = 0 must be handled as a special case, we begin by obtaining the general solution of the ODE

> DE1a

d²/dx² X(x) = 0
We again take precautions to write the solution with unique arbitrary constants, obtaining

> temp := rhs(dsolve(DE1a, X(x))):

with the constants renamed to a and b, so that X0 := b x + a. Applying the derivative boundary conditions to X0 gives

> eval(diff(X0, x), x = 0) = 0;
> eval(diff(X0, x), x = π) = 0;

b = 0

b = 0

so X0 reduces to the constant a: λ = 0 is an eigenvalue of the Neumann problem, with eigenfunction the constant function 1. For λ ≠ 0, the general solution, again written with unique constants, is

X1 := c1 e^(√λ x) + c2 e^(−√λ x)
To this general solution we apply the boundary conditions, obtaining the homogeneous algebraic equations

> Eq1 := eval(diff(X1, x), x = 0) = 0

Eq1 := c1 √λ − c2 √λ = 0

and

> Eq2 := eval(diff(X1, x), x = π) = 0

Eq2 := c1 √λ e^(√λ π) − c2 √λ e^(−√λ π) = 0

Maple's solve command cannot determine the values of λ for which this system has nontrivial solutions. We guide Maple by writing the system matrix as
> A := GenerateMatrix([Eq1, Eq2], [c1, c2])[1]

A := Matrix([[√λ, −√λ], [√λ e^(√λ π), −√λ e^(−√λ π)]])   (4.2.1)

and setting its determinant to zero, the condition under which the system will have nontrivial solutions for c1 and c2.

> Determinant((4.2.1)) = 0

−λ e^(−√λ π) + λ e^(√λ π) = 0   (4.2.2)

Clearly, λ = 0 is a solution, but that belongs to the first case. "Canceling" λ and solving gives

> solve((4.2.2), λ)

−_Z3²   (4.2.3)

so that once again the eigenvalues are λ = −n², n = 1, 2, …. Evaluating the boundary equations at these eigenvalues,

> Eq3 := eval(Eq1, λ = −n²) assuming n::posint;
> Eq4 := eval(Eq2, λ = −n²) assuming n::posint;

Eq3 := I c1 n − I c2 n = 0

Eq4 := I n (−1)^n (c1 − c2) = 0

both of which force c1 = c2. The eigenfunction for λ = −n² is therefore c1 (e^(I n x) + e^(−I n x)) = 2 c1 cos(n x), or simply cos(n x); together with the constant eigenfunction 1 for λ = 0, these are the eigenfunctions of the Neumann problem.
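As with the Dirichlet case, a quick numeric check (Python sketch; the sample point 0.9 is arbitrary) confirms that cos(n x) satisfies DE1 with λ = −n² together with both Neumann conditions:

```python
import math

def neumann_ode_residual(n, x, h=1e-4):
    # X(x) = cos(n x) should satisfy X'' - lambda*X = 0 with lambda = -n**2
    lam = -n**2
    Xpp = (math.cos(n*(x + h)) - 2.0*math.cos(n*x) + math.cos(n*(x - h))) / h**2
    return Xpp - lam * math.cos(n * x)

ode_residuals = [abs(neumann_ode_residual(n, 0.9)) for n in range(1, 5)]

# Neumann data: X'(x) = -n*sin(n*x) vanishes at both x = 0 and x = Pi
bc_residuals = [abs(-n * math.sin(n * 0.0)) + abs(-n * math.sin(n * math.pi))
                for n in range(1, 5)]
```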
As a final example, we consider an eigenvalue problem whose differential equation is not formally self-adjoint:

> deq := diff(y(x), x, x) + 2*diff(y(x), x) + λ*y(x) = 0;
> bc1 := 3*D(y)(0) + 5*y(0) = 0;
> bc2 := 2*D(y)(π) + y(π) = 0;

deq := d²/dx² y(x) + 2 d/dx y(x) + λ y(x) = 0

bc1 := 3 D(y)(0) + 5 y(0) = 0

bc2 := 2 D(y)(π) + y(π) = 0
To obtain solutions of the differential equation, we seek a fundamental set of linearly independent members. Making the exponential guess leads to the characteristic equation

> collect(eval(deq, y(x) = EG)/EG, exp)

m² + 2 m + λ = 0   (5.1)

whose roots are

> solve((5.1), m)

−1 + √(1 − λ), −1 − √(1 − λ)   (5.2)

From these characteristic roots, we can see that λ = 1 is the repeated-root case for which the solution of the ODE assumes the special form

> temp := dsolve(eval(deq, λ = 1), y(x)):
> Y1 := unapply(eval(rhs(temp), [_C1 = r1, _C2 = s1]), x):
> 'Y1'(x) = Y1(x)

Y1(x) = r1 e^(−x) + s1 e^(−x) x
For λ ≠ 1, the general solution of the ODE can be written as

> temp := dsolve(deq, y(x)):
> Y2 := unapply(eval(rhs(temp), [_C1 = r2, _C2 = s2]), x):
> 'Y2'(x) = Y2(x)

Y2(x) = r2 e^((−1 + √(1 − λ)) x) + s2 e^((−1 − √(1 − λ)) x)
Even though Maple defaults to an exponential form for the members of its fundamental set, we should not assume that λ is characterized by anything other than the condition λ ≠ 1. In fact, at this point we don't even know if the eigenvalues are real. Consequently, we avoid separating the calculation into the cases λ < 1 and λ > 1 because that would imply that we knew λ was not a complex number.
We let the boundary conditions determine λ by considering just the cases λ = 1 and λ ≠ 1. In the first case, we get the algebraic equations
> for k to 2 do
    q||k := eval(bc||k, y = Y1)
  end do

q1 := 2 r1 + 3 s1 = 0

q2 := −r1 e^(−π) − s1 π e^(−π) + 2 s1 e^(−π) = 0

In the second case, the boundary conditions applied to Y2 give

> for k to 2 do
    q||(k+2) := eval(bc||k, y = Y2)
  end do

q3 := 3 r2 (−1 + √(1 − λ)) + 3 s2 (−1 − √(1 − λ)) + 5 r2 + 5 s2 = 0

q4 := 2 r2 (−1 + √(1 − λ)) e^((−1 + √(1 − λ)) π) + 2 s2 (−1 − √(1 − λ)) e^((−1 − √(1 − λ)) π) + r2 e^((−1 + √(1 − λ)) π) + s2 e^((−1 − √(1 − λ)) π) = 0

The determinants of the coefficient matrices of these two homogeneous systems are

> Q1 := Determinant(GenerateMatrix([q1, q2], [r1, s1])[1])

Q1 := −2 π e^(−π) + 7 e^(−π)   (5.3)

and
> Q2 := collect(Determinant(GenerateMatrix([q3, q4], [r2, s2])[1]), exp)

Q2 := (8 − 7 √(1 − λ) − 6 λ) e^((−1 + √(1 − λ)) π) + (−8 − 7 √(1 − λ) + 6 λ) e^((−1 − √(1 − λ)) π)   (5.4)

respectively. Since the determinant in (5.3) is nonzero, we must have r1 = s1 = 0, so the corresponding solution is trivial: λ = 1 is not an eigenvalue. The eigenvalues are therefore the zeros of Q2. The Analytic command in the RootFinding package finds all real and complex roots in a region of the complex plane; applied to Q2, it returns a list of roots, labeled (5.5), that includes the value λ = 1 (a zero of Q2 because √(1 − λ) vanishes there). We already know that λ = 1 is not an eigenvalue, so we have the real eigenvalues

> remove(w -> is(abs(w - 1) < 0.001), [(5.5)])

[0.4558571180, 1.116356583, 4.243407308, 9.251574985]   (5.6)
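These values can be cross-checked numerically (a Python sketch; the function below transcribes the displayed form of Q2 in (5.4), and λ = 2 is an arbitrary non-eigenvalue probe):

```python
import cmath
import math

def Q2(lam):
    # (8 - 6*lam - 7*nu)*e^((-1+nu)*Pi) + (6*lam - 8 - 7*nu)*e^((-1-nu)*Pi),
    # with nu = sqrt(1 - lam); complex arithmetic covers both lam < 1 and lam > 1
    nu = cmath.sqrt(complex(1 - lam))
    return ((8 - 6*lam - 7*nu) * cmath.exp((-1 + nu) * math.pi)
            + (6*lam - 8 - 7*nu) * cmath.exp((-1 - nu) * math.pi))

eigs = [0.4558571180, 1.116356583, 4.243407308, 9.251574985]  # the list (5.6)
residuals = [abs(Q2(lam)) for lam in eigs]
probe = abs(Q2(2.0))   # lambda = 2 is not an eigenvalue
```

Each residual is at roundoff level for the ten-digit eigenvalues, while the probe value is far from zero.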
While the Analytic command provides a guarantee that all the solutions in a fixed region have been found, it is slow. If we accept (5.5) as evidence that the eigenvalues are positive and real, then we can find them slightly faster by changing the form of the determinant in (5.4) to

> simplify(evalc(Q2)) assuming λ > 1

−2 I e^(−π) (7 √(λ − 1) cos(√(λ − 1) π) − 8 sin(√(λ − 1) π) + 6 λ sin(√(λ − 1) π))   (5.7)

and finding the zeros of the real factor. To obtain the eigenfunctions, solve q3 for s2:

> S := solve(q3, s2)

S := r2 (2 + 3 √(1 − λ))/(−2 + 3 √(1 − λ))
For any eigenvalue λ, the corresponding eigenfunction is then given (up to a multiplicative constant) by

> Y := collect(eval(Y2(x), s2 = S), [r2, exp])/r2

Y := e^((−1 + √(1 − λ)) x) + ((2 + 3 √(1 − λ))/(−2 + 3 √(1 − λ))) e^((−1 − √(1 − λ)) x)   (5.8)

For λ > 1, combining these complex exponentials over a common denominator gives

> simplify(Y) assuming λ > 1

(−4 I sin(√(λ − 1) x) + 6 I √(λ − 1) cos(√(λ − 1) x)) e^(−x)/(−2 + 3 I √(λ − 1))   (5.9)
and then

> collect((5.9), [sin, cos])

(2 (−6 √(λ − 1) + 4 I) e^(−x)/(−5 + 9 λ)) sin(√(λ − 1) x) + (2 (9 λ − 9 − 6 I √(λ − 1)) e^(−x)/(−5 + 9 λ)) cos(√(λ − 1) x)   (5.10)

The real and imaginary parts of this expression are

> Ya := evalc(Re((5.10))) assuming λ > 1;
> Yb := evalc(Im((5.10))) assuming λ > 1;

Ya := 2 e^(−x) (9 λ − 9) cos(√(λ − 1) x)/(−5 + 9 λ) − 12 e^(−x) √(λ − 1) sin(√(λ − 1) x)/(−5 + 9 λ)

Yb := 8 e^(−x) sin(√(λ − 1) x)/(−5 + 9 λ) − 12 e^(−x) √(λ − 1) cos(√(λ − 1) x)/(−5 + 9 λ)
In fact, each of the real and imaginary parts is separately a solution. But they cannot be independent solutions; they must be multiples of each other. This common multiple is

> f := simplify(convert(taylor(Ya/Yb, x, 3), polynom))

f := −(3/2) √(λ − 1)

and we can verify that indeed, the real and imaginary parts of Y(x) are linearly dependent by showing the linear combination f Yb − Ya vanishes.

> simplify(f*Yb − Ya)

0
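The proportionality of Ya and Yb can also be confirmed numerically (a Python sketch; λ = 2 is an arbitrary sample with λ > 1, not necessarily an eigenvalue):

```python
import math

lam = 2.0                 # sample value with lam > 1 (illustrative choice)
w = math.sqrt(lam - 1.0)  # w = sqrt(lambda - 1)

def Ya(x):
    # real part of the eigenfunction combination (5.10)
    return (2*(9*lam - 9)*math.cos(w*x) - 12*w*math.sin(w*x)) * math.exp(-x) / (9*lam - 5)

def Yb(x):
    # imaginary part of (5.10)
    return (8*math.sin(w*x) - 12*w*math.cos(w*x)) * math.exp(-x) / (9*lam - 5)

f = -1.5 * w              # the common multiple f = -(3/2)*sqrt(lambda - 1)
residuals = [abs(f * Yb(x) - Ya(x)) for x in (0.0, 0.5, 1.0, 2.0)]
```

The combination f·Yb − Ya vanishes to machine precision at every sample point, even though Ya itself is not small there.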
A convenient normalizing multiple is the value of Ya at x = 0:

> g := eval(Ya, x = 0)

g := 2 (9 λ − 9)/(−5 + 9 λ)

Dividing Ya by g then gives the eigenfunction

> Yc := collect(simplify(Ya/g), [exp, cos, sin])

Yc := (cos(√(λ − 1) x) − 6 √(λ − 1) sin(√(λ − 1) x)/(9 λ − 9)) e^(−x)
With the computed eigenvalues stored in a list L, the eigenfunctions are now constructed. Since the first eigenvalue is less than 1, the trigonometric form of Yc would be complex for it, so φ1 is obtained from the exponential form; the remaining eigenvalues exceed 1, so Yc applies to them directly.

> unassign('λ'):
> φ1 := simplify(expand(convert(eval(Yc, λ = L[1]), exp))) assuming x > 0;
> for k from 2 to nops(L) do φ||k := eval(Yc, λ = L[k]) end do:

φ1 := 0.0481208852 e^(−0.2623395890 x) + 0.9518791148 e^(−1.737660411 x)

with each of the remaining φk a damped oscillation of the form e^(−x) (cos(√(λk − 1) x) − 6 √(λk − 1) sin(√(λk − 1) x)/(9 λk − 9)).
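Substituting the displayed eigenvalue and coefficients back into deq, bc1, and bc2 provides a direct numerical check of φ1 (a Python sketch; the interior sample point 0.8 is an arbitrary choice):

```python
import math

lam1 = 0.4558571180                  # first eigenvalue from (5.6)
a, b = 0.0481208852, 0.9518791148    # coefficients of phi1 as displayed above
m1 = -1.0 + math.sqrt(1.0 - lam1)    # characteristic roots from (5.2)
m2 = -1.0 - math.sqrt(1.0 - lam1)

def phi1(x):
    return a * math.exp(m1 * x) + b * math.exp(m2 * x)

def dphi1(x):
    return a * m1 * math.exp(m1 * x) + b * m2 * math.exp(m2 * x)

def d2phi1(x):
    return a * m1**2 * math.exp(m1 * x) + b * m2**2 * math.exp(m2 * x)

ode_res = abs(d2phi1(0.8) + 2.0 * dphi1(0.8) + lam1 * phi1(0.8))  # deq
bc1_res = abs(3.0 * dphi1(0.0) + 5.0 * phi1(0.0))                 # bc1
bc2_res = abs(2.0 * dphi1(math.pi) + phi1(math.pi))               # bc2
```

The residuals vanish to the precision of the ten-digit data, and φ1(0) = 1, confirming the normalization that sends every eigenfunction through (0, 1).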
A graph of the first six eigenfunctions is given in Figure 1, where our normalization has all eigenfunctions passing through (0, 1). The eigenfunctions φk(x), k = 1, …, 6, are given in dotted black, then solid black, blue, red, green, and magenta.

> plot([seq(φ||k, k = 1 .. 6)], x = 0 .. π)

[Figure 1: Eigenfunctions φk, k = 2, …, 6, are in solid black, blue, red, green, and magenta, respectively, whereas φ1 is in dotted black]
Since the differential equation is not self-adjoint, we are not assured of orthogonal eigenfunctions. The inner products (φi, φj), 1 ≤ i, j ≤ 6, are computed and displayed in Table 1. If these eigenfunctions were orthogonal, the off-diagonal entries would be zero. That they are not is our evidence that the eigenfunctions are not orthogonal.

> Matrix(6, 6, (i, j) -> int(φ||i * φ||j, x = 0 .. π))

[Table 1: the 6 × 6 matrix of inner products; every off-diagonal entry is nonzero, e.g. 0.3099959961, 0.2770259909, 0.2129038605, 0.1283061895]
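The non-orthogonality can be spot-checked numerically (a Python sketch; Simpson's rule with an arbitrary panel count, φ1 in its exponential form, and φ2 taken from the trigonometric form of Yc at the second eigenvalue):

```python
import math

lam1, lam2 = 0.4558571180, 1.116356583   # first two eigenvalues from (5.6)

def phi1(x):
    return (0.0481208852 * math.exp((-1 + math.sqrt(1 - lam1)) * x)
            + 0.9518791148 * math.exp((-1 - math.sqrt(1 - lam1)) * x))

def phi2(x):
    w = math.sqrt(lam2 - 1.0)
    return (math.cos(w*x) - 6.0*w*math.sin(w*x)/(9.0*lam2 - 9.0)) * math.exp(-x)

# composite Simpson approximation to the inner product on [0, Pi]
N = 2000                   # even number of panels (arbitrary choice)
h = math.pi / N
s = phi1(0.0)*phi2(0.0) + phi1(math.pi)*phi2(math.pi)
for k in range(1, N):
    s += (4 if k % 2 else 2) * phi1(k*h) * phi2(k*h)
inner = s * h / 3.0
```

The inner product is far from zero, consistent in size with the nonzero off-diagonal entries of Table 1.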
The λ > 1 case can also be handled directly in real form. Taking Y3(x) := e^(−x) (r3 cos(√(λ − 1) x) + s3 sin(√(λ − 1) x)) as the general solution, the boundary conditions give

q5 := 2 r3 + 3 s3 √(λ − 1) = 0

q6 := −r3 e^(−π) cos(√(λ − 1) π) − s3 e^(−π) sin(√(λ − 1) π) − 2 r3 √(λ − 1) e^(−π) sin(√(λ − 1) π) + 2 s3 √(λ − 1) e^(−π) cos(√(λ − 1) π) = 0

Solving q5 for s3,

> solve(q5, s3)

s3 = −(2/3) r3/√(λ − 1)   (5.11)

and substituting into Y3 produces, after normalization by r3,

> Yd := collect(eval(Y3(x), (5.11)), exp)/r3

Yd := (−(2/3) sin(√(λ − 1) x)/√(λ − 1) + cos(√(λ − 1) x)) e^(−x)

which agrees with the eigenfunction Yc found earlier, since 6 √(λ − 1)/(9 λ − 9) = 2/(3 √(λ − 1)).