
4

Boundary Value Problems

4.1 Introduction
Until this point we have solved initial value problems. For an initial value
problem one has to solve a differential equation subject to conditions on the
unknown function and its derivatives at one value of the independent variable.
For example, for x = x(t) we could have the initial value problem

x′′ + x = 2, x(0) = 1, x′ (0) = 0. (4.1)

In the next chapters we will study boundary value problems and various
tools for solving such problems. In this chapter we will motivate our interest
in boundary value problems by looking into solving the one-dimensional heat
equation, which is a partial differential equation. For the rest of the section,
we will use this solution to show that in the background of our solution of
boundary value problems is a structure based upon linear algebra and analysis
leading to the study of inner product spaces. Technically, we should be led
to Hilbert spaces, which are complete inner product spaces.
For a boundary value problem one has to solve a differential equation subject
to conditions on the unknown function or its derivatives at more than one
value of the independent variable. As an example, we have a slight modification
of the above problem: Find the solution x = x(t) for 0 ≤ t ≤ 1 that satisfies
the problem
x′′ + x = 2, x(0) = 0, x(1) = 0. (4.2)
Typically, initial value problems involve time dependent functions and
boundary value problems are spatial. So, with an initial value problem one
knows how a system evolves in terms of the differential equation and the state
of the system at some fixed time. Then one seeks to determine the state of
the system at a later time.
For boundary value problems, one knows how each point responds to its
neighbors, but there are conditions that have to be satisfied at the endpoints.
An example would be a horizontal beam supported at the ends, like a bridge.

The shape of the beam under the influence of gravity, or other forces, would
lead to a differential equation and the boundary conditions at the beam ends
would affect the solution of the problem. There are also a variety of other
types of boundary conditions. In the case of a beam, one end could be fixed
and the other end could be free to move. We will explore the effects of different
boundary value conditions in our discussions and exercises.
Let’s solve the above boundary value problem. As with initial value prob-
lems, we need to find the general solution and then apply any conditions that
we may have. This is a nonhomogeneous differential equation, so we have that
the solution is a sum of a solution of the homogeneous equation and a par-
ticular solution of the nonhomogeneous equation, x(t) = xh (t) + xp (t). The
solution of x′′ + x = 0 is easily found as

xh (t) = c1 cos t + c2 sin t.

The particular solution is easily found using the Method of Undetermined


Coefficients,
xp (t) = 2.
Thus, the general solution is

x(t) = 2 + c1 cos t + c2 sin t.

We now apply the boundary conditions and see if there are values of c1
and c2 that yield a solution to our problem. The first condition, x(0) = 0,
gives
0 = 2 + c1 .
Thus, c1 = −2. Using this value for c1 , the second condition, x(1) = 0, gives

0 = 2 − 2 cos 1 + c2 sin 1.

This yields

c2 = 2(cos 1 − 1)/ sin 1.
We have found that there is a solution to the boundary value problem and
it is given by

x(t) = 2 [1 − cos t + ((cos 1 − 1)/ sin 1) sin t] .
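One can quickly check this solution numerically. The following Python sketch
(the function names and grid size are illustrative choices, not part of the text)
verifies the boundary conditions and tests the ODE with a finite-difference
approximation of x′′:

```python
import numpy as np

# The solution found above: x(t) = 2[1 - cos t + ((cos 1 - 1)/sin 1) sin t].
def x(t):
    return 2 * (1 - np.cos(t) + (np.cos(1) - 1) / np.sin(1) * np.sin(t))

# Boundary conditions: x(0) and x(1) should both be zero.
print(x(0.0), x(1.0))

# ODE check: approximate x'' by central differences and test x'' + x = 2.
t = np.linspace(0, 1, 401)
h = t[1] - t[0]
xpp = (x(t[2:]) - 2 * x(t[1:-1]) + x(t[:-2])) / h**2
print(np.max(np.abs(xpp + x(t[1:-1]) - 2)))   # small, O(h^2)
```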
Boundary value problems arise in many physical systems, just as do many
of the initial value problems we have seen. We will see in the next section that
boundary value problems for ordinary differential equations often appear in
the solution of partial differential equations.

4.2 Partial Differential Equations

In this section we will introduce some generic partial differential equations
and see how the discussion of such equations leads naturally to the study
of boundary value problems for ordinary differential equations. However, we
will not derive the particular equations, leaving that to courses in differential
equations, mathematical physics, etc.
For ordinary differential equations, the unknown functions are functions
of a single variable, e.g., y = y(x). Partial differential equations are equations
involving an unknown function of several variables, such as u = u(x, y),
u = u(x, y, z, t), and its (partial) derivatives. Therefore, the derivatives
are partial derivatives. We will use the standard notations ux = ∂u/∂x,
uxx = ∂ 2 u/∂x2 , etc.
There are a few standard equations that one encounters. These can be
studied in one to three dimensions and are all linear differential equations. A
list is provided in Table 4.1. Here we have introduced the Laplacian operator,
∇2 u = uxx + uyy + uzz . Depending on the types of boundary conditions im-
posed and on the geometry of the system (rectangular, cylindrical, spherical,
etc.), one encounters many interesting boundary value problems for ordinary
differential equations.

Name                     2 Vars                     3D
Heat Equation            ut = kuxx                  ut = k∇2 u
Wave Equation            utt = c2 uxx               utt = c2 ∇2 u
Laplace’s Equation       uxx + uyy = 0              ∇2 u = 0
Poisson’s Equation       uxx + uyy = F (x, y)       ∇2 u = F (x, y, z)
Schrödinger’s Equation   iut = uxx + F (x, t)u      iut = ∇2 u + F (x, y, z, t)u

Table 4.1. List of generic partial differential equations.

Let’s look at the heat equation in one dimension. This could describe the
heat conduction in a thin insulated rod of length L. It could also describe the
diffusion of pollutant in a long narrow stream, or the flow of traffic down a
road. In problems involving diffusion processes, one instead calls this equation
the diffusion equation.
A typical initial-boundary value problem for the heat equation would be
that initially one has a temperature distribution u(x, 0) = f (x). Placing the
bar in an ice bath and assuming the heat flow is only through the ends of the
bar, one has the boundary conditions u(0, t) = 0 and u(L, t) = 0. Of course,
we are dealing with Celsius temperatures and we assume there is plenty of ice
to keep that temperature fixed at each end for all time. So, the problem one
would need to solve is given as

1D Heat Equation

PDE  ut = kuxx ,       0 < t, 0 ≤ x ≤ L
IC   u(x, 0) = f (x),  0 < x < L                                 (4.3)
BC   u(0, t) = 0,      t > 0
     u(L, t) = 0,      t > 0

Here, k is the heat conduction constant and is determined using
properties of the bar.
Another problem that will come up in later discussions is that of the vi-
brating string. A string of length L is stretched out horizontally with both ends
fixed. Think of a violin string or a guitar string. Then the string is plucked,
giving the string an initial profile. Let u(x, t) be the vertical displacement of
the string at position x and time t. The motion of the string is governed by
the one dimensional wave equation. The initial-boundary value problem for
this problem is given as

1D Wave Equation

PDE  utt = c2 uxx ,     0 < t, 0 ≤ x ≤ L
IC   u(x, 0) = f (x),   0 < x < L
     ut (x, 0) = g(x),  0 < x < L                                (4.4)
BC   u(0, t) = 0,       t > 0
     u(L, t) = 0,       t > 0

In this problem c is the wave speed in the string. It depends on the
mass per unit length of the string and the tension placed on the
string.

4.2.1 Solving the Heat Equation

We would like to see how the solution of such problems involving partial
differential equations will lead naturally to studying boundary value problems
for ordinary differential equations. We will see this as we attempt the solution
of the heat equation problem 4.3. We will employ a method typically used in
studying linear partial differential equations, called the method of separation
of variables.
We assume that u can be written as a product of single variable functions
of each independent variable,

u(x, t) = X(x)T (t).

Substituting this guess into the heat equation, we find that

XT ′ = kX ′′ T.

Dividing both sides by ku = kXT , we then get



(1/k) T ′ /T = X ′′ /X.
We have separated the functions of time on one side and space on the other
side. The only way that a function of t equals a function of x is if the functions
are constant functions. Therefore, we set each function equal to a constant,
λ:

(1/k) T ′ /T = X ′′ /X = λ,

where the left side is a function of t only, the middle a function of x only,
and λ is the common constant.
This leads to two equations:

T ′ = kλT, (4.5)

X ′′ = λX. (4.6)
These are ordinary differential equations. The general solutions to these equa-
tions are readily found as
T (t) = Ae^{kλt} ,                                               (4.7)

X(x) = c1 e^{√λ x} + c2 e^{−√λ x} .                              (4.8)
We need to be a little careful at this point. The aim is to force our product
solutions to satisfy both the boundary conditions and initial conditions. Also,
we should note that λ is arbitrary and may be positive, zero, or negative. We
first look at how the boundary conditions on u lead to conditions on X.
The first condition is u(0, t) = 0. This implies that

X(0)T (t) = 0

for all t. The only way that this is true is if X(0) = 0. Similarly, u(L, t) = 0
implies that X(L) = 0. So, we have to solve the boundary value problem

X ′′ − λX = 0, X(0) = 0 = X(L). (4.9)

We are seeking nonzero solutions, as X ≡ 0 is an obvious and uninteresting


solution. We call such solutions trivial solutions.
There are three cases to consider, depending on the sign of λ.
I. λ > 0
In this case we have the exponential solutions
X(x) = c1 e^{√λ x} + c2 e^{−√λ x} .                              (4.10)

For X(0) = 0, we have

0 = c1 + c2 .

We will take c2 = −c1 . Then, X(x) = c1 (e^{√λ x} − e^{−√λ x} ) = 2c1 sinh √λ x.
Applying the second condition, X(L) = 0 yields

2c1 sinh √λ L = 0.

This will be true only if c1 = 0, since sinh √λ L > 0 for λ > 0. Thus, the only solution in
this case is X(x) = 0. This leads to a trivial solution, u(x, t) = 0.
II. λ = 0
For this case it is easier to set λ to zero in the differential equation. So,
X ′′ = 0. Integrating twice, one finds

X(x) = c1 x + c2 .

Setting x = 0, we have c2 = 0, leaving X(x) = c1 x. Setting x = L, we find
c1 L = 0. So, c1 = 0 and we are once again left with a trivial solution.
III. λ < 0
In this case it would be simpler to write λ = −µ2 . Then the differential
equation is
X ′′ + µ2 X = 0.
The general solution is

X(x) = c1 cos µx + c2 sin µx.

At x = 0 we get 0 = c1 . This leaves X(x) = c2 sin µx. At x = L, we find

0 = c2 sin µL.

So, either c2 = 0 or sin µL = 0. Taking c2 = 0 leads to a trivial solution again.
But, there are cases when the sine is zero. Namely,

µL = nπ, n = 1, 2, . . . .

Note that n = 0 is not included since this leads to a trivial solution. Also,
negative values of n are redundant, since the sine function is an odd
function.
In summary, we can find solutions to the boundary value problem (4.9)
for particular values of λ. The solutions are
Xn (x) = sin(nπx/L), n = 1, 2, 3, . . .

for
λn = −µn^2 = −(nπ/L)^2 , n = 1, 2, 3, . . . .
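These eigenpairs can be verified symbolically. The sketch below (an
illustration using the sympy library; nothing here is prescribed by the text)
checks that each Xn satisfies X ′′ = λn X along with X(0) = X(L) = 0:

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
n = sp.symbols('n', integer=True, positive=True)

X = sp.sin(n * sp.pi * x / L)      # candidate eigenfunction X_n
lam = -(n * sp.pi / L)**2          # candidate eigenvalue lambda_n

# X'' - lambda_n X should vanish identically ...
print(sp.simplify(X.diff(x, 2) - lam * X))    # 0
# ... and so should the boundary values X(0) and X(L).
print(X.subs(x, 0), X.subs(x, L))             # 0 0
```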
Product solutions of the heat equation (4.3) satisfying the boundary con-
ditions are therefore
un (x, t) = bn e^{kλn t} sin(nπx/L), n = 1, 2, 3, . . . ,        (4.11)

where bn is an arbitrary constant. However, these do not necessarily satisfy
the initial condition u(x, 0) = f (x). What we do get is

un (x, 0) = bn sin(nπx/L), n = 1, 2, 3, . . . .
So, if our initial condition is in one of these forms, we can pick out the right
n and we are done.
For other initial conditions, we have to do more work. Note, since the heat
equation is linear, we can write a linear combination of our product solutions
and obtain the general solution satisfying the given boundary conditions as

u(x, t) = ∑_{n=1}^∞ bn e^{kλn t} sin(nπx/L).                     (4.12)

The only thing to impose is the initial condition:

f (x) = u(x, 0) = ∑_{n=1}^∞ bn sin(nπx/L).

So, if we are given f (x), can we find the constants bn ? If we can, then we will
have the solution to the full initial-boundary value problem. This will be the
subject of the next chapter. However, first we will look at the general form of
our boundary value problem and relate what we have done to the theory of
infinite dimensional vector spaces.
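Anticipating that result, here is a hedged numerical sketch of the whole
procedure: the coefficients bn are computed for a sample initial profile f (our
own choice, as are all names below) using the orthogonality of the sines derived
in Section 4.3.3, and the truncated series (4.12) is then evaluated:

```python
import numpy as np
from scipy.integrate import quad

L, k, N = 1.0, 1.0, 50                 # rod length, conductivity, truncation

def f(x):                              # sample initial temperature profile
    return x * (L - x)

# b_n = (2/L) * integral of f(x) sin(n pi x/L) over [0, L], using the
# orthogonality of the sines (see Section 4.3.3).
def b(n):
    val, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x / L), 0, L)
    return 2.0 * val / L

bs = [b(n) for n in range(1, N + 1)]

def u(x, t):
    # Truncated series (4.12) with lambda_n = -(n pi / L)^2.
    return sum(bs[n - 1] * np.exp(-k * (n * np.pi / L)**2 * t)
               * np.sin(n * np.pi * x / L) for n in range(1, N + 1))

print(u(0.5, 0.0), f(0.5))             # series reproduces f at t = 0
print(u(0.5, 0.1))                     # and decays toward zero for t > 0
```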

4.3 Connections to Linear Algebra


We have already seen in earlier chapters that ideas from linear algebra crop up
in our studies of differential equations. Namely, we solved eigenvalue problems
associated with our systems of differential equations in order to determine
the local behavior of dynamical systems near fixed points. In our study of
boundary value problems we will find more connections with the theory of
vector spaces. However, we will find that our problems lie in the realm of
infinite dimensional vector spaces. In this section we will begin to see these
connections.

4.3.1 Eigenfunction Expansions for PDEs

In the last section we sought solutions of the heat equation. Let’s formally
write the heat equation in the form
(1/k) ut = L[u],                                                 (4.13)
where

L = ∂ 2 /∂x2 .
L is another example of a linear differential operator. [See Section 1.1.2.] It is
a differential operator because it involves derivative operators. We sometimes
define Dx = ∂/∂x, so that L = Dx2 . It is linear, because for functions f (x) and
g(x) and constants α, β we have

L[αf + βg] = αL[f ] + βL[g].

When solving the heat equation, using the method of separation of vari-
ables, we found an infinite number of product solutions un (x, t) = Tn (t)Xn (x).
We did this by solving the boundary value problem

L[X] = λX, X(0) = 0 = X(L). (4.14)

Here we see that an operator acts on an unknown function and spits out an
unknown constant times that unknown. Where have we done this before? This
is the same form as Av = λv. So, we see that Equation (4.14) is really an
eigenvalue problem for the operator L and given boundary conditions. When
we solved the heat equation in the last section, we found the eigenvalues
λn = −(nπ/L)^2
and the eigenfunctions
Xn (x) = sin(nπx/L).
We used these to construct the general solution that is essentially a linear
combination over the eigenfunctions,

u(x, t) = ∑_{n=1}^∞ Tn (t)Xn (x).

Note that these eigenfunctions live in an infinite dimensional function space.


We would like to generalize this method to problems in which L comes
from an assortment of linear differential operators. So, we consider the more
general partial differential equation

ut = L[u], a ≤ x ≤ b, t > 0,

satisfying the boundary conditions

B[u](a, t) = 0, B[u](b, t) = 0, t > 0,

and initial condition

u(x, 0) = f (x), a ≤ x ≤ b.

The form of the allowed boundary conditions B[u] will be taken up later.
Also, we will later see specific examples and properties of linear differential
operators that will allow for this procedure to work.
We assume product solutions of the form un (x, t) = bn (t)φn (x), where the
φn ’s are the eigenfunctions of the operator L,

Lφn = λn φn , n = 1, 2, . . . , (4.15)

satisfying the boundary conditions

B[φn ](a) = 0, B[φn ](b) = 0. (4.16)

Inserting the general solution

u(x, t) = ∑_{n=1}^∞ bn (t)φn (x)

into the partial differential equation, we have

ut = L[u],

∂/∂t [∑_{n=1}^∞ bn (t)φn (x)] = L [∑_{n=1}^∞ bn (t)φn (x)]       (4.17)

On the left we differentiate term by term1 and on the right side we use the
linearity of L:
∑_{n=1}^∞ (dbn (t)/dt) φn (x) = ∑_{n=1}^∞ bn (t)L[φn (x)]        (4.18)

Now, we make use of the result of applying L to the eigenfunction φn :


∑_{n=1}^∞ (dbn (t)/dt) φn (x) = ∑_{n=1}^∞ bn (t)λn φn (x).       (4.19)

Comparing both sides, or using the linear independence of the eigenfunctions,
we see that
dbn (t)/dt = λn bn (t),
whose solution is
bn (t) = bn (0)e^{λn t} .
So, the general solution becomes

1
Infinite series cannot always be differentiated, so one must be careful. When we
ignore such details for the time being, we say that we formally differentiate the
series and formally apply the differential operator to the series. Such operations
need to be justified later.

u(x, t) = ∑_{n=1}^∞ bn (0)e^{λn t} φn (x).

This solution satisfies, at least formally, the partial differential equation and
satisfies the boundary conditions.
Finally, we need to determine the bn (0)’s, which are so far arbitrary. We
use the initial condition u(x, 0) = f (x) to find that

f (x) = ∑_{n=1}^∞ bn (0)φn (x).

So, given f (x), we are left with the problem of extracting the coefficients bn (0)
in an expansion of f in the eigenfunctions φn . We will see that this is related
to Fourier series expansions, which we will take up in the next chapter.

4.3.2 Eigenfunction Expansions for Nonhomogeneous ODEs

Partial differential equations are not the only application of the method of
eigenfunction expansions, as seen in the last section. We can apply this
method to nonhomogeneous two point boundary value problems for ordinary
differential equations assuming that we can solve the associated eigenvalue
problem.
Let’s begin with the nonhomogeneous boundary value problem:

L[u] = f (x), a ≤ x ≤ b
B[u](a) = 0, B[u](b) = 0. (4.20)

We first solve the eigenvalue problem,

L[φ] = λφ, a ≤ x ≤ b,
B[φ](a) = 0, B[φ](b) = 0,                                        (4.21)

and obtain a family of eigenfunctions, {φn (x)}_{n=1}^∞ . Then we assume that u(x)
can be represented as a linear combination of these eigenfunctions:

u(x) = ∑_{n=1}^∞ bn φn (x).

Inserting this into the differential equation, we have

f (x) = L[u]
      = L [∑_{n=1}^∞ bn φn (x)]
      = ∑_{n=1}^∞ bn L[φn (x)]
      = ∑_{n=1}^∞ λn bn φn (x)
      ≡ ∑_{n=1}^∞ cn φn (x).                                     (4.22)

Therefore, we have to find the expansion coefficients cn = λn bn of the
given f (x) in a series expansion over the eigenfunctions; the bn then follow
as bn = cn /λn . This is similar to what we had found for the heat equation
problem and its generalization in the last section.
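As a concrete illustration (a sketch under our own choices of operator and
data, not an example from the text), take L = d2 /dx2 on [0, 1] with
u(0) = u(1) = 0, so φn (x) = sin nπx and λn = −(nπ)^2. For f (x) = −1 the
exact solution u(x) = x(1 − x)/2 is available for comparison:

```python
import numpy as np
from scipy.integrate import quad

N = 99                                   # truncation level
f = lambda x: -1.0                       # right-hand side
exact = lambda x: x * (1 - x) / 2        # solves u'' = -1, u(0) = u(1) = 0

# c_n = <phi_n, f>/<phi_n, phi_n>, where <phi_n, phi_n> = 1/2 on [0, 1].
def c(n):
    val, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x), 0, 1)
    return 2.0 * val

# u = sum of b_n phi_n, with b_n = c_n / lambda_n and lambda_n = -(n pi)^2.
def u(x):
    return sum(c(n) / (-(n * np.pi)**2) * np.sin(n * np.pi * x)
               for n in range(1, N + 1))

xs = np.linspace(0, 1, 11)
print(max(abs(u(x) - exact(x)) for x in xs))   # small truncation error
```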
There are a lot of questions and details that have been glossed over in
our formal derivations. Can we always find such eigenfunctions for a given
operator? Do the infinite series expansions converge? Can we differentiate
our expansions term by term? Can one find expansions that converge to
given functions like f (x) above? We will begin to explore these questions in
the case that the eigenfunctions are simple trigonometric functions like
φn (x) = sin(nπx/L) in the solution of the heat equation.

4.3.3 Linear Vector Spaces

Much of the discussion and terminology that we will use comes from the theory
of vector spaces. Until now you may only have dealt with finite dimensional
vector spaces in your classes. Even then, you might only be comfortable with
two and three dimensions. We will review a little of what we know about finite
dimensional spaces so that we can deal with the more general function spaces,
which is where our eigenfunctions live.
The notion of a vector space is a generalization of our three dimensional
vector spaces. In three dimensions, we have things called vectors, which are
arrows of a specific length and pointing in a given direction. To each vector,
we can associate a point in a three dimensional Cartesian system. We just
attach the tail of the vector v to the origin and the head lands at (x, y, z).
We then use unit vectors i, j and k along the coordinate axes to write

v = xi + yj + zk.

Having defined vectors, we then learned how to add vectors and multiply
vectors by numbers, or scalars. Under these operations, we expected to get
back new vectors. Then we learned that there were two types of multiplication
of vectors. We could multiply them to get a scalar or a vector. This led
to the dot and cross products, respectively. The dot product was useful for
determining the length of a vector, the angle between two vectors, or if the
vectors were orthogonal.
These notions were later generalized to spaces of more than three dimen-
sions in your linear algebra class. The properties outlined roughly above need
to be preserved. So, we have to start with a space of vectors and the opera-
tions between them. We also need a set of scalars, which generally come from

some field. However, in our applications the field will either be the set of real
numbers or the set of complex numbers.
Definition 4.1. A vector space V over a field F is a set that is closed under
addition and scalar multiplication and satisfies the following conditions: For
any u, v, w ∈ V and a, b ∈ F
1. u + v = v + u.
2. (u + v) + w = u + (v + w).
3. There exists a 0 such that 0 + v = v.
4. There exists a −v such that v + (−v) = 0.
5. a(bv) = (ab)v.
6. (a + b)v = av + bv.
7. a(u + v) = au + av.
8. 1(v) = v.
Now, for an n-dimensional vector space, we have the idea that any vector
in the space can be represented as the sum over n linearly independent vectors.
Recall that a linearly independent set of vectors {vj }_{j=1}^n satisfies

∑_{j=1}^n cj vj = 0 ⇔ cj = 0, j = 1, . . . , n.
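Numerically, linear independence can be tested by stacking the vectors as
rows of a matrix and computing its rank; a small sketch (the example vectors
are our own):

```python
import numpy as np

# Three vectors in R^3; the third is the sum of the first two.
vs = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0],
               [1.0, 1.0, 2.0]])

# The set is linearly independent iff the rank equals the number of vectors.
print(np.linalg.matrix_rank(vs))   # 2 here, so this set is dependent
```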

This leads to the idea of a basis set. The standard basis in an n-dimensional
vector space is a generalization of the standard basis in three dimensions (i, j
and k). We define

ek = (0, . . . , 0, 1, 0, . . . , 0), k = 1, . . . , n,          (4.23)

where the 1 appears in the kth entry.

Then, we can expand any v ∈ V as

v = ∑_{k=1}^n vk ek ,                                            (4.24)

where the vk ’s are called the components of the vector in this basis and one
can write v as an n-tuple (v1 , v2 , . . . , vn ).
The only other thing we will need at this point is to generalize the dot
product, or scalar product. Recall that there are two forms for the dot product
in three dimensions. First, one has that

u · v = uv cos θ, (4.25)

where u and v denote the lengths of the vectors. The other form is the com-
ponent form:

u · v = u1 v1 + u2 v2 + u3 v3 = ∑_{k=1}^3 uk vk .                (4.26)

Of course, this form is easier to generalize. So, we define the scalar product
between two n-dimensional vectors as

< u, v > = ∑_{k=1}^n uk vk .                                     (4.27)

Actually, there are a number of notations that are used in other texts. One
can write the scalar product as (u, v) or even use the Dirac notation < u|v >
for applications in quantum mechanics.
While it does not always make sense to talk about angles between general
vectors in higher dimensional vector spaces, there is one concept that is useful.
It is that of orthogonality, which in three dimensions is another way of saying
the vectors are perpendicular to each other. So, we also say that vectors u and
v are orthogonal if and only if < u, v > = 0. If {ak }_{k=1}^n is a set of basis
vectors such that

< aj , ak > = 0, k ≠ j,

then it is called an orthogonal basis. If in addition each basis vector is a unit
vector, then one has an orthonormal basis.
Let {ak }_{k=1}^n be a set of basis vectors for the vector space V . We know
that any vector v can be represented in terms of this basis, v = ∑_{k=1}^n vk ak .
If we know the basis and vector, can we find the components? The answer is
yes. We can use the scalar product of v with each basis element aj . So, we
have for j = 1, . . . , n
< aj , v > = < aj , ∑_{k=1}^n vk ak >
           = ∑_{k=1}^n vk < aj , ak > .                          (4.28)

Since we know the basis elements, we can easily compute the numbers

Ajk ≡ < aj , ak >

and
bj ≡ < aj , v > .
Therefore, the system (4.28) for the vk ’s is a linear algebraic system, which
takes the form Av = b. However, if the basis is orthogonal, then the matrix
A is diagonal and the system is easily solvable. We have that

< aj , v > = vj < aj , aj >,                                     (4.29)

or
vj = < aj , v > / < aj , aj > .                                  (4.30)

In fact, if the basis is orthonormal, A is the identity matrix and the solution
is simpler:
vj = < aj , v > .                                                (4.31)
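A short numerical illustration of (4.30) and the reconstruction of v (the
basis and vector below are our own choices):

```python
import numpy as np

# An orthogonal (but not orthonormal) basis of R^3.
basis = [np.array([1.0,  1.0, 0.0]),
         np.array([1.0, -1.0, 0.0]),
         np.array([0.0,  0.0, 2.0])]

v = np.array([3.0, -1.0, 4.0])

# v_j = <a_j, v> / <a_j, a_j>, as in (4.30): no linear system to solve.
comps = [np.dot(a, v) / np.dot(a, a) for a in basis]
print(comps)                                      # [1.0, 2.0, 2.0]

# The components reconstruct v.
print(sum(c * a for c, a in zip(comps, basis)))   # [ 3. -1.  4.]
```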
We spent some time looking at this simple case of extracting the compo-
nents of a vector in a finite dimensional space. The keys to doing this simply
were to have a scalar product and an orthogonal basis set. These are the key
ingredients that we will need in the infinite dimensional case. Recall that when
we solved the heat equation, we had a function (vector) that we wanted to
expand in a set of eigenfunctions (basis) and we needed to find the expansion
coefficients (components). As you can see, we need to extend the concepts for
finite dimensional spaces to their analogs in infinite dimensional spaces. Lin-
ear algebra will provide some of the backdrop for what is to follow: The study
of many boundary value problems amounts to the solution of eigenvalue prob-
lems over infinite dimensional vector spaces (complete inner product spaces,
the space of square integrable functions, or Hilbert spaces).
We will consider the space of functions of a certain type. They could
be the space of continuous functions on [0,1], or the space of continuously
differentiable functions, or the set of functions integrable from a to b. Later, we
will specify the types of functions needed. We will further need to be able to
add functions and multiply them by scalars. So, we can easily obtain a vector
space of functions.
We will also need a scalar product defined on this space of functions. There
are several types of scalar products, or inner products, that we can define. For
a real vector space, we define

Definition 4.2. An inner product <, > on a real vector space V is a mapping
from V × V into R such that for u, v, w ∈ V and α ∈ R one has
1. < u + v, w > = < u, w > + < v, w > .
2. < αv, w > = α < v, w > .
3. < v, w > = < w, v > .
4. < v, v > ≥ 0 and < v, v > = 0 iff v = 0.

A real vector space equipped with the above inner product leads to a real
inner product space. A more general definition, with the third item replaced
by < v, w > = < w, v >* (the complex conjugate), is needed for complex inner
product spaces.
For the time being, we are dealing just with real valued functions. We
need an inner product appropriate for such spaces. One such definition is the
following. Let f (x) and g(x) be functions defined on [a, b]. Then, we define
the inner product, if the integral exists, as
< f, g > = ∫_a^b f (x)g(x) dx.                                   (4.32)
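Numerically, this inner product is just a definite integral. A minimal sketch
using scipy's quadrature (the helper name inner is ours):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g, a, b):
    # <f, g> = integral of f(x) g(x) over [a, b], as in (4.32).
    val, _ = quad(lambda x: f(x) * g(x), a, b)
    return val

print(inner(np.sin, np.cos, -np.pi, np.pi))   # ~0: sin and cos are orthogonal
print(inner(np.sin, np.sin, -np.pi, np.pi))   # ~pi
```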

So far, we have function spaces equipped with an inner product. Can we
find a basis for the space? For an n-dimensional space we need n basis vectors.

For an infinite dimensional space, how many will we need? How do we know
when we have enough? We will think about those things later.
Let’s assume that we have a basis of functions {φn (x)}_{n=1}^∞ . Given a func-
tion f (x), how can we go about finding the components of f in this basis? In
other words, let

f (x) = ∑_{n=1}^∞ cn φn (x).

How do we find the cn ’s? Does this remind you of the problem we had earlier?
Formally, we take the inner product of f with each φj , to find

< φj , f > = < φj , ∑_{n=1}^∞ cn φn >
           = ∑_{n=1}^∞ cn < φj , φn > .                          (4.33)

If our basis is an orthogonal basis, then we have

< φj , φn > = Nj δjn ,                                           (4.34)

where δij is the Kronecker delta, defined as

δij = { 0, i ≠ j;  1, i = j }.                                   (4.35)

Thus, we have

< φj , f > = ∑_{n=1}^∞ cn < φj , φn >
           = ∑_{n=1}^∞ cn Nj δjn
           = c1 Nj δj1 + c2 Nj δj2 + . . . + cj Nj δjj + . . .
           = cj Nj .                                             (4.36)

So, the expansion coefficient is

cj = < φj , f > /Nj = < φj , f > / < φj , φj > .

We summarize this important result:



Generalized Basis Expansion

Let f (x) be represented by an expansion over a basis of orthogonal
functions, {φn (x)}_{n=1}^∞ ,

f (x) = ∑_{n=1}^∞ cn φn (x).

Then, the expansion coefficients are formally determined as

cn = < φn , f > / < φn , φn > .
In our preparation for later sections, let’s determine if the set of functions
φn (x) = sin nx for n = 1, 2, . . . is orthogonal on the interval [−π, π]. We need
to show that < φn , φm > = 0 for n ≠ m. Thus, we have for n ≠ m

< φn , φm > = ∫_{−π}^π sin nx sin mx dx
            = (1/2) ∫_{−π}^π [cos(n − m)x − cos(n + m)x] dx
            = (1/2) [sin(n − m)x/(n − m) − sin(n + m)x/(n + m)]_{−π}^π = 0.   (4.37)

Here we have made use of a trigonometric identity for the product of two
sines. We recall how this identity is derived. Recall the addition formulae for
cosines:
cos(A + B) = cos A cos B − sin A sin B,
cos(A − B) = cos A cos B + sin A sin B.
Adding, or subtracting, these equations gives

2 cos A cos B = cos(A + B) + cos(A − B),

2 sin A sin B = cos(A − B) − cos(A + B).


So, we have determined that the set φn (x) = sin nx for n = 1, 2, . . . is
an orthogonal set of functions on the interval [−π, π]. Just as with vectors
in three dimensions, we can normalize our basis functions to arrive at an
orthonormal basis, < φn , φm > = δnm , m, n = 1, 2, . . . . This is simply done by
dividing by the length of the vector. Recall that the length of a vector is
obtained as v = √(v · v). In the same way, we define the norm of our functions
by

‖f ‖ = √< f, f > .
Note, there are many types of norms, but this will be sufficient for us.

For the above basis of sine functions, we want to first compute the norm
of each function. Then we would like to find a new basis from this one such
that each basis eigenfunction has unit length and is therefore an orthonormal
basis. We first compute

‖φn ‖^2 = ∫_{−π}^π sin^2 nx dx
        = (1/2) ∫_{−π}^π [1 − cos 2nx] dx
        = (1/2) [x − sin 2nx/(2n)]_{−π}^π = π.                   (4.38)

We have found for our example that

< φn , φm > = πδnm                                               (4.39)

and that ‖φn ‖ = √π. Defining ψn (x) = (1/√π) φn (x), we have normalized the
φn ’s and have obtained an orthonormal basis of functions on [−π, π].
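These orthogonality and normalization facts are easy to confirm numerically;
the sketch below (our own, with illustrative names) checks that
< φn , φm > = πδnm and that the rescaled ψn have unit norm:

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    val, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return val

phi = lambda n: (lambda x: np.sin(n * x))

# <phi_n, phi_m> should equal pi * delta_{nm}, as in (4.39).
for n in range(1, 4):
    print([round(inner(phi(n), phi(m)), 10) for m in range(1, 4)])

# The rescaled psi_n = phi_n / sqrt(pi) have unit norm.
psi = lambda x: np.sin(2 * x) / np.sqrt(np.pi)
print(inner(psi, psi))    # ~1
```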
Expansions of functions in trigonometric bases occur often and originally
resulted from the study of partial differential equations. They have been
named Fourier series and will be the topic of the next chapter.

Problems
4.1. Solve the following problem:

x′′ + x = 2, x(0) = 0, x′ (1) = 0.

4.2. Find product solutions, u(x, t) = b(t)φ(x), to the heat equation satisfying
the boundary conditions ux (0, t) = 0 and u(L, t) = 0. Use these solutions to
find a general solution of the heat equation satisfying these boundary condi-
tions.

4.3. Consider the following boundary value problems. Determine the eigen-
values, λ, and eigenfunctions, y(x), for each problem.2
a. y ′′ + λy = 0, y(0) = 0, y ′ (1) = 0.
b. y ′′ − λy = 0, y(−π) = 0, y ′ (π) = 0.
c. x2 y ′′ + xy ′ + λy = 0, y(1) = 0, y(2) = 0.
d. (x2 y ′ )′ + λy = 0, y(1) = 0, y ′ (e) = 0.

2
In problem d you will not get exact eigenvalues. Show that you obtain a transcen-
dental equation for the eigenvalues in the form tan z = 2z. Find the first three
eigenvalues numerically.

4.4. For the following sets of functions: i) show that each is orthogonal on the
given interval, and ii) determine the corresponding orthonormal set.
a. {sin 2nx}, n = 1, 2, 3, . . . , 0 ≤ x ≤ π.
b. {cos nπx}, n = 0, 1, 2, . . . , 0 ≤ x ≤ 2.
c. {sin(nπx/L)}, n = 1, 2, 3, . . . , x ∈ [−L, L].

4.5. Consider the boundary value problem for the deflection of a horizontal
beam fixed at one end,

d^4 y/dx^4 = C, y(0) = 0, y ′ (0) = 0, y ′′ (L) = 0, y ′′′ (L) = 0.
Solve this problem assuming that C is a constant.
