Distribution

This document provides an introduction to the theory of distributions and distributional derivatives. It discusses: 1) Defining distributions as linear functionals on the space of test functions, allowing generalized solutions of PDEs like the Dirac delta function. 2) The space D'(Ω) of distributions on an open set Ω, which contains both regular distributions defined by integrable functions and irregular distributions. 3) Defining distributional derivatives of distributions in terms of their action on test functions, and relating this to weak derivatives of functions. 4) Mollification, which approximates functions by smooth ones, useful for integration by parts arguments.


Partial Differential Equations

Basic Distribution theory


Gustav Holzegel
March 13, 2019

1 Introduction
We have touched on the idea of distributional solutions when we discussed Burgers' equation, and also when we discussed Holmgren's theorem. The key in proving the latter was the observation that for a linear partial differential operator $P = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha$ and a classical solution of $Pu = w$ in $\Omega$ (open subset of $\mathbb{R}^n$) with $D^\beta u = 0$ on $\partial\Omega$ for $|\beta| \le m - 1$, we have the integration by parts formula
$$\int_\Omega w \cdot v \, dx = \int_\Omega \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha\left(a_\alpha(x)\, v\right) \cdot u \, dx \qquad (1)$$

valid for all v ∈ C_0^∞(Ω). The formula (1) makes sense for u merely continuous or even u ∈ L^1_loc(Ω), and it is natural to declare u ∈ L^1_loc(Ω) a "weak" or a "distributional" solution of Pu = w if it satisfies (1) for all v ∈ C_0^∞(Ω).
Generally, with any (real- or) complex-valued function f : Ω → C, continuous on Ω ⊂ R^n open (or even f ∈ L^1_loc(Ω)), we can associate its integrals against test functions by defining
$$f[\varphi] := \int_\Omega f(x)\, \varphi(x)\, dx \qquad (2)$$
for φ ∈ D := C_0^∞(Ω). Note that

• f[φ] is a linear functional on the space of test functions D

• f[φ] is well defined for f ∈ L^1_loc(Ω)

• the definition supports the idea of "smeared averages" from physics: if f is an observable like a velocity or a temperature, you will never be able to determine its value at a point but only averaged over a small interval (finite detector size).

• if f is continuous, then f[φ] determines f uniquely (so in some sense we don't lose anything by considering f[φ] instead of f)

• One can differentiate f in the sense of distributions by defining

Dk f [φ] := −f [Dk φ]

which agrees with the usual derivative if f ∈ C^1 by the standard integration by parts formula. Therefore, linear partial differential operators act naturally on the functionals f[φ].

2 The space D′(Ω)

We broaden our view further and consider general linear functionals on the space D of test functions (of which those arising by integration against an L^1 function, the f[φ] above, are a particular example). We shall introduce a notion of continuity of such functionals below (which is desirable if we would like to keep the interpretation as physical observables). This notion of continuity is most easily formulated via sequential continuity.
Definition 2.1. We say that φn ∈ D converges to φ in D if
• there is a compact set K such that all φn vanish outside K
• for every multi-index α we have ∂^α φ_n → ∂^α φ uniformly in x.
Definition 2.2. A distribution is a linear functional ℓ : D(Ω) → C, which is continuous in the sense that if φ_n converges to φ in D, then ℓ(φ_n) → ℓ(φ). The vector space of distributions is denoted D′(Ω).
Example 2.3. Each continuous (or L1loc ) function generates a distribution via
(2). Such distributions are called regular distributions. Not every distribution
is regular, as the next example shows.

Example 2.4. The distribution δ_ξ[φ] = φ(ξ) is not regular. Indeed, a formula $\int g(x)\, \varphi(x)\, dx = \varphi(\xi)$ valid for all φ would force g ∈ L^1_loc to vanish everywhere (modulo a set of measure 0), and then the left-hand side would vanish for every φ, a contradiction. This example also makes it intuitive to talk about the support of a distribution: if f[φ] = g[φ] for all φ with support in ω ⊂ Ω, we say that the two distributions agree in ω.
The above notion of continuity may be cumbersome to check in practical applications. However, we have the following
Proposition 2.5. The functional ℓ : D(Ω) → C belongs to D′(Ω) if and only if for every compact subset K ⊂ Ω there is an integer n(K, ℓ) and a c ∈ R such that for all φ ∈ D(Ω) with support in K we have
$$|\ell[\varphi]| \le c\, \|\varphi\|_{C^n} \quad \text{with} \quad \|\varphi\|_{C^n} = \sum_{|\alpha| \le n} \max_x |\partial^\alpha \varphi| \qquad (3)$$
Proof. The “if” direction follows immediately from the estimate (3). For “only if”, suppose (3) were violated for some compact set K. Then we can find for this K a sequence φ_n with ‖φ_n‖_{C^n} = 1 and |ℓ[φ_n]| ≥ n (otherwise the estimate (3) would hold with c = n). But then ψ_n = n^{−1/2} φ_n is a sequence converging to zero in D, while |ℓ[ψ_n]| ≥ n^{1/2} does not go to zero. Contradiction.
If there is a c such that (3) holds, ℓ is said to be of order n on K. If ℓ is of order n on every compact subset K ⊂ Ω, then ℓ is of order n on Ω.
Example 2.6. Any regular distribution (Example 2.3) is of order 0. The Dirac
delta of Example 2.4 is also of order zero.
Example 2.7. The principal value distribution
$$\ell[\varphi] := \lim_{\epsilon \to 0} \int_{|x| > \epsilon} \frac{\varphi(x)}{x}\, dx = \mathrm{P.V.} \int \frac{\varphi(x)}{x}\, dx$$
is a distribution of order 1 (near 0 at least; away from zero it is of order 0). The proof is an exercise. Hint: Taylor-expand φ near 0 and use the symmetry of the integral.
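Following the hint, here is a small numerical sketch (the test function and all names are my own choices, not from the notes): after symmetrizing, the integrand (φ(x) − φ(−x))/x tends to 2φ′(0) as x → 0, so it stays bounded and the truncated integrals converge as ε → 0.

```python
import numpy as np

# Hypothetical smooth, rapidly decaying test function (asymmetric, so the
# principal value is nontrivial); a stand-in for a compactly supported phi.
phi = lambda x: np.exp(-(x - 1.0)**2)

def pv_truncated(eps, R=10.0, N=200001):
    """Evaluate int_{|x|>eps} phi(x)/x dx via the symmetrized form
    int_eps^R (phi(x) - phi(-x))/x dx; the integrand is bounded near 0."""
    x = np.linspace(eps, R, N)
    y = (phi(x) - phi(-x)) / x
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)  # trapezoid rule

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, pv_truncated(eps))   # the values settle down as eps shrinks
```

The successive differences shrink by roughly a factor of 10 as ε decreases, reflecting that the limit defining ℓ[φ] exists.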

3 Distributional Derivatives

We can define the distributional derivative D_k f as the distribution
$$D_k f[\varphi] = -f[D_k \varphi] \quad \text{or more generally} \quad D^\alpha f[\varphi] = (-1)^{|\alpha|} f[D^\alpha \varphi]. \qquad (4)$$
You should check that this indeed defines a distribution.

Exercise 3.1. Compute D_k δ_ξ[φ].

Therefore, we can apply a linear operator P of order m to a distribution u[φ] via
$$P u[\varphi] = u\left[P^t \varphi\right],$$
where $P^t \varphi = \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha (a_\alpha \varphi)$ is the transpose of P appearing in (1).

Below we will be particularly interested in distributional solutions of
$$P u = \delta_\xi. \qquad (5)$$
A distribution u satisfying (5) is called a fundamental solution with pole ξ for the operator P.

4 Relation with weak derivatives

If f is a regular distribution, i.e.
$$f[\varphi] = \int \varphi(x)\, f(x)\, dx$$
for some f ∈ L^1_loc(Ω), then it may be that the distributional derivative is again a regular distribution. In other words, there could be a g ∈ L^1_loc such that
$$D_k f[\varphi] := -\int D_k \varphi(x)\, f(x)\, dx = \int \varphi(x)\, g(x)\, dx$$
holds for any φ in D. In this case, we say that f has g = D_k f as its weak derivative. Using an argument similar to one already used above, it is easy to show that the weak derivative, if it exists, is unique.
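As a concrete instance (my own example, not from the notes): f(x) = |x| has the weak derivative g(x) = sign(x), since integrating by parts on each half-line produces no boundary term at 0. A quick numerical check of −∫ φ′(x)|x| dx = ∫ φ(x) sign(x) dx, with a hypothetical rapidly decaying test function standing in for compact support:

```python
import numpy as np

phi  = lambda x: np.exp(-(x - 1.0)**2)                      # stand-in test function
dphi = lambda x: -2.0 * (x - 1.0) * np.exp(-(x - 1.0)**2)   # phi'

x  = np.linspace(-12.0, 12.0, 400001)
dx = x[1] - x[0]
trapz = lambda y: float(np.sum(y[1:] + y[:-1]) * dx / 2.0)  # trapezoid rule

lhs = -trapz(dphi(x) * np.abs(x))     # distributional derivative of |x| on phi
rhs =  trapz(phi(x) * np.sign(x))     # candidate weak derivative g = sign
print(lhs, rhs)                       # the two numbers agree
```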
To see that not every function (≡ regular distribution) has a weak derivative, consider the example of the step function H : R → R defined as
$$H(x) = \begin{cases} 1 & \text{for } x \ge 0 \\ 0 & \text{for } x < 0. \end{cases} \qquad (6)$$
This is clearly in L^1_loc, but the distributional derivative is easily seen to be the delta distribution in view of the following computation:
$$D_x H[\varphi] = -\int_{-\infty}^{\infty} D_x \varphi(x)\, H(x)\, dx = -\int_0^{\infty} D_x \varphi(x)\, dx = \varphi(0).$$
Using the notion of weak derivatives one can define various notions of "weak solutions" to a PDE, which will typically require some number of weak derivatives to exist.
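The computation for the step function can be verified numerically (a sketch; the Gaussian below is my own stand-in for a compactly supported test function):

```python
import numpy as np

phi  = lambda x: np.exp(-x**2)               # stand-in test function, phi(0) = 1
dphi = lambda x: -2.0 * x * np.exp(-x**2)    # phi'

x  = np.linspace(-12.0, 12.0, 400001)
dx = x[1] - x[0]
H  = (x >= 0.0).astype(float)                # the Heaviside step (6)

# D_x H [phi] = - int phi'(x) H(x) dx, evaluated with the trapezoid rule
y   = dphi(x) * H
val = -float(np.sum(y[1:] + y[:-1]) * dx / 2.0)
print(val)                                   # ~ phi(0) = 1
```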

5 Exercises

1. Show that the distributional derivative of log |x| is P.V. 1/x.

2. (Fritz John, 3.6 (3)) Show that the function
$$u(x_1, x_2) = \begin{cases} 1 & \text{for } x_1 > \xi_1,\ x_2 > \xi_2 \\ 0 & \text{for all other } x_1, x_2 \end{cases} \qquad (7)$$
defines a fundamental solution with pole (ξ_1, ξ_2) of the operator $L = \frac{\partial^2}{\partial x_1 \partial x_2}$ in the x_1 x_2-plane.
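For exercise 2, here is a numerical sanity check (not a proof; the Gaussian test function and the pole are my own choices). Since L involves two derivatives, the signs in (4) cancel, so L u[φ] = u[∂²φ/∂x₁∂x₂]; for this u that is the integral of ∂²φ/∂x₁∂x₂ over the quadrant x₁ > ξ₁, x₂ > ξ₂, which should return φ(ξ₁, ξ₂):

```python
import numpy as np

# Hypothetical test function and its mixed second derivative
phi = lambda x1, x2: np.exp(-x1**2 - x2**2)
d12 = lambda x1, x2: 4.0 * x1 * x2 * np.exp(-x1**2 - x2**2)

xi1, xi2 = 0.3, -0.2                          # the pole

# L u[phi] = u[d12 phi] = integral of d12 phi over x1 > xi1, x2 > xi2
# (u = 1 there); tails beyond 8 are negligible for this rapidly decaying phi.
n = 1201
x1 = np.linspace(xi1, 8.0, n); dx1 = x1[1] - x1[0]
x2 = np.linspace(xi2, 8.0, n); dx2 = x2[1] - x2[0]
w = np.ones(n); w[0] = w[-1] = 0.5            # trapezoid weights, uniform grid
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
val = dx1 * dx2 * float(np.sum(np.outer(w, w) * d12(X1, X2)))
print(val, float(phi(xi1, xi2)))              # both ~ exp(-0.13) ≈ 0.878
```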

6 Mollification (see Appendix of Evans)

Let Ω ⊂ R^n be open. The goal is to approximate functions in L^1_loc(Ω) or C^0(Ω) by functions which are smooth. (Obviously, this can be very convenient in arguments involving lots of integration by parts.)

We write Ω_ε ⊂ Ω for
$$\Omega_\epsilon := \{ x \in \Omega \mid \mathrm{dist}(x, \partial\Omega) > \epsilon \}$$

Let η ∈ C^∞(R^n) be the function ("standard mollifier")
$$\eta(x) = \begin{cases} C \exp\left( \frac{1}{|x|^2 - 1} \right) & |x| < 1 \\ 0 & |x| \ge 1 \end{cases} \qquad (8)$$
with C chosen such that $\int_{\mathbb{R}^n} \eta\, dx = 1$. For ε > 0 we define
$$\eta_\epsilon(x) := \frac{1}{\epsilon^n}\, \eta\left( \frac{x}{\epsilon} \right) \qquad (9)$$
Note that η_ε is smooth, supported in the closed ball B(0, ε), and that $\int \eta_\epsilon(x)\, dx = 1$.
Definition 6.1. The mollification of an L^1_loc(Ω) function f : Ω → R is defined as
$$f^\epsilon := \eta_\epsilon \star f \quad \text{in } \Omega_\epsilon. \qquad (10)$$
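A one-dimensional sketch of the definitions above (my own grid and normalization; C is fixed numerically rather than in closed form): mollifying the Heaviside step yields a smooth ramp that agrees with the step away from the jump.

```python
import numpy as np

def eta(x):
    """Standard mollifier (8) in one dimension, up to the constant C."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(1.0 / (x[inside]**2 - 1.0))
    return out

def trapz(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

xg = np.linspace(-1.5, 1.5, 60001)
C = 1.0 / trapz(eta(xg), xg)                 # normalize so that int eta = 1

def eta_eps(x, eps):
    return C * eta(x / eps) / eps            # (9) with n = 1

x = np.linspace(-3.0, 3.0, 600001)
H = (x >= 0.0).astype(float)                 # f = Heaviside step

for eps in (0.5, 0.1):
    mass = trapz(eta_eps(x, eps), x)         # ~ 1.0 for every eps
    # f_eps(1) = int eta_eps(1 - y) H(y) dy; the support of eta_eps(1 - .)
    # lies in (1 - eps, 1 + eps), where H = 1, so f_eps(1) = 1
    f_eps_at_1 = trapz(eta_eps(1.0 - x, eps) * H, x)
    print(eps, mass, f_eps_at_1)
```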

We claim that on Ω_ε, the function f^ε is smooth and approximates f in an appropriate sense as ε → 0:
Exercise 6.2. Show that
1. f ǫ ∈ C ∞ (Ωǫ )
2. f ǫ → f almost everywhere as ǫ → 0
3. If f ∈ C 0 (Ω) (=continuous), then f ǫ → f uniformly on compact subsets
of Ω
4. If f ∈ Lploc (Ω) with 1 ≤ p < ∞, then f ǫ → f in Lploc (Ω)
Here is a guideline:
1. Write out the difference quotients
$$\frac{f^\epsilon(x + h e_i) - f^\epsilon(x)}{h}$$
with h sufficiently small so that x + h e_i is still in Ω_ε. Then use uniform convergence to interchange limit and integral.
2. Start with Lebesgue's differentiation theorem
$$\lim_{r \to 0} \frac{1}{\mathrm{vol}(B(x, r))} \int_{B(x,r)} |f(x) - f(y)|\, dy = 0,$$
which holds at almost every x for f in L^1, to estimate |f^ε(x) − f(x)|.

3. Follows from 2. using uniform continuity on compact subsets.
4. Apply Hölder to the definition of f^ε to establish
$$\| f^\epsilon \|_{L^p(U)} \le \| f \|_{L^p(V)} \qquad (11)$$
for U ⊂⊂ V ⊂⊂ Ω. To show that f^ε → f in U ⊂⊂ V ⊂⊂ Ω, suppose δ > 0 is given. Choose a continuous g ∈ C^0(V) such that
$$\| f - g \|_{L^p(V)} < \delta,$$
which is possible by density. Then show that ‖f^ε − f‖_{L^p(U)} ≤ 3δ for ε sufficiently small using the triangle inequality and (11).
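The contraction estimate (11) can also be observed numerically in one dimension (a sketch; the grid, the rough function f, and the choice p = 2 are my own):

```python
import numpy as np

def eta(x):
    """Standard mollifier (8) in one dimension, up to normalization."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(1.0 / (x[inside]**2 - 1.0))
    return out

x  = np.linspace(-3.0, 3.0, 12001)
dx = x[1] - x[0]

eps = 0.25
xk  = np.linspace(-0.3, 0.3, 1201)           # kernel grid, same spacing dx
ker = eta(xk / eps) / eps
ker = ker / (np.sum(ker) * dx)               # discrete normalization of eta_eps

f     = np.sign(np.sin(3.0 * x))             # a rough (discontinuous) f
f_eps = np.convolve(f, ker, mode="same") * dx   # f_eps = eta_eps * f

p = 2.0
U = np.abs(x) <= 1.0                         # U strictly inside V
V = np.abs(x) <= 2.0
lhs = (np.sum(np.abs(f_eps[U])**p) * dx) ** (1.0 / p)
rhs = (np.sum(np.abs(f[V])**p) * dx) ** (1.0 / p)
print(lhs, rhs)                              # lhs <= rhs, consistent with (11)
```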

7 More on Distributions (non-examinable)

This material is for background only.

7.1 Convergence of Distributions

Definition 7.1. A sequence of distributions ℓ_n ∈ D′(Ω) converges to ℓ ∈ D′(Ω) if and only if for every test function φ ∈ D(Ω) we have
$$\ell_n[\varphi] \to \ell[\varphi]$$
with the usual notion of convergence in C. We will write ℓ_n ⇀ ℓ to denote this convergence and say that ℓ_n converges "weakly" or "in the sense of distributions" to ℓ.
Exercise 7.2. Show that the sequence of (regular) distributions n² e^{inx} converges weakly to zero as n → ∞.

Exercise 7.3. Let j ∈ D(R^d) with $\int_{\mathbb{R}^d} j(x)\, d^d x = 1$. Define $j_\epsilon(x) = \epsilon^{-d}\, j\left(\frac{x}{\epsilon}\right)$. Show that j_ε ⇀ δ_0.

The previous example is remarkable as it shows that the non-regular delta distribution can be approximated by functions in D. In fact, any element in D′ can be approximated in this way: the space D is dense in D′. This can be used to extend (uniquely) the usual operations of calculus (differentiation, translation, convolution) to D′, which is very useful for PDE.

7.2 Extending Calculus from D(Ω) to D′(Ω)

We will run in a rather informal way through the main ideas of extending various operations of calculus from test functions to distributions. The key is Proposition 2 in the Appendix of Rauch's book, which we repeat below. Remember that we already have a notion of convergence in D, Definition 2.1.

Proposition 7.4. Suppose L : D(Ω_1) → D(Ω_2) is a linear, sequentially continuous map. Suppose in addition that there is a linear, sequentially continuous map L^t : D(Ω_2) → D(Ω_1) which is the transpose of L in the sense that
$$\int_{\Omega_2} L\varphi \cdot \psi = \int_{\Omega_1} \varphi \cdot L^t \psi \quad \text{holds for all } \varphi \in \mathcal{D}(\Omega_1),\ \psi \in \mathcal{D}(\Omega_2).$$
Then the operator L extends to a sequentially continuous map of D′(Ω_1) → D′(Ω_2) given by
$$L(\ell)[\psi] = \ell\left[L^t \psi\right] \quad \text{for all } \ell \in \mathcal{D}'(\Omega_1),\ \psi \in \mathcal{D}(\Omega_2).$$

You can find the (very easy) proof in Rauch's book or do it yourself. This simple proposition allows us to define the following operations on distributions:
• multiplication of a distribution with a C^∞ function f ∈ C^∞(Ω). Since on test functions the transpose is itself (multiplication by f), we have
$$(f \cdot \ell)[\psi] = \ell[f \cdot \psi]$$
• translation of a distribution (say Ω = R^d so that we don't have to keep track of domains). Since (τ_y f)(x) = f(x − y) on test functions has transpose τ_{−y}, we have τ_y(ℓ)[ψ] = ℓ[τ_{−y} ψ] on distributions.
• reflection of a distribution: R(ℓ)[ψ] = ℓ[Rψ], as the transpose of (Rf)(x) = f(−x) on test functions is itself.
• derivative of a distribution (we already did that!)
• convolution of a distribution with a C^∞ function (see below)
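These transpose-based definitions can be mirrored in a tiny code sketch (the class and method names are purely illustrative, not from any library): a distribution is stored as its action on test functions, and each operation simply pre-composes with the transpose.

```python
# Minimal sketch: a distribution as an "action on test functions"
# (illustrative names). Each operation applies the transpose to psi.
class Distribution:
    def __init__(self, action):
        self.action = action                  # action: test function -> number

    def __call__(self, phi):
        return self.action(phi)

    def mul(self, f):                         # (f . l)[psi] = l[f . psi]
        return Distribution(lambda psi: self(lambda x: f(x) * psi(x)))

    def translate(self, y):                   # tau_y(l)[psi] = l[tau_{-y} psi]
        return Distribution(lambda psi: self(lambda x: psi(x + y)))

    def reflect(self):                        # R(l)[psi] = l[R psi]
        return Distribution(lambda psi: self(lambda x: psi(-x)))

delta = Distribution(lambda phi: phi(0.0))    # Dirac delta at 0

phi = lambda x: (1.0 + x)**2
print(delta.translate(2.0)(phi))   # tau_2 delta_0 = delta_2: phi(2) = 9.0
print(delta.mul(lambda x: 5.0 + x)(phi))      # f(0) * phi(0) = 5.0
print(delta.reflect()(phi))        # delta is even under reflection: phi(0) = 1.0
```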
The remarkable point of convolving a distribution with a smooth function is that the result is actually a smooth function. You will prove this below. This is very useful and can be used to show that we can approximate any element in D′(R^d) by elements in D(R^d).
Let Ω = R^d and f ∈ D(R^d). For g ∈ D(R^d), the convolution of g with f is defined as
$$(f \star g)(x) := \int_{\mathbb{R}^d} f(x - y)\, g(y)\, dy = (g \star f)(x). \qquad (12)$$

Exercise 7.5. Show that (on test functions) the transpose of convolution with f is convolution with Rf.

Therefore we define
$$(f \star \ell)[\psi] = \ell[Rf \star \psi] \qquad (13)$$

Exercise 7.6. Compute f ⋆ δ_0.

We can now show that the convolution (13) is actually smooth. This is suggested by interpreting the convolution (12) itself as the regular distribution τ_x Rf acting on a test function g (note that (τ_x Rf)(y) = f(x − y)):
$$(f \star g)(x) = \tau_x Rf[g] = g[\tau_x Rf].$$
Now the right-hand side (which is a complex-valued function of x) actually makes sense for distributional g, providing an alternative definition of the convolution of a distribution with a smooth function. Fortunately, the two "definitions" agree:

Exercise 7.7. With ℓ ∈ D′(R^d) and f ∈ D(R^d) given, show that
• the function x ↦ ℓ[τ_x Rf] is C^∞(R^d)
• $\ell[Rf \star \psi] = \int \ell[\tau_x Rf]\, \psi(x)\, dx$ holds for all test functions ψ (where the right-hand side is to be understood as integrating the C^∞ function against the test function ψ).

The second statement is precisely the statement that the two definitions agree as distributions. You can prove it by expressing the right-hand side in terms of Riemann sums.
Exercise 7.8.
