Numerical Analysis of
Partial Differential Equations
Adérito Araújo
Coimbra, January 2019
These lecture notes follow to a large extent Endre Süli's notes¹, but with
things reordered and often expanded. The point of these notes is just to
serve as an outline of the actual lectures which I will give and should not
be used outside that context.
¹ E. Süli, Lecture Notes on Finite Element Methods for Partial Differential Equations, Mathematical Institute, University of Oxford, 2012; E. Süli, An Introduction to the Numerical Analysis of Partial Differential Equations, Mathematical Institute, University of Oxford, 2005.
Chapter 1

Basic Functional Analysis
Numerical solution of PDEs is a rich and active field of modern applied mathematics. The
steady growth of the subject is stimulated by ever-increasing demands from the natural
sciences, engineering and economics to provide accurate and reliable approximations to
mathematical models involving partial differential equations (PDEs) whose exact solutions
are either too complicated to determine in closed form or, in many cases, are not known
to exist.
While the history of numerical solution of ordinary differential equations is firmly rooted
in 18th and 19th century mathematics, the mathematical foundations of the field of numer-
ical solution of PDEs are much more recent: they were first formulated in the landmark
paper Über die partiellen Differenzengleichungen der mathematischen Physik (On the par-
tial difference equations of mathematical physics) by Richard Courant, Karl Friedrichs, and
Hans Lewy, published in 1928. There is a vast array of powerful numerical techniques for
specific PDEs: level set and fast-marching methods for front-tracking and interface prob-
lems; numerical methods for PDEs on, possibly evolving, manifolds; immersed boundary
methods; mesh-free methods; particle methods; vortex methods; various numerical homog-
enization methods and specialized numerical techniques for multiscale problems; wavelet-
based multiresolution methods; sparse finite difference/finite element methods, greedy algo-
rithms and tensorial methods for high-dimensional PDEs; domain-decomposition methods
for geometrically complex problems, and numerical methods for PDEs with stochastic co-
efficients that feature in a number of applications, including uncertainty quantification
problems.
These notes cannot do justice to this huge and rapidly evolving subject. We shall therefore
confine ourselves to the most standard and well-established techniques for the numerical
solution of PDEs: finite difference methods, finite element methods and a brief reference
to finite volume methods. Before embarking on our survey, it is appropriate to take a brief
excursion into the theory of PDEs in order to fix the relevant notational conventions and
to describe some typical model problems.
The divergence theorem (or Gauss' theorem) states that the integral of the divergence of a vector field over a domain is equal to the integral of the normal component of the field along the boundary:
\[
\int_\Omega \nabla \cdot w \, dx = \int_{\partial\Omega} w \cdot \nu \, ds,
\]
where $\nu = (\nu_1, \dots, \nu_n)$ is the outward unit normal to $\partial\Omega$. This theorem holds for functions $w$ and boundaries $\partial\Omega$ that are sufficiently smooth (to be specified in the forthcoming sections). Applying this to the product $wv$ we obtain Green's formula
\[
\int_\Omega w \cdot \nabla v \, dx = \int_{\partial\Omega} (w \cdot \nu)\, v \, ds - \int_\Omega (\nabla \cdot w)\, v \, dx.
\]
From this, we see that Green's formula is nothing else than a generalisation of the integration-by-parts formula to higher dimensions. If $a(\cdot)$ is a sufficiently smooth real-valued function, we may conclude from Green's formula that
\[
\int_\Omega \nabla \cdot \bigl(a(x) \nabla u\bigr)\, v \, dx = \int_{\partial\Omega} a(x) \frac{\partial u}{\partial \nu}\, v \, ds - \int_\Omega a(x) \nabla u \cdot \nabla v \, dx,
\]
where $\partial u / \partial \nu = \nu \cdot \nabla u$ is the exterior normal derivative of $u$ on $\partial\Omega$.
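The following sketch (not part of the original notes; it assumes NumPy and SciPy are available) illustrates the one-dimensional ancestor of Green's formula, the integration-by-parts identity $\int_a^b u'v\,dx = [uv]_a^b - \int_a^b u v'\,dx$, by quadrature; the functions $u$ and $v$ are arbitrary illustrative choices.

```python
# Numerical check of 1D integration by parts: ∫_a^b u'v dx = [uv]_a^b - ∫_a^b u v' dx.
# A minimal sketch; u and v are arbitrary smooth test functions.
import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0
u, du = lambda x: np.sin(3 * x), lambda x: 3 * np.cos(3 * x)
v, dv = lambda x: np.exp(-x) * x ** 2, lambda x: np.exp(-x) * (2 * x - x ** 2)

lhs = quad(lambda x: du(x) * v(x), a, b)[0]
rhs = u(b) * v(b) - u(a) * v(a) - quad(lambda x: u(x) * dv(x), a, b)[0]
print(abs(lhs - rhs))   # ~1e-16: both sides agree to quadrature accuracy
```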
For a multi-index $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n$ we write
\[
\partial^\alpha u = \frac{\partial^{|\alpha|} u}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}.
\]
For the sake of simplicity we also use subscripts to denote partial derivatives, i.e., $\partial_{x_i} = \partial/\partial x_i$, $i = 1, \dots, n$, and $\partial^\alpha u = \partial_{x_1}^{\alpha_1} \cdots \partial_{x_n}^{\alpha_n} u$. For example,
\[
u_t = \partial_t u = \frac{\partial u}{\partial t}, \qquad
u_{xx} = \partial_x \partial_x u = \partial_x^2 u = \frac{\partial^2 u}{\partial x^2}, \qquad
u_{xy} = \partial_x \partial_y u = \partial^2_{xy} u = \frac{\partial^2 u}{\partial x \partial y}.
\]
Consider a linear partial differential operator $L$ of order $k$, with coefficients $a_\alpha(x)$, applied to a function $u$:
\[
L(u) = f(x).
\]
The linear operator $L$ is called elliptic if, for every $x = (x_1, \dots, x_n) \in \Omega$ and every nonzero $\xi = (\xi_1, \dots, \xi_n) \in \mathbb{R}^n$,
\[
Q_k(x, \xi) = \sum_{|\alpha| = k} a_\alpha(x)\, \xi^\alpha \neq 0.
\]
Parabolic and hyperbolic PDEs typically arise in mathematical models where one of the independent physical variables is time, $t$. For example,
\[
\partial_t u + Lu = f \qquad \text{and} \qquad \partial_t^2 u + Lu = f,
\]
where $L$ is a uniformly elliptic partial differential operator of order $2m$ and $u$ and $f$ are functions of $(t, x_1, \dots, x_n)$, are uniformly parabolic and uniformly hyperbolic PDEs, respectively. The simplest examples are the (uniformly parabolic) unsteady heat equation and the (uniformly hyperbolic) second-order wave equation, where
\[
Lu = -\sum_{i,j=1}^{n} \partial_{x_j}\bigl(a_{ij}(t,x)\, \partial_{x_i} u\bigr),
\]
and $a_{ij}(t,x) = a_{ij}(t, x_1, \dots, x_n)$, $i, j = 1, \dots, n$, are the entries of an $n \times n$ matrix, which is positive definite, uniformly with respect to $(t, x_1, \dots, x_n)$.
Not all PDEs are of a certain fixed type. For example, the following PDEs are mixed
elliptic-hyperbolic; they are elliptic for x > 0 and hyperbolic for x < 0:
In the early 1930s, S.L. Sobolev came across a similar situation while he was dealing with the following first-order hyperbolic equation,
\[
\partial_t u + a\, \partial_x u = 0, \qquad u(x, 0) = u_0(x).
\]
Here, $a$ is a real, positive constant and $u_0$ is the initial profile. The exact solution is well known: $u(x,t) = u_0(x - at)$ (this equation is called the first-order advection equation). In this case, the solution preserves the shape of the initial profile. However, if $u_0$ is not differentiable (say it has a kink) the solution $u$ is still physically meaningful, but it is not a solution in the classical sense. These observations were instrumental in the development of the modern theory of partial differential equations.
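The following sketch (an illustration that is not part of the notes; NumPy assumed) evaluates $u(x,t) = u_0(x - at)$ for a hat-shaped initial profile and checks, with one-sided difference quotients, that the residual $\partial_t u + a\,\partial_x u$ vanishes away from the kinks of $u_0$ but not at them; the profile, $a$, $t$ and the grid are arbitrary choices.

```python
# u(x,t) = u0(x - a t) transports the initial profile; where u0 is smooth it satisfies
# u_t + a u_x = 0, but at the kinks of u0 there is no classical derivative.
import numpy as np

a = 1.0
u0 = lambda s: np.maximum(0.0, 1.0 - np.abs(s))   # hat profile with kinks at s = -1, 0, 1
u = lambda x, t: u0(x - a * t)

x = np.linspace(-3.0, 3.0, 601)
t, h = 0.5, 1e-4
# one-sided difference quotients (centred ones would average out the kinks)
residual = (u(x, t + h) - u(x, t)) / h + a * (u(x + h, t) - u(x, t)) / h

kinks = np.array([-1.0, 0.0, 1.0]) + a * t
near_kink = np.min(np.abs(x[:, None] - kinks[None, :]), axis=1) < 0.05
print(np.max(np.abs(residual[~near_kink])))  # ~0: classical solution away from the kinks
print(np.max(np.abs(residual[near_kink])))   # O(1): the PDE fails pointwise at the kinks
```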
Again coming back to our vibrating drum problem (1.1)–(1.2), we note that in mechanics or in physics the same model is described through the minimisation of a total energy (Dirichlet principle), say $J(v)$, i.e., the minimisation of $J(v)$ over the set $V$ of all possible admissible displacements, where
\[
J(v) = \underbrace{\frac{1}{2}\int_\Omega |\nabla v|^2\, dx\, dy}_{\text{Kinetic Energy (unit mass)}} \;-\; \underbrace{\int_\Omega f v\, dx\, dy}_{\text{Potential Energy}} \tag{1.5}
\]
and $V$ is the set of all possible displacements $v$ such that the above integrals are meaningful and $v = 0$ on $\partial\Omega$. More precisely, we cast the above problem as: find $u \in V$ such that $u = 0$ on $\partial\Omega$ and $u$ minimises $J$, that is,
\[
J(u) = \min_{v \in V} J(v). \tag{1.6}
\]
The advantage of the second formulation is that the displacement $u$ need only be once continuously differentiable and the external force $f$ may be of a more general form, i.e., $f$ may be merely square integrable and may have discontinuities. Further, it is observed that every solution $u$ of (1.1)–(1.2) satisfies (1.6). However, the converse need not be true, since in (1.6) the solution $u$ is only once continuously differentiable. We shall see subsequently that the variational formulation (1.6) is equivalent to a weak formulation of (1.1)–(1.2), and that it allows physically more relevant external forces, even a point force, say.
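To make the energy formulation concrete, here is a one-dimensional analogue (a sketch, not part of the notes; NumPy assumed): for $-u'' = f$ on $(0,1)$ with $u(0) = u(1) = 0$, the energy $J(v) = \tfrac12\int_0^1 |v'|^2\,dx - \int_0^1 f v\,dx$ restricted to continuous piecewise linear functions vanishing at the endpoints becomes the quadratic $J_h(v) = \tfrac12 v^\top A v - b^\top v$ in the nodal values, and its minimiser solves the linear system $AU = b$. The choice $f \equiv 1$ is arbitrary.

```python
# Minimise the discrete energy J_h(v) = 1/2 v^T A v - b^T v over piecewise linear
# functions on a uniform grid: 1D analogue of (1.5)-(1.6) for -u'' = f, u(0) = u(1) = 0.
import numpy as np

N = 50                                            # number of interior nodes
h = 1.0 / (N + 1)
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h   # stiffness: (phi_j', phi_i')
b = h * np.ones(N)                                # load: (f, phi_i) with f = 1

U = np.linalg.solve(A, b)                         # minimiser of J_h, i.e. A U = b

x = np.linspace(h, 1.0 - h, N)
print(np.max(np.abs(U - 0.5 * x * (1.0 - x))))    # matches the exact u(x) = x(1-x)/2 at the nodes
```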
The main concern of the next sections will be the choice of the admissible space $V$. At this stage, it is worth analysing the space $V$, which will motivate the introduction of Sobolev spaces in Section 1.5.3. With $f \in L^2(\Omega)$ (the space of all square integrable functions), $V$ may be considered as
\[
V = \Bigl\{ v \in C^1(\Omega) \cap C(\bar\Omega) : \int_\Omega |v|^2\, dx\, dy < \infty,\ \int_\Omega |\nabla v|^2\, dx\, dy < \infty \ \text{and}\ v = 0 \ \text{on}\ \partial\Omega \Bigr\},
\]
where $C^1(\Omega)$, the space of once continuously differentiable functions in $\Omega$, is given by $C^1(\Omega) = \{v : v, \partial_x v, \partial_y v \in C(\Omega)\}$, and $C(\bar\Omega)$ is the space of continuous functions defined on $\bar\Omega$, with $\bar\Omega = \Omega \cup \partial\Omega$. Hence, $u|_{\partial\Omega}$ is properly defined for $u \in C(\bar\Omega)$. Unfortunately, this space $V$ is not complete with respect to the norm
\[
\|u\|_1 = \left( \int_\Omega |u|^2 + |\nabla u|^2\, dx\, dy \right)^{1/2}.
\]
Roughly speaking, completeness means that all possible Cauchy sequences should find
their limits inside that space (we will specify these terms in the next section). In fact,
if V is not complete, we add the limits to make it complete. One is curious to know
“why do we require completeness?” In practice, we need to solve the above problem by
using some approximation schemes, i.e., we would like to approximate $u$ by a sequence of approximate solutions $\{u_h\}$. Many times $\{u_h\}$ forms a Cauchy sequence (that is part of the convergence analysis). Unless the space is complete, the limit may not be inside that space. Therefore, a more desirable space is the completion of $V$. Subsequently, we shall see that the completion of $V$ is $H^1_0(\Omega)$. This is a Hilbert Sobolev space and is given by
\[
H^1_0(\Omega) = \{ v \in L^2(\Omega) : \partial_x v, \partial_y v \in L^2(\Omega) \ \text{and}\ v = 0 \ \text{on}\ \partial\Omega \}.
\]
Obviously, a square integrable function may not have partial derivatives in the usual sense. In order to attach a meaning to them, we shall generalise the concept of differentiation, and this we shall discuss prior to the introduction of Sobolev spaces. One should note that the condition $v = 0$ on $\partial\Omega$ for an $H^1$-function $v$ has to be understood in a generalised sense.
If we accept (1.6) as a more general formulation, with equation (1.1)–(1.2) as its Euler form, then it is natural to ask: "Does every PDE have a variational form of the type (1.6)?" The answer is negative. For instance, a flow problem with a transport or convective term,
\[
-\Delta u + b \cdot \nabla u = f,
\]
does not have an energy formulation like (1.6). So, the next question would be: "Under what conditions on the PDE does such a minimisation form exist?" Moreover, "Is it possible to have a more general weak formulation which in particular situations coincides with the minimisation of the energy?" This is what we shall explore in the course of these lectures.
If we formally multiply (1.1) by $v \in V$ (the space of admissible displacements) and apply the Gauss divergence theorem, the contribution due to the boundary terms becomes zero since $v = 0$ on $\partial\Omega$. We then obtain the weak formulation of (1.1)–(1.2): find $u \in V$ such that
\[
\int_\Omega \nabla u \cdot \nabla v\, dx\, dy = \int_\Omega f v\, dx\, dy, \qquad \forall v \in V. \tag{1.7}
\]
For a flow problem with a transport term $b \cdot \nabla u$ we have an extra term $\int_\Omega (b \cdot \nabla u)\, v\, dx\, dy$ added to the left-hand side of (1.7). This is a more general weak formulation, and we shall also examine the relation between (1.6) and (1.7). Given such a weak formulation, "Is it possible to establish its well-posedness¹?" In the next sections we shall settle this issue by using the Lax–Milgram theorem.
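Continuing the one-dimensional sketch above (again an illustration, not part of the notes), adding a constant transport term $b\,u'$ gives the weak form $\int_0^1 u'v'\,dx + \int_0^1 b\,u'v\,dx = \int_0^1 fv\,dx$; the extra term makes the Galerkin matrix non-symmetric, the discrete counterpart of the fact that this problem no longer arises from minimising an energy. The values of $b$ and $f$ are arbitrary.

```python
# 1D Galerkin system for -u'' + b u' = f, u(0) = u(1) = 0, on piecewise linears.
# The convection term contributes a non-symmetric matrix C with entries (b phi_j', phi_i).
import numpy as np

N, b_const = 50, 5.0
h = 1.0 / (N + 1)
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h    # (phi_j', phi_i')
C = b_const * (np.eye(N, k=1) - np.eye(N, k=-1)) / 2.0          # (b phi_j', phi_i)
f = h * np.ones(N)                                               # (f, phi_i) with f = 1

U = np.linalg.solve(A + C, f)                                    # no energy to minimise here
print(np.allclose(A, A.T), np.allclose(C, -C.T))                 # A symmetric, C skew-symmetric
```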
Very often, problems like (1.1)–(1.2) do not admit exact or analytic solutions. For problem (1.1)–(1.2), if the boundary is irregular, i.e., if $\Omega$ is not, say, a square or a circle, it is difficult to obtain an analytic solution. Even when an analytic solution is known, it may contain complicated terms or may be an infinite series. In both cases, one resorts to numerical approximations. One of the objectives of numerical procedures for solving differential equations is to cut down the degrees of freedom (the solutions lie in infinite-dimensional spaces such as the Hilbert Sobolev spaces) to a finite number, so that the discrete problem can be solved using computers.
In these notes we will just consider deterministic linear PDEs for the real case. The
background material from linear functional analysis and the theory of function spaces
discussed herein is intentionally sketchy in order to enable understanding of some of the key
concepts, such as stability and convergence of finite difference and finite element methods,
with the bare minimum of analytical prerequisites.
¹ The problem is said to be well-posed (in the sense of Hadamard) if it has a solution, the solution is unique, and it depends continuously on the data.
\[
\|w + v\| \le \|w\| + \|v\|.
\]
Two elements $w, v \in V$ for which $(w, v) = 0$ are said to be orthogonal. In that case, we have the equality $\|w + v\|^2 = \|w\|^2 + \|v\|^2$, also called the Pythagorean theorem.
A sequence $\{v_i\}_{i=1}^{\infty}$ in $V$ is said to converge to $v \in V$, also written $v_i \to v$ as $i \to \infty$ or $v = \lim_{i\to\infty} v_i$, if
\[
\|v - v_i\|_V \to 0, \quad \text{as } i \to \infty.
\]
The sequence $\{v_i\}_{i=1}^{\infty}$ is called a Cauchy sequence in $V$ if
\[
\|v_i - v_j\|_V \to 0, \quad \text{as } i, j \to \infty.
\]
The inner product space $V$ is said to be complete if every Cauchy sequence in $V$ is convergent, i.e., if every Cauchy sequence $\{v_i\}_{i=1}^{\infty}$ has a limit $v = \lim_{i\to\infty} v_i \in V$. A complete inner product space is called a Hilbert space.
More generally, a norm on a linear space $V$ is a function $\|\cdot\| : V \to \mathbb{R}_+$ such that (we just consider the real case)
\[
\begin{aligned}
&\|v\| \ge 0, \quad \forall v \in V, && \text{(positivity)}\\
&\|v\| = 0 \ \text{if and only if}\ v = 0, && \text{(definiteness)}\\
&\|\alpha v\| = |\alpha|\,\|v\|, \quad \forall \alpha \in \mathbb{R},\ v \in V, && \text{(homogeneity)}\\
&\|v + w\| \le \|v\| + \|w\|, \quad \forall v, w \in V. && \text{(triangle inequality)}
\end{aligned}
\]
A function $|\cdot|$ is called a seminorm on $V$ if these conditions hold with the exception of the second one, i.e., if it is only positive semidefinite and thus can vanish for some $v \neq 0$. A linear space with a norm is called a normed linear space. As we have seen, an inner product space is a normed space, but not all normed linear spaces are inner product spaces. A complete normed space is called a Banach space.
Remark 1.5. For two elements $v, w$ in a normed space $V$, the norm $\|v - w\|$ is called the distance between $v$ and $w$.
Two norms on a linear space are called equivalent if they have the same convergent
sequences.
Exercise 1.6. Prove that two norms $\|\cdot\|$ and $|||\cdot|||$ on a linear space $V$ are equivalent if and only if there exist positive constants $c$ and $C$ such that
\[
c\,\|v\| \le |||v||| \le C\,\|v\|, \qquad \forall v \in V.
\]
Exercise 1.7. Prove that on a finite-dimensional linear space all norms are equiv-
alent.
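As a quick numerical illustration of Exercises 1.6 and 1.7 (not from the notes; NumPy assumed), on $V = \mathbb{R}^n$ the $1$-norm and the $\infty$-norm are equivalent with constants $c = 1$ and $C = n$, i.e. $\|v\|_\infty \le \|v\|_1 \le n\,\|v\|_\infty$; the sketch checks this on random samples.

```python
# Equivalence of the 1-norm and the max-norm on R^n: ||v||_inf <= ||v||_1 <= n ||v||_inf.
import numpy as np

rng = np.random.default_rng(0)
n = 7
for _ in range(1000):
    v = rng.standard_normal(n)
    n1, ninf = np.linalg.norm(v, 1), np.linalg.norm(v, np.inf)
    assert ninf <= n1 <= n * ninf + 1e-12    # c = 1, C = n in Exercise 1.6
print("equivalence constants c = 1, C =", n, "verified on random samples")
```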
Thus
\[
\|Av\|_W \le \|A\|\,\|v\|_V, \qquad \forall v \in V,
\]
and, by definition, $\|A\|$ is the smallest constant $C$ such that (1.9) holds.
Exercise 1.9. Prove that a linear operator is continuous if and only if it is bounded.
In the special case $W = \mathbb{R}$, the definition of a linear operator reduces to that of a linear functional. The set of all bounded linear functionals on $V$ is called the dual space of $V$, denoted by $V^*$. By (1.10) the norm of a linear functional $l \in V^*$ is
\[
\|l\|_{V^*} = \sup_{v \in V \setminus \{0\}} \frac{|l(v)|}{\|v\|_V}. \tag{1.11}
\]
Note that $V^*$ is itself a linear space and, with the norm defined by (1.11), $V^*$ is a normed linear space. It can be proved that $V^*$ with the norm defined by (1.11) is complete, i.e., it is a Banach space.
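For a concrete illustration (not from the notes; NumPy assumed), take $V = W = \mathbb{R}^n$ with the Euclidean norm: the operator norm of a matrix $A$ is its largest singular value, and for the functional $l(v) = (w, v)$ the supremum in (1.11) equals $\|w\|$. A sampled check:

```python
# Operator norm of a matrix: sup ||Av||/||v|| equals the largest singular value of A.
# Dual norm of l(v) = (w, v) on Euclidean R^n: ||l|| = ||w||.  Sampled illustration.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
w = rng.standard_normal(4)

vs = rng.standard_normal((100000, 4))                 # random nonzero v's
ratios_A = np.linalg.norm(vs @ A.T, axis=1) / np.linalg.norm(vs, axis=1)
ratios_l = np.abs(vs @ w) / np.linalg.norm(vs, axis=1)

print(ratios_A.max(), np.linalg.svd(A, compute_uv=False)[0])  # sampled sup vs sigma_max
print(ratios_l.max(), np.linalg.norm(w))                      # sampled sup vs ||w||
```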
An element $v_0 \in V_0$ is called a best approximation of $v \in V$ with respect to $V_0$ if
\[
\|v - v_0\| = \inf_{u \in V_0} \|v - u\|.
\]
When $V$ is a Hilbert space and $V_0$ is a closed subspace, the infimum is attained, i.e.,
\[
\|v - v_0\| = \min_{u \in V_0} \|v - u\|.
\]
Theorem 1.10 (Projection theorem). For any closed subspace $V_0$ of a Hilbert space $V$, a necessary and sufficient condition for $v_0 \in V_0$ to be the best approximation of $v \in V$ with respect to $V_0$ is that
\[
(v - v_0, u) = 0, \qquad \forall u \in V_0. \tag{1.12}
\]
The projection theorem is a basic result in Hilbert space theory. One useful consequence
of the projection theorem is that if the closed linear subspace V0 is not equal to the whole
V , then it has a normal vector, i.e., there exists a nonzero vector w 2 V which is orthogonal
to V0 .
Exercise 1.11. Prove that the operator $P_{V_0} : V \to V_0$ mapping $v \in V$ onto its best approximation, i.e., such that $P_{V_0} v = v_0$, is a bounded linear operator with the properties
\[
P_{V_0}^2 = P_{V_0} \qquad \text{and} \qquad \|P_{V_0}\| = 1.
\]
It is called the orthogonal projection from $V$ onto $V_0$, and the best approximation $v_0$ is called the orthogonal projection of $v$ onto $V_0$.
The normal equations for the best approximation in Hilbert spaces provide an example of a system of linear equations. The solution becomes trivial if the basis $\{\varphi_1, \dots, \varphi_N\}$ is orthonormal. In fact, from the previous corollary, if $\{\varphi_1, \dots, \varphi_N\}$ is orthonormal, the orthogonal projection is given by
\[
P_{V_0} v = \sum_{i=1}^{N} (v, \varphi_i)\, \varphi_i, \qquad v \in V,
\]
i.e., the coordinates of the orthogonal projection in the orthonormal basis $\{\varphi_1, \dots, \varphi_N\}$ are given by $\bigl((v, \varphi_1), \dots, (v, \varphi_N)\bigr)$.
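As an illustration (not from the notes; NumPy and SciPy assumed), take $V = L^2(-1,1)$ and $V_0$ the span of the first $N$ normalised Legendre polynomials $\varphi_i$, which are orthonormal in $L^2(-1,1)$; the orthogonal projection of $v(x) = e^x$ is $\sum_i (v, \varphi_i)\varphi_i$, and the residual $v - P_{V_0} v$ is numerically orthogonal to $V_0$, as the projection theorem asserts.

```python
# Orthogonal projection in L^2(-1,1) onto the span of the first N normalised Legendre
# polynomials phi_i: P v = sum_i (v, phi_i) phi_i, and (v - P v, phi_i) = 0 for all i.
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

N = 5
phis = [Legendre.basis(i) * np.sqrt((2 * i + 1) / 2.0) for i in range(N)]  # orthonormal basis
v = np.exp
inner = lambda f, g: quad(lambda x: f(x) * g(x), -1.0, 1.0)[0]             # L^2 inner product

coeffs = [inner(v, p) for p in phis]                        # coordinates (v, phi_i)
Pv = lambda x: sum(c * p(x) for c, p in zip(coeffs, phis))  # orthogonal projection of v
print([round(inner(lambda x: v(x) - Pv(x), p), 12) for p in phis])  # all ~0
```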
The space $C^k(\bar\Omega)$ of $k$-times continuously differentiable functions on $\bar\Omega$ is equipped with the norm
\[
\|u\|_{C^k(\bar\Omega)} = \sum_{|\alpha| \le k} \sup_{x \in \Omega} |\partial^\alpha u(x)|.
\]
Similarly, if $k = 1$,
\[
\|u\|_{C^1(\bar\Omega)} = \sum_{|\alpha| \le 1} \sup_{x \in \Omega} |\partial^\alpha u(x)|
= \sup_{x \in \Omega} |u(x)| + \sum_{j=1}^{n} \sup_{x \in \Omega} |\partial_{x_j} u(x)|.
\]
Example 1.13. Consider the open interval $\Omega = (0,1) \subset \mathbb{R}$. The function $u(x) = 1/x$ belongs to $C^k(\Omega)$ for each $k \ge 0$. As $\bar\Omega = [0,1]$ and $\lim_{x \to 0} u(x) = \infty$, it is clear that $u$ is not continuous on $\bar\Omega$; the same is true of its derivatives. Therefore $u \notin C^k(\bar\Omega)$ for any $k \ge 0$.
where $|x| = (x_1^2 + \cdots + x_n^2)^{1/2}$. Clearly, the support of $w$ is the closed unit ball $\{x \in \mathbb{R}^n : |x| \le 1\}$.
We denote by $C_0^k(\Omega)$ the set of all $u$ contained in $C^k(\Omega)$ whose support is a bounded subset of $\Omega$. Let
\[
C_0^\infty(\Omega) = \bigcap_{k \ge 0} C_0^k(\Omega).
\]
Example 1.15. The function $w$ defined in the previous example belongs to the space $C_0^\infty(\mathbb{R}^n)$. In fact, it is enough to check the continuity and differentiability properties only at the points with $|x| = 1$. For $n = 1$, apply L'Hôpital's rule to conclude the result; for $n > 1$, since the function is radial, we can prove the result in a similar way.
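The defining formula of $w$ is not reproduced above; the sketch below (not from the notes) assumes the standard choice $w(x) = \exp(-1/(1 - |x|^2))$ for $|x| < 1$ and $w(x) = 0$ otherwise, and shows numerically, in one dimension, that both $w$ and its difference quotients tend to zero as $|x| \to 1$, consistent with $w \in C_0^\infty(\mathbb{R})$.

```python
# Standard bump function (assumed here): w(x) = exp(-1/(1 - x^2)) on |x| < 1, w = 0 otherwise.
# Its values and difference quotients vanish as |x| -> 1, so the pieces glue together smoothly.
import numpy as np

def w(x):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

h = 1e-3
for x0 in (0.9, 0.99, 0.999):
    slope = (w(x0 + h) - w(x0 - h)) / (2 * h)       # first difference quotient near x = 1
    print(x0, float(w(x0)[0]), float(slope[0]))     # both tend to 0 as x0 -> 1
```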
Any two functions which are equal almost everywhere (i.e. equal, except on a set of
measure zero) on ⌦ are identified with each other. Thus, strictly speaking, Lp (⌦) consists
of equivalence classes of functions; still, we shall not insist on this technicality. Lp (⌦) is
equipped with the norm
\[
\|u\|_{L^p(\Omega)} = \left( \int_\Omega |u(x)|^p\, dx \right)^{1/p}.
\]
We shall also consider the space $L^\infty(\Omega)$ consisting of functions $u$ defined on $\Omega$ such that $|u|$ has finite essential supremum on $\Omega$ (namely, there exists a positive constant $M$ such that $|u(x)| \le M$ for almost every² $x$ in $\Omega$; the smallest such number $M$ is called the essential supremum of $|u|$, and we write $M = \operatorname{ess.sup}_{x \in \Omega} |u(x)|$). $L^\infty(\Omega)$ is equipped with the norm
\[
\|u\|_{L^\infty(\Omega)} = \operatorname{ess.sup}_{x \in \Omega} |u(x)|.
\]
Remark 1.16. The space $L^p(\Omega)$ with $p \in [1, \infty]$ is a Banach space. In particular, $L^2(\Omega)$ is a Hilbert space: it has an inner product $(\cdot, \cdot)$ and, when equipped with the associated norm $\|\cdot\|_{L^2(\Omega)}$, defined by $\|u\|_{L^2(\Omega)} = (u, u)^{1/2}$, it is a Banach space.
To conclude this section, we note the following generalisation of the Cauchy-Schwarz inequality, known as Hölder's inequality, valid for any two functions $u \in L^p(\Omega)$ and $v \in L^q(\Omega)$ with $1/p + 1/q = 1$:
\[
\int_\Omega u(x) v(x)\, dx \le \|u\|_{L^p(\Omega)}\, \|v\|_{L^q(\Omega)}.
\]
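A quick numerical illustration (not from the notes; SciPy assumed): on $\Omega = (0,1)$ with the conjugate exponents $p = 3$ and $q = 3/2$, the integral of $uv$ is indeed bounded by the product of the norms; the particular $u$ and $v$ are arbitrary.

```python
# Hölder's inequality on (0,1): ∫ u v dx <= ||u||_{L^p} ||v||_{L^q} with 1/p + 1/q = 1.
import numpy as np
from scipy.integrate import quad

p, q = 3.0, 1.5                                  # conjugate exponents: 1/3 + 2/3 = 1
u = lambda x: np.sqrt(x) + np.cos(5 * x)
v = lambda x: 1.0 / (1.0 + x ** 2)

lhs = quad(lambda x: u(x) * v(x), 0.0, 1.0)[0]
norm_u = quad(lambda x: abs(u(x)) ** p, 0.0, 1.0)[0] ** (1.0 / p)
norm_v = quad(lambda x: abs(v(x)) ** q, 0.0, 1.0)[0] ** (1.0 / q)
print(lhs, "<=", norm_u * norm_v)                # the inequality holds
```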
Note that all terms involving integrals over the boundary of $\Omega$, which arise in the course of integrating by parts, have disappeared because $v$ and all of its derivatives are identically zero on the boundary of $\Omega$. This identity represents the starting point for defining the concept of weak derivative.
² We shall say that a property $P(x)$ is true for almost every $x$ in $\Omega$ if $P(x)$ is true for all $x \in \Omega \setminus N$, where $N$ is a subset of $\Omega$ with zero Lebesgue measure.
Now suppose that $u$ is a locally integrable function defined on $\Omega$ (i.e. $u \in L^1(\omega)$ for each bounded open set $\omega$ with $\bar\omega \subset \Omega$). Suppose also that there exists a function $w_\alpha$, locally integrable on $\Omega$ and such that
\[
\int_\Omega w_\alpha(x)\, v(x)\, dx = (-1)^{|\alpha|} \int_\Omega u(x)\, \partial^\alpha v(x)\, dx, \qquad \forall v \in C_0^\infty(\Omega);
\]
then we say that $w_\alpha$ is a weak derivative of the function $u$ of order $|\alpha| = \alpha_1 + \cdots + \alpha_n$, and we write $w_\alpha = \partial^\alpha u$. In order to see that this definition is correct it has to be shown that if a locally integrable function has a weak derivative then this must be unique; we remark that this is a straightforward consequence of du Bois-Reymond's lemma³. Clearly, if $u$ is a sufficiently smooth function, say $u \in C^k(\Omega)$, then its weak derivative $\partial^\alpha u$ of order $|\alpha| \le k$ coincides with the corresponding partial derivative in the classical pointwise sense.
Example 1.17. Let $\Omega = \mathbb{R}$, and suppose that we wish to determine the weak first derivative of the function $u(x) = (1 - |x|)_+$ defined on $\Omega$. Clearly $u$ is not differentiable at the points $0$ and $\pm 1$. However, because $u$ is locally integrable on $\Omega$, it may, nevertheless, have a weak derivative. Indeed, for any $v \in C_0^\infty(\Omega)$,
\[
\begin{aligned}
\int_{-\infty}^{+\infty} u(x) v'(x)\, dx
&= \int_{-\infty}^{+\infty} (1 - |x|)_+\, v'(x)\, dx
 = \int_{-1}^{1} (1 - |x|)\, v'(x)\, dx\\
&= \int_{-1}^{0} (1 + x)\, v'(x)\, dx + \int_{0}^{1} (1 - x)\, v'(x)\, dx\\
&= -\int_{-1}^{0} v(x)\, dx + \bigl[(1 + x) v(x)\bigr]_{-1}^{0}
   + \int_{0}^{1} v(x)\, dx + \bigl[(1 - x) v(x)\bigr]_{0}^{1}\\
&= -\int_{-\infty}^{+\infty} w(x) v(x)\, dx,
\end{aligned}
\]
where
\[
w(x) =
\begin{cases}
0, & x < -1,\\
1, & x \in (-1, 0),\\
-1, & x \in (0, 1),\\
0, & x > 1.
\end{cases}
\]
Thus, the piecewise constant function $w$ is the first (weak) derivative of the continuous piecewise linear function $u$, i.e. $w = u'$.
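The identity derived in Example 1.17 can be checked numerically (an illustration, not from the notes; SciPy assumed): since $u$ and $w$ vanish outside $[-1,1]$, the relation $\int w v\,dx = -\int u v'\,dx$ can be tested with any smooth $v$, a few of which are used below.

```python
# Check that the piecewise constant w is the weak derivative of u(x) = (1 - |x|)_+ :
# ∫ w v dx = -∫ u v' dx for smooth test functions v.
import numpy as np
from scipy.integrate import quad

u = lambda x: max(0.0, 1.0 - abs(x))
w = lambda x: 1.0 if -1.0 < x < 0.0 else (-1.0 if 0.0 < x < 1.0 else 0.0)

tests = [                               # pairs (v, v')
    (lambda x: np.sin(3 * x), lambda x: 3 * np.cos(3 * x)),
    (lambda x: np.exp(-x ** 2), lambda x: -2 * x * np.exp(-x ** 2)),
    (lambda x: x ** 3 - x, lambda x: 3 * x ** 2 - 1),
]
for v, dv in tests:
    lhs = quad(lambda x: w(x) * v(x), -1.0, 1.0, points=[0.0])[0]
    rhs = -quad(lambda x: u(x) * dv(x), -1.0, 1.0, points=[0.0])[0]
    print(round(lhs - rhs, 12))         # ~0 for every test function
```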
Now we are ready to give a precise definition of a Sobolev space. Let $k$ be a non-negative integer and suppose that $p \in [1, \infty]$. We define (with $\partial^\alpha$ denoting a weak derivative of order $|\alpha|$)
\[
W^k_p(\Omega) = \{ u \in L^p(\Omega) : \partial^\alpha u \in L^p(\Omega),\ |\alpha| \le k \}.
\]
³ du Bois-Reymond's lemma: Suppose that $w$ is a locally integrable function defined on an open set $\Omega \subset \mathbb{R}^n$. If
\[
\int_\Omega w(x) v(x)\, dx = 0, \qquad \forall v \in C_0^\infty(\Omega),
\]
then $w(x) = 0$ for almost every $x \in \Omega$.
The space $W^k_p(\Omega)$ is called a Sobolev space of order $k$; it is equipped with the (Sobolev) norm
\[
\|u\|_{W^k_p(\Omega)} = \Biggl( \sum_{|\alpha| \le k} \|\partial^\alpha u\|_{L^p(\Omega)}^p \Biggr)^{1/p}, \quad \text{when } 1 \le p < \infty,
\]
and
\[
\|u\|_{W^k_\infty(\Omega)} = \sum_{|\alpha| \le k} \|\partial^\alpha u\|_{L^\infty(\Omega)}, \quad \text{when } p = \infty.
\]
Letting⁴
\[
|u|_{W^k_p(\Omega)} = \Biggl( \sum_{|\alpha| = k} \|\partial^\alpha u\|_{L^p(\Omega)}^p \Biggr)^{1/p}, \quad \text{when } 1 \le p < \infty,
\]
we have that
\[
\|u\|_{W^k_p(\Omega)} = \Biggl( \sum_{j=0}^{k} |u|_{W^j_p(\Omega)}^p \Biggr)^{1/p}.
\]
Similarly, letting
\[
|u|_{W^k_\infty(\Omega)} = \sum_{|\alpha| = k} \|\partial^\alpha u\|_{L^\infty(\Omega)},
\]
we have that
\[
\|u\|_{W^k_\infty(\Omega)} = \sum_{j=0}^{k} |u|_{W^j_\infty(\Omega)}.
\]
When $p = 2$ the space $W^k_2(\Omega)$ is a Hilbert space with the inner product $(u, v)_{W^k_2(\Omega)} = \sum_{|\alpha| \le k} (\partial^\alpha u, \partial^\alpha v)_{L^2(\Omega)}$. For this reason, we shall usually write $H^k(\Omega)$ instead of $W^k_2(\Omega)$.
Throughout these notes we shall frequently refer to the Hilbert Sobolev spaces $H^1(\Omega)$ and $H^2(\Omega)$. Our definitions of $W^k_p(\Omega)$ and its norm and semi-norm, for $p = 2$, $k = 1$, give
\[
H^1(\Omega) = \{ u \in L^2(\Omega) : \partial_{x_j} u \in L^2(\Omega),\ j = 1, \dots, n \},
\]
\[
\|u\|_{H^1(\Omega)} = \Biggl( \|u\|_{L^2(\Omega)}^2 + \sum_{j=1}^{n} \|\partial_{x_j} u\|_{L^2(\Omega)}^2 \Biggr)^{1/2},
\qquad
|u|_{H^1(\Omega)} = \Biggl( \sum_{j=1}^{n} \|\partial_{x_j} u\|_{L^2(\Omega)}^2 \Biggr)^{1/2}.
\]
⁴ When $k \ge 1$, $|\cdot|_{W^k_p(\Omega)}$ is only a semi-norm rather than a norm because if $|u|_{W^k_p(\Omega)} = 0$ for $u \in W^k_p(\Omega)$ it does not necessarily follow that $u(x) = 0$ for almost every $x \in \Omega$ (all that is known is that $\partial^\alpha u(x) = 0$ for almost every $x \in \Omega$, $|\alpha| = k$), so $|\cdot|_{W^k_p(\Omega)}$ does not satisfy the definiteness axiom of a norm.
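As a concrete illustrative computation (not from the notes; SciPy assumed): for $u(x) = \sin(\pi x)$ on $\Omega = (0,1)$ one has $\|u\|_{L^2(\Omega)}^2 = 1/2$ and $|u|_{H^1(\Omega)}^2 = \pi^2/2$, so $\|u\|_{H^1(\Omega)} = ((1 + \pi^2)/2)^{1/2}$; the sketch reproduces these values by quadrature.

```python
# H^1(0,1) norm and seminorm of u(x) = sin(pi x):
# ||u||_{H^1}^2 = ||u||_{L^2}^2 + |u|_{H^1}^2, with |u|_{H^1} = ||u'||_{L^2}.
import numpy as np
from scipy.integrate import quad

u = lambda x: np.sin(np.pi * x)
du = lambda x: np.pi * np.cos(np.pi * x)

l2_sq = quad(lambda x: u(x) ** 2, 0.0, 1.0)[0]       # = 1/2
semi_sq = quad(lambda x: du(x) ** 2, 0.0, 1.0)[0]    # = pi^2 / 2
print(np.sqrt(l2_sq + semi_sq), np.sqrt((1 + np.pi ** 2) / 2))   # both ≈ 2.33
```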
Exercise 1.18. Given that $L^2(\Omega)$ is complete, prove that $H^1(\Omega)$ is complete.
Hint: Assume that $\|v_i - v_j\|_{H^1(\Omega)} \to 0$ as $i, j \to \infty$. Show that there are $v$, $w_k$ such that $\|v_j - v\|_{L^2(\Omega)} \to 0$, $\|\partial_{x_k} v_j - w_k\|_{L^2(\Omega)} \to 0$, and that $w_k = \partial_{x_k} v$ in the sense of weak derivatives.
Finally, we define the special Sobolev space $H^1_0(\Omega)$ as the closure of $C_0^\infty(\Omega)$ in the norm $\|\cdot\|_{H^1(\Omega)}$; in other words, $H^1_0(\Omega)$ is the set of all $u \in H^1(\Omega)$ such that $u$ is the limit in $H^1(\Omega)$ of a sequence $\{u_m\}_{m=1}^{\infty}$ with $u_m \in C_0^\infty(\Omega)$. It can be shown (assuming that $\partial\Omega$ is sufficiently smooth) that $H^1_0(\Omega) = \{ u \in H^1(\Omega) : u = 0 \text{ on } \partial\Omega \}$.
• Define a weak derivative for elements of $L^2(\Omega)$ and what we understand by saying that that derivative is again in $L^2(\Omega)$. Then move on to give a meaning to the restriction of a function in $H^1(\Omega)$ to one part of its boundary.
• Go deeper and take time to browse a book on distribution theory and Sobolev spaces. It takes a while, but you end up with a pretty good intuition of what this is all about.
• Take a shortcut. You first consider the space of functions $C^\infty(\bar\Omega)$ and then you close it with the norm $\|\cdot\|_{H^1(\Omega)}$. To do that you have to know what closing or completing a space is. Then you have to prove that restricting to the boundary still makes sense after this completion procedure.
My recommendation at this point is to simply go on. You can later take some time with a good introductory book on elliptic PDEs and you will see that it is not that complicated. Nevertheless, if you keep doing research related to finite elements, you should really know something more about this. In due time you will have to pick one of the dozens of books on PDEs and read the details. But this is only an opinion.
Proof: As any function $u \in H^1_0(\Omega)$ is the limit in $H^1(\Omega)$ of a sequence $\{u_m\}_{m=1}^{\infty} \subset C_0^\infty(\Omega)$, it is sufficient to prove this inequality for $u \in C_0^\infty(\Omega)$.
In fact, to simplify matters, we shall restrict ourselves to considering the special case of a rectangular domain $\Omega = (a,b) \times (c,d)$ in $\mathbb{R}^2$. The proof for general $\Omega$ is analogous. Evidently
\[
u(x,y) = u(a,y) + \int_a^x \partial_x u(\xi, y)\, d\xi = \int_a^x \partial_x u(\xi, y)\, d\xi, \qquad c < y < d,
\]
since $u(a,y) = 0$. Hence, by the Cauchy-Schwarz inequality,
\[
|u(x,y)|^2 \le (x - a) \int_a^b |\partial_x u(\xi, y)|^2\, d\xi,
\]
and integrating over $\Omega$ gives
\[
\int_\Omega |u(x,y)|^2\, dx\, dy \le \frac{1}{2}(b - a)^2 \int_\Omega |\partial_x u(x,y)|^2\, dx\, dy.
\]
Analogously,
\[
\int_\Omega |u(x,y)|^2\, dx\, dy \le \frac{1}{2}(d - c)^2 \int_\Omega |\partial_y u(x,y)|^2\, dx\, dy,
\]
and (1.14) follows.
According to this result, the equivalence of the norms $|\cdot|_{H^1(\Omega)}$ and $\|\cdot\|_{H^1(\Omega)}$ on $H^1_0(\Omega)$ follows from ($|v|_{H^1(\Omega)} = \|\nabla v\|_{L^2(\Omega)}$)
\[
\|v\|_{L^2(\Omega)}^2 \le \|v\|_{H^1(\Omega)}^2 = \|v\|_{L^2(\Omega)}^2 + \|\nabla v\|_{L^2(\Omega)}^2 \le (1 + c_\star)\, \|\nabla v\|_{L^2(\Omega)}^2.
\]
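A numerical illustration (not from the notes; NumPy assumed): on the unit square $\Omega = (0,1)^2$, the functions $v(x,y) = \sin(m\pi x)\sin(n\pi y)$ vanish on $\partial\Omega$ and satisfy $\|v\|_{L^2(\Omega)}^2 = \|\nabla v\|_{L^2(\Omega)}^2 / ((m^2 + n^2)\pi^2)$, so the Poincaré-Friedrichs inequality holds for them with a constant far smaller than the one produced by the proof above; the sketch checks the ratio on a grid.

```python
# Poincaré-Friedrichs on the unit square: ||v||_{L^2}^2 <= c ||grad v||_{L^2}^2 for v = 0 on
# the boundary.  For v = sin(m pi x) sin(n pi y) the exact ratio is 1 / ((m^2 + n^2) pi^2).
import numpy as np

x = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(x, x, indexing="ij")
for m, n in [(1, 1), (2, 3), (5, 1)]:
    v = np.sin(m * np.pi * X) * np.sin(n * np.pi * Y)
    vx = m * np.pi * np.cos(m * np.pi * X) * np.sin(n * np.pi * Y)
    vy = n * np.pi * np.sin(m * np.pi * X) * np.cos(n * np.pi * Y)
    ratio = np.mean(v ** 2) / np.mean(vx ** 2 + vy ** 2)    # grid averages approximate the integrals
    print((m, n), ratio, 1.0 / ((m ** 2 + n ** 2) * np.pi ** 2))
```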
Remark 1.22. Note that the extension of the proof of the Poincaré-Friedrichs inequality to $v \in H^1_0(\Omega)$ may be done using a density argument. In fact, by assumption there exists a sequence of functions $v_m \in C_0^\infty(\Omega)$ such that $\|v_m - v\|_{H^1(\Omega)} \to 0$. Then, since (1.14) holds for all $v_m \in C_0^\infty(\Omega)$, it follows that, for $v \in H^1_0(\Omega)$,
\[
\|v\|_{L^2(\Omega)} = \lim_{m \to \infty} \|v_m\|_{L^2(\Omega)} \le \lim_{m \to \infty} c_\star \|\nabla v_m\|_{L^2(\Omega)} = c_\star \|\nabla v\|_{L^2(\Omega)}.
\]