Second Order Elliptic Equations
Leonardo Abbrescia
Advised by Daniela De Silva, PhD and Ovidiu Savin, PhD.
Contents

1 Introduction and Acknowledgments
1 Introduction and Acknowledgments
Second order elliptic partial differential equations are fundamentally modeled by Laplace's equation $\Delta u = 0$. This thesis begins by proving the existence of a solution $u$ to $\Delta u = f$ using variational methods. In doing so, we introduce the theory of Sobolev spaces and their embeddings into $L^p$ and $C^{k,\alpha}$. We then apply our techniques to a non-linear elliptic equation on a compact Riemannian manifold, introducing the method of continuity along the way as another route to solving the equation. We move on to proving Schauder estimates for general elliptic equations in divergence form, $\partial_i(a^{ij}\partial_j u) + c(x)u = f$, under various assumptions on $a$, $c$, and $f$. We conclude our study of equations in divergence form by proving the Harnack inequality using Moser iteration. Personally I would have liked to prove the Harnack inequality in my own flavor, but due to lack of time I had to follow very closely the proof given in [1].
The second half of the thesis revolves around equations in non-divergence form: $a^{ij}u_{ij} = 0$. As a disclaimer, I wrote the second half separately from the first, so my notation changes heavily. We first start with the proof of the ABP maximum principle, which is used heavily not only in the proof of the Harnack inequality for non-divergence equations, but also in the section on curved $C^{1,\alpha}$ domains. In that section I go over a completely different way of getting regularity estimates: approximation by polynomials. We first show how this can be used for $\Delta u = f$ and then for general elliptic operators. We conclude the paper by introducing Krylov's regularity results for flat domains and generalizing them to curved domains whose boundaries look locally like the graph of a $C^{1,\alpha}$ function.
As one more disclaimer, I was not able to prove every single detail in this thesis due to lack of time. I leave tiny bits and pieces as exercises for the reader, but the overwhelming majority is proved in rigorous detail.
The writing of this paper was a long and arduous process. It grew out of many discussions with Professor
Daniela De Silva and Professor Ovidiu Savin. I am very grateful for their insights and very very patient
guidance. I would not have been able to set my path as a mathematician without them. I’d like to give
thanks to Professor Michael Weinstein for being an exceptional guidance counselor and instructor whom I’ve
learned a lot from about singular integrals, methods of characteristics, and graduate schools. I’d like to give
special thanks to Professor Duong Phong, whose Analysis II class helped refine the details of this thesis.
2 Variational Methods and Sobolev Embedding Theorems
2.1 Laplacian
We begin with the simplest problem. Let $\Omega \subset\subset \mathbb{R}^n$ be a bounded domain, and $f$ a function on $\Omega$. We wish to find a $u$ such that
$$\Delta u = f \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega.$$
The way we will find the solution to this problem is by finding the minimum of a specific functional. The idea is this: let $g(x)$ be a function on $\Omega \subset \mathbb{R}$ such that there exists $G(x)$ on $\Omega$ with $G'(x) = g(x)$, and suppose we want to find an $x_0$ with $g(x_0) = 0$. One way to approach this is to find an $x_0 \in \Omega$ such that $G(x) \geq G(x_0)$ for all $x \in \Omega$, which implies $G'(x_0) = 0$, i.e. $g(x_0) = 0$. By comparison, $g(x_0) = 0$ is the analogue of $\Delta u = 0$, so we now look for the analogue of $G$. Define the functional
$$I(u) = \frac{1}{2}\int_\Omega |Du|^2 + \int_\Omega fu.$$
This functional is going to be defined for $u \in W^{1,2}(\Omega)$. Now let's go over some definitions.
Definition 2.1. Let $B$ be a normed vector space. Its completion is $\overline{B} := \{\{u_k\} \subset B : \{u_k\} \text{ is Cauchy}\}$, modulo the usual equivalence of Cauchy sequences.
Example 2.2. $\overline{\mathbb{Q}} = \mathbb{R}$.
Example 2.3. The completion of $C_0^\infty(\Omega)$ under $\|\cdot\|_p$ is $L^p(\Omega)$ (modulo the equivalence $f \equiv g$ if $f = g$ outside a set of measure zero).
Example 2.4. $W_0^{1,2}(\Omega) = \overline{\{u_k \in C_0^\infty : \|u_k - u_l\|_2 \to 0,\ \|Du_k - Du_l\|_2 \to 0\}}$. Equivalently, $W_0^{1,2}(\Omega) = \{u \in L^2(\Omega) : u = \lim_{k\to\infty} u_k \text{ in } L^2,\ Du_k \to v \in L^2\}$.
Now we go back to our question: does there exist a u0 ∈ W01,2 (Ω) such that I(u) ≥ I(u0 ) for any
u ∈ W01,2 (Ω)? We will begin to show this in two steps. Our first is to show that our functional is bounded
below. We will show that ∃C > 0 such that I(u) ≥ −C for any u ∈ W01,2 (Ω). This will at least give us a
starting point to find a minimum because we no longer have the ambiguity of I(u) exploding to −∞. Next,
assuming this, if I(u) did have a minimum, then min I(u) < ∞. Now pick a minimizing sequence {uj }
such that I(uj ) → min I(u). The reason why this can be picked will be shown later. We wish to show that
uj → u0 and I(u0 ) = min I(u).
Claim. We claim that ∃C > 0 such that I(u) ≥ −C ∀u ∈ W01,2 (Ω).
First recall the definition of $I$:
$$I(u) = \frac{1}{2}\int_\Omega |Du|^2\,dx + \int_\Omega fu\,dx.$$
Now note that we would be done with our claim if we showed $\frac{1}{2}\|Du\|_2^2 \geq \epsilon^2\|u\|_2^2$; this says that the gradient controls the function $u$. We are in luck because we chose $u$ to have compact support, so it vanishes on the boundary. The gradient then not only approximates the values of $u$, it determines them.
Lemma 2.5. There exists $\epsilon = \epsilon(\Omega)$ such that $\epsilon^2\|u\|_2^2 \leq \frac{1}{4}\|Du\|_2^2$.
Proof. First note that this will imply
$$I(u) \geq \int_\Omega \frac{1}{4}|Du|^2 - \frac{1}{2\epsilon^2}f^2\,dx.$$
Now we move on to a claim:
Claim. Let $\Omega$ be convex and bounded in $\mathbb{R}^n$. Then for all $u \in C^\infty(\Omega)$ and all $x \in \Omega$,
$$\int_\Omega |u(x) - u(y)|\,dy \leq \frac{(\operatorname{diam}\Omega)^n}{n}\int_\Omega \frac{|Du(y)|}{|x-y|^{n-1}}\,dy.$$
Proof of Claim. Let $x, y \in \Omega$ be arbitrary, write $r = |x-y|$, and let $\omega = (y-x)/|x-y|$ be the unit vector in this direction. Then
$$u(y) - u(x) = \int_0^r \frac{d}{dt}u(x+t\omega)\,dt = \int_0^r \sum_{i=1}^n \frac{\partial u}{\partial x_i}(x+t\omega)\,\omega_i\,dt,$$
so $|u(x)-u(y)| \leq \int_0^r |Du(x+t\omega)||\omega|\,dt$, and integrating in $y$ over $\Omega$,
$$\int_\Omega |u(x)-u(y)|\,dy \leq \int_\Omega \int_0^r |Du(x+t\omega)|\,dt\,dy.$$
We transform the right-hand side using polar coordinates centered at $x$, $y \mapsto (r,\omega)$, letting $\ell_\omega$ denote the distance from $x$ to $\partial\Omega$ in the direction $\omega$. Swapping the order of the $r$ and $t$ integrations,
$$\int_\Omega |u(x)-u(y)|\,dy \leq \int_{S^{n-1}}\int_0^{\ell_\omega}\Big(\int_t^{\ell_\omega} r^{n-1}dr\Big)|Du(x+t\omega)|\,dt\,d\sigma(\omega) \leq \frac{(\operatorname{diam}\Omega)^n}{n}\int_{S^{n-1}}\int_0^{\ell_\omega}\frac{|Du(x+t\omega)|}{t^{n-1}}\,t^{n-1}dt\,d\sigma(\omega) = \frac{(\operatorname{diam}\Omega)^n}{n}\int_\Omega\frac{|Du(y)|}{|x-y|^{n-1}}\,dy.$$
With the proof of the claim out of the way, we can prove the following corollary:
Corollary 2.6. Let $u \in C_0^\infty(\Omega)$. Then for any $x \in \mathbb{R}^n$ we have
$$|u(x)| \leq c_n \int_{\mathbb{R}^n} \frac{|Du(y)|}{|x-y|^{n-1}}\,dy.$$
Proof of Corollary. We begin by introducing a little notation: for a measurable set $A$, write $\bar u_A = \frac{1}{|A|}\int_A u$ for the average of $u$ over $A$. Applying the claim on a ball $B_R(0)$ containing $x$ and dividing by $|B_R(0)|$ gives
$$|u(x) - \bar u_{B_R(0)}| \leq c_n \int_{B_R(0)} \frac{|Du(y)|}{|x-y|^{n-1}}\,dy \leq c_n \int_{\mathbb{R}^n} \frac{|Du(y)|}{|x-y|^{n-1}}\,dy.$$
We will therefore be done with our corollary if we show that $\bar u_{B_R(0)} \to 0$ as $R \to \infty$. But this is where we use the fact that $u$ has compact support:
$$\bar u_{B_R(0)} = \frac{1}{|B_R(0)|}\int_{B_R(0)} u\,dy \to 0 \quad \text{as } R \to \infty.$$
Now that we are done with this corollary, we quote a lemma that will be proved later:
Lemma 2.7 (Estimates for Integral Operators). Assume
$$|u(x)| \leq \int K(x,y)\,|v(y)|\,dy,$$
where
$$A := \max\left\{\sup_x \int K(x,y)\,dy,\ \sup_y \int K(x,y)\,dx\right\} < \infty.$$
Then $\|u\|_p \leq A\|v\|_p$.
Applying this lemma to Corollary 2.6 gives $\|u\|_p \leq C\|Du\|_p$ for $u \in C_0^\infty(\Omega)$ (and in fact for $u \in W_0^{1,2}(\Omega)$ by approximation). We are finally done with our Lemma 2.5.
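For the record, here is the elementary step that turns Lemma 2.5 into the lower bound we claimed for $I$ (my own write-up of the standard Young's-inequality computation, with the $\epsilon$ of Lemma 2.5; the exact constants depend on how one splits the inequality):
$$\left|\int_\Omega fu\right| \leq \int_\Omega \frac{\epsilon^2}{2}u^2 + \frac{1}{2\epsilon^2}f^2 \leq \frac{1}{8}\|Du\|_2^2 + \frac{1}{2\epsilon^2}\|f\|_2^2,$$
so that
$$I(u) \geq \frac{1}{2}\|Du\|_2^2 - \frac{1}{8}\|Du\|_2^2 - \frac{1}{2\epsilon^2}\|f\|_2^2 \geq -\frac{1}{2\epsilon^2}\|f\|_2^2 =: -C.$$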
Now that we are done with the first part of the problem, we go back to the infimum question. Assume we know that $\inf_{u \in W_0^{1,2}(\Omega)} I(u) > -\infty$ and pick a minimizing sequence, i.e. $u_k \in W_0^{1,2}(\Omega)$ such that $I(u_k) \to \inf I(u)$. By approximation we may assume $u_k \in C_0^\infty(\Omega)$. The question now is: do the $u_k$'s converge? To answer this we have to go through a few things.
Claim. $\{u_k\}$ is a bounded sequence, i.e. there exists $C > 0$ independent of $k$ such that $\|u_k\|_2 \leq C$ and $\|Du_k\|_2 \leq C$.
Proof of Claim. The fact that $u_k$ is a minimizing sequence of $I(u)$ implies that for all $k$,
$$C_1 \geq I(u_k) \geq \frac{1}{4}\|Du_k\|_2^2 - \frac{1}{2\epsilon^2}\|f\|_2^2.$$
We can bring the $f$ term to the other side and get $C_2 \geq \|Du_k\|_2$. Poincaré's inequality then implies that $C_3 \geq \|u_k\|_2^2$.
Ok now we have that {uk } is a bounded sequence. Recall that in a finite dimensional vector space,
boundedness implies pre-compactness. However, our functional space W01,2 (Ω) is infinite dimensional, so we
need to find a weaker substitute called “weak compactness.”
Definition 2.8. Let $B$ be a Banach space and $B^*$ its dual space (the space of bounded linear functionals), i.e. $\ell \in B^*$ is linear and $|\langle\ell, u\rangle| \leq C\|u\|$ for any $u \in B$. Let $\{u_k\} \subset B$. We say that $u_k \rightharpoonup u$ weakly if for all $\ell \in B^*$,
$$\langle\ell, u_k\rangle \to \langle\ell, u\rangle.$$
It is easy to see that if $u_k \to u$ in the usual sense, then $u_k \rightharpoonup u$ weakly: for $\ell \in B^*$ we have $|\langle\ell, u_k - u\rangle| \leq C\|u_k - u\| \to 0$. The converse is not true. For an easy example, let $\{u_k\}$ be an orthonormal basis of an infinite dimensional Hilbert space. Then $\|u_k - u_l\| = \sqrt{2}$ for $k \neq l$, so we certainly do not have convergence. On the other hand, from Parseval's formula,
$$\sum_k |\langle\ell, u_k\rangle|^2 = \|\ell\|^2 < \infty,$$
which implies $\langle\ell, u_k\rangle \to 0$ for every $\ell$, i.e. $u_k \rightharpoonup 0$ but $u_k \not\to 0$. Now we go to a result from analysis:
Theorem 2.9 (Banach-Alaoglu). Let $B$ be a reflexive separable Banach space. Then any bounded sequence $\{u_k\} \subset B$ has a subsequence $\{u_{k_l}\}$ such that $u_{k_l} \rightharpoonup u$ weakly.
We will apply this to our problem. We have that I(uk ) → inf I(u) and kuk k2 + kDuk2 ≤ C. By passing
through our subsequence, we have a u∞ ∈ W01,2 (Ω) such that uk * u∞ and Duk * Du∞ . Now the
question that we have to answer is if u∞ is the minimum that we seek after. However, we can’t say that
uk * u∞ , Duk * Du∞ implies I(uk ) → I(u∞ ). The problem with this is that I(u) is not continuous with
respect to weak convergence. However, it is lower semi-continuous! i.e., uk * u weakly in L2 implies that
kuk2 ≤ lim inf k→∞ kuk k2 . Here is the proof of this:
Proof.
$$\|u\|_2^2 = \int u\cdot u = \lim_{k\to\infty}\int u\, u_k \leq \liminf_{k\to\infty}\|u\|_2\|u_k\|_2,$$
and dividing by $\|u\|_2$ gives the claim. In particular,
$$I(u_\infty) = \frac{1}{2}\|Du_\infty\|_2^2 + \langle f, u_\infty\rangle \leq \liminf_{k\to\infty}\left(\frac{1}{2}\|Du_k\|_2^2 + \langle f, u_k\rangle\right) = \liminf_{k\to\infty} I(u_k).$$
Proof of Lemma 2.7. Our main tool is Hölder's inequality. Choose $p$ and $p^*$ with $1/p + 1/p^* = 1$. Then
$$|u(x)| \leq \int |K(x,y)|^{1/p^*}|K(x,y)|^{1/p}|v(y)|\,dy \leq \left(\int |K(x,y)|\,dy\right)^{1/p^*}\left(\int |K(x,y)||v(y)|^p\,dy\right)^{1/p},$$
so that
$$\int |u(x)|^p\,dx \leq \left(\sup_x\int |K(x,y)|\,dy\right)^{p/p^*}\int\int |K(x,y)||v(y)|^p\,dy\,dx.$$
Since $\int |v(y)|^p\,dy$ is constant in $x$, switching the order of integration and applying the same bounding trick to $\sup_y\int|K(x,y)|\,dx$ gives
$$\int |u(x)|^p\,dx \leq \left(\sup_x\int |K(x,y)|\,dy\right)^{p/p^*}\left(\sup_y\int |K(x,y)|\,dx\right)\int |v(y)|^p\,dy,$$
that is, $\|u\|_p^p \leq A^{\frac{p}{p^*}+1}\|v\|_p^p$, i.e. $\|u\|_p \leq A\|v\|_p$.
Notice that this lemma would be pointless if $A = \infty$, for then we would learn nothing from $\|u\|_p \leq \infty$. We are in luck because we can actually deduce that $A$ is finite in our case: the kernel $|x-y|^{-(n-1)}$ has a singularity of dimension strictly less than $n$, so it is integrable, and since everything takes place on a bounded set the supremum is finite. Hence for $u \in C_0^\infty(\Omega)$ we get $\|u\|_p \leq C\|Du\|_p$, and we are finally done with our result from earlier.
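To make the finiteness of $A$ concrete, here is the one-line computation I am adding (with $d := \operatorname{diam}\Omega$ and $\omega_n$ the volume of the unit ball, so $n\omega_n$ is the area of the unit sphere):
$$\sup_{x}\int_\Omega \frac{dy}{|x-y|^{n-1}} \leq \int_{|z| < d} \frac{dz}{|z|^{n-1}} = n\omega_n\int_0^d \frac{r^{n-1}}{r^{n-1}}\,dr = n\omega_n\, d < \infty,$$
and by the symmetry of this kernel the supremum over $y$ obeys the same bound, so $A \leq n\omega_n d$.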
Now we propose a question: how do we sharpen our estimates? A better way to visualize this question is by noticing that the kernel is integrable for any power of $|x-y|$ strictly less than $n$; we actually used the worst power in our previous proof. The answer to our question comes from the following inequalities:
Theorem 2.10 (Sobolev Inequality). Let $\Omega \subset\subset \mathbb{R}^n$ and $u \in C_0^\infty(\Omega)$. Then for any $p < n$ we have
$$\|u\|_{\frac{np}{n-p}} \leq C_{n,p}\|Du\|_p.$$
Theorem 2.11 (Trudinger Inequality). For $p = n$, there exist constants $K, C > 0$ such that
$$\int_\Omega \exp\left(\frac{|u(x)|}{K\|Du\|_n}\right)^{\frac{n}{n-1}} \leq C.$$
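Before the proof, here is a quick scaling check I am adding, which explains why $\frac{np}{n-p}$ is the only exponent that could possibly work in Theorem 2.10: for $u_\lambda(x) := u(\lambda x)$ one computes
$$\|u_\lambda\|_q = \lambda^{-n/q}\|u\|_q, \qquad \|Du_\lambda\|_p = \lambda^{1-n/p}\|Du\|_p,$$
so an inequality $\|u\|_q \leq C\|Du\|_p$ with a constant independent of scale forces $-\frac{n}{q} = 1 - \frac{n}{p}$, i.e. $q = \frac{np}{n-p}$.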
prove this. The proof is going to follow the same theme from Lemma 2.7’s proof. From Hölder’s inequality
we have
ˆ
|u(x)| ≤ |K(x, y)|α |K(x, y)|1−α |v(y)|1−β |v(y)|β dy
where 1/a + 1/b + 1/c = 1. We need to choose our parameters wisely so that we have our desired estimates.
One obvious constraint to put is βb = p and (1 − β)c = p because we want |v(y)| on the RHS to have powers
of p. Raising everything to the power q and integrating gives us
ˆ ˆ ˆ ˆ q/a ˆ ! q/c βq
|u(x)|q dx ≤ |K(x, y)|αa dy |K(x, y)|(1−α)c |v(y)|p dy |v(y)|p dy dx
ˆ ˆ q/a ˆ q/c !
αa (1−α)c p
= |K(x, y)| dy |K(x, y)| |v(y)| dy kvkβq
p dx.
In order to make our calculations a little bit easier take q = c. Then we can do the following:
ˆ ˆ ˆ ˆ q/a !
|u(x)|q dx ≤ |K(x, y)|αa dy |K(x, y)|(1−α)c |v(y)|p dy kvkβq
p dx
ˆ q/a ˆ ˆ
≤ sup |K(x, y)|αa dy |K(x, y)|(1−α)c |v(y)|p dy kvkβq
p dx
x
ˆ q/a ˆ
≤ sup |K(x, y)|αa dy sup |K(x, y)|(1−α)c dx kvkp+βq
p
x y
Recall that so far 0 < α < 1 is arbitrary. Choose it so that αa = (1 − α)c. Now lets play around with these
parameters. Recall that we chose q = c and (1 − β)c = p =⇒ β = 1 − p/q. We can then plug this into
βb = p =⇒ b = p/β to get
p p pq
b= = = .
β 1 − pq q−p
Now that we have parameters b and c, we can plug this into 1/a + 1/b + 1/c = 1 to get the parameter a.
After skipping some steps we see that 1/a = 1 − 1/p. Finally recall that we have αa = (1 − α)c. plugging in
our value for a and c and solving for α gives us
q(p − 1) pq
α= , αa = .
p + pq − q p + pq − q
When applied to our gradient estimates, this means we need the integral
$$\int |K(x,y)|^{\alpha a}\,dy$$
to be finite. Following the explanation after the proof of Lemma 2.7, with $K(x,y) = |x-y|^{-(n-1)}$ this means we need
$$\int \left(\frac{1}{|x-y|^{n-1}}\right)^{\frac{pq}{p+pq-q}} dy$$
to be finite. Comparing the powers, this requires
$$(n-1)\,\frac{pq}{p + pq - q} < n.$$
After playing around with this inequality we get
$$q < \frac{pn}{n-p}.$$
What remains to show for the full proof of Theorem 2.10 is that the coefficient in front of $\|Du\|_p$ depends only on $n$ and $p$. Additionally, setting $K(x,y) = |x-y|^{-(n-1)}$ in our generalization of Lemma 2.7, $A$ is only finite when
$$\sup_y \int \left(\frac{1}{|x-y|^{n-1}}\right)^{\frac{pq}{pq+p-q}} dx < \infty.$$
One sees that this kernel is integrable $\iff (n-1)\frac{pq}{pq+p-q} < n \iff \frac{1}{p} - \frac{1}{q} < \frac{1}{n}$. We now prove the following general lemma, where $u$ no longer has compact support:
Lemma 2.13. Assume $\frac{1}{p} - \frac{1}{q} < \frac{1}{n}$. If $u$ satisfies
$$\int_\Omega |u(x) - u(y)|\,dy \leq \frac{(\operatorname{diam}\Omega)^n}{n}\int_\Omega \frac{|Du(y)|}{|x-y|^{n-1}}\,dy,$$
then
$$\|u - \bar u_\Omega\|_q \leq c_n\left(\frac{1}{\frac{1}{n}+\frac{1}{q}-\frac{1}{p}}\right)^{1+\frac{1}{q}-\frac{1}{p}}\frac{(\operatorname{diam}\Omega)^n}{|\Omega|^{1-\frac{1}{n}+\frac{1}{p}}}\,\|Du\|_p.$$
Proof. It suffices to show, after dropping some constants,
$$\left\{\sup_x\int\left(\frac{1}{|x-y|^{n-1}}\right)^{\frac{pq}{pq+p-q}}dx\right\}^{1+\frac{1}{q}-\frac{1}{p}} \leq \left(\frac{1}{\frac{1}{n}+\frac{1}{q}-\frac{1}{p}}\right)^{1+\frac{1}{q}-\frac{1}{p}}|\Omega|^{\frac{1}{n}-\frac{1}{p}+\frac{1}{q}}. \tag{1}$$
Proof. Consider first $p = 1$ and $u \geq 0$. Notice that we can write $u(x)$ in the following way:
$$u(x) = \int_0^\infty \chi_{\{u>t\}}(x)\,dt.$$
But the quantity inside this inequality is
$$\|\chi_{\{u>t\}}\|_{\frac{n}{n-1}} = \left(\int_\Omega \chi_{\{u>t\}}^{\frac{n}{n-1}}\,dx\right)^{\frac{n-1}{n}} = \left(\int_{\{u>t\}} dx\right)^{\frac{n-1}{n}} = \left(\operatorname{Vol}\{u>t\}\right)^{\frac{n-1}{n}}.$$
Notice that this has the dimensions of a surface area: $\operatorname{Vol}$ has dimension $n$, taking the $n$th root leaves one spatial dimension, and raising to the power $n-1$ gives dimension $n-1$. We can then use the following isoperimetric inequality:
$$\left(\operatorname{Vol}\{u>t\}\right)^{\frac{n-1}{n}} \leq C_S\,\operatorname{Area}(\partial\{u>t\}).$$
This now gives the inequality
$$\|u\|_{\frac{n}{n-1}} \leq C_S\int_0^\infty \operatorname{Area}(\partial\{u>t\})\,dt.$$
Now we apply iterated integrals using the coarea formula: for a real-valued function $u$ that is not constant,
$$dx = \frac{1}{|Du|}\,d\sigma_t\,dt \tag{2}$$
is the coarea formula. What we are doing is integrating over the level set $\{u=t\}$ with the measure $d\sigma_t$ and then integrating with respect to $dt$. Applying this gives
$$\|u\|_{\frac{n}{n-1}} \leq C_S\int_0^\infty \operatorname{Area}(\partial\{u>t\})\,dt = C_S\int_0^\infty\int_{\{u=t\}} d\sigma_t\,dt = C_S\int_\Omega |Du|\,dx.$$
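As a sanity check on the coarea formula (2), here is a quick worked example I am adding: take $u(x) = |x|$ on a ball $B_R \subset \mathbb{R}^n$, so $|Du| \equiv 1$ and the level sets are spheres. Then
$$\int_{B_R}|Du|\,dx = |B_R| = \omega_n R^n, \qquad \int_0^R\int_{\{|x|=t\}} d\sigma_t\,dt = \int_0^R n\omega_n t^{n-1}\,dt = \omega_n R^n,$$
so the two sides of the coarea identity agree, as they should.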
and drop terms involving $\Omega$ because in the end they are just constants. We expand the exponential as a power series to get
$$\int_\Omega \exp\left(\frac{|u(x)-\bar u_\Omega|}{K_n\|Du\|_n}\right)^{\frac{n}{n-1}} dx = \sum_{k=0}^\infty \frac{1}{k!}\int_\Omega \left(\frac{|u(x)-\bar u_\Omega|}{K_n\|Du\|_n}\right)^{\frac{kn}{n-1}} dx = \sum_{k=0}^\infty \frac{1}{k!}\left(\frac{\|u-\bar u_\Omega\|_{\frac{kn}{n-1}}}{K_n\|Du\|_n}\right)^{\frac{kn}{n-1}}. \tag{4}$$
Now we go back to (3) and plug in $p = n$ to get the following inequality:
$$\frac{\|u-\bar u_\Omega\|_q}{\|Du\|_n} \leq c_n\, q^{1+\frac{1}{q}-\frac{1}{n}} \leq c_n\, q^{1-\frac{1}{n}},$$
since $q^{1/q}$ is bounded. This was verified on paper by playing little tricks. In any case, we plug this into (4) with $q = \frac{kn}{n-1}$:
$$\int_\Omega \exp\left(\frac{|u(x)-\bar u_\Omega|}{K_n\|Du\|_n}\right)^{\frac{n}{n-1}} dx \leq \sum_{k=0}^\infty \frac{1}{k!}\left(\frac{c_n\left(\frac{kn}{n-1}\right)^{1-\frac{1}{n}}}{K_n}\right)^{\frac{kn}{n-1}} = \sum_{k=0}^\infty \frac{k^k}{k!}\left(\frac{c_n^{\frac{n}{n-1}}\,n}{K_n^{\frac{n}{n-1}}(n-1)}\right)^{k},$$
and since $k^k/k! \leq e^k$, this series converges once $K_n$ is chosen large enough.
The next result is Morrey's inequality: for $u \in C_0^\infty(\Omega)$ and $p > n$, with $\alpha = 1 - \frac{n}{p}$,
$$\|u\|_{C^\alpha} \leq C\|Du\|_p.$$
Proof. Recall that
$$\|u\|_{C^\alpha} = \|u\|_\infty + \sup_{x\neq y}\frac{|u(x)-u(y)|}{|x-y|^\alpha},$$
where $0 < \alpha < 1$. It then suffices to show that each term is bounded by $\|Du\|_p$, i.e. $\|u\|_\infty \leq C\|Du\|_p$ and $[u]_{C^\alpha} \leq C\|Du\|_p$. Clearly the first part follows from (5) with $q = \infty$. Now we prove the second part. Fix $x \neq y$, let $\delta = |x-y|$, and define $\tilde\Omega := B_\delta(x)\cap B_\delta(y)$, which is clearly convex. Then we have the following inequalities:
$$|u(x)-u(y)| \leq |u(x)-\bar u_{\tilde\Omega}| + |\bar u_{\tilde\Omega}-u(y)| \leq \frac{(\operatorname{diam}\tilde\Omega)^n}{|\tilde\Omega|}\left(\int_{\tilde\Omega}\frac{|Du(z)|}{|x-z|^{n-1}}\,dz + \int_{\tilde\Omega}\frac{|Du(z)|}{|y-z|^{n-1}}\,dz\right).$$
Now notice that $\operatorname{diam}\tilde\Omega \leq 2\delta$ and $|\tilde\Omega| \leq |B_\delta(x)|$. Finally, since all terms inside the integrals are positive, we can write
$$|u(x)-u(y)| \leq C\left(\int_{B_\delta(x)}\frac{|Du(z)|}{|x-z|^{n-1}}\,dz + \int_{B_\delta(y)}\frac{|Du(z)|}{|y-z|^{n-1}}\,dz\right).$$
At this point we don't have to reinvent the wheel: our function satisfies the hypotheses of (5), so we can write
$$|u(x)-u(y)| \leq C\|Du\|_p\,\frac{\operatorname{diam}(B_\delta(x))^n}{|B_\delta(x)|^{1-\frac{1}{n}+\frac{1}{p}}} \leq C\|Du\|_p\,(2\delta)^n(\omega_n\delta^n)^{\frac{1}{n}-\frac{1}{p}-1} \leq C\|Du\|_p\,\delta^{n}\,\delta^{1-\frac{n}{p}-n} = C\|Du\|_p\,\delta^{\alpha},$$
and therefore
$$\|u\|_\infty + \sup_{x\neq y}\frac{|u(x)-u(y)|}{|x-y|^\alpha} = \|u\|_{C^\alpha} \leq C\|Du\|_p.$$
As a consequence we have then that if u ∈ W0k,p (Ω) then u ∈ C 0 (Ω) if 1/p < k/n. The way to see this
is the following: Let {uj } ⊂ C0∞ (Ω) and uj → u with respect to k · kW k,p (Ω) . This of course exists because
W0k,p (Ω) = {C0∞ (Ω)|k · kW k,p (Ω) < ∞}. Then we apply the Sobolev embedding theorem to uj − um . Then
we have that kuj − um kC 0 ≤ Ckuj − um kW k,p (Ω) → 0. This implies that uj converges uniformly and so
lim uj is continuous. In fact, uj → u uniformly in the usual sense because uniform convergence implies that
uj → u in Lp . However, since W k,p (Ω) convergence is stronger, we have uj → u in the usual sense.
Let's summarize what we've done. Let $\Omega \subset\subset \mathbb{R}^n$ and $f \in L^2(\Omega)$. Define
$$I(u) = \int_\Omega \frac{1}{2}|Du|^2 + fu\,dx$$
for $u \in W_0^{1,2}(\Omega)$. We showed that there exists $u_\infty \in W_0^{1,2}(\Omega)$ with $I(u_\infty) = \inf I(u)$. We will now observe that the minimizer of a functional $I(u)$ is a generalized solution of the Euler-Lagrange equation for $I(u)$. The guiding principle is that if $x_\infty$ minimizes $f$ over $\Omega$, then $f'(x_\infty) = 0$.
Let $\varphi \in C_0^\infty(\Omega)$ and consider, for $|t| \ll 1$, the function $\mathbb{R}\ni t\mapsto A(t) = I(u_\infty + t\varphi)$. Then $t = 0$ is a minimum for $A(t)$, so
$$0 = \frac{dA}{dt}\Big|_{t=0} = \frac{d}{dt}\Big|_{t=0}\int_\Omega \frac{1}{2}\left(|Du_\infty|^2 + 2t\,Du_\infty\cdot D\varphi + t^2|D\varphi|^2\right) + f\cdot(u_\infty+t\varphi)\,dx = \int_\Omega\Big(\sum_{j=1}^n \frac{\partial u_\infty}{\partial x_j}\frac{\partial\varphi}{\partial x_j} + f\varphi\Big)dx.$$
If we assume temporarily that $u_\infty \in C^2(\Omega)$, then we are allowed to integrate by parts and get
$$0 = \int_\Omega\Big(-\sum_j\frac{\partial^2 u_\infty}{\partial x_j^2} + f\Big)\varphi\,dx.$$
Since this is true for any $\varphi$, we have $-\Delta u_\infty + f = 0$, which is exactly the equation $\Delta u = f$ that we wanted to solve from the first page.
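To make the Euler-Lagrange computation concrete, here is the same calculation in one dimension (a toy example of my own, not part of the argument above): on $\Omega = (0,1)$ with $u(0) = u(1) = 0$,
$$I(u) = \int_0^1 \frac{1}{2}(u')^2 + fu\,dx, \qquad 0 = \frac{d}{dt}\Big|_{t=0} I(u+t\varphi) = \int_0^1 u'\varphi' + f\varphi\,dx = \int_0^1(-u''+f)\varphi\,dx,$$
so a smooth minimizer satisfies $u'' = f$ with zero boundary values, the one-dimensional case of our problem.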
by definition, and so $u_j$ is Cauchy. But since $L^p$ is complete we can then formally define the boundary values as the corresponding limit, where the limit is taken in the $L^p$ norm. Recall that we still had $\Omega$ being some simple semi-circle in $\mathbb{R}^{n+1}_+$.
Let's extend the notion of boundary values to more general $\Omega$ with $\partial\Omega \in C^\infty$. Let $\tilde\Omega$ be a small subset of $\Omega$ such that $\tilde\Omega\cap\partial\Omega \neq \emptyset$, and let $v \in C_0^\infty(\tilde\Omega)$. We will define (using norms) $v|_{\partial\tilde\Omega}$. Let $y \in \tilde\Omega$. Since the boundary is smooth we can map $\tilde\Omega$ onto an upper half ball as before, $y \mapsto x$, and of course we can go backwards. So we can set $v(y) = v(y(x)) =: u(x)$, and via our definitions $v|_{\partial\tilde\Omega} = u(x,0)$; from our previous observations we have
$$\|u(\cdot,0)\|_{L^p(\mathbb{R}^n)} \leq C\left\|\frac{\partial u}{\partial x_{n+1}}\right\|_{L^p(\mathbb{R}^{n+1})} = C\left\|\frac{\partial}{\partial x_{n+1}}v(y(x))\right\|_{L^p(\mathbb{R}^{n+1})} \leq C\sum_{l=1}^{n+1}\left\|\frac{\partial v}{\partial y^l}\right\|_{L^p(\mathbb{R}^{n+1})} \leq C\|v\|_{W^{1,p}(\mathbb{R}^{n+1})}. \tag{7}$$
Notice that in (7) we did something fishy that I will now justify: we changed variables from integration in the $x$ coordinates to integration in the $y$ coordinates, and the problem is that the integrals might not be bounded in the correct way. Recall that
$$dy = \left|\det\frac{\partial y^l}{\partial x^j}\right|dx, \qquad c \leq \left|\det\frac{\partial y^l}{\partial x^j}\right| \leq C,$$
and so for a general function $f$,
$$\|f\|_{L^p_y}^p = \int |f(y)|^p\,dy = \int |(f\circ y)(x)|^p\left|\det\frac{\partial y^l}{\partial x^j}\right|dx, \qquad\text{hence}\qquad c\,\|f\circ y\|_{L^p_x}^p \leq \|f\|_{L^p_y}^p \leq C\,\|f\circ y\|_{L^p_x}^p,$$
and (7) is valid.
Claim. The following inequality holds for any $v$ (not just those supported in a boundary neighborhood):
$$\left\|v|_{\partial\Omega}\right\|_{L^p(\partial\Omega)} \leq C\|v\|_{W^{1,p}(\Omega)}.$$
Proof. Note that $c\,dx \leq d\sigma \leq C\,dx$, so we can apply the same argument as above (norms are equivalent under the change of variables). The remaining problem is to deal with the full $\Omega$. Since $\overline\Omega$ is compact we can cover it, $\overline\Omega = \cup_{\alpha=1}^N\Omega_\alpha$, and pick a partition of unity $\chi_\alpha \in C_0^\infty(\Omega_\alpha)$ with $\sum\chi_\alpha = 1$. Then
$$\left\|v|_{\partial\Omega}\right\|_{L^p(\partial\Omega)} \leq \sum_{\alpha=1}^N\|\chi_\alpha v\|_{L^p(\partial\Omega)} \leq \sum_{\alpha=1}^N\|\chi_\alpha v\|_{W^{1,p}(\Omega)} \leq \sum_{\alpha=1}^N\left(\|\chi_\alpha v\|_{L^p(\Omega)} + \|D(\chi_\alpha v)\|_{L^p(\Omega)}\right) \leq C\|v\|_{W^{1,p}(\Omega)},$$
where we bounded $\|\chi_\alpha v\|_p \leq \|v\|_p$ and expanded the second term using the Leibniz rule.
With this done we can extend our previous work to more general boundary conditions. Say we want to solve $\Delta u = f$ with $u|_{\partial\Omega} = g$. If $g \in L^p(\partial\Omega)$ is the trace of some $G \in W^{1,p}(\Omega)$, choose such a $G$, consider $v = u - G$, and solve the problem $\Delta v = f - \Delta G$ with $v|_{\partial\Omega} = 0$.
We will begin by analyzing a non-linear PDE. Let $M$ be a compact Riemannian manifold of dimension 2, with Riemannian metric
$$ds^2 = \sum_{i,j=1}^2 g_{ij}\,dx^i\,dx^j.$$
Let $v \in C^\infty(M)$ and let $R$ be a given negative constant. We want to solve
$$\Delta u + \lambda e^{u+v} - R = 0. \tag{8}$$
Notice that the exponential of $u$ makes this a very non-linear equation. Let $g := \det g_{ij}$. We define the Laplacian as follows:
$$\Delta u = \frac{1}{\sqrt g}\sum_{j,k=1}^2 \partial_k\left(\sqrt g\,g^{jk}\,\partial_j u\right) = \sum_{j,k=1}^2 g^{jk}\,\partial_k\partial_j u + \text{first order terms}.$$
Example 2.17. In Euclidean space we have $ds^2 = \sum(dx^i)^2$, so $g_{ij} = g^{ij} = \delta_{ij}$ and
$$\Delta u = \sum_j\frac{\partial^2 u}{\partial x_j^2} + \text{first order terms}.$$
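As another quick illustration of the formula (an example I am adding, not in the original notes): in polar coordinates on $\mathbb{R}^2$ the metric is $ds^2 = dr^2 + r^2d\theta^2$, so $g_{ij} = \operatorname{diag}(1, r^2)$, $\sqrt g = r$, $g^{ij} = \operatorname{diag}(1, r^{-2})$, and the formula gives the familiar
$$\Delta u = \frac{1}{r}\,\partial_r\left(r\,\partial_r u\right) + \frac{1}{r^2}\,\partial_\theta^2 u = \partial_r^2 u + \frac{1}{r^2}\partial_\theta^2 u + \frac{1}{r}\partial_r u,$$
where the last term is exactly the kind of first order term referred to above.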
for any $u \in W_0^{1,2}(\Omega)$ and $f \in L^2(\Omega)$. The way we proved this was by showing that
$$\int_\Omega |u|^2\,dx \leq C\int_\Omega |Du|^2\,dx,$$
but can we do this on a general compact manifold? No! In general we have the following Poincaré inequality:
$$\|u-\bar u\|_{L^2(M)}^2 \leq C\|Du\|_{L^2(M)}^2, \qquad \bar u = \frac{1}{V}\int_M u\,\sqrt g\,dx.$$
Now we can write our functional in the following way:
$$I(u) = \frac{1}{2}\int_M |Du|^2\sqrt g\,dx + \underbrace{R\int_M (u-\bar u)\sqrt g\,dx}_{\text{bounded as before}} + \underbrace{R\int_M \bar u\,\sqrt g\,dx}_{\text{can blow up}}.$$
Lemma 2.18 (Rellich's Lemma). Let $M$ be a compact manifold and $\{u_j\} \subset W^{1,p}(M)$ with $\|u_j\|_{W^{1,p}(M)} \leq C$, with $C$ independent of $j$. Then there exist $u_\infty \in W^{1,p}(M)$ and a subsequence $\{u_{j_k}\}$ such that $u_{j_k} \to u_\infty$ in $L^p(M)$.
Note. This also holds for $W^{1,p}(\mathbb{R}^n)$ if $\operatorname{supp} u_j \subset K \subset\subset \mathbb{R}^n$ for all $j$.
Now recall a theorem from measure theory:
Theorem 2.19. If $u_j \to u_\infty$ in $L^p$ for $1 \leq p < \infty$, then there is a subsequence $u_{j_k} \to u_\infty$ pointwise a.e.
All together, taking a subsequence of our subsequence, we find a sequence with $\|Du_j\| \leq C$ and $u_j \to u_\infty$ almost everywhere. We will see that in two dimensions these properties imply that $e^{u_j+v} \to e^{u_\infty+v}$ in $L^1$. Since these quantities are always positive, $L^1$ convergence is just convergence of the integrals, which is what we need. To see this,
$$e^{u_j+v} - e^{u_\infty+v} = -\int_0^1 \frac{d}{dt}\left[e^{tu_\infty+(1-t)u_j+v}\right]dt = -\int_0^1 (u_\infty-u_j)\,e^{tu_\infty+(1-t)u_j+v}\,dt.$$
Taking the $L^1$ norm gives
$$\int_M\left|e^{u_j+v}-e^{u_\infty+v}\right|\sqrt g\,dx \leq \int_0^1\left(\int_M|u_\infty-u_j|^2\sqrt g\,dx\right)^{1/2}\left(\int_M e^{2(tu_\infty+(1-t)u_j+v)}\sqrt g\,dx\right)^{1/2}dt = \int_0^1\|u_\infty-u_j\|_{L^2}\left(\int_M e^{2(tu_\infty+(1-t)u_j+v)}\sqrt g\,dx\right)^{1/2}dt,$$
and now note that we need the second integral to be uniformly bounded in order for the right-hand side of the inequality to go to zero.
We claim that $\int e^{w_j}\sqrt g\,dx \leq C$ (independent of $j$) provided $\|w_j\|_{L^2} \leq C$ and $\|Dw_j\|_{L^2} \leq C$. In our case $w_j = 2(tu_\infty+(1-t)u_j+v)$ satisfies these conditions, because $\|Du_j\|_{L^2} \leq C$, $\|Du_\infty\|_{L^2} \leq \liminf\|Du_j\|_{L^2} \leq C$, and similarly the $L^2$ norms of $u_j$ and $u_\infty$ are bounded. Now let's see why this claim is true.
Recall the Trudinger inequality: $u \in C_0^\infty(B)$ implies
$$\int \exp\left(\frac{|u(x)|}{K\|Du\|_{L^n}}\right)^{\frac{n}{n-1}} \leq C,$$
so for $n = 2$ we have
$$\int \exp\left(\frac{|u(x)|}{K\|Du\|_{L^2}}\right)^{2} \leq C.$$
Now note that we can write
$$|u(x)| = \frac{|u(x)|}{K\|Du\|_{L^2}}\cdot K\|Du\|_{L^2} \leq \frac{1}{2}\left(\frac{|u(x)|}{K\|Du\|_{L^2}}\right)^2 + \frac{1}{2}\left(K\|Du\|_{L^2}\right)^2.$$
Since the exponential function is increasing we can write
$$\exp|u(x)| \leq \exp\left[\frac{1}{2}\left(\frac{|u(x)|}{K\|Du\|_{L^2}}\right)^2 + \frac{1}{2}\left(K\|Du\|_{L^2}\right)^2\right],$$
so
$$\int\exp|u(x)| \leq e^{\frac{1}{2}(K\|Du\|_{L^2})^2}\int\exp\left[\frac{1}{2}\left(\frac{|u(x)|}{K\|Du\|_{L^2}}\right)^2\right] \leq e^{\frac{1}{2}(K\|Du\|_{L^2})^2}\,C.$$
And this concludes the proof. A great exercise is the following: let $M$ be a compact Riemannian $n$-manifold and show that
$$\int e^w\sqrt g\,dx \leq C\exp\left(C\|Dw\|_{L^n}^n + \|w\|_{L^n}^n\right).$$
As a summary, we have shown that for a minimizing sequence satisfying $I(u_j) \to \inf I(u)$ and $\frac{1}{V}\int e^{u_j+v}\sqrt g\,dx = 1$, we get $u_j \to u_\infty$ (as explained above) with $\frac{1}{V}\int e^{u_\infty+v}\sqrt g\,dx = 1$, so
$$\inf I(u) \leq I(u_\infty) \leq \inf I(u).$$
Now we claim that $u_\infty$ satisfies our partial differential equation in the Euler-Lagrange sense. Fix $\varphi \in C^\infty(M)$ and consider $u_\infty + t\varphi + c_t$. We add the constant $c_t$ so that this function still satisfies the constraint. To see what $c_t$ has to be, note
$$1 = \frac{1}{V}\int_M e^{u_\infty+t\varphi+c_t+v}\sqrt g\,dx \implies c_t = -\log\left(\frac{1}{V}\int_M e^{u_\infty+t\varphi+v}\sqrt g\,dx\right).$$
Thus $I(u_\infty+t\varphi+c_t) \geq I(u_\infty)$ for any $t$, and we leave it as an exercise to show in detail that
$$\frac{d}{dt}\Big|_{t=0} I(u_\infty+t\varphi+c_t) = 0.$$
so we see that $\lambda$ has to equal $R$. The reason the first term vanishes is because we are integrating an exact form over a compact manifold. So our PDE is
$$\Delta u + Re^{u+v} - R = 0. \tag{12}$$
Claim. If $u$ satisfies (13) in the generalized sense, then $u \in C^\infty$ and actually satisfies (12) in the classical sense.
Proof. Indeed, let $f := -Re^{u+v} + R$. By Trudinger's inequality (which tells us that this exponential is controlled by the $L^2$ norm of the weak derivative) we have $f \in L^2 = W^{0,2}$, so by regularity we get $u \in W^{2,2}$. Now recall that Morrey's inequality tells us that if $\frac{1}{p} < \frac{k}{n}$ then $W^{k,p}$ embeds into a Hölder space; since $p = n = 2$ we get $u \in C^\alpha$. But then $\Delta u = f$ with $f \in C^\alpha$, and we can apply regularity to get $u \in C^{2,\alpha}$. Iterating, we of course find $u \in C^\infty$.
We will prove these regularity statements by the method of continuity and a priori estimates. Consider
$$\Delta u + Re^{u+v} - R = 0$$
on $(M, g_{ij}(x))$, where $v$ is a smooth function. Note that $\Delta u + Re^u - R = 0$ admits the solution $u \equiv 0$, so the difficulty comes from the $v$ term. Let $t \in [0,1]$ and introduce the family of equations
$$\Delta u + Re^{u+tv} - R = 0, \tag{14}$$
and consider the set $I = \{t \in [0,1] : (14)\ \text{admits a solution } u_t\}$. Note that $I \neq \emptyset$ because $0 \in I$. We want to show $I = [0,1]$, so we need to show that $I$ is open and closed. Let's discuss this very briefly (to be made precise later).
Say we want to show $I$ is open. We will do this by an analogue of the implicit function theorem. Recall that it says: given $f(x_0,y_0) = 0$ and $\frac{\partial f}{\partial y}(x_0,y_0) \neq 0$, there exists $\epsilon > 0$ such that for $|x-x_0| < \epsilon$ there is a unique $y$ with $f(x,y) = 0$. Now let $f(t,u) = \Delta u + Re^{u+tv} - R$. We want to solve $f(t,u) = 0$ near a point where $f(t_0,u_0) = 0$. So our goal is an implicit function theorem for Banach spaces, and we need to check that $\frac{\partial f}{\partial u}(t_0,u_0) \neq 0$ in a sense we will define later.
Now let's briefly discuss how to show that $I$ is closed. Take $t_j \in I$, so that (14) admits solutions $u_j$, and assume $t_j \to T$. Closedness of $I$ amounts to showing that (14) admits a solution for $T$; it will suffice to have a subsequence of the $u_j$ converge in $C^2$.
Proof. Let's start by showing that $I$ is closed. Let $t_j \to T$ with $t_j \in I$ and let $u_j$ be the corresponding solutions of (14). Observe that it suffices to show that there is a $C$ independent of $j$ with $\|u_j\|_{C^3} \leq C$, where in general
$$\|u\|_{C^3(\Omega)} = \sum_{|\alpha|\leq 3}\|\partial^\alpha u\|_{C^0(\Omega)}.$$
The reason this helps is that it implies that for all $|\beta| \leq 2$, $\{\partial^\beta u_j\}$ is an equicontinuous family. The Arzelà-Ascoli theorem then tells us that, passing to a subsequence, $\partial^\beta u_j$ converges uniformly to $\partial^\beta u_T$ with $u_T \in C^2$. However we don't yet have $T \in I$, because we need $u_T$ to be smooth and so far it is only $C^2$. This is fixed by our regularity observations.
Before we can apply our regularity results we must check that our equation is uniformly elliptic. Recall that a second order PDE is uniformly elliptic if the leading coefficients satisfy
$$\lambda|\xi|^2 \leq \sum_{|\alpha|=2} a^\alpha\xi^\alpha \leq \Lambda|\xi|^2.$$
The symbol (the middle term of the ellipticity requirement) of our Laplacian is
$$\sigma_\Delta(x,\xi) = g^{ij}\xi_i\xi_j,$$
and since $g$ is positive definite we certainly have ellipticity. We can therefore apply our regularity theorems by viewing $\Delta u = f \in C^2 \subset C^{1,\alpha}$.
So now we have to prove the a priori estimate $\|u_j\|_{C^3} \leq C$. We will use the maximum principle. Since there are so many different formulations of maximum principles, it is a good idea to simply examine what happens near a maximum. Let $u \in C^\infty$ satisfy $\Delta u + Re^{tw+u} - R = 0$ (writing $u = u_t$); I claim that $\|u\|_{C^0} \leq C$ with $C$ independent of $t$. Let $x_0$ be a point where $u$ attains its maximum. Then $\Delta u(x_0) \leq 0$, and since $R < 0$ we have at $x_0$:
$$0 \leq -\Delta u = Re^{tw+u} - R \implies R \leq Re^{tw+u} \implies -|R| \leq -|R|e^{tw+u} \implies 1 \geq e^{tw+u} \implies 0 \geq tw+u \implies u \leq -tw \leq \|w\|_{C^0}.$$
Since $x_0$ is a maximum, for all $x$ we have $u(x) \leq u(x_0) \leq \|w\|_{C^0}$, and applying the same argument at the minimum gives $\|u\|_{C^0} \leq \|w\|_{C^0}$. In order to get higher derivatives we write $\Delta u = f \in C^0$; then $f \in L^p$ for all $1 \leq p < \infty$, so $u \in W^{2,p} \subset C^\alpha$ for some $\alpha$ when $n < kp$.
Now let's show that $I$ is open. Let $t_0 \in I$, i.e. there exists a smooth solution $u_0$ of (14) at $t_0$. Let $F(u,t) = \Delta u + Re^{tw+u} - R$. We want to show that there exists $\delta > 0$ so that $|t-t_0| < \delta$ implies there exists $u_t \in C^\infty$ with $F(u_t,t) = 0$. The main tool will be the Implicit Function Theorem for Banach spaces, which goes as follows.
Let $B_1$ and $B_2$ be Banach spaces, let $F \in C^1$, and consider $B_1\times\mathbb{R}\supset\Omega\ni(u,t)\mapsto F(u,t)\in B_2$. Assume that $F(u_0,t_0) = 0$, and let $\frac{\partial F}{\partial u}(u_0,t_0)$ be the derivative of $F$ at $(u_0,t_0)$, viewed as a linear operator $B_1\to B_2$. Note that if
$$\|h\|_{B_1} \leq C\left\|\frac{\partial F}{\partial u}(u_0,t_0)h\right\|_{B_2} \quad \forall h\in B_1 \tag{15}$$
then $\frac{\partial F}{\partial u}(u_0,t_0)$ is injective; assume also that it is surjective. Under these hypotheses, the Implicit Function Theorem for Banach spaces says that there exist $\delta > 0$ and a neighborhood $V$ of $u_0$ such that for $|t-t_0| < \delta$ there is a unique $u_t\in V$ with $F(u_t,t) = 0$.
Before we can even apply this theorem, we need to make sense of derivatives in Banach spaces. Let $B_1\ni u\mapsto F(u)\in B_2$. Then $F$ is differentiable at $u_0$ if there exists a bounded linear operator $L: B_1\to B_2$ satisfying
$$F(t,u+h) = F(t,u) + Lh + E(t,u,h), \qquad \lim_{h\to 0}\frac{\|E(t,u,h)\|_{B_2}}{\|h\|_{B_1}} = 0.$$
In order to apply our theorem we need to specify our Banach spaces. We will take $(t,u)\in\mathbb{R}\times C^{2,\alpha}\mapsto F(t,u)\in C^{0,\alpha}$. We then need to check the assumptions of the implicit function theorem. Let $(t_0,u_0)\in\mathbb{R}\times C^{2,\alpha}$ satisfy the conditions of the IFT. Let's determine $L$ by considering the expression $F(t_0,u_0+h)$. The main tool for this is the integral form of Taylor's remainder theorem, which starts as follows:
$$h\int_0^1(1-t)\frac{d}{dt}\left[f'(u+th)\right]dt = \left[h(1-t)f'(u+th)\right]_0^1 + h\int_0^1 f'(u+th)\,dt = \int_0^1\frac{d}{dt}f(u+th)\,dt - hf'(u) = f(u+h) - f(u) - hf'(u).$$
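Although the page ends here, the linearization this computation is heading toward can be written down explicitly; the following is my reconstruction of the intended operator (consistent with the bilinear form used for $I(h)$ below), not a quotation:
$$F(t_0,u_0+h) = F(t_0,u_0) + \underbrace{\left(\Delta h + Re^{t_0w+u_0}\,h\right)}_{=:Lh} + \underbrace{Re^{t_0w+u_0}\left(e^{h}-1-h\right)}_{=:E(h)},$$
and $\|E(h)\|_{C^{0,\alpha}} = O(\|h\|_{C^{2,\alpha}}^2)$, so $L$ plays the role of the Fréchet derivative $\frac{\partial F}{\partial u}(t_0,u_0)$.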
and since the coefficients are both strictly positive, it follows that $h(x_0) \leq 0$. Since $x_0$ is a maximum, $h(x) \leq h(x_0) \leq 0$ for all $x$. Applying the same process at the minimum gives $h \geq 0$, and so $h \equiv 0$. This implies that the kernel of $L$ is zero, so $L$ is injective.
To finally complete the proof we show that $L$ is onto, i.e. for all $f \in C^{0,\alpha}$ we want $h \in C^{2,\alpha}$ with $Lh = f$. We show this by variational methods again. Set
$$I(h) = \int\left(g^{ij}\partial_i h\,\partial_j h - Re^{t_0w+u_0}h^2 + fh\right)\sqrt g\,dx$$
for $h \in W^{1,2}(M)$. We leave it as an exercise to show that $I(h)$ attains its minimum at some $h_\infty$; one way is to show that
$$I(h) \geq \frac{1}{2}\|Dh\|_2^2 + \|h\|_2^2 - \|f\|_2^2$$
and use our earlier tricks. Assuming the exercise, we invoke the black box to make $h$ smooth. Recall that it says: if $f \in C^{k,\alpha}$ then
$$\|h\|_{C^{k+2,\alpha}} \leq C\left(\|Lh\|_{C^{k,\alpha}} + \|h\|_{C^{k,\alpha}}\right).$$
We now improve the black box by noting that if $\ker L = 0$, then
Now we will prove this little lemma, using compactness. If $\{u_l\}\subset C^{k,\alpha}$ with $\|u_l\|_{C^{k,\alpha}} \leq C$, $C$ independent of $l$, then (for $k' < k$ and any $\beta$, or for $k' = k$ and $0 < \beta < \alpha$) there exists a subsequence converging in $C^{k',\beta}$, by a routine application of the Arzelà-Ascoli theorem. Now assume (17) does not hold. Then for every $N$ there is $h_N\in C^{k+2,\alpha}$ with $\|h_N\|_{C^{k+2,\alpha}} > N\|Lh_N\|_{C^{k,\alpha}}$. Set
$$\tilde h_N = \frac{h_N}{\|h_N\|_{C^{k+2,\alpha}}},$$
so that $\|\tilde h_N\|_{C^{k+2,\alpha}} = 1$, which implies $\|L\tilde h_N\|_{C^{k,\alpha}} < \frac{1}{N}\to 0$. By the compactness above, passing to a subsequence, we may assume $\tilde h_N\to h_\infty$ in $C^{k,\alpha}$. Applying the black box gives
$$\|\tilde h_N - \tilde h_M\|_{C^{k+2,\alpha}} \leq C\left(\|L\tilde h_N - L\tilde h_M\|_{C^{k,\alpha}} + \|\tilde h_N - \tilde h_M\|_{C^{k,\alpha}}\right),$$
which implies that $\tilde h_N\to h_\infty$ in $C^{k+2,\alpha}$. Thus $L\tilde h_N\to Lh_\infty$ in $C^{k,\alpha}$, and so $h_\infty\in\ker L$. Since
3 Harnack Inequality for Divergence Equations
3.1 Regularity Estimates
Suppose that $u \in W^{1,2}(\Omega)$ solves $\partial_i(a^{ij}\partial_j u) + cu = f$ in the generalized sense, where $a^{ij}$ is uniformly elliptic, i.e. $0 < \lambda \leq a^{ij} \leq \Lambda$. The question we want to answer is: when is $u$ "regular", i.e. $u \in C^\alpha$, $W^{k,2}$, $C^\infty$, etc.?
Let's look at the simplest case, where $a^{ij}$ is constant and $c = f = 0$. In this case $u$ solves $a^{ij}\partial_i\partial_j u = 0$ in the generalized sense, meaning that for all $v \in W_0^{1,2}(\Omega)$,
$$\int a^{ij}\partial_i u\,\partial_j v = 0. \tag{17}$$
We will show that then $u \in C^\infty(\Omega)$, and for $|\alpha| = k$ and $0 < r < R$,
$$\int_{B_r(x_0)}|D^\alpha u|^2 \leq \frac{C_{\lambda,\Lambda,k}}{(R-r)^{2k}}\int_{B_R(x_0)}|u|^2. \tag{18}$$
Note that this inequality is very powerful: the derivatives are controlled by the function itself, whereas we usually have it the other way around.
Proof. We apply (17) with $v = \chi^2 u$, where $0 \leq \chi \leq 1$, $\chi \equiv 1$ on $B_r(x_0)$, $\chi \in C_0^1(B_R(x_0))$, and additionally
$$|D\chi| \leq \frac{2}{R-r}.$$
Applying (17) gives
$$\int\chi^2 a^{ij}\partial_i u\,\partial_j u = -2\int a^{ij}(\chi\partial_j\chi)\,u\,\partial_i u,$$
and note that ellipticity gives $|a^{ij}u_iv_j| \leq (a^{ij}u_iu_j)^{1/2}(a^{ij}v_iv_j)^{1/2}$. Taking absolute values gives
$$\int\chi^2 a^{ij}\partial_i u\,\partial_j u \leq 2\left(\int a^{ij}\partial_i u\,\partial_j u\,\chi^2\right)^{1/2}\left(\int a^{ij}\partial_i\chi\,\partial_j\chi\,|u|^2\right)^{1/2} \leq \frac{1}{2}\int a^{ij}\partial_i u\,\partial_j u\,\chi^2 + 4\int a^{ij}\partial_i\chi\,\partial_j\chi\,|u|^2,$$
so that
$$\frac{1}{2}\int a^{ij}\partial_i u\,\partial_j u\,\chi^2 \leq 4\int a^{ij}\partial_i\chi\,\partial_j\chi\,|u|^2,$$
which proves the inequality for $k = 1$. For $|\alpha| = k$ we proceed by induction. Assume $u \in C^\infty$. Then $D^\alpha u \in C^\infty$ and satisfies the same equation $a^{ij}\partial_i\partial_j(D^\alpha u) = 0$. Applying the $k=1$ case on $B_r \subset B_{(R+r)/2}$ and the inductive hypothesis on $B_{(R+r)/2} \subset B_R$ gives
$$\int_{B_r(x_0)}|D(D^\alpha u)|^2 \leq \frac{C}{\left(\frac{r+R}{2}-r\right)^2}\int_{B_{(R+r)/2}(x_0)}|D^\alpha u|^2 \leq \frac{C}{\left(\frac{R+r}{2}-r\right)^2\left(R-\frac{R+r}{2}\right)^{2k}}\int_{B_R(x_0)}|u|^2 = \frac{C}{(R-r)^{2(k+1)}}\int_{B_R(x_0)}|u|^2.$$
As one can expect, for the non-smooth case we use mollifiers. Take $\eta \in C_0^\infty(\{|x|<1\})$ with
$$\int_{\mathbb{R}^n}\eta = 1,$$
and define $\eta_\epsilon = \frac{1}{\epsilon^n}\eta\left(\frac{x}{\epsilon}\right)$. Then define
$$u_\epsilon(x) = \int u(x-y)\,\eta_\epsilon(y)\,dy = \int u(y)\,\eta_\epsilon(x-y)\,dy,$$
which is well defined for $\operatorname{dist}(x,\partial\Omega) > \epsilon$. By an exercise we leave to the reader, if $u \in L^p(\Omega)$ for $1 < p < \infty$, then for any $K\subset\subset\Omega$ and $\epsilon < \epsilon_K$ we have $u_\epsilon\to u$ in $L^p(K)$. Using this, and a single exercise showing that
$$\int a^{ij}\partial_i u_\epsilon\,\partial_j v = \int a^{ij}\partial_i u\,\partial_j v_\epsilon,$$
we can conclude that if $u$ satisfies the equation in the generalized sense then so does $u_\epsilon$. Thus $u_\epsilon\in C^\infty$ with $a^{ij}\partial_i\partial_j u_\epsilon = 0$, and we can apply our estimate (18) to find
$$\int_{B_r(x_0)}|D^\alpha u_\epsilon|^2 \leq \frac{C}{(R-r)^{2k}}\int_{B_R(x_0)}|u_\epsilon|^2.$$
Let's try to generalize this. We will do so with the following theorem.
Theorem 3.1. Let $u \in W^{1,2}(\Omega)$ be a weak solution to $\partial_i(a^{ij}\partial_j u) + cu = f$ in $\Omega\subset\mathbb{R}^n$. Assume that
i) $0 < \lambda \leq a^{ij} \leq \Lambda$;
ii) the $a^{ij}$ are continuous with modulus of continuity $\tau$, i.e. $|a^{ij}(x)-a^{ij}(y)| \leq \tau(|x-y|)$;
iii) $c \in L^n$, $f \in L^q$ where $\frac{n}{2} < q < n$.
Then for any $B_R(x)\subset\subset\Omega$, $u \in C^\alpha(B_R(x))$ with $\alpha = 2 - \frac{n}{q}$, $0 < \alpha < 1$, and
$$\|u\|_{C^\alpha(B)} \leq C_{n,\lambda,\Lambda,\tau,\|c\|_n}\left(\|f\|_{L^q(\Omega)} + \|u\|_{W^{1,2}(\Omega)}\right).$$
In order to prove this we will need a few lemmas. They are as follows.
Lemma 3.2. Assume $u \in W^{1,2}(B)$ and
$$\int_{B_r(x_0)}|u-\bar u_r(x_0)|^2\,dx \leq M^2 r^{n+2\alpha}$$
for all balls $B_r(x_0)$. Then $u$ agrees almost everywhere with a function $u^* \in C^\alpha$.
We now integrate over the ball of radius $r$; using the fact that the left-hand side is constant and the assumption of the lemma, we have
$$r^n|\bar u_r(x_0)-\bar u_R(x_0)|^2 \leq 2\left(\int_{B_r(x_0)}|\bar u_r - u(x)|^2 + \int_{B_R(x_0)}|u(x)-\bar u_R|^2\right) \leq CM^2\left(r^{n+2\alpha}+R^{n+2\alpha}\right),$$
so that
$$|\bar u_r(x_0)-\bar u_R(x_0)| \leq CM\left(1+\left(\frac{R}{r}\right)^{n/2}\right)R^\alpha. \tag{20}$$
Let $r = 2^{-l-1}L$, $R = 2^{-l}L$ and plug this into (20) to see that
$$|\bar u_{2^{-l-1}L}-\bar u_{2^{-l}L}| \leq CM\left(1+2^{n/2}\right)(2^{-l}L)^\alpha = CM(2^{-l}L)^\alpha, \tag{21}$$
and summing the resulting telescoping series, which is dominated by a geometric series, shows that $\{\bar u_{2^{-l}L}\}$ is a Cauchy sequence; we may therefore define $u^*(x_0) := \lim_{l\to\infty}\bar u_{2^{-l}L}(x_0)$.
We now show that this is independent of $L$. Take $L < L'$. Then (22) implies that
$$|\bar u_{2^{-l}L}(x_0)-\bar u_{2^{-l}L'}(x_0)| \leq CM\left(1+\left(\frac{L'}{L}\right)^{n}\right)(2^{-l}L')^\alpha,$$
and taking $l\to\infty$ shows that $u^*$ is independent of $L$.
From all of our hard work, we can say that $u^* = u$ almost everywhere by the Lebesgue differentiation theorem, which says that if $u \in L^1$ then $\lim_{r\to 0}\bar u_r(x_0) = u(x_0)$ for almost every $x_0$. Now taking $m\to\infty$ in (22) we are able to get
Letting $r = 1$ implies that $\|u^*\|_{L^\infty(B_1)} \leq C(M + \|u\|_{L^1(B_1)})$.
Now in order to complete the theorem we need to estimate $[u^*]_{C^\alpha}$. Let $x, y \in \Omega$ be such that $B_r(x), B_r(y)\subset\subset\Omega$ and $B_r(x)\cap B_r(y)\neq\emptyset$. Denote $\delta = |x-y|$ and let $z$ be the midpoint of $x$ and $y$. By convexity we see that $B_\delta(z)\subset\subset B_r(x)\cap B_r(y)$. Then we write
$$|u^*(x)-u^*(y)| \leq |u^*(x)-\bar u_\delta(x)| + |u^*(y)-\bar u_\delta(y)| + |\bar u_\delta(x)-u(z)| + |\bar u_\delta(y)-u(z)|,$$
so
$$|u^*(x)-u^*(y)|^2 \leq C\left(|u^*(x)-\bar u_\delta(x)|^2 + |u^*(y)-\bar u_\delta(y)|^2 + |\bar u_\delta(x)-u(z)|^2 + |\bar u_\delta(y)-u(z)|^2\right),$$
where we have used (23) on the first two terms. Integrating the inequality with respect to $z$ and using the assumption of the lemma yields the required result.
Lemma 3.3. Assume that
$$\int_{B_r(x_0)}|Du|^2 \leq M^2 r^{n-2+2\alpha}.$$
Then $u \in C^\alpha$.
Proof. The details are left as an exercise; here is a sketch. Recall the Poincaré inequality:
$$\int_S|u-\bar u_S|^2 \leq \lambda_S\int_S|Du|^2.$$
Then one needs to show that
$$\int_{B_r(x_0)}|u-\bar u_r(x_0)|^2 \leq Cr^2\int_{B_r(x_0)}|Du|^2,$$
where $C$ is independent of $r$. The way to do this is to apply the Poincaré inequality with $r = 1$ and then consider the rescaling $\tilde u = u(rx)$. After doing this, apply the assumption of the lemma and you will find yourself in the position of Lemma 3.2.
Let's recall our goals. We were proving Schauder-type estimates to get regularity. Assume $u \in W^{1,2}$ is a weak solution of $\partial_i(a^{ij}\partial_j u) = 0$, i.e.
$$\int a^{ij}\partial_j u\,\partial_i v = 0 \quad \forall v\in W_0^{1,2}.$$
We will show that $u \in C^\alpha$, where $0 < \lambda \leq a^{ij} \leq \Lambda$ and $|a^{ij}(x)-a^{ij}(y)| \leq \tau(|x-y|)$ with $\tau(R)\downarrow 0$ as $R\downarrow 0$. We will use the following key estimate: for $0 < r < R$,
$$\int_{B_r(x_0)}|Du|^2 \leq C\left[\left(\frac{r}{R}\right)^n + \tau(R)\right]\int_{B_R(x_0)}|Du|^2. \tag{24}$$
Proof. We first prove the case of constant coefficients, i.e. $\tau\equiv 0$, by showing that
$$\int_{B_r(x_0)}|Du|^2 \leq C\left(\frac{r}{R}\right)^n\int_{B_R(x_0)}|Du|^2, \tag{25}$$
and then use the Lemma of De Giorgi (seen later). By rescaling $v(x) = u(Rx)$, what we need to show becomes, with $s = r/R$,
$$\int_{B_s(x_0)}|Dv|^2 \leq Cs^n\int_{B_1(x_0)}|Dv|^2.$$
This follows from previous work. Take $s$ small ($s < \frac{1}{2}$). Then by the Sobolev embedding theorem and by the first theorem proved in this section we can say
$$\int_{B_s(x_0)}|Dv|^2 \leq \sup_{B_s}|Dv|^2\cdot s^n \leq \sum_{|\alpha|\leq k}\int_{B_{3/4}(x_0)}|D^\alpha(Dv)|^2\, s^n \leq Cs^n\int_{B_1(x_0)}|Dv|^2,$$
and we are done showing (25), which corresponds to the case of constant $a^{ij}$. Since this case gives the conclusion once we prove the Lemma of De Giorgi, we move on to the non-constant case.
We will prove the non-constant case as a perturbation of the constant-coefficient equation $a^{ij}(x_0)\partial_i\partial_j w = 0$, which we have already finished. To carry this out, consider the following Dirichlet problem:
$$a^{ij}(x_0)\partial_i\partial_j w = 0 \ \text{(weak sense)}, \qquad w - u \in W_0^{1,2}(B_R(x_0)).$$
Set $v = u - w$, so $u = v + w$. Then
$$\int_{B_r(x_0)}|Du|^2 \leq 2\left[\int_{B_r(x_0)}|Dv|^2 + c\left(\frac{r}{R}\right)^n\int_{B_R(x_0)}\left(|Du|^2+|Dv|^2\right)\right] \leq C\left[\left(\frac{r}{R}\right)^n\int_{B_R(x_0)}|Du|^2 + \int_{B_R(x_0)}|Dv|^2\right], \tag{26}$$
where we used that $w$ solves our PDE with constant coefficients, that the integral over $B_r$ is at most the integral over $B_R$, and that $\max\{1,(r/R)^n\} = 1$. Now we need to control the integral of $|Dv|^2$, so we use the fact that $v = u - w$, where $u$ solves $\partial_i(a^{ij}\partial_j u) = 0$ and $w$ solves $a^{ij}(x_0)\partial_i\partial_j w = 0$ in the weak sense. Since $v \in W_0^{1,2}$, we may use it as a test function to get
$$\int a^{ij}(x_0)D_j v\,D_i v = \int a^{ij}(x_0)(D_j u - D_j w)D_i v = \int a^{ij}(x_0)D_j u\,D_i v = \int(a^{ij}(x_0)-a^{ij}(x))D_j u\,D_i v + \int a^{ij}(x)D_j u\,D_i v = \int(a^{ij}(x_0)-a^{ij}(x))D_j u\,D_i v.$$
We now use ellipticity and the modulus of continuity of $a^{ij}$ to see
$$\lambda\int_{B_R(x_0)}|Dv|^2 \leq \tau(R)\int_{B_R(x_0)}|Du||Dv| \leq \tau(R)\int_{B_R(x_0)}\frac{1}{2}|Du|^2 + \frac{1}{2}|Dv|^2, \qquad\text{hence}\qquad \int_{B_R(x_0)}|Dv|^2 \leq C\tau(R)\int_{B_R(x_0)}|Du|^2,$$
which we can plug into (26) to see that we have finally proved (24) for the case of non-constant $a^{ij}$.
We still need to relate everything to conclude $u \in C^\alpha$. Now we finally state the Lemma of De Giorgi:
Lemma 3.4 (Lemma of De Giorgi). Let $\varphi(r) \geq 0$ be non-decreasing in $r$ and let $A, B \geq 0$. Assume that for $0 < r < R$ and some $\alpha > \beta > 0$,
$$\varphi(r) \leq A\left[\left(\frac{r}{R}\right)^\alpha + \epsilon\right]\varphi(R) + BR^\beta. \tag{27}$$
Then for every $0 < \beta < \gamma < \alpha$ there exists $\epsilon_0$ such that $\epsilon < \epsilon_0$ implies
$$\varphi(r) \leq C\left[\left(\frac{r}{R}\right)^\gamma\varphi(R) + Br^\beta\right]. \tag{28}$$
We will apply the Lemma with
$$\varphi(r) = \int_{B_r(x_0)}|Du|^2,$$
because (24) implies that $\varphi(r)$ satisfies (27) with $B = 0$, $\alpha = n$, and $\epsilon = \tau(R)$. Then by the Lemma of De Giorgi we have
$$\varphi(r) \leq C\left(\frac{r}{R}\right)^\gamma\varphi(R),$$
and taking $R = 1$ gives $\varphi(r) \leq Cr^\gamma$ after absorbing $\varphi(1)$ into the constant. We are now in the situation of Lemma 3.3, because we are free to choose any $\gamma < \alpha = n$; writing $\gamma = n-2+2\alpha'$, Poincaré gives
$$\int|u-\bar u_r(x_0)|^2\,dx \leq Cr^{\gamma+2},$$
and hence $u \in C^{\alpha'}$.
Recall that we consider weak solutions of
$$\partial_i(a^{ij}\partial_j u) + c(x)u = f,$$
where $0 < \lambda \leq a^{ij} \leq \Lambda$, $|a^{ij}(x)-a^{ij}(y)| \leq \tau(|x-y|)$ with $\tau(R)\downarrow 0$ as $R\downarrow 0$, $c \in L^n$, and $f \in L^q$ for $\frac{n}{2} < q < n$. We want to show that $u \in C^\alpha$ for $\alpha = 2 - \frac{n}{q}$.
Proof. We already have the case $c = f = 0$, where for $0 < r < R$ the main tool was showing
$$\int_{B_r(x_0)}|Du|^2 \leq C\left\{\left(\frac{r}{R}\right)^n\int_{B_R(x_0)}|Du|^2 + \tau(R)\int_{B_R(x_0)}|Dv|^2\right\} \tag{29}$$
in the generalized sense. By the same argument as before, (29) can also be established with non-trivial lower order terms. Let's now estimate $|Dv|$ in (29):
$$\int a^{ij}(x_0)D_i v\,D_j v = \int a^{ij}(x_0)(D_i u - D_i w)D_j v = \int a^{ij}(x_0)D_i u\,D_j v = -\int c(x)uv + \int fv + \int(a^{ij}(x_0)-a^{ij}(x))D_i u\,D_j v. \tag{30}$$
These calculations show two new terms that were not present in the proof of the previous theorem. We handle them with the Sobolev inequality
$$\|v\|_{L^{\frac{2n}{n-2}}} \leq C\|Dv\|_{L^2},$$
for which we have to figure out which powers to use. We need conjugate exponents $\frac{1}{p}+\frac{1}{q}=1$, so we let $2p = \frac{2n}{n-2}$, i.e. $p = \frac{n}{n-2}$, $q = \frac{n}{2}$. This implies
$$\int|cv|^2 \leq \left(\int|v|^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\left(\int(|c|^2)^{\frac{n}{2}}\right)^{\frac{2}{n}} = \|v\|_{L^{\frac{2n}{n-2}}}^2\|c\|_{L^n}^2,$$
while for the $f$ term,
$$\int|fv| \leq \|f\|_{L^{\frac{2n}{n+2}}}\|v\|_{L^{\frac{2n}{n-2}}} \leq C\|f\|_{L^{\frac{2n}{n+2}}}\|Dv\|_{L^2} \leq \frac{1}{2}\|Dv\|_{L^2}^2 + \frac{C}{2}\|f\|_{L^{\frac{2n}{n+2}}}^2.$$
Once again the $Dv$ term gets absorbed into (29). Putting everything together gives us the following estimate:
$$\int_{B_r(x_0)}|Du|^2 \leq C\left\{\left(\frac{r}{R}\right)^n + \tau(R)\right\}\int_{B_R(x_0)}|Du|^2 + C\|u\|_{L^2(B_R)}^2\|c\|_{L^n(B_R)}^2 + C\|f\|_{L^{\frac{2n}{n+2}}}^2. \tag{31}$$
Now recall the Lemma of De Giorgi: $\varphi(r) \geq 0$, $\varphi(r)\downarrow 0$ as $r\downarrow 0$, and assume that for all $0 < r < R$, with $\beta < \alpha$,
$$\varphi(r) \leq A\left[\left(\frac{r}{R}\right)^\alpha + \epsilon\right]\varphi(R) + BR^\beta.$$
Then for all $0 < \beta < \gamma < \alpha$ there exists $\epsilon_0 > 0$ such that $\epsilon < \epsilon_0$ implies
$$\varphi(r) \leq C\left[\left(\frac{r}{R}\right)^\gamma\varphi(R) + Br^\beta\right].$$
In particular we can take $R = 1$ above and get
$$\varphi(r) \leq \tilde Cr^\beta.$$
The analysis tells us that in order to apply the lemma, we need to estimate the additional terms in (31) by $R^\gamma$ with $\gamma = n-2+2\alpha$. Let's begin with the $f$ term. With Hölder exponents $p = \frac{q(n+2)}{2n}$ and $\frac{1}{m} = 1-\frac{1}{p} = 1-\frac{2n}{(n+2)q}$,
$$\|f\|_{L^{\frac{2n}{n+2}}(B_R(x_0))}^2 = \left(\int_{B_R(x_0)}|f|^{\frac{2n}{n+2}}\right)^{\frac{n+2}{n}} \leq \left(\int_{B_R(x_0)}|f|^{q}\right)^{\frac{2}{q}}\left(\int_{B_R(x_0)}1\right)^{\frac{n+2}{n}\cdot\frac{1}{m}} \leq C\|f\|_{L^q}^2\,R^{n+2-\frac{2n}{q}} = C\|f\|_{L^q}^2\,R^{n-2+2\alpha},$$
since $\alpha = 2-\frac{n}{q}$. For the remaining term I claim the bound
$$C\|u\|_{L^2(B_R)}^2\|c\|_{L^n(B_R)}^2 \leq Cr^{\min\{n-2+2\alpha,\,2\}}.$$
If I manage to show this, observe that we are done by the De Giorgi Lemma. Consider the triangle-type
inequality $|u|^2 \leq 2\left(|u-\bar u_R(x_0)|^2 + \bar u_R^2\right)$ and integrate over the ball of radius $r$ to see
$$\begin{aligned}
\int_{B_r(x_0)}|u|^2 &\leq \int_{B_R(x_0)}|u-\bar u_R(x_0)|^2 + \int_{B_r(x_0)}\bar u_R^2 \qquad (r\leq R)\\
&\leq R^2\int_{B_R(x_0)}|Du|^2 + \int_{B_r(x_0)}\bar u_R^2 \qquad \text{(Poincar\'e)}\\
&\leq R^{2+\min\{n-2+2\alpha,\,2\}} + \int_{B_r(x_0)}\left(\frac{1}{R^n}\int_{B_R(x_0)}u\right)^2\\
&\leq R^{2+\min\{n-2+2\alpha,\,2\}} + \int_{B_r(x_0)}\left(\frac{1}{R^{n/2}}\left(\int_{B_R(x_0)}u^2\right)^{1/2}\right)^2 = R^{2+\min\{n-2+2\alpha,\,2\}} + \frac{r^n}{R^n}\int_{B_R(x_0)}u^2.
\end{aligned}$$
for all non-negative functions $v \in C_0^1(\Omega)$. Let $f^i, g$ be locally integrable functions in $\Omega$. Then $u$ is a weak solution of the inhomogeneous equation
$$Lu = g + D_if^i$$
in $\Omega$ if it satisfies
$$\int_\Omega\left[\left(a^{ij}D_ju + b^iu\right)D_iv - \left(c^iD_iu + du\right)v\right]dx = \int_\Omega\left(f^iD_iv - gv\right)dx.$$
3.2.1 Structural Inequalities
We rewrite $Lu = g + D_if^i$ as
$$D_iA^i(x,u,Du) + B(x,u,Du) = 0, \tag{32}$$
where
$$A^i(x,z,p) = a^{ij}p_j + b^iz - f^i, \qquad B(x,z,p) = c^ip_i + dz - g$$
for $(x,z,p)\in\Omega\times\mathbb{R}\times\mathbb{R}^n$. Then we say that $u$ is a weak subsolution (supersolution, solution) of (32) in $\Omega$ if $A^i(x,u,Du)$ and $B(x,u,Du)$ are locally integrable and
$$\int_\Omega\left[D_iv\,A^i(x,u,Du) - v\,B(x,u,Du)\right]dx \leq\ (\geq,\ =)\ 0 \tag{33}$$
for all non-negative $v \in C_0^1(\Omega)$. Writing $b = (b^1,\dots,b^n)$, $c = (c^1,\dots,c^n)$, $f = (f^1,\dots,f^n)$ and using the Schwarz inequality, we have the inequalities
$$p_iA^i(x,z,p) \geq \frac{\lambda}{2}|p|^2 - \frac{1}{\lambda}\left(|bz|^2 + |f|^2\right), \qquad |B(x,z,p)| \leq |c||p| + |dz| + |g|$$
for some $k > 0$. Then for any $0 < \epsilon < 1$, we finally have the following inequalities:
$$p_iA^i(x,z,p) \geq \frac{\lambda}{2}\left(|p|^2 - 2\bar b\,\bar z^2\right), \qquad |\bar z\,B(x,z,p)| \leq \frac{\lambda}{2}|p|^2 + \bar b\,\bar z^2.$$
If we denote $a^{ij}$ by $a$, then we can write $|A(x,z,p)| \leq |a||p| + |bz| + |f|$. Also note that we can divide (32) by $\frac{\lambda}{2}$ to finally get the structural inequalities
Theorem 3.6. Let $L$ be uniformly elliptic with bounded coefficients and suppose that $f^i \in L^q(\Omega)$, $g \in L^{q/2}(\Omega)$ for $q > n$. Let $u \in W^{1,2}(\Omega)$ be a supersolution in $\Omega$. If $u$ is non-negative in $B_{4R}(y)\subset\Omega$ and $1 \leq p < n/(n-2)$, we have
$$R^{-n/p}\|u\|_{L^p(B_{2R}(y))} \leq C\left(\inf_{B_R(y)}u + k(R)\right). \tag{36}$$
Proof. It is convenient to prove these two theorems conjointly in the case where $u$ is a bounded non-negative subsolution. We begin by assuming $R = 1$, $k > 0$; the general case is obtained by transforming $x \mapsto x/R$ and letting $k \to 0$. Let $\beta \neq 0$ and let $\eta \in C_0^1(B_4)$ be non-negative. We define $v := \eta^2\bar u^\beta$, where $\bar u = u + k$. Then
$$Dv = 2\eta\,D\eta\,\bar u^\beta + \beta\eta^2\bar u^{\beta-1}Du.$$
Note that $v$ is a valid test function. We plug this $v$ into our definition of subsolutions,
$$\int_\Omega\left[D_iv\,A^i(x,u,Du) - v\,B(x,u,Du)\right]dx \leq 0,$$
and obtain
$$\int_\Omega\left[2\eta\,D\eta\,\bar u^\beta\cdot A(x,u,Du) + \beta\eta^2\bar u^{\beta-1}Du\cdot A(x,u,Du)\right]dx \leq \int_\Omega\eta^2\bar u^\beta B(x,u,Du)\,dx. \tag{37}$$
We will now attempt to apply our structural inequalities (34) to (37):
Let $\epsilon = \min\left\{1, \frac{\beta}{4}\right\}$. Then we can further consolidate this into
$$\int_\Omega\eta^2|Du|^2\bar u^{\beta-1}\,dx \leq C(\beta)\int_\Omega\left(\bar b\,\eta^2 + (1+|a|^2)|D\eta|^2\right)\bar u^{\beta+1}\,dx. \tag{38}$$
Before we move any further, we need to introduce a few results from the analysis of Sobolev spaces.
Lemma 3.7 (Interpolation Inequality). Let $p \leq q \leq r$. Then for $u \in L^r(\Omega)$ we have
$$\|u\|_q \leq \epsilon\|u\|_r + \epsilon^{-\mu}\|u\|_p, \qquad \mu = \frac{\frac{1}{p}-\frac{1}{q}}{\frac{1}{q}-\frac{1}{r}}.$$
Set $w = \bar u^{(\beta+1)/2}$, and take $\hat n = n$ for $n > 2$ (and any $2 < \hat n < q$ when $n = 2$). Now we apply Hölder's inequality and the interpolation inequality to get
$$\int_\Omega\bar b(\eta w)^2\,dx \leq \|\bar b\|_{q/2}\|\eta w\|_{2q/(q-2)}^2 \leq \|\bar b\|_{q/2}\left(\epsilon\|\eta w\|_{2\hat n/(\hat n-2)} + \epsilon^{-\sigma}\|\eta w\|_2\right)^2,$$
where $\sigma = \hat n/(q-\hat n)$. Now we attempt to plug these estimates into (39). Let $\chi = \hat n/(\hat n-2)$. Adding a factor of $\int_\Omega|wD\eta|^2\,dx$ and carrying out some computations gives the following:
$$\begin{aligned}
\|\eta w\|_{2\chi}^2 &\leq C\gamma^2\int_\Omega\bar b(\eta w)^2 + |wD\eta|^2 + |a|^2|wD\eta|^2\,dx\\
&\leq C\gamma^2\|\bar b\|_{q/2}\left(\epsilon\|\eta w\|_{2\chi} + \epsilon^{-\sigma}\|\eta w\|_2\right)^2 + C\gamma^2\int_\Omega(1+|a|^2)|wD\eta|^2\,dx\\
&\leq C\gamma^2\left(\epsilon^2\|\eta w\|_{2\chi}^2 + \epsilon^{1-\sigma}\|\eta w\|_{2\chi}\|\eta w\|_2 + \epsilon^{-2\sigma}\|\eta w\|_2^2 + \|wD\eta\|_2^2\right), \tag{40}
\end{aligned}$$
where $C = C(\hat n,\Lambda,\nu,q,|\beta|)$ is bounded when $|\beta|$ is bounded away from zero. We will now get a better cutoff function $\eta$. Let $r_1, r_2$ be such that $1 \leq r_1 < r_2 \leq 3$, $\eta \equiv 1$ in $B_{r_1}$, $\eta \equiv 0$ in $\Omega\setminus B_{r_2}$, with
$$|D\eta| \leq \frac{2}{r_2-r_1}.$$
Then we have from (40)
$$\|w\|_{L^{2\chi}(B_{r_1})} \leq \frac{C(1+|\gamma|)^{\sigma+1}}{r_2-r_1}\|w\|_{L^2(B_{r_2})}. \tag{41}$$
Before we move on, let's make a quick detour to function spaces. Let $\Omega\subset\mathbb{R}^n$ be a bounded domain. I claim that if $u$ is a measurable function on $\Omega$ such that $|u|^p\in L^1(\Omega)$ for some $p\in\mathbb{R}$, and we define
$$\phi_p(u) := \left(\frac{1}{|\Omega|}\int_\Omega|u|^p\,dx\right)^{1/p},$$
then $\phi_p(u)\to\sup_\Omega|u|$ as $p\to\infty$. (42)
Now fix $\epsilon$ and define $A = \{x\in\Omega : |u| \geq \sup_\Omega|u| - \epsilon\}$. Then we see that
$$\phi_p(u) = \left(\frac{1}{|\Omega|}\int_\Omega|u|^p\,dx\right)^{1/p} \geq \left(\frac{1}{|\Omega|}\int_A|u|^p\,dx\right)^{1/p} \geq \left(\frac{1}{|\Omega|}\int_A\left(\sup_\Omega|u|-\epsilon\right)^p dx\right)^{1/p} = \left(\frac{|A|}{|\Omega|}\right)^{1/p}\left(\sup_\Omega|u|-\epsilon\right).$$
This and our first inequality yield the claim (42). We go back to our proof. For $r < 4$ we define the function
$$\phi(p,r) := \left(\int_{B_r}|\bar u|^p\,dx\right)^{1/p}.$$
Transforming $x\mapsto x/R$ we obtain the desired estimate (35) when $u$ is a subsolution. When $u$ is a supersolution we need to approach the problem a bit differently. Recall that for a supersolution $\beta < 0$ and $\gamma < 1$. Then for any $p, p_0$ such that $0 < p_0 < p < \chi$, we have
$$\phi(p,2) \leq C\phi(p_0,3), \qquad \phi(-p_0,3) \leq C\phi(-\infty,1).$$
We will therefore finish proving our theorem if we can show that $\phi(p_0,3) \leq C\phi(-p_0,3)$. This is done in an intricate way and is left as an exercise.
Putting our two theorems together gives us the full Harnack inequality:
Theorem 3.8. Let $L$ be uniformly elliptic with bounded coefficients. Let $u\in W^{1,2}(\Omega)$ satisfy $u\geq 0$ in $\Omega$ and $Lu = 0$ in $\Omega$. Then for any ball $B_{4R}(y)\subset\Omega$ we have
$$\sup_{B_R(y)}u \leq C\inf_{B_R(y)}u.$$
Proof. Start by defining $M_4 = \sup_{B_{4R}}u$, $m_4 = \inf_{B_{4R}}u$, $M_1 = \sup_{B_R}u$, $m_1 = \inf_{B_R}u$. Then apply the weak Harnack inequality with $p = 1$ to $M_4 - u$ and $u - m_4$ to end up with
$$\omega(R) \leq \gamma\,\omega(4R) + k(R),$$
where $\omega(R) = \operatorname{osc}_{B_R}u$. Before we go on, we have to prove something else first. Suppose $\omega$ is non-decreasing on $(0,R_0]$ and for $R \leq R_0$ satisfies
$$\omega(\tau R) \leq \gamma\,\omega(R) + \sigma(R),$$
where $\sigma$ is also non-decreasing and $0 < \gamma, \tau < 1$. Then for any $\mu\in(0,1)$ we have
$$\omega(R) \leq C\left[\left(\frac{R}{R_0}\right)^\alpha\omega(R_0) + \sigma\left(R^\mu R_0^{1-\mu}\right)\right].$$
4 Harnack Inequality for Non-Divergence Equations
The formulation of weak solutions to divergence equations relied heavily on the fact that the operator L was
in divergence form. This allowed us to integrate by parts and so a weak solution u needs to be once weakly
differentiable (W 1,2 ). A classical solution u must be at least second order continuously differentiable. In this
section we will concern ourselves with the intermediate situation of strong solutions.
Definition 4.1. For operators of the form
$$Lu = a^{ij}u_{ij} + b^iu_i + c(x)u \tag{45}$$
with coefficients $a^{ij}, b^i, c(x)$ defined on a domain $\Omega\subset\mathbb{R}^n$ and a function $f$ on $\Omega$, a strong solution of $Lu = f$ is a function $u\in W^{2,p}(\Omega)$ that satisfies (45) almost everywhere. See [1, p. 185] for existence and uniqueness of equations of this type.
Write $D^* = (\det a^{ij})^{1/n}$, so that
$$0 < \lambda \leq D^* \leq \Lambda,$$
where $\lambda, \Lambda$ are the minimum and maximum eigenvalues of $(a^{ij})$. Assume $b = c = 0$, so that $Lu = a^{ij}u_{ij}$. Our condition on $a^{ij}$ and $f$ is now
$$f/D^* \in L^n(\Omega).$$
Theorem 4.2 (ABP Maximum Principle). Let $Lu \geq f$ in a bounded domain $\Omega$, with $u\in C^0(\overline\Omega)\cap W^{2,n}_{loc}(\Omega)$. Then
$$\sup_\Omega u \leq \sup_{\partial\Omega}u + \frac{d}{n\omega_n^{1/n}}\left\|\frac{f}{D^*}\right\|_{L^n(\Gamma^+)},$$
where $d = \operatorname{diam}\Omega$.
It is important to note that Morrey's embedding theorem guarantees that $u\in W^{2,n}_{loc}(\Omega)$ is at least continuous in $\Omega$, because whenever $kp > n$, $W^{k,p}$ is embedded in $C^\alpha$ for $0 \leq \alpha < k - \frac{n}{p}$. Before we begin the proof, we must first go through the notions of contact sets and normal mappings.
Definition 4.3. Suppose $u$ is an arbitrary function on $\Omega$. The upper contact set $\Gamma^+$ (or $\Gamma^+_u$) is defined to be the subset of $\Omega$ where the graph of $u$ lies below some supporting hyperplane in $\mathbb{R}^{n+1}$, i.e.
$$\Gamma^+ = \{y\in\Omega : u(x) \leq u(y) + p\cdot(x-y)\ \text{for all } x\in\Omega,\ \text{for some } p = p(y)\in\mathbb{R}^n\}.$$
From the definition, $u$ is a concave function on $\Omega$ if and only if $\Gamma^+ = \Omega$. It is clear that $p = Du(y)$ if $u\in C^1(\Omega)$. Finally, if $u\in C^2(\Omega)$, the Hessian satisfies $D^2u \leq 0$ on $\Gamma^+$; this means that we can essentially think of $\Gamma^+$ as the subset of $\Omega$ where $u$ is concave down. The upper contact set of $u$ is closed in $\Omega$.
Definition 4.4. Suppose $u\in C^0(\Omega)$ is arbitrary. We define the normal mapping $\chi(y) = \chi_u(y)$ of a point $y\in\Omega$ to be the set of slopes of supporting hyperplanes at $y$ lying above the graph of $u$, i.e.
$$\chi(y) = \{p\in\mathbb{R}^n : u(x) \leq u(y) + p\cdot(x-y)\ \text{for all } x\in\Omega\}.$$
Proof of Theorem 4.2. Assume that $u\in C^0(\overline\Omega)\cap C^2(\Omega)$. Subtracting $\sup_{\partial\Omega}u$ from $u$ yields the same differential inequality $L(u-\sup_{\partial\Omega}u) \geq f$, so we can assume $u\leq 0$ on the boundary. Note that we can assume $\sup_\Omega u \geq 0$, because if it were negative there would be nothing to prove. These assumptions imply that if $u(y) := \sup_\Omega u$, then $y$ is in the interior of $\Omega$ because $u\leq 0$ on the boundary. Now I claim that
$$B_{u(y)/d}(0) \subset Du(\Gamma^+).$$
This can be seen by sliding hyperplanes down onto the graph of $u$. Consider the cone with vertex $u(y)$ and base $\partial\Omega$; its slope is $u(y)/d$, and $u\leq 0$ on $\partial\Omega$ implies that dropping down hyperplanes of slope less than $u(y)/d$ eventually makes them tangent to the graph of $u$ at a point of $\Gamma^+$.
[Figure: a hyperplane of slope $u(y)/d$ sliding down onto the graph of $u$, with $u|_{\partial\Omega}\leq 0$.]
Now we compute
$$|Du(\Gamma^+)| = \int_{Du(\Gamma^+)}1 \leq \int_{\Gamma^+}|\det D^2u|. \tag{47}$$
Let's do a bit of linear algebra. I claim that given two positive matrices $A, B$,
$$\det A\,\det B \leq \left(\frac{\operatorname{Tr}(AB)}{n}\right)^n.$$
Indeed, $AB$ has positive eigenvalues $\lambda_1,\dots,\lambda_n$ (it is similar to $B^{1/2}AB^{1/2}$), so
$$\det A\,\det B = \det(AB) = \prod_i\lambda_i \leq \left(\frac{1}{n}\sum_i\lambda_i\right)^n = \left(\frac{\operatorname{Tr}(AB)}{n}\right)^n,$$
which is exactly the inequality of arithmetic and geometric means. Taking $A = -D^2u$ and $B = (a^{ij})$, then on $\Gamma^+$,
$$|\det D^2u| = \det(-D^2u) = \frac{1}{\det(a^{ij})}\det(a^{ij})\det(-D^2u) \leq \frac{1}{(D^*)^n}\left(\frac{-a^{ij}D_{ij}u}{n}\right)^n \leq \left(\frac{-f}{nD^*}\right)^n, \tag{48}$$
since $a^{ij}D_{ij}u \geq f$. Combining the ball inclusion, (47), and (48) gives
$$\omega_n\left(\frac{u(y)}{d}\right)^n \leq |Du(\Gamma^+)| \leq \int_{\Gamma^+}\left(\frac{-f}{nD^*}\right)^n,$$
which is precisely the ABP estimate, because we had replaced $u$ with $u - \sup_{\partial\Omega}u$.
The ABP maximum principle can be naturally extended to functions $u\in C^0(\overline\Omega)\cap W^{2,n}_{loc}(\Omega)$ by approximating with smooth functions. It can also be extended to coefficients satisfying $|b|/D^*\in L^n(\Omega)$ and $c\leq 0$ in $\Omega$, but these extensions will be taken as a black box. Now we have the correct tools to begin proving the Harnack inequality for non-divergence equations. We say an operator of the form (45) is strictly elliptic when
$$\frac{\Lambda}{\lambda} \leq \gamma, \qquad \frac{|b|^2}{\lambda},\ \frac{|c|}{\lambda} \leq \nu.$$
We will assume throughout the rest of this section that the operator $L$ given by (45) is uniformly elliptic. Note that we have the same ABP estimate from the other side: if $Lu \leq f$ and $u|_{\partial B_1} \geq 0$, then
$$\left|\inf_{B_1}u\right| \leq C_n\left(\int_\Gamma|f|^n\right)^{1/n}.$$
Theorem 4.5. Let $u\in W^{2,n}$ and assume $a^{ij}u_{ij} \leq 0$ for some bounded, measurable, uniformly elliptic $a^{ij}$ in $B_2$. If $u\geq 0$ and $u(x)\leq 1$ for some $x\in\partial B_1$, then
$$|\{u\leq M\}\cap B_{1/2}| \geq \mu$$
for universal constants $M$ and $\mu > 0$.
radius with a paraboloid of radius 2; this will be made clear with the picture). Suppose that $r\geq 1/2$ at the point $(r,0,\dots,0)$. Then we compute some derivatives of the barrier $\varphi$ at this point:
$$D_{ij}\varphi = 0 \ \text{for } i\neq j, \qquad D_{11}\varphi = -M_2\alpha(1+\alpha)r^{-\alpha-2}, \qquad D_{ii}\varphi = M_2\alpha r^{-\alpha-2} \ \text{for } i\geq 2.$$
By the rotational symmetry of our function and uniform ellipticity, we have $a^{ij}\varphi_{ij} \leq 0$ for $|x|\geq 1/4$, by our choice of $\alpha$. However, for $|x|\leq 1/4$ we only have $a^{ij}\varphi_{ij} \leq C = C(n,\lambda,\Lambda)$. This observation, along with (50), shows that $a^{ij}\varphi_{ij} \leq C\eta$ for some universal $C$, with $\eta\equiv 1$ in $B_{1/4}$ and $\eta\equiv 0$ outside of $B_{1/2}$, i.e. $\eta\in C_0^\infty$ (it is smooth because of the smoothness of $|x|^{-\alpha}$ in this region). Finally note that $|\varphi|\leq M$ for some universal $M$, given by the distance between the origin and the vertex of the paraboloid we capped off with.
Now the crucial part of the proof is to notice that the contact set of the convex envelope of $w$ is contained in $\{w\leq 0\}$, and there $u+\varphi\leq 0$, i.e. $u\leq -\varphi\leq M$. Hence our estimate just gave us $1\leq C|\{u\leq M\}\cap B_{1/2}|$, which concludes the proof with $\mu = 1/C$.
Corollary 4.6. The statement is scaling invariant: if $u$ is defined in $B_{2r}$ and $u\leq\alpha$ somewhere on $\partial B_r$, then $|\{u\leq M\alpha\}\cap B_{r/2}| \geq \mu$.
Lemma 4.7. Suppose we have the same $u$ as in Theorem 4.5. Then
$$|\{u\geq t\}\cap B_{1/2}| \leq d\,t^{-\epsilon} \quad \text{for all } t > 0, \tag{51}$$
where $d$ and $\epsilon$ are universal constants.
Proof. This will follow from the Calderón-Zygmund decomposition and induction, by first showing that $|\{u\geq M^k\}\cap B_{1/2}| \leq (1-\mu)^k$. For $k = 1$ this is the statement of Theorem 4.5. Now suppose it holds for $k-1$. We introduce the classical decomposition: if $Q$ is a dyadic cube different from $Q_1$, we say $\tilde Q$ is the predecessor of $Q$ if $Q$ is one of the $2^n$ cubes obtained by dividing $\tilde Q$. The decomposition states the following: if $A\subset B\subset Q_1$ are measurable sets and $0 < \delta < 1$ is such that $|A|\leq\delta|Q_1|$, and every dyadic cube $Q$ with $|A\cap Q| > \delta|Q|$ has its predecessor $\tilde Q\subset B$, then $|A|\leq\delta|B|$.
Then I claim that $\tilde u$ is under the hypotheses of Theorem 4.5. By (49), it follows that $|Q\setminus A| > \mu|Q|$, which contradicts our assumption that $|A\cap Q| > (1-\mu)|Q|$. Showing that $\tilde u$ satisfies the conditions of Theorem 4.5 will be left as an exercise; as a hint, use the following property of dyadic cubes: if $Q = Q_{1/2^i}(x_0)$ is a dyadic cube for some $i\geq 1$ and $x_0\in Q_1$, then $\tilde Q\subset Q_{3/2^i}(x_0)$.
(51) then follows from the results above by taking $d = (1-\mu)^{-1}$ and $\epsilon$ such that $1-\mu = M^{-\epsilon}$.
Now we need yet another measure-theoretic lemma. Let $f$ be a measurable function on a domain $\Omega$ in $\mathbb{R}^n$, and define the distribution function $\mu(t) = |\{f\geq t\}|$ for $t > 0$; this measures the "relative" size of $f$. Note that $\mu$ is a decreasing function on the positive real line.
Lemma 4.8. For any $p > 0$ with $|f|^p\in L^1(\Omega)$,
$$\int_\Omega|f|^p = p\int_0^\infty t^{p-1}\mu(t)\,dt.$$
Proof. This is just a computation. Suppose $f\in L^1$. Then
$$\int_\Omega|f| = \int_\Omega\int_0^{|f(x)|}dt\,dx = \int_0^\infty\mu(t)\,dt,$$
and the lemma holds for general $p$ after a change of variables.
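To see the layer-cake formula in action, here is a quick check I am adding: take $f(x) = |x|^{-a}$ on $B_1\subset\mathbb{R}^n$ with $0 < ap < n$. Then $\mu(t) = \omega_n\min\{1, t^{-n/a}\}$, so
$$p\int_0^\infty t^{p-1}\mu(t)\,dt = p\omega_n\int_0^1 t^{p-1}\,dt + p\omega_n\int_1^\infty t^{p-1-\frac{n}{a}}\,dt = \omega_n\left(1 + \frac{pa}{n-ap}\right) = \frac{n\omega_n}{n-ap},$$
which agrees with the direct computation $\int_{B_1}|x|^{-ap}\,dx = n\omega_n\int_0^1 r^{n-1-ap}\,dr = \frac{n\omega_n}{n-ap}$.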
Hence $\|u\|_{L^p(Q_1)} \leq C$, and we rescale $u$ away from the assumptions of Lemma 4.7 to get the result of the weak Harnack inequality. The second piece of the Harnack inequality is the local maximum principle:
Theorem 4.10. Let $u\in W^{2,n}(Q_1)$ and suppose $a^{ij}u_{ij} \geq 0$. Then for any $0 < p \leq n$ we have
$$\sup_{Q_{1/2}}u \leq C\left(\int_{Q_1}(u^+)^p\right)^{1/p}.$$
5 Curved C 1,α Domains
5.1 Estimate for Laplacian
We begin with a simple definition.
Definition 5.1. A continuous function $u$ on $\mathbb{R}^n$ is said to be $C^{2,\alpha}$ at $x_0$ if there exist a quadratic polynomial $P$ and constants $C, \rho$ such that
$$\|u-P\|_{L^\infty(B_r(x_0))} \leq Cr^{2+\alpha} \quad \forall r\leq\rho. \tag{52}$$
We say $u\in C^{2,\alpha}(B_1)$ if $u$ is $C^{2,\alpha}$ at every $x\in B_1$.
Now we develop a tool that helps us determine when a function is $C^{2,\alpha}$.
Proposition 5.2. Let $\Omega\subset\mathbb{R}^n$ be open and bounded. Suppose we can find a sequence of paraboloids $P_k = a_k + b_k\cdot x + \frac{1}{2}x^Tc_kx$ and an $r < 1$ such that
$$\|u - P_k\|_{L^\infty(B_{r^k})} \leq r^{k(2+\alpha)}.$$
Then the $P_k$ converge to a polynomial $P = a + b\cdot x + \frac{1}{2}x^Tcx$, and putting these together yields that $u$ is $C^{2,\alpha}$ at the origin.
Let P1 be the harmonic quadratic approximation to w that satisfies
The constant C above depends only on n, because |w| ≤ 1. Putting these together we have
Now here we use the fact that 0 < α < 1: choose r so small that 2C r^3 ≤ r^{2+α}, and ε so small that 2Cε ≤ r^{2+α}. This gives
‖u − P_1‖_{L^∞(B_r)} ≤ r^{2+α}.
Now take the (2 + α)-rescaling v(x) = (1/r^{2+α}) (u − P_1)(rx). Notice that |v| ≤ 1, and since P_1 is harmonic we have that ∆v = (1/r^α) ∆u(rx) = (1/r^α) f(rx) =: g(x). The right-hand side again satisfies |g| ≤ ε, so we repeat our process again to find a harmonic polynomial P_2 such that
Iterating this will give us a sequence of polynomials approximating u. Thus we are in the position of Proposition 5.2, and we are finished.
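To make the bookkeeping of the iteration explicit, here is a sketch, assuming (as in the first step) that the harmonic polynomial P_2 satisfies ‖v − P_2‖_{L∞(B_r)} ≤ r^{2+α}:
% assumption: \|v - P_2\|_{L^\infty(B_r)} \le r^{2+\alpha}
\[
\big| u(y) - P_1(y) - r^{2+\alpha} P_2(y/r) \big| \le r^{2(2+\alpha)} \quad \text{for } y \in B_{r^2},
\]
obtained from the bound on v by setting y = rx. Inductively, with
\[
Q_1 := P_1, \qquad Q_k(y) := Q_{k-1}(y) + r^{(k-1)(2+\alpha)} P_k\big(y / r^{k-1}\big),
\]
one gets ‖u − Q_k‖_{L∞(B_{r^k})} ≤ r^{k(2+α)}, and each Q_k is again a quadratic polynomial, so the Q_k are admissible paraboloids for Proposition 5.2 along the radii r^k.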
Let's shift gears a little bit and develop tools for boundary estimates. We say that Ω is a Schauder domain, or a C^{k,α}-domain, if it is a domain in Euclidean space with sufficiently regular boundary, i.e., ∂Ω can locally be viewed as the graph of a C^{k,α} function. For this reason, we will assume that ∂Ω = {(x', g(x'))} where g ∈ C^{2,α}, x' ∈ R^{n−1}. Recall that the first step in the iteration for Theorem 5.3 required us to obtain
the estimate |u − w| ≤ C in B1 . Since this sort of iteration process is all we can do at the moment, I will
now attempt to show the first step:
Lemma 5.4. Let B_1 be the ball centered at the origin, f ∈ L^∞, and suppose that u solves
∆u = f in Ω ∩ B_1,
u = 0 on ∂Ω,
with |u| ≤ 1 and |f| ≤ δ, and that the flatness condition
B_1 ∩ {x_n ≥ ε} ⊂ B_1 ∩ Ω
holds. Then there is a harmonic function w, vanishing on {x_n = 0}, such that
|u − w| ≤ C(ε + δ)
on B_{1/4}(0) ∩ Ω.
Proof. We first consider the case where f = 0, so u is harmonic on Ω ∩ B_1. Let Γ(x) be the fundamental solution to Laplace's equation, and define the barrier function
G(x) := (Γ(x) − Γ(r)) / (Γ(R) − Γ(r))
for 0 < r < R. Notice that 0 ≤ G, that G = 1 on |x| = R and G = 0 on |x| = r, and that G is harmonic in the annulus of inner radius r and outer radius R. Consider the lines x_n = ±ε and let x_0 = (x_0', −ε) ∈ R^n, with ε so small that x_0 ∈ B_1. Let's look at the barrier function G whose smaller sphere is tangent to x_n = −ε, so that it is centered at (x_0', −ε − r) =: x̃_0.
Now I will show that |u| ≤ G on the common domain of influence. First note that G ≥ |u| on ∂Ω because
u = 0 on ∂Ω. Recall that we assumed |u| ≤ 1 in Ω, and since G = 1 on ∂B(x̃0 , R), we see that |u| ≤ G on
Ω ∩ ∂B(x̃0 , R). Putting these two together yields |u| ≤ G on ∂ (Ω ∩ B(x̃0 , R)), and the maximum principle
(which applies because these two are harmonic on Ω ∩ B(x̃0 , R)) tells us that |u| ≤ G on the entire common
domain of influence.
Let's restrict ourselves for a moment to rays starting from x̃_0 and moving in the x_n direction. Notice that since G is Lipschitz, we have
Let t ∈ R be such that x̃_0 + t e_n ∈ Ω. We now use the fact that G is zero on the smaller sphere of radius r:
Observe that x̃_0 can be moved horizontally as long as the outer ball of radius R stays in B_1 and the inner ball of radius r stays below −ε, and so
|u| ≤ C|x_n + ε| on all of B_{1/4}(0) ∩ Ω. (54)
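Here is the Lipschitz computation written out (a sketch; the constant C depends only on n, r, R through the Lipschitz norm of G on the annulus):
% C = Lipschitz constant of G on the annulus B(\tilde x_0, R) \setminus B(\tilde x_0, r)
\[
|u(\tilde{x}_0 + t e_n)| \le G(\tilde{x}_0 + t e_n)
= G(\tilde{x}_0 + t e_n) - G(\tilde{x}_0 + r e_n)
\le C\,(t - r) = C\,(x_n + \varepsilon), \qquad r \le t \le R,
\]
since G vanishes on ∂B(x̃_0, r) and x_n = −ε − r + t along this ray. Sliding x̃_0 horizontally then gives the same bound at every point of B_{1/4}(0) ∩ Ω, which is (54).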
Now that we have this estimate, let's find the corresponding harmonic function that gives us the desired result. Let w be the harmonic function with boundary values
w = u on ∂(B_{1/4}^+) ∩ Ω ∩ {x_n ≥ ε},
w = 0 on ∂(B_{1/4}^+) ∩ {0 ≤ x_n < ε},
w = 0 on {x_n = 0}.
There is the possibility that w is not continuous on ∂(B_{1/4}^+) ∩ Ω ∩ {x_n = ε}, because on one side w = u and on the other w = 0. The way to fix this is the following: move δ > 0 above ε and consider a C^∞ cutoff function v_δ that satisfies
0 ≤ v_δ ≤ 1 on W ⊂⊂ ∂(B_{1/4}^+) ∩ Ω ∩ {ε ≤ x_n ≤ ε + δ},
v_δ ≡ 1 on V ⊂⊂ W,
where v_δ decreases monotonically to zero. Now define the new harmonic function w_δ by prescribing the boundary values
w_δ = u on ∂(B_{1/4}^+) ∩ Ω ∩ {x_n ≥ ε + δ},
w_δ = v_δ^2 u on ∂(B_{1/4}^+) ∩ Ω ∩ {ε ≤ x_n ≤ ε + δ},
w_δ = 0 on ∂(B_{1/4}^+) ∩ {0 ≤ x_n < ε},
w_δ = 0 on {x_n = 0}.
Then since
(i) |w_δ| = |u| ≤ G on ∂(B_{1/4}^+) ∩ Ω ∩ {x_n ≥ ε + δ},
(ii) |w_δ| = |v_δ^2 u| ≤ |u| ≤ G on ∂(B_{1/4}^+) ∩ Ω ∩ {ε ≤ x_n ≤ ε + δ},
(iii) G ≥ 0 = w_δ on ∂(B_{1/4}^+) ∩ {0 ≤ x_n < ε},
the maximum principle gives |w_δ| ≤ G in B_{1/4}^+ ∩ Ω, and hence
|w_δ| ≤ C|x_n + ε| on all of B_{1/4}^+ ∩ Ω. (55)
Similarly, on B_{1/4}^- ∩ Ω one can extend oddly by w̃_δ(x', x_n) = −w_δ(x', −x_n) on B_{1/4}^-, and comparing w̃_δ with −u one achieves the same bound as (55) on B_{1/4}^- ∩ Ω. Define the new oddly reflected function
w_odd = w_δ on B_{1/4}^+,  w_odd = w̃_δ on B_{1/4}^-,
which satisfies
Putting (54) and (56) together finally proves the Lemma for u harmonic:
|u − w_odd| ≤ Cε
on B_{1/4} ∩ Ω. Now let's consider the general case ∆u = f where f ∈ L^∞ and |f| ≤ δ. Here we apply the above to ũ = u ± (δ/2n)|x|^2, and the comparison principle allows us to conclude
|u − w| ≤ C(ε + δ) on B_{1/4} ∩ Ω.
Proof. In order to see why this is true, we must examine where we used u = 0 on ∂Ω in the proof of Lemma 5.4. We were trying to show that |u| ≤ G on the common domain of influence, and we used the fact that u = 0 ≤ G on ∂Ω. In order to get the same estimates of the Lemma for (57), we simply move the point x̃_0 = (x_0', −ε − r) down to x̃_0 = (x_0', −ε − r − 2ε_0). The reason for this is that the barrier function G grows at the rate
|DG(r)| ≤ C r^{1−n},
and so by moving down a sufficiently small amount, G becomes bigger than ε_0, and in turn bigger than u on ∂Ω.
Theorem 5.6. Suppose u satisfies
∆u = f in Ω
u = 0 on ∂Ω
with f(0) = g(0) = ∇g(0) = 0, |u| ≤ 1, |f| ≤ δ. Suppose Ω is a C^{1,α} domain, that is, ∂Ω = {(x', g(x'))} for g ∈ C^{1,α} satisfying |g| ≤ δ r^{1+α}. Then u ∈ C^{1,α}(B_{r/2} ∩ Ω) with the estimate
‖u‖_{C^{1,α}(B_{r/2} ∩ Ω)} ≤ C_{n,α} (‖u‖_{L^∞} + ‖f‖_{L^∞} + ‖g‖_{C^{1,α}}).
Proof. Consider B_r ∩ Ω; we want to show that there exists a linear function l such that
and WLOG assume r ≤ 1. Then, since u = 0 along the boundary, all the tangential derivatives vanish there, and hence we may reduce this to
and we may also assume |a| ≤ 1. In order to achieve this by the sort of iteration used in Theorem 5.3, we need to conclude
‖u − ã x_n‖_{L^∞(B_{ρr} ∩ Ω)} ≤ (ρr)^{1+α}
for some ã. The first thing we do is to re-scale the ball B_r into B_1 (and hence dilate Ω to Ω̃), and it is clear that we need to define some ũ that satisfies
because ũ's domain of influence is B_1. Our hypothesis then becomes |ũ| ≤ 1, and note that we can rewrite it as
ũ(x) = (u(rx) − a r x_n) / r^{1+α}.
Observe that dilating Ω to Ω̃ makes the latter the graph of g̃(x') = (1/r) g(rx'), which satisfies |g̃| ≤ δ r^α. Now we need to figure out what equation ũ satisfies: ∆ũ = r^2 ∆u(rx) / r^{1+α} = r^{1−α} f(rx) =: f̃. The bound that f̃ satisfies is |f̃| ≤ δ r^{1−α} ≤ δ, where we used that r ≤ 1. Now let's figure out the boundary values that ũ takes:
|ũ| on ∂Ω̃ = |u(rx) − a r x_n| / r^{1+α} = |a r x_n| / r^{1+α} = |a x_n| / r^α ≤ δ,
where we used the fact that u vanishes on ∂Ω (so u(rx) vanishes on ∂Ω̃), together with |a| ≤ 1 and |x_n| = |g̃(x')| ≤ δ r^α on ∂Ω̃. Putting everything together gives us
|ũ| ≤ 1,
|g̃| ≤ δ r^α, (59)
∆ũ = f̃, |f̃| ≤ δ,
|ũ| ≤ δ on ∂Ω̃.
Now we are exactly in the position of Corollary 5.5, because the flatness condition is precisely met by (59) (after replacing the ε in the Lemma with δ, since ∂Ω̃ is the graph of g̃ and so its height in B_1 is at most δ by (59)). Applying the Lemma gives us |ũ − w| ≤ Cδ in B_{1/4} ∩ Ω̃. Recall that w is the harmonic function that vanishes on {x_n = 0}, so that |w − b x_n|_{B_ρ} ≤ Cρ^2. Now, noticing that the radius of B_{1/4} doesn't have to be 1/4 (as long as it is small enough for the proof of Lemma 5.4 to hold), we reach
|ũ − b x_n|_{B_ρ ∩ Ω̃} ≤ C(δ + ρ^2).
By picking δ and ρ small enough, and using the fact that 0 < α < 1, this yields
|ũ − b x_n|_{B_ρ ∩ Ω̃} ≤ Cρ^{1+α}.
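One concrete way to make the choice of ρ and δ (a sketch; the only point is that the constant can be absorbed because α < 1):
% C is the constant from the previous display
\[
\text{fix } \rho \le 1 \text{ with } C\rho^{1-\alpha} \le \tfrac12 \ \ (\text{so } C\rho^{2} \le \tfrac12\,\rho^{1+\alpha}),
\qquad \text{then take } \delta \le \frac{\rho^{1+\alpha}}{2C}
\quad\Longrightarrow\quad C(\delta + \rho^{2}) \le \rho^{1+\alpha},
\]
which is the clean form needed to iterate as in Theorem 5.3 (compare the requirement ‖u − ã x_n‖_{L∞(B_{ρr} ∩ Ω)} ≤ (ρr)^{1+α} stated earlier).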
Note that this was just in this particular domain B_{1/2}(e_n), but we want to establish the proof in all of B_{1/2}^+. To remedy this, we slide our barrier to the left and right in the x' directions so that the outer ball, where φ vanishes, is tangent to B^+. The centers of these two extreme balls are given by ½(1, . . . , 1) and ½(−1, . . . , −1, 1). So how does this help us? We can actually apply the above estimate at each of the translated barriers, because we can iterate the Harnack inequality to get a thin strip of the domain where we know u ≥ C, and so we can construct similar barriers at every point along this strip. Then, at the end of these iterations, take the smallest angle among the planes generated by the barrier functions and obtain u ≥ c x_n in B_{1/2}. Since u has this flatness associated to it, it is C^{1,α}, as will be seen in the proof of the next theorem.
We now introduce the final theorem of this thesis. We first state it and prove an important lemma that
will be useful.
Theorem 5.8. Let u ∈ W^{2,n}(Ω) ∩ C^0(Ω̄), where Ω is a C^{1,α} domain such that 0 ∈ ∂Ω and, near zero, ∂Ω = {(x', g(x'))} with |g(x')| ≤ δ|x'|^{1+α}. Suppose Lu = a^{ij}(x) u_{ij} = f(x) strongly in Ω with λI ≤ (a^{ij}) ≤ ΛI, |f| ≤ δ in B_1 ∩ Ω, and u ≤ δ|x'|^{1+α} on ∂Ω. Then u is C^{1,α} at 0.
Lemma 5.9. Assume u satisfies
a_0^- x_n − Mδ ≤ u ≤ a_0^+ x_n + Mδ
in B_1, with |a_0^+ − a_0^-| = 1 and a_0^+, a_0^- ∈ [−10, 10]. Then, in B_{r_0} with r_0 ≤ 1, u satisfies
a_1^- x_n − Mδ r_0^{1+α} ≤ u ≤ a_1^+ x_n + Mδ r_0^{1+α}
with a_1^+ ≤ a_0^+ + Cδ, a_0^- − Cδ ≤ a_1^-, and a_1^+ − a_1^- ≤ (1 − η) for some small η.
and so w'' > 0 and w' < 0 imply that
a^{ij} w_{ij} ≥ λ w''(r) + Λ ((n − 1)/r) w'(r) = γ M_1 r^{−γ−2} (λ(γ + 1) − Λ(n − 1)) ≥ C_1
by choosing γ large enough. Now define
W := (C_0/2) w − 100δ.
We want to do our estimates on Ω ∩ B_1 \ B_{1/10}(e_n/2), and so we first examine ∂B_1 ∩ Ω. Here we know W is very negative from its construction (more so than ū ≥ −Mδ), and if it isn't, we simply move our computations from e_n/2 to e_n/4. This gives us ū ≥ W on Ω ∩ ∂B_1. From the bounds on g, we know that the first points of contact between ∂B_1 and ∂Ω will be at height at most δ. But from our estimates of ū near zero, we have
ū ≥ −11δ ≥ max_{|x_n| ≤ δ} W ≥ max_{∂Ω ∩ {|x_n| ≤ δ}} W.
Note that W ≡ C_0/2 − 100δ on ∂B_{1/10}(e_n/2), and ū ≥ C_0 − Mδ on B_{1/10}(e_n/2) by the Harnack inequality. Hence ū ≥ W on ∂B_{1/10}(e_n/2). If δ is small enough, which can be arranged by multiplying f by a constant since this doesn't change any of our inequalities, we have LW ≥ (C_0/2) C_1 ≥ δ ≥ Lū, and so the maximum principle dictates that W ≤ ū on Ω ∩ B_1 \ B_{1/10}(e_n/2).
Observe that in the radial direction {x' = 0}, W satisfies
W(0, x_n) = (C_0/2) w(0, x_n) − 100δ ≥ C_0 C_3 x_n − 100δ
because of how |x|^{−γ} grows. Also notice that x' = 0 was not special, and we could have taken any point of {|x_n| ≤ δ} ∩ ∂Ω as we look for planes in the radial x_n direction starting from ∂Ω. Taking the maximum C_3 over all such planes we obtain
u − a_0^- x_n = ū ≥ W ≥ C_0 C_3 x_n − 100δ,
so that u ≥ (C_0 C_3 + a_0^-) x_n − 100δ,
and we set a_1^- := C_0 C_3 + a_0^-.
Now assume that u is closer to the upper plane than the lower plane. In this case u satisfies
(1/2) a_0^+ − u(e_n/2) ≥ 1/4.
We define v := a_0^+ x_n − u, and clearly we are looking for a lower bound for v. The hypothesis of the Lemma becomes
−a_0^- x_n + Mδ ≥ −u ≥ −a_0^+ x_n − Mδ,
(a_0^+ − a_0^-) x_n + Mδ ≥ v ≥ −Mδ,
x_n + Mδ ≥ v ≥ −Mδ.
Also, since u was closer to the upper plane, v(e_n/2) ≥ 1/4 − Mδ. The Harnack inequality tells us that v ≥ C_0 − Mδ. As one can probably tell, the proof is going to be identical to the one for ū.
Now consider the barrier w(x) = M_1|x|^{−γ} − M_2, with M_1, M_2 chosen such that w ≡ 1 on ∂B_{1/10}(e_n/2) and w ≡ 0 on ∂B_{1/2}(e_n/2). In a very similar way to the lower bound by the barrier in the previous case, we have a^{ij} w_{ij} ≥ C_1 = C_1(n, λ, Λ, γ). Define W by
W := (C_0/2) w − 100δ.
We want to do our estimates on Ω ∩ B_1 \ B_{1/10}(e_n/2), and so we first examine ∂B_1 ∩ Ω. Here we know W is very negative from its construction (more so than v ≥ −Mδ), and if it isn't, we simply move our computations from e_n/2 to e_n/4. This gives us v ≥ W on Ω ∩ ∂B_1. From the bounds on g, we know that the first points of contact between ∂B_1 and ∂Ω will be at height at most δ. But from our estimates of v near zero, we have
v ≥ −11δ ≥ max_{|x_n| ≤ δ} W ≥ max_{∂Ω ∩ {|x_n| ≤ δ}} W.
Note that W ≡ C_0/2 − 100δ on ∂B_{1/10}(e_n/2), and v ≥ C_0 − Mδ on B_{1/10}(e_n/2) by the Harnack inequality. Hence v ≥ W on ∂B_{1/10}(e_n/2). If δ is small enough, which can be arranged by multiplying f by a constant since this doesn't change any of our inequalities, we have LW ≥ (C_0/2) C_1 ≥ δ ≥ Lv, and so the maximum principle dictates that W ≤ v on Ω ∩ B_1 \ B_{1/10}(e_n/2).
Observe that in the radial direction {x' = 0}, W satisfies
W(0, x_n) = (C_0/2) w(0, x_n) − 100δ ≥ C_0 C_3 x_n − 100δ
because of how |x|^{−γ} grows. Also notice that x' = 0 was not special, and we could have taken any point of {|x_n| ≤ δ} ∩ ∂Ω as we look for planes in the radial x_n direction starting from ∂Ω. Taking the maximum C_3 over all such planes we obtain
a_0^+ x_n − u = v ≥ W ≥ C_0 C_3 x_n − 100δ,
so that (a_0^+ − C_0 C_3) x_n + 100δ ≥ u,
and we set a_1^+ := a_0^+ − C_0 C_3. Thus we can see that even though the distance from the origin increases a little bit, the slope decreases a lot more.
Proof of Theorem. We rescale u by defining
ũ(x) = u(r_0 x) / r_0.
Then by Lemma 5.9, in B_{r_0} we have
a_1^- x_n − Mδ r_0^α ≤ ũ ≤ a_1^+ x_n + Mδ r_0^α
with |a_1^+ − a_1^-| = 1 − η by direct computation. We also rescale the boundary by g̃(x') = g(r_0 x')/r_0, which gives |g̃| ≤ δ r_0^α |x'|^{1+α}. We can also compute
in B_r = B_{r_0^k}, with |a_k^+ − a_k^-| = (1 − η)^k ≈ r^β for some β. This implies convergence of the a_k^± to an a_∞, in the sense that |a_k^+ − a_∞| ≤ C r^β and |a_k^- − a_∞| ≤ C r^β. Then
a_k^- x_n − Mδ r^{1+α} ≤ u ≤ a_k^+ x_n + Mδ r^{1+α},
(a_k^- − a_∞) x_n − Mδ r^{1+α} ≤ u − a_∞ x_n ≤ (a_k^+ − a_∞) x_n + Mδ r^{1+α}.
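A sketch of how these bounds give the pointwise regularity at the origin, assuming (as above) that the two-sided bound holds in B_{r_0^k} for every k:
% assumption: the two-sided bound above holds in B_{r_0^k} for every k
\[
\text{for } r = r_0^{\,k} \text{ and } x \in B_r:\quad
|(a_k^{\pm} - a_\infty)\, x_n| \le C r^{\beta}\, r = C r^{1+\beta},
\qquad\text{hence}\qquad
|u - a_\infty x_n| \le C r^{1+\beta} + M\delta\, r^{1+\alpha} \le C' r^{1+\alpha'},
\]
with α' = min(α, β). This is exactly the pointwise C^{1,α'} condition at 0, with approximating linear polynomial P(x) = a_∞ x_n (in the same spirit as Definition 5.1 and Proposition 5.2, one degree lower).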
References
[1] Gilbarg, D., Trudinger, N.S. Elliptic Partial Differential Equations of Second Order, 2nd ed., Springer-
Verlag, NY, 1983.
[2] Caffarelli, L., Cabré, X. Fully Nonlinear Elliptic Equations, AMS, 1995.