Lectures On Quantum Theory Complete
E-mail: [email protected]
www.youtube.com/playlist?list=PLPH7f_7ZlzxQVx5jRjbfRGEzWY_upS5K6
These lecture notes are not endorsed by Dr. Schuller or the University.
While I have tried to correct typos and errors made during the lectures (some helpfully
pointed out by YouTube commenters), I have also taken the liberty to add and/or modify
some of the material at various points in the notes. Any errors that may result from this
are, of course, mine.
If you have any comments regarding these notes, feel free to get in touch. Visit my
blog for the most up-to-date version of these notes:
http://mathswithphysics.blogspot.com
My gratitude goes to Dr. Schuller for lecturing this course and for making it available
on YouTube.
Simon Rea
I have picked up the notes from where Simon left off (Lecture 8). I have also been
through the first lectures and added small details that I thought helpful. I have tried
to stay consistent with Simon’s inclusion of additional valuable material throughout the
remainder of the course. As with Simon’s comment above, any mistakes made because of
this are, of course, mine.
I have also made the up-to-date version of the notes available on my blog site
https://richie291.wixsite.com/theoreticalphysics
I would like to extend a message of thanks to Simon for providing these notes (and the
notes for Dr. Schuller's other courses) online; I have personally found them very useful.
I would also like to show my gratitude to Dr. Schuller for putting his courses on
YouTube; I have found them both very informative and interesting, a credit to his brilliant
teaching ability.
Richie Dadhley
Contents
Introduction

2 Banach spaces
2.1 Generalities on Banach spaces
2.2 Bounded linear operators
2.3 Extension of bounded linear operators

5 Measure theory
5.1 General measure spaces and basic results
5.2 Borel σ-algebras
5.3 Lebesgue measure on Rd
5.4 Measurable maps
5.5 Push-forward of a measure
7 Self-adjoint and essentially self-adjoint operators
7.1 Adjoint operators
7.2 The adjoint of a symmetric operator
7.3 Closability, closure, closedness
7.4 Essentially self-adjoint operators
7.5 Criteria for self-adjointness and essential self-adjointness

13 Spin
13.1 General Spin
13.2 Derivation of Pure Point Spectrum
13.3 Pure Spin-j Systems
14 Composite Systems
14.1 Tensor Product of Hilbert Spaces
14.2 Practical Rules for Tensor Products of Vectors
14.3 Tensor Product Between Operators
14.4 Symmetric and Antisymmetric Tensor Products
14.5 Collapse of Notation
14.6 Entanglement
20 Periodic Potentials
20.1 Basics of Rigged Hilbert Space
20.2 Fundamental Solutions and Fundamental Matrix
20.3 Translation Operator
20.4 Application to Our Quantum Problem
20.5 Energy Bands
20.6 Quantitative Calculation (Outline)
Introduction
Quantum mechanics has a reputation for being a difficult subject, and it really deserves
that reputation. It is, indeed, very difficult. This is partly because, unlike
classical mechanics or electromagnetism, it is very different from how we feel the world
works. But the fault lies with us: the world does not behave in the way that we feel it should
from our everyday experience. Of course, the reason why classical mechanics works so well
for modelling stones, rockets and planets is that the masses involved are much larger than
those of, say, elementary particles, while the speeds are much slower than the speed of light.
However, even the stone that one throws doesn’t follow a trajectory governed by Newton’s
axioms. In fact, it doesn’t follow a trajectory at all. The very idea of a point particle
following a trajectory turns out to be entirely wrong. So don’t worry if your classical
mechanics course didn’t go well. It’s all wrong anyway!
We know from the double slit experiment that the reality is more complicated. The
result of the experiment can be interpreted as the electron going through both slits and
neither slit at the same time, and in fact taking every possible path. The experiment has
been replicated with objects much larger than an electron¹, and in principle it would work
even if we used a whale (which is not a fish!).
¹ Eibenberger et al., Matter-wave interference with particles selected from a molecular library with masses exceeding 10000 amu, https://arxiv.org/abs/1310.8343
1 Axioms of quantum mechanics
People discovered what was wrong with classical mechanics bit by bit and, consequently,
the historical development of quantum mechanics was highly “non-linear”. Rather than
following this development, we will afford ourselves the luxury of having a well-working theory of
quantum mechanics, and we will present it from the foundations up. We begin by writing
down a list of things we would like to have.
1.1 Desiderata
A working theory of quantum mechanics would need to account for the following.
(a) Measurements of observables, unlike in classical mechanics, don’t just range over an
interval I ⊆ R.
Recall that in classical mechanics an observable is a map F : Γ → R, where Γ is
the phase space of the system, typically given by the cotangent bundle T*Q of some
configuration manifold Q. The map is taken to be at least continuous with respect
to the standard topology on R and an appropriate topology on Γ, and hence, if Γ is
connected, the image F(Γ) = I ⊆ R is an interval.
Consider, for instance, the two-body problem. We have a potential V(r) = −1/r
and, assuming that the angular momentum L is non-zero, the energy observable (or
Hamiltonian) H satisfies H(Γ) = [E_min, ∞) ⊂ R.
However, measurements of the spectrum of the hydrogen atom give the following
values for the energies (in electronvolts) assumed by the electron
σ(H) = { −13.6 × (1/n²) | n ∈ N⁺ } ∪ (0, ∞).

[Diagram: discrete energy levels starting at −13.6 eV and accumulating at 0 eV, followed by a continuum above 0 eV.]
Note that one of the parts may actually be empty. For instance, as we will later show,
the simple quantum harmonic oscillator has the following energy spectrum
σ(H) = { (n + 1/2) ℏω | n = 0, 1, 2, … },

i.e. a purely discrete spectrum starting at (1/2)ℏω, with empty continuous part.
Also, the continuous part need not be connected, as is the case with the spectrum of the
Hamiltonian of an electron in a periodic potential, where σ(H) is a disjoint union of
intervals ("energy bands").
It turns out that self-adjoint linear maps on a complex Hilbert space provide a suitable
formalism to describe the observables of quantum mechanics.
(b) An irreducible impact that each measurement has on the state of a quantum system.
The crucial example demonstrating this is the Stern-Gerlach experiment, which consists
in the following. Silver atoms are heated up in an oven and sent against a screen
with a hole. The atoms passing through the hole are then subjected to an inhomogeneous
magnetic field, which deflects them according to the component of their
angular momentum in the direction of the field. Finally, a screen detects the various
deflections.
[Figure: silver (Ag) atoms from an oven pass through an inhomogeneous magnetic field; S marks the classically expected continuous impact pattern, Wτϕ the pattern actually observed.]
Since the angular momentum distribution of the silver atoms coming from the oven is
random, we would expect an even distribution of values of the component along the
direction of the magnetic field to be recorded on the final screen, as in S. However,
the impact pattern actually detected is that on the W τ ϕ screen. In fact, 50% of
the incoming atoms impact at the top and we say that their angular momentum
component is ↑, and the other 50% hit the bottom region, and we say that their
angular momentum component is ↓. This is another instance of our earlier point:
there seem to be only two possible values for the component of angular momentum
along the direction of the magnetic field, i.e. the spectrum is discrete. Hence, this is
not particularly surprising at this point.
Let us now consider successive iterations of this experiment. Introduce some system
of cartesian coordinates (x, y, z) and let SG(x) and SG(z) denote a Stern-Gerlach
apparatus whose magnetic field points in the x- and z-direction, respectively.
Suppose that we send the atoms through a first SG(z) apparatus, and then we use
the z↑-output as the input of a second SG(z) apparatus.
[Diagram: the z↑-output of a first SG(z) apparatus is fed into a second SG(z) apparatus: 100% of the atoms exit as z↑, 0% as z↓.]
The second SG(z) apparatus finds no z↓-atoms. This is not surprising since, intuitively,
we "filtered out" all the z↓-atoms with the first apparatus. Suppose now that
we feed the z↑-output of a SG(z) apparatus into a SG(x) apparatus.
[Diagram: the z↑-output of an SG(z) apparatus is fed into an SG(x) apparatus: 50% of the atoms exit as x↑, 50% as x↓.]
Experimentally, we find that about half of the atoms are detected in the state x↑ and
half in the state x↓. This is, again, not surprising since we only filtered out the z↓
atoms, and hence we can interpret this result as saying that the x↑, x↓ states are
independent from the z↑, z↓ states.
If our idea of "filtering states out" is correct, then feeding the x↑-output of the
previous set-up to another SG(z) apparatus should clearly produce a 100% z↑-output,
since we already filtered out all the z↓ ones in the previous step.
[Diagram: the x↑-output of the previous SG(z) → SG(x) set-up is fed into a further SG(z) apparatus: 50% of the atoms exit as z↑, 50% as z↓.]
Surprisingly, the output is again 50-50. The idea behind this result is the following.
The SG(z) apparatus left the atoms in a state such that a repeated measurement
with the SG(z) apparatus would give the same result, and similarly for the SG(x)
apparatus. However, the measurement of the SG(x) apparatus somehow altered the
state of the atoms in such a way as to “reset” them with respect to a measurement
by the SG(z) apparatus. For more details on the Stern-Gerlach experiment and
further conclusions one can draw from its results, you should consult the book Modern
Quantum Mechanics by J. J. Sakurai. The conclusion that we are interested in here
is that measurements can alter the state of a system.
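The three sequential set-ups can be reproduced with the standard two-state spin formalism, which the course only develops much later (chapter on spin); the eigenstates and the probability rule used below are therefore assumptions at this point, included purely as an illustrative sketch:

```python
import math

# Spin-1/2 sketch of the sequential Stern-Gerlach experiments.
# z-up/z-down and x-up/x-down eigenstates written in the z-basis
# (real entries suffice for these particular states).
z_up, z_dn = [1.0, 0.0], [0.0, 1.0]
s = 1 / math.sqrt(2)
x_up, x_dn = [s, s], [s, -s]

def prob(a, b):
    """Probability of finding state a, given the system was prepared in state b."""
    amplitude = sum(ai * bi for ai, bi in zip(a, b))
    return amplitude ** 2

print(prob(z_dn, z_up))  # 0.0 : SG(z) after SG(z) finds no z-down atoms
print(prob(x_up, z_up))  # ~0.5: SG(x) after SG(z)
print(prob(z_dn, x_up))  # ~0.5: SG(z) after SG(x); the x-measurement "reset" z
```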
(c) Even if the state ρ of a quantum system is completely known, the only prediction one
can make for the measurement of some observable A is the probability that the measured
value, which is an element of the spectrum σ(A), lies within a Borel-measurable
subset E ⊆ R; this probability is denoted by μ^A_ρ(E).
A suitable theory that accommodates all known experimental facts was developed
between 1900 and 1927, on the physics side by, among others, Schrödinger, Heisenberg and
Dirac, and on the mathematical side almost single-handedly by von Neumann, who invented
a massive proportion of the field known today as functional analysis.
Axiom 1 (Quantum systems and states). To every quantum system there is associated
a separable complex Hilbert space (H, +, ·, ⟨·|·⟩). The states of the system are
all positive, trace-class linear maps ρ : H → H for which Tr ρ = 1.
Remark 1.1. Throughout the quantum mechanics literature, it is stated that the unit, or
normalised, elements ψ ∈ H (that is, ⟨ψ|ψ⟩ = 1) are the states of the quantum system.
This is not correct.
States can be pure or mixed. A state ρ : H → H is called pure if

∃ ψ ∈ H : ∀ α ∈ H : ρ(α) = (⟨ψ|α⟩ / ⟨ψ|ψ⟩) ψ.
Thus, we can associate to each pure state ρ an element ψ ∈ H. However, this correspondence
is not one-to-one. Even if we restrict to pure states and impose the normalisation condition,
there can be many ψ ∈ H representing the same pure state ρ.
Therefore, it is wrong to say that the states of the quantum system are the normalised
elements of the Hilbert space, since they do not represent all the states of the system, and
do not even represent uniquely the states that they do represent.
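In finite dimensions this non-uniqueness is easy to exhibit concretely: ψ and any phase multiple e^{iθ}ψ yield the same map ρ. The sketch below is an added illustration (the vector ψ is chosen arbitrarily):

```python
import cmath

# rho(alpha) = (<psi|alpha>/<psi|psi>) psi, written as the matrix
# rho = |psi><psi| / <psi|psi>; a global phase on psi leaves rho unchanged.
def inner(a, b):  # sesqui-linear, conjugate-linear in the first slot
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

def pure_state(psi):
    n2 = inner(psi, psi).real
    return [[pi * pj.conjugate() / n2 for pj in psi] for pi in psi]

psi = [1 + 2j, 3 - 1j]
rho1 = pure_state(psi)
rho2 = pure_state([cmath.exp(0.7j) * c for c in psi])
same = all(abs(rho1[i][j] - rho2[i][j]) < 1e-12
           for i in range(2) for j in range(2))
print(same)  # True: both representatives define the same pure state
```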
The terms used in Axiom 1 are defined as follows.

• H is a set
• + is a map + : H × H → H
• · is a map · : C × H → H
• ⟨·|·⟩ is a sesqui-linear³ map ⟨·|·⟩ : H × H → C,

such that (H, +, ·) is a complex vector space on which ⟨·|·⟩ is an inner product, and moreover

• H is a complete metric space with respect to the metric induced by the norm induced
in turn by the sesqui-linear map ⟨·|·⟩. Explicitly, every sequence φ : N → H that
satisfies the Cauchy property, namely

∀ ε > 0 : ∃ N ∈ N : ∀ n, m ≥ N : ‖φn − φm‖ < ε,

where φn := φ(n) and ‖ψ‖ := √⟨ψ|ψ⟩, converges in H, i.e.

∃ ϕ ∈ H : ∀ ε > 0 : ∃ N ∈ N : ∀ n ≥ N : ‖ϕ − φn‖ < ε.
Note that the C-vector space (H, +, ·) need not be finite-dimensional and, in fact, we
will mostly work with infinite-dimensional Hilbert spaces.
From now on, if there is no risk of confusion, we will write Aϕ := A(ϕ) in order to
spare some brackets. We will be particularly interested in special types of linear map,
defined on a linear subspace DA ⊆ H called the domain of A.

Definition. A linear map A : DA → H is said to be densely defined if DA is dense in H, i.e.

∀ ψ ∈ H : ∀ ε > 0 : ∃ α ∈ DA : ‖α − ψ‖ < ε.

Definition. A linear map A : DA → H is called positive if

∀ ψ ∈ DA : ⟨ψ|Aψ⟩ ≥ 0.
³ sesqui is Latin for "one and a half".
Definition. A linear map A : DA → H is said to be of trace-class if DA = H and, for any
orthonormal basis {en} of H, the sum/series

∑n ⟨en|Aen⟩ < ∞.
Definition. The adjoint map A* : DA* → H of a densely defined linear map A : DA → H is defined by

• DA* := {ψ ∈ H | ∀ α ∈ DA : ∃ η ∈ H : ⟨ψ|Aα⟩ = ⟨η|α⟩}

• A*ψ := η.

Definition. A densely defined linear map A : DA → H is called self-adjoint if DA* = DA and

• ∀ ϕ ∈ DA : Aϕ = A*ϕ.
We will later show that the adjoint map is well-defined, i.e. that for each ψ ∈ DA*
there exists at most one η ∈ H such that ∀ α ∈ DA : ⟨ψ|Aα⟩ = ⟨η|α⟩.
Remark 1.2. If we defined DA* by requiring that η ∈ DA, we would obtain a notion of
self-adjointness which has undesirable properties. In particular, the spectrum (to be defined
later) of a self-adjoint operator would not be guaranteed to be a subset of R.
The probability that a measurement of the observable A, performed on a system in the state ρ, yields a result lying in the Borel set E ⊆ R is

μ^A_ρ(E) := Tr(PA(E) ∘ ρ),
where the map PA : Borel(R) → L(H), from the Borel-measurable subsets of R to the
Banach space of bounded linear maps on H, is the unique projection-valued measure
that is associated with the self-adjoint map A according to the spectral theorem.
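In finite dimensions the projection-valued measure is concrete: for A = ∑_λ λ P_λ, the projector PA(E) sums the eigenprojectors with eigenvalue in E, and the trace formula above becomes a finite sum. A sketch with a diagonal observable and a diagonal state (both chosen arbitrarily for illustration):

```python
# Finite-dimensional sketch of mu^A_rho(E) = Tr(P_A(E) rho) for a diagonal
# observable A = diag(eigenvalues); P_A(E) projects onto the eigenspaces
# whose eigenvalue lies in the Borel set E.
eigenvalues = [-1.0, 0.0, 2.0]
rho_diag = [0.5, 0.3, 0.2]          # a diagonal density matrix, trace 1

def mu(E):
    """Probability that a measurement of A yields a value in E (a predicate)."""
    return sum(p for lam, p in zip(eigenvalues, rho_diag) if E(lam))

print(mu(lambda lam: lam < 0))      # 0.5
print(mu(lambda lam: lam >= 0))     # 0.5
print(mu(lambda lam: True))         # 1.0: total probability
```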
We will later see that the composition of a bounded linear map with a trace-class map
is again of trace-class, so that Tr(PA(E) ∘ ρ) is well-defined. For completeness, the spectral
theorem states that for any self-adjoint map A there exists a projection-valued measure PA
such that A can be represented in terms of the Lebesgue-Stieltjes integral as
A = ∫_R λ dPA(λ).
The unitary evolution operator is

U(t) := exp(−(i/ℏ) Ht),
where H is the energy observable and, for any observable A and f : R → C, we define

f(A) := ∫_R f(λ) dPA(λ).
Note that, as was the case for the previous axiom, the spectral theorem is crucial since
it is needed to define the unitary evolution operator.
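For a finite-dimensional Hilbert space with H diagonal, f(A) reduces to applying f to each eigenvalue, so U(t) is just a diagonal matrix of phases. The sketch below is an added illustration (with ℏ set to 1 and arbitrarily chosen energies):

```python
import cmath

# For H = diag(E_1, E_2, ...), the evolution operator is
# U(t) = diag(exp(-i E_k t / hbar)); here hbar = 1.
energies = [0.5, 1.5, 2.5]

def U(t):
    return [cmath.exp(-1j * E * t) for E in energies]  # diagonal entries

psi = [0.6, 0.8j, 0.0]                      # a normalised state
psi_t = [u * c for u, c in zip(U(1.3), psi)]
norm2 = sum(abs(c) ** 2 for c in psi_t)
print(norm2)  # stays 1 up to rounding: unitary evolution preserves the norm
```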
After the measurement of an observable A, the state of the system is

ρ_after = (PA(E) ∘ ρ_before ∘ PA(E)) / Tr(PA(E) ∘ ρ_before ∘ PA(E)),

where ρ_before is the state immediately preceding the measurement and E ⊆ R is the
smallest Borel set in which the actual outcome of the measurement happened to lie.
2 Banach spaces
Hilbert spaces are a special type of a more general class of spaces known as Banach spaces.
We are interested in Banach spaces not just for the sake of generality, but also because they
naturally appear in Hilbert space theory. For instance, the space of bounded linear maps
on a Hilbert space is not itself a Hilbert space, but only a Banach space.
Definition. A normed space is a (complex) vector space (V, +, ·) equipped with a norm,
that is, a map ‖·‖ : V → R satisfying, for all f, g ∈ V and z ∈ C,

(i) ‖f‖ ≥ 0 (non-negativity)

(ii) ‖f‖ = 0 ⇔ f = 0 (definiteness)

(iii) ‖z · f‖ = |z| ‖f‖ (homogeneity)

(iv) ‖f + g‖ ≤ ‖f‖ + ‖g‖ (triangle inequality).
Once we have a norm ‖·‖ on V, we can define a metric d on V by

d(f, g) := ‖f − g‖.

Then we say that the normed space (V, ‖·‖) is complete if the metric space (V, d), where
d is the metric induced by ‖·‖, is complete; a complete normed space is called a Banach
space. Note that we will usually suppress inessential information in the notation, for
example writing (V, ‖·‖) instead of (V, +, ·, ‖·‖).
Example 2.1. The space C⁰_C[0,1] := {f : [0,1] → C | f is continuous}, where the continuity
is with respect to the standard topologies on [0,1] ⊂ R and C, is a Banach space. Let us
show this in some detail.
Proof. (a) First, define two operations +, · pointwise, that is, for any x ∈ [0,1],

(f + g)(x) := f(x) + g(x),   (z · f)(x) := z f(x).

We claim that f + g is again continuous. Fix x0 ∈ [0,1] and ε > 0. By the continuity of f and of g, there exist δ1, δ2 > 0 such that

∀ x ∈ (x0 − δ1, x0 + δ1) : |f(x) − f(x0)| < ε/2
∀ x ∈ (x0 − δ2, x0 + δ2) : |g(x) − g(x0)| < ε/2.

Let δ := min{δ1, δ2}. Then, for all x ∈ (x0 − δ, x0 + δ),

|(f + g)(x) − (f + g)(x0)| ≤ |f(x) − f(x0)| + |g(x) − g(x0)| < ε/2 + ε/2 = ε.

Since x0 ∈ [0,1] was arbitrary, we have f + g ∈ C⁰_C[0,1]. Similarly, for any z ∈ C and
f ∈ C⁰_C[0,1], we also have z · f ∈ C⁰_C[0,1]. It is immediate to check that the complex
vector space structure of C implies that the operations

+ : C⁰_C[0,1] × C⁰_C[0,1] → C⁰_C[0,1]        · : C × C⁰_C[0,1] → C⁰_C[0,1]
        (f, g) ↦ f + g                              (z, f) ↦ z · f

endow C⁰_C[0,1] with the structure of a complex vector space.
(b) Since [0,1] is closed and bounded, it is compact, and hence every complex-valued
continuous function f : [0,1] → C is bounded, in the sense that

sup_{x∈[0,1]} |f(x)| < ∞.
We can thus define a norm on C⁰_C[0,1], called the supremum (or infinity) norm, by

‖f‖∞ := sup_{x∈[0,1]} |f(x)|.
Let us show that this is indeed a norm on (C⁰_C[0,1], +, ·) by checking that the four
defining properties hold. Let f, g ∈ C⁰_C[0,1] and z ∈ C. Then

(b.i) ‖f‖∞ := sup_{x∈[0,1]} |f(x)| ≥ 0, since |f(x)| ≥ 0 for every x ∈ [0,1].

(b.ii) Suppose that ‖f‖∞ = 0. Then, for all x ∈ [0,1], we have |f(x)| ≤ ‖f‖∞ = 0.
But since we also have |f(x)| ≥ 0 for all x ∈ [0,1], f is identically zero. The converse is clear.
(b.iii) ‖z · f‖∞ := sup_{x∈[0,1]} |z f(x)| = sup_{x∈[0,1]} |z| |f(x)| = |z| sup_{x∈[0,1]} |f(x)| = |z| ‖f‖∞.
(b.iv) By using the triangle inequality for the modulus of complex numbers, we have

‖f + g‖∞ = sup_{x∈[0,1]} |f(x) + g(x)| ≤ sup_{x∈[0,1]} (|f(x)| + |g(x)|) ≤ ‖f‖∞ + ‖g‖∞.
(c) We now show that C⁰_C[0,1] is complete. Let {fn}n∈N be a Cauchy sequence of functions
in C⁰_C[0,1], that is,

∀ ε > 0 : ∃ N ∈ N : ∀ n, m ≥ N : ‖fn − fm‖∞ < ε.
We seek an f ∈ C⁰_C[0,1] such that lim_{n→∞} fn = f. We will proceed in three steps.

(c.i) Fix y ∈ [0,1] and ε > 0. By the Cauchy property, there exists N ∈ N such that, for all n, m ≥ N,

|fn(y) − fm(y)| ≤ ‖fn − fm‖∞ < ε,

that is, the sequence of complex numbers {fn(y)}n∈N is a Cauchy sequence. Since
C is a complete metric space⁴, there exists z_y ∈ C such that lim_{n→∞} fn(y) = z_y.
⁴ The standard metric on C is induced by the modulus of complex numbers.
Thus, we can define a function

f : [0,1] → C
    x ↦ z_x,

the pointwise limit of {fn}n∈N. Note that this does not automatically imply that
lim_{n→∞} fn = f (convergence with respect to the supremum norm), nor that f ∈ C⁰_C[0,1],
and hence we need to check separately that these do, in fact, hold.
(c.ii) First, let us check that f ∈ C⁰_C[0,1], that is, f is continuous. Let x0 ∈ [0,1] and
ε > 0. For each x ∈ [0,1] and any n ∈ N, we have

|f(x) − f(x0)| ≤ |f(x) − fn(x)| + |fn(x) − fn(x0)| + |fn(x0) − f(x0)|.

Since {fn}n∈N is Cauchy with respect to ‖·‖∞, there exists N ∈ N such that
∀ n, m ≥ N : ‖fn − fm‖∞ < ε/3. Letting m → ∞ and using that f is the pointwise
limit of {fn}n∈N, we obtain, for each x ∈ [0,1],

∀ n ≥ N : |f(x) − fn(x)| ≤ ε/3.

In particular, we also have |fN(x0) − f(x0)| ≤ ε/3. Moreover, since fN ∈ C⁰_C[0,1] by
assumption, there exists δ > 0 such that

∀ x ∈ (x0 − δ, x0 + δ) : |fN(x) − fN(x0)| < ε/3.

Hence, taking n = N above, for all x ∈ (x0 − δ, x0 + δ) we have

|f(x) − f(x0)| ≤ |f(x) − fN(x)| + |fN(x) − fN(x0)| + |fN(x0) − f(x0)| < ε/3 + ε/3 + ε/3 = ε,

and thus f is continuous at x0. Since x0 ∈ [0,1] was arbitrary, f ∈ C⁰_C[0,1].
(c.iii) Finally, let us check that lim_{n→∞} fn = f with respect to ‖·‖∞. Let ε > 0 and note that, for any m ∈ N,

‖fn − f‖∞ = ‖fn − fm + fm − f‖∞ ≤ ‖fn − fm‖∞ + ‖fm − f‖∞.

Since {fn}n∈N is Cauchy, there exists N1 ∈ N such that

∀ n, m ≥ N1 : ‖fn − fm‖∞ < ε/2.

Moreover, since f is the pointwise limit of {fn}n∈N, for each x ∈ [0,1] there exists
N2 ∈ N (depending on x) such that

∀ m ≥ N2 : |fm(x) − f(x)| < ε/2.

Hence, for n ≥ N1 and each x ∈ [0,1], choosing some m ≥ max{N1, N2}, we have

|fn(x) − f(x)| ≤ ‖fn − fm‖∞ + |fm(x) − f(x)| < ε/2 + ε/2 = ε.

By definition of the supremum, this gives

∀ n ≥ N1 : ‖fn − f‖∞ = sup_{x∈[0,1]} |fn(x) − f(x)| ≤ ε.

Therefore lim_{n→∞} fn = f, and (C⁰_C[0,1], ‖·‖∞) is a Banach space.
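The care taken in these steps is warranted: pointwise convergence alone gives neither convergence in ‖·‖∞ nor continuity of the limit. A quick numerical contrast (grid-sampled, an added illustration):

```python
# Pointwise vs sup-norm (uniform) convergence on [0, 1], sampled on a grid.
xs = [i / 1000 for i in range(1001)]

def sup_dist(f, g):
    return max(abs(f(x) - g(x)) for x in xs)

# f_n(x) = x**n converges pointwise to a discontinuous function (0 on [0,1),
# 1 at x = 1), and the sup distance to that limit does not shrink to 0.
f_limit = lambda x: 1.0 if x == 1.0 else 0.0
print([round(sup_dist(lambda x, n=n: x**n, f_limit), 3) for n in (5, 50, 500)])

# g_n(x) = x/n converges to 0 uniformly: the sup distance is exactly 1/n.
print([sup_dist(lambda x, n=n: x / n, lambda x: 0.0) for n in (5, 50, 500)])
```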
Remark 2.2 . The previous example shows that checking that something is a Banach space,
and the completeness property in particular, can be quite tedious. However, in the following,
we will typically already be working with a Banach (or Hilbert) space and hence, rather
than having to check that the completeness property holds, we will instead be able to use
it to infer the existence (within that space) of the limit of any Cauchy sequence.
Definition. Let (V, ‖·‖V) and (W, ‖·‖W) be normed spaces. A linear map A : V → W is called bounded if

sup_{f∈V} ‖Af‖W / ‖f‖V < ∞,

in which case this supremum is called the operator norm ‖A‖ of A. Boundedness is equivalent to each of the following conditions:

(i) sup_{f∈V, ‖f‖V=1} ‖Af‖W < ∞

(ii) ∃ k ≥ 0 : ∀ f ∈ V : ‖Af‖W ≤ k ‖f‖V

(iii) the map A is continuous at some point of V

(iv) the map A : V → W is continuous with respect to the topologies induced by the respective
norms on V and W.
The first one of these follows immediately from the homogeneity of the norm. Indeed,
suppose that ‖f‖V ≠ 1. Then

‖Af‖W / ‖f‖V = ‖f‖V⁻¹ ‖Af‖W = ‖A(‖f‖V⁻¹ f)‖W = ‖Af̃‖W,

where f̃ := ‖f‖V⁻¹ f is such that ‖f̃‖V = 1. Hence, the boundedness property is equivalent
to condition (i) above.
Example 2.4. Let idW : W → W be the identity operator on a Banach space W. Then

sup_{f∈W} ‖idW f‖W / ‖f‖W = sup_{f∈W} 1 = 1 < ∞.
Example 2.5. Denote by C¹_C[0,1] the complex vector space of once continuously differentiable
complex-valued functions on [0,1]. Since differentiability implies continuity, this is
a subset of C⁰_C[0,1]. Moreover, since sums and complex scalar multiples of continuously
differentiable functions are again continuously differentiable, it is, in fact, a vector
subspace of C⁰_C[0,1], and hence also a normed space with the supremum norm ‖·‖∞.
Consider the first derivative operator

D : C¹_C[0,1] → C⁰_C[0,1]
    f ↦ f′.

We know from undergraduate real analysis that D is a linear operator. We will now show
that D is an unbounded⁵ linear operator. That is,

sup_{f∈C¹_C[0,1]} ‖Df‖∞ / ‖f‖∞ = ∞.
Note that, since the norm is a function into the real numbers, both ‖Df‖∞ and ‖f‖∞ are
always finite for any f ∈ C¹_C[0,1]. Recall that the supremum of a set of real numbers is its
least upper bound and, in particular, it need not be an element of the set itself. What we
have to show is that the set
{ ‖Df‖∞ / ‖f‖∞ | f ∈ C¹_C[0,1] } ⊂ R
contains arbitrarily large elements. One way to do this is to exhibit a positively divergent
(or unbounded from above) sequence within the set.
⁵ Some people take the term unbounded to mean "not necessarily bounded". We take it to mean "definitely not bounded" instead.
Consider the sequence {fn}n≥1 where fn(x) := sin(2πnx). We know that sine is continuously
differentiable, hence fn ∈ C¹_C[0,1] for each n ≥ 1, with

(Dfn)(x) = 2πn cos(2πnx).

We have

‖fn‖∞ = sup_{x∈[0,1]} |fn(x)| = sup_{x∈[0,1]} |sin(2πnx)| = 1

and

‖Dfn‖∞ = sup_{x∈[0,1]} |(Dfn)(x)| = sup_{x∈[0,1]} |2πn cos(2πnx)| = 2πn.

Hence, we have

sup_{f∈C¹_C[0,1]} ‖Df‖∞ / ‖f‖∞ ≥ sup_{n≥1} ‖Dfn‖∞ / ‖fn‖∞ = sup_{n≥1} 2πn = ∞,

which is what we wanted. As an aside, we note that C¹_C[0,1] is not complete with respect
to the supremum norm, but it is complete with respect to the norm

‖f‖_C¹ := ‖f‖∞ + ‖f′‖∞.
Note that D does become bounded if we equip its domain with this new norm, since
‖Df‖∞ = ‖f′‖∞ ≤ ‖f‖_C¹. In general, the boundedness of a linear operator depends on the
choice of norms on its domain and target, as does the numerical value of the operator norm.
Remark 2.6. Apart from the "minor" detail that in quantum mechanics we deal with Hilbert
spaces, use a different norm than the supremum norm, and that the (one-dimensional)
momentum operator acts as P(ψ) := −iℏψ′, the previous example is a harbinger of the fact
that the momentum operator in quantum mechanics is unbounded. This will be the case
for the position operator Q as well.
Lemma 2.7. Let (V, ‖·‖) be a normed space. Then, addition, scalar multiplication, and the
norm are all sequentially continuous. That is, for any sequences {fn}n∈N and {gn}n∈N in
V converging to f ∈ V and g ∈ V respectively, and any sequence {zn}n∈N in C converging
to z ∈ C, we have

(i) lim_{n→∞} (fn + gn) = f + g

(ii) lim_{n→∞} zn fn = zf

(iii) lim_{n→∞} ‖fn‖ = ‖f‖.
Proof. (i) Let ε > 0. Since lim_{n→∞} fn = f and lim_{n→∞} gn = g by assumption, there exist
N1, N2 ∈ N such that

∀ n ≥ N1 : ‖f − fn‖ < ε/2
∀ n ≥ N2 : ‖g − gn‖ < ε/2.
Let N := max{N1, N2}. Then, for all n ≥ N, we have

‖(fn + gn) − (f + g)‖ ≤ ‖fn − f‖ + ‖gn − g‖ < ε/2 + ε/2 = ε,

and hence lim_{n→∞} (fn + gn) = f + g.
(ii) Since {zn}n∈N converges, it is bounded, i.e.

∃ k > 0 : ∀ n ∈ N : |zn| ≤ k.
Let ε > 0. Since lim_{n→∞} fn = f and lim_{n→∞} zn = z, there exist N1, N2 ∈ N such that

∀ n ≥ N1 : ‖f − fn‖ < ε/(2k)
∀ n ≥ N2 : |z − zn| < ε/(2‖f‖).
Let N := max{N1, N2}. Then, for all n ≥ N, we have

‖zn fn − zf‖ = ‖zn fn − zn f + zn f − zf‖
= ‖zn(fn − f) + (zn − z)f‖
≤ ‖zn(fn − f)‖ + ‖(zn − z)f‖
= |zn| ‖fn − f‖ + |zn − z| ‖f‖
< k · ε/(2k) + (ε/(2‖f‖)) ‖f‖
= ε.
Hence lim_{n→∞} zn fn = zf.

(iii) Let ε > 0. Since lim_{n→∞} fn = f, there exists N ∈ N such that

∀ n ≥ N : ‖fn − f‖ < ε.

By the reverse triangle inequality, for all n ≥ N we have |‖fn‖ − ‖f‖| ≤ ‖fn − f‖ < ε, and
thus lim_{n→∞} ‖fn‖ = ‖f‖.
Note that by taking {zn}n∈N to be the constant sequence whose terms are all equal to
some fixed z ∈ C, we have lim_{n→∞} zfn = zf as a special case of (ii).
This lemma will take care of some of the technicalities involved in proving the following
crucially important result.
Theorem 2.8. The set L(V, W) of bounded linear operators from a normed space (V, ‖·‖V)
to a Banach space (W, ‖·‖W), equipped with pointwise addition and scalar multiplication
and the operator norm, is a Banach space.
Proof. (a) Define addition and scalar multiplication pointwise: for A, B ∈ L(V, W), z ∈ C,
and f ∈ V, set (A + B)f := Af + Bf and (zA)f := z(Af). These maps are again linear, and
the triangle inequality and homogeneity of ‖·‖W give

‖A + B‖ ≤ ‖A‖ + ‖B‖,   ‖zA‖ = |z| ‖A‖,

so that A + B and zA are again bounded, and it is immediate to check that the vector
space structure of W induces a vector space structure on L(V, W) with these operations.
(b) We need to show that the operator norm ‖·‖ satisfies the properties of a norm on
L(V, W). We have already shown two of these in part (a), namely the triangle inequality
and homogeneity. Non-negativity is clear from the definition, and if ‖A‖ = 0 then
‖Af‖W = 0, i.e. Af = 0, for every f ∈ V, so that A = 0.
(c) The heart of the proof is showing that (L(V, W), ‖·‖) is complete. We will proceed
in three steps, analogously to the case of C⁰_C[0,1].
(c.i) Let {An}n∈N be a Cauchy sequence in L(V, W). Fix f ∈ V \ {0} and let ε > 0. Then,
there exists N ∈ N such that
∀ n, m ≥ N : ‖An − Am‖ < ε/‖f‖V.

Then, for all n, m ≥ N, we have

‖An f − Am f‖W = ‖(An − Am)f‖W
= ‖f‖V (‖(An − Am)f‖W / ‖f‖V)
≤ ‖f‖V sup_{g∈V} (‖(An − Am)g‖W / ‖g‖V)
=: ‖f‖V ‖An − Am‖
< ‖f‖V (ε/‖f‖V)
= ε.

Hence {An f}n∈N is a Cauchy sequence in W. Since W is a Banach space, it converges, and
thus we can define
A : V → W
    f ↦ lim_{n→∞} An f.
(c.ii) We now need to show that A ∈ L(V, W). This is where the previous lemma
comes in handy. For linearity, let f, g ∈ V and z ∈ C. Then

A(zf + g) := lim_{n→∞} An(zf + g) = lim_{n→∞} (z An f + An g) = z lim_{n→∞} An f + lim_{n→∞} An g =: zAf + Ag,

where we have used the linearity of each An and parts (i) and (ii) of Lemma 2.7.
For boundedness, parts (ii) and (iii) of Lemma 2.7 yield

∀ f ∈ V : ‖Af‖W / ‖f‖V ≤ lim_{n→∞} ‖An‖.
Hence, to show that A is bounded, it suffices to show that the limit on the right-hand
side is finite. Let ε > 0. Since {An}n∈N is a Cauchy sequence, there exists N ∈ N such that

∀ n, m ≥ N : ‖An − Am‖ < ε.
Then, by the reverse triangle inequality (as in the proof of Lemma 2.7), we have

|‖An‖ − ‖Am‖| ≤ ‖An − Am‖ < ε

for all n, m ≥ N. Hence, the sequence of real numbers {‖An‖}n∈N is a Cauchy
sequence. Since R is complete, this sequence converges to some real number
r ∈ R. Therefore
sup_{f∈V} ‖Af‖W / ‖f‖V ≤ lim_{n→∞} ‖An‖ = r < ∞,

and hence A ∈ L(V, W).

(c.iii) Finally, let us show that lim_{n→∞} An = A with respect to the operator norm. Let
ε > 0. Since {An}n∈N is Cauchy, there exists N1 ∈ N such that

∀ n, m ≥ N1 : ‖An − Am‖ < ε/2.
Moreover, since A is the pointwise limit of {An}n∈N, for any f ∈ V there exists
N2 ≥ N1 such that

∀ m ≥ N2 : ‖Am f − Af‖W < (ε/2) ‖f‖V.

Hence, for all n ≥ N1 and all f ∈ V, choosing some m ≥ N2, we have

‖An f − Af‖W ≤ ‖An f − Am f‖W + ‖Am f − Af‖W < (ε/2)‖f‖V + (ε/2)‖f‖V = ε ‖f‖V,

and therefore

∀ n ≥ N1 : ‖An − A‖ = sup_{f∈V} ‖An f − Af‖W / ‖f‖V ≤ ε.

Thus lim_{n→∞} An = A, and (L(V, W), ‖·‖) is a Banach space.
Remark 2.9 . Note that if V and W are normed spaces, then L(V, W ) is again a normed
space, while for L(V, W ) to be a Banach space it suffices that W be a Banach space.
Remark 2.10. In the proof that L(V, W) is a Banach space, we have shown a useful
inequality which we restate here for further emphasis. If A : V → W is bounded, then

∀ f ∈ V : ‖Af‖W ≤ ‖A‖ ‖f‖V.
Definition. The dual space of a normed space (V, ‖·‖V) is V* := L(V, C), the space of
bounded linear maps V → C.

Note that, since C is a Banach space, the dual of a normed space is a Banach space.
The elements of V* are variously called covectors or functionals on V.
Remark 2.11 . You may recall from undergraduate linear algebra that the dual of a vector
space was defined to be the vector space of all linear maps V → C, rather than just the
bounded ones. This is because, in finite dimensions, all linear maps are bounded. So the
two definitions agree as long as we are in finite dimensions. If we used the same definition
for the infinite-dimensional case, then V ∗ would lack some very desirable properties, such
as that of being a Banach space.
The dual space can be used to define a weaker notion of convergence called, rather
unimaginatively, weak convergence.
Definition. A sequence {fn}n∈N in a normed space V converges weakly to f ∈ V if

∀ φ ∈ V* : lim_{n→∞} φ(fn) = φ(f).

Note that {φ(fn)}n∈N is just a sequence of complex numbers. To indicate that the
sequence {fn}n∈N converges weakly to f ∈ V we write

w-lim_{n→∞} fn = f.
In order to further emphasise the distinction with weak convergence, we may say that
{fn}n∈N converges strongly to f ∈ V if it converges according to the usual definition, and
we will write accordingly

s-lim_{n→∞} fn = f.
Proposition 2.12. Let {fn}n∈N be a sequence in a normed space (V, ‖·‖V). If {fn}n∈N
converges strongly to f ∈ V, then it also converges weakly to f ∈ V, i.e.

s-lim_{n→∞} fn = f ⇒ w-lim_{n→∞} fn = f.
Proof. Let ε > 0 and let φ ∈ V*. Since {fn}n∈N converges strongly to f ∈ V, there exists
N ∈ N such that

∀ n ≥ N : ‖fn − f‖V < ε/‖φ‖.

Then, since φ ∈ V* is linear and bounded, for all n ≥ N we have

|φ(fn) − φ(f)| = |φ(fn − f)| ≤ ‖φ‖ ‖fn − f‖V < ε.

Hence lim_{n→∞} φ(fn) = φ(f), i.e. w-lim_{n→∞} fn = f.
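The converse fails. A standard illustration lives in ℓ², which the course treats properly later; take the functionals below as assumptions for now. The "basis" vectors e_n satisfy ‖e_n‖ = 1 for every n, so they cannot converge strongly to 0, yet every functional φ = ⟨g|·⟩ with g ∈ ℓ² satisfies φ(e_n) → 0:

```python
# Truncated l^2 sketch: e_n has a 1 in slot n and zeros elsewhere.
N = 10_000
g = [1 / (k + 1) for k in range(N)]        # a square-summable sequence

def phi(f):                                 # the functional phi = <g|.>
    return sum(gk * fk for gk, fk in zip(g, f))

def e(n):
    v = [0.0] * N
    v[n] = 1.0
    return v

print([phi(e(n)) for n in (10, 100, 1000)])  # equals 1/(n+1): tends to 0 (weakly)
print(sum(c * c for c in e(1000)) ** 0.5)    # norm is 1.0 for every n: no strong limit 0
```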
Lemma 2.13. Let (V, ‖·‖) be a normed space and let DA be a dense subspace of V. Then,
for any f ∈ V, there exists a sequence {αn}n∈N in DA which converges to f.
Proof. Let f ∈ V . Clearly, there exists a sequence {fn }n∈N in V which converges to f (for
instance, the constant sequence). Let ε > 0. Then, there exists N ∈ N such that
∀ n ≥ N : kfn − f k < 2ε .
⁶ Bounded Linear Transformation, not Bacon, Lettuce, Tomato.
Since DA is dense in V and each fn ∈ V, we can choose, for each n ∈ N, an αn ∈ DA with
‖αn − fn‖ < 1/(n+1). Hence, for all sufficiently large n ≥ N,

‖αn − f‖ ≤ ‖αn − fn‖ + ‖fn − f‖ < 1/(n+1) + ε/2 < ε,

and thus {αn}n∈N converges to f.

Definition. Let A : DA → W be a linear map from a dense subspace DA of a normed space
V to a normed space W. A linear map Â : V → W is called an extension of A if

∀ α ∈ DA : Âα = Aα.
Theorem 2.14 (BLT theorem⁶). Let V be a normed space and W a Banach space. Any
densely defined bounded linear map A : DA → W has a unique extension Â : V → W such
that Â is bounded. Moreover, ‖Â‖ = ‖A‖.
Proof. (a) Let A ∈ L(DA, W). Since DA is dense in V, for any f ∈ V there exists a
sequence {αn}n∈N in DA which converges to f. Moreover, since A is bounded, we have

∀ n, m ∈ N : ‖Aαn − Aαm‖W ≤ ‖A‖ ‖αn − αm‖V,

from which it quickly follows that {Aαn}n∈N is Cauchy in W. As W is a Banach
space, this sequence converges to an element of W and thus we can define
Â : V → W
    f ↦ lim_{n→∞} Aαn,

where {αn}n∈N is any sequence in DA which converges to f.
(b) To check that Â is well-defined, i.e. independent of the choice of sequence, let {αn}n∈N
and {βn}n∈N be two sequences in DA converging to f. Then

‖Aαn − Aβn‖W ≤ ‖A‖ ‖αn − βn‖V → 0 as n → ∞,

where we have used the fact that A is bounded. Thus, we have shown

lim_{n→∞} Aαn = lim_{n→∞} Aβn,

that is, Â is indeed well-defined.

(c) Moreover, Â is an extension of A: for α ∈ DA, the constant sequence αn := α trivially
converges to α, and hence

Âα := lim_{n→∞} Aαn = lim_{n→∞} Aα = Aα.
(d) For linearity, let f, g ∈ V and z ∈ C, and let {αn}n∈N and {βn}n∈N be sequences in
DA converging to f and g respectively. Define

γn := zαn + βn.

Then {γn}n∈N is a sequence in DA and, by Lemma 2.7,

lim_{n→∞} γn = zf + g.

Then, we have

Â(zf + g) := lim_{n→∞} Aγn
= lim_{n→∞} A(zαn + βn)
= lim_{n→∞} (zAαn + Aβn)
= z lim_{n→∞} Aαn + lim_{n→∞} Aβn
=: zÂf + Âg.
Moreover, by Lemma 2.7, for any f ∈ V with {αn}n∈N in DA converging to f,

‖Âf‖W = lim_{n→∞} ‖Aαn‖W ≤ lim_{n→∞} ‖A‖ ‖αn‖V = ‖A‖ ‖f‖V.

Therefore

sup_{f∈V} ‖Âf‖W / ‖f‖V ≤ sup_{f∈V} ‖A‖ ‖f‖V / ‖f‖V = ‖A‖ < ∞,

and hence Â is bounded.
(e) For uniqueness, suppose that Ã ∈ L(V, W) is another extension of A. Let f ∈ V and
{αn}n∈N a sequence in DA which converges to f. Then, we have

‖Ãf − Aαn‖W = ‖Ãf − Ãαn‖W ≤ ‖Ã‖ ‖f − αn‖V.

It follows that

lim_{n→∞} (Ãf − Aαn) = 0,

that is, Ãf = lim_{n→∞} Aαn =: Âf. As f ∈ V was arbitrary, Ã = Â.
(f) From the boundedness estimate above, we have

‖Â‖ := sup_{f∈V} ‖Âf‖W / ‖f‖V ≤ ‖A‖.

Conversely, since Â agrees with A on DA ⊆ V,

‖A‖ := sup_{f∈DA} ‖Af‖W / ‖f‖V = sup_{f∈DA} ‖Âf‖W / ‖f‖V ≤ sup_{f∈V} ‖Âf‖W / ‖f‖V =: ‖Â‖.

Hence ‖Â‖ = ‖A‖, where, strictly speaking, the two norms are

‖Â‖_L(V,W) := sup_{f∈V} ‖Âf‖W / ‖f‖V   and   ‖A‖_L(DA,W) := sup_{f∈DA} ‖Af‖W / ‖f‖V.
3 Separable Hilbert Spaces
Recall that a Hilbert space is an inner product space (V, +, ·, ⟨·|·⟩) which is complete with
respect to the norm induced by the inner product,

‖·‖ : V → R
    f ↦ √⟨f|f⟩.
Of course, since Hilbert spaces are a special case of Banach spaces, everything that we
have learned about Banach spaces also applies to Hilbert spaces. For instance, L(H, H),
the collection of all bounded linear maps H → H, is a Banach space with respect to the
operator norm. In particular, the dual of a Hilbert space H is just H* := L(H, C). We will
see that the operator norm on H* is such that there exists an inner product on H* which
induces it, so that the dual of a Hilbert space is again a Hilbert space.
First, in order to check that the norm induced by an inner product on V is indeed a
norm on V , we need one of the most important inequalities in mathematics.
Theorem (Cauchy-Schwarz inequality⁷). For all f, g in an inner product space,

|⟨f|g⟩| ≤ ‖f‖ ‖g‖.

Proof. If f = 0, the inequality holds trivially. Hence, assume f ≠ 0 and define

z := ⟨f|g⟩ / ⟨f|f⟩ ∈ C.
⁷ Also known as the Cauchy-Bunyakovsky-Schwarz inequality in the Russian literature.
Then, by positive-definiteness of ⟨·|·⟩, we have

0 ≤ ⟨zf − g|zf − g⟩
= |z|² ⟨f|f⟩ − z̄ ⟨f|g⟩ − z ⟨g|f⟩ + ⟨g|g⟩
= |⟨f|g⟩|²/⟨f|f⟩ − |⟨f|g⟩|²/⟨f|f⟩ − |⟨f|g⟩|²/⟨f|f⟩ + ⟨g|g⟩
= −|⟨f|g⟩|²/⟨f|f⟩ + ⟨g|g⟩.

By rearranging, since ⟨f|f⟩ > 0, we obtain the desired inequality.
Note that, by defining ‖f‖ := √⟨f|f⟩ and using the fact that |⟨f|g⟩| ≥ 0, we can write the
Cauchy-Schwarz inequality as |⟨f|g⟩| ≤ ‖f‖ ‖g‖. Let us check that the induced norm is
indeed a norm.

(i) ‖f‖ := √⟨f|f⟩ ≥ 0

(ii) ‖f‖ = 0 ⇔ ‖f‖² = 0 ⇔ ⟨f|f⟩ = 0 ⇔ f = 0 by positive-definiteness

(iii) ‖zf‖ = √⟨zf|zf⟩ = √(|z|² ⟨f|f⟩) = |z| ‖f‖

(iv) Using the fact that z + z̄ = 2 Re z and Re z ≤ |z| for any z ∈ C, together with the
Cauchy-Schwarz inequality, we have
‖f + g‖² := ⟨f + g|f + g⟩
= ⟨f|f⟩ + ⟨f|g⟩ + ⟨g|f⟩ + ⟨g|g⟩
= ⟨f|f⟩ + 2 Re⟨f|g⟩ + ⟨g|g⟩        (since ⟨g|f⟩ is the conjugate of ⟨f|g⟩)
≤ ⟨f|f⟩ + 2 |⟨f|g⟩| + ⟨g|g⟩
≤ ⟨f|f⟩ + 2 ‖f‖ ‖g‖ + ⟨g|g⟩
= (‖f‖ + ‖g‖)²,

and hence ‖f + g‖ ≤ ‖f‖ + ‖g‖.
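Both inequalities are easy to sanity-check numerically on random complex vectors. The sketch below is an added illustration, using the same convention as these notes (conjugate-linearity in the first slot):

```python
import random

# Check |<f|g>| <= ||f|| ||g|| (Cauchy-Schwarz) and ||f+g|| <= ||f|| + ||g||.
def inner(a, b):
    return sum(ai.conjugate() * bi for ai, bi in zip(a, b))

def norm(a):
    return inner(a, a).real ** 0.5

random.seed(0)
rc = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
for _ in range(1000):
    f = [rc() for _ in range(4)]
    g = [rc() for _ in range(4)]
    assert abs(inner(f, g)) <= norm(f) * norm(g) + 1e-12
    assert norm([a + b for a, b in zip(f, g)]) <= norm(f) + norm(g) + 1e-12
print("both inequalities verified on 1000 random pairs")
```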
Hence, we see that any inner product space (i.e. a vector space equipped with a sesqui-
linear inner product) is automatically a normed space under the induced norm. It is only
natural to wonder whether the converse also holds, that is, whether every norm is induced
by some sesqui-linear inner product. Unfortunately, the answer is negative in general. The
following theorem gives a necessary and sufficient condition for a norm to be induced by a
sesqui-linear inner product and, in fact, by a unique such.
Theorem 3.3 (Jordan-von Neumann). Let V be a vector space. A norm ‖·‖ on V is
induced by a sesqui-linear inner product ⟨·|·⟩ on V if, and only if, the parallelogram identity

‖f + g‖² + ‖f − g‖² = 2‖f‖² + 2‖g‖²

holds for all f, g ∈ V, in which case ⟨·|·⟩ is determined by the polarisation identity

⟨f|g⟩ = (1/4) ∑_{k=0}^{3} iᵏ ‖f + i⁴⁻ᵏ g‖²
= (1/4)(‖f + g‖² − ‖f − g‖² + i‖f − ig‖² − i‖f + ig‖²).
Proof. (⇒) If ‖·‖ is induced by ⟨·|·⟩, then by direct computation

‖f + g‖² − ‖f − g‖² := ⟨f + g|f + g⟩ − ⟨f − g|f − g⟩ = 2⟨f|g⟩ + 2⟨g|f⟩

and

i‖f − ig‖² − i‖f + ig‖² := i⟨f − ig|f − ig⟩ − i⟨f + ig|f + ig⟩
= i⟨f|f⟩ + ⟨f|g⟩ − ⟨g|f⟩ + i⟨g|g⟩ − i⟨f|f⟩ + ⟨f|g⟩ − ⟨g|f⟩ − i⟨g|g⟩
= 2⟨f|g⟩ − 2⟨g|f⟩.
Therefore

‖f + g‖² − ‖f − g‖² + i‖f − ig‖² − i‖f + ig‖² = 4⟨f|g⟩,

that is, the inner product is determined by the polarisation identity. An analogous direct
computation yields ‖f + g‖² + ‖f − g‖² = 2⟨f|f⟩ + 2⟨g|g⟩, i.e. the parallelogram identity
holds.
(i) For conjugate symmetry, take the complex conjugate of the defining expression:
\begin{align*}
\overline{\langle f|g\rangle} &= \tfrac{1}{4}\big(\|f+g\|^2 - \|f-g\|^2 - i\|f-ig\|^2 + i\|f+ig\|^2\big) \\
&= \tfrac{1}{4}\big(\|f+g\|^2 - \|f-g\|^2 - i\|(-i)(if+g)\|^2 + i\|i(-if+g)\|^2\big) \\
&= \tfrac{1}{4}\big(\|g+f\|^2 - \|g-f\|^2 - i\,|-i|^2\|g+if\|^2 + i\,|i|^2\|g-if\|^2\big) \\
&= \tfrac{1}{4}\big(\|g+f\|^2 - \|g-f\|^2 + i\|g-if\|^2 - i\|g+if\|^2\big) \\
&=: \langle g|f\rangle.
\end{align*}
(ii) We will now show linearity in the second argument. This is fairly non-trivial and quite lengthy. We will focus on additivity first. We have
\[ \langle f|g+h\rangle := \tfrac{1}{4}\big(\|f+g+h\|^2 - \|f-g-h\|^2 + i\|f-ig-ih\|^2 - i\|f+ig+ih\|^2\big). \]
Consider the real part of ⟨f|g+h⟩. By successive applications of the parallelogram identity, one finds Re⟨f|g+h⟩ = Re⟨f|g⟩ + Re⟨f|h⟩, and an analogous computation gives the same additivity for the imaginary part. Hence, we have
\begin{align*}
\langle f|g+h\rangle &= \operatorname{Re}\langle f|g+h\rangle + i\operatorname{Im}\langle f|g+h\rangle \\
&= \operatorname{Re}\langle f|g\rangle + \operatorname{Re}\langle f|h\rangle + i\big(\operatorname{Im}\langle f|g\rangle + \operatorname{Im}\langle f|h\rangle\big) \\
&= \operatorname{Re}\langle f|g\rangle + i\operatorname{Im}\langle f|g\rangle + \operatorname{Re}\langle f|h\rangle + i\operatorname{Im}\langle f|h\rangle \\
&= \langle f|g\rangle + \langle f|h\rangle,
\end{align*}
which establishes additivity. Homogeneity in the second argument is now built up in several steps.
(b) Suppose that ⟨f|ng⟩ = n⟨f|g⟩ for some n ∈ N. Then, by additivity,
\[ \langle f|(n+1)g\rangle = \langle f|ng\rangle + \langle f|g\rangle = (n+1)\langle f|g\rangle, \]
so that, by induction, ⟨f|ng⟩ = n⟨f|g⟩ for all n ∈ N. Combining this with ⟨f|−g⟩ = −⟨f|g⟩, we thus obtain

(d) ∀ n ∈ Z : ⟨f|ng⟩ = n⟨f|g⟩.

(e) Now note that, for any m ∈ Z \ {0},
\[ m\,\langle f|\tfrac{1}{m}g\rangle \overset{(d)}{=} \langle f|m\tfrac{1}{m}g\rangle = \langle f|g\rangle, \]
i.e. ⟨f|(1/m)g⟩ = (1/m)⟨f|g⟩.

(f) Hence, for any r = n/m ∈ Q with n ∈ Z and m ∈ Z \ {0},
\[ \langle f|rg\rangle = \langle f|\tfrac{n}{m}g\rangle \overset{(d)}{=} n\,\langle f|\tfrac{1}{m}g\rangle \overset{(e)}{=} \tfrac{n}{m}\langle f|g\rangle = r\langle f|g\rangle \]
and hence
\[ \forall\, r \in \mathbb{Q}: \langle f|rg\rangle = r\langle f|g\rangle. \]
(g) Before we turn to R, we need to show that $|\langle f|g\rangle| \le \sqrt{2}\,\|f\|\,\|g\|$. Note that here we cannot invoke the Cauchy-Schwarz inequality (which would also provide a better estimate) since we don't know that ⟨·|·⟩ is an inner product yet. First, consider the real part of ⟨f|g⟩.
Replacing g with −ig and noting that ‖−ig‖ = |−i|‖g‖ = ‖g‖, the same estimate applies to the imaginary part of ⟨f|g⟩, and hence we find $|\langle f|g\rangle| \le \sqrt{2}\,\|f\|\,\|g\|$.

(h) Now let r ∈ R and let {q_n}_{n∈N} be a sequence in Q with q_n → r. Then, by additivity and (f),
\[ |\langle f|rg\rangle - r\langle f|g\rangle| = |\langle f|(r-q_n)g\rangle + (q_n-r)\langle f|g\rangle| \le \big(\sqrt{2}\,\|f\|\,\|g\| + |\langle f|g\rangle|\big)\,|r-q_n| \xrightarrow{n\to\infty} 0 \]
and thus

(i) ∀ r ∈ R : r⟨f|g⟩ = ⟨f|rg⟩.
(j) We now note that, by the defining polarisation expression,
\[ \langle f|ig\rangle = \tfrac{1}{4}\big(\|f+ig\|^2 - \|f-ig\|^2 + i\|f+g\|^2 - i\|f-g\|^2\big) = i\langle f|g\rangle. \]
(k) Let z ∈ C. By additivity, we have
\begin{align*}
\langle f|zg\rangle &= \langle f|(\operatorname{Re}z + i\operatorname{Im}z)g\rangle \\
&= \langle f|(\operatorname{Re}z)g\rangle + \langle f|i(\operatorname{Im}z)g\rangle \\
&\overset{(j)}{=} \langle f|(\operatorname{Re}z)g\rangle + i\langle f|(\operatorname{Im}z)g\rangle \\
&\overset{(i)}{=} \operatorname{Re}z\,\langle f|g\rangle + i\operatorname{Im}z\,\langle f|g\rangle \\
&= z\langle f|g\rangle,
\end{align*}
which shows scaling invariance in the second argument.
Combining additivity and scaling invariance in the second argument yields lin-
earity in the second argument.
(iii) For positive-definiteness:
\begin{align*}
\langle f|f\rangle &:= \tfrac{1}{4}\big(\|f+f\|^2 - \|f-f\|^2 + i\|f-if\|^2 - i\|f+if\|^2\big) \\
&= \tfrac{1}{4}\big(4\|f\|^2 + i\,|1-i|^2\|f\|^2 - i\,|1+i|^2\|f\|^2\big) \\
&= \tfrac{1}{4}\big(4 + 2i - 2i\big)\|f\|^2 \\
&= \|f\|^2.
\end{align*}
Thus, ⟨f|f⟩ ≥ 0 and ⟨f|f⟩ = 0 ⇔ f = 0.
Hence, ⟨·|·⟩ is indeed a sesqui-linear inner product. Note that, from part (iii) above, we have
\[ \sqrt{\langle f|f\rangle} = \|f\|. \]
That is, the inner product h·|·i does induce the norm from which we started, and this
completes the proof.
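As a quick numerical sanity check of the polarisation identity (our own sketch, not part of the notes; the helper names `inner` and `polarisation` are ours, and `np.vdot` conjugates its first argument, matching the convention used here):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=4) + 1j * rng.normal(size=4)
g = rng.normal(size=4) + 1j * rng.normal(size=4)

def inner(a, b):
    # physicist convention: conjugate-linear in the first slot
    return np.vdot(a, b)

def polarisation(a, b):
    # (1/4) * sum_{k=0}^{3} i^k * ||a + i^{4-k} b||^2
    norm2 = lambda v: np.vdot(v, v).real
    return 0.25 * sum(1j**k * norm2(a + 1j**(4 - k) * b) for k in range(4))

assert np.isclose(polarisation(f, g), inner(f, g))
```

The check passes for any pair of vectors, which is exactly the content of the (⇒) direction of the theorem.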
Remark 3.4 . Our proof of linearity is based on the hints given in Section 6.1, Exercise
27, from Linear Algebra (4th Edition) by Friedberg, Insel, Spence. Other proofs of the
Jordan-von Neumann theorem can be found in
• Kadison, Ringrose, Fundamentals of the Theory of Operator Algebras: Volume I:
Elementary Theory, American Mathematical Society 1997
Example 3.6 . Consider $C^0_{\mathbb{C}}[0,1]$ and let f(x) = x and g(x) = 1. Then
\[ \|f\|_\infty = 1, \qquad \|g\|_\infty = 1, \qquad \|f+g\|_\infty = 2, \qquad \|f-g\|_\infty = 1 \]
and hence
\[ \|f+g\|_\infty^2 + \|f-g\|_\infty^2 = 5 \neq 4 = 2\|f\|_\infty^2 + 2\|g\|_\infty^2. \]
Thus, by the Jordan-von Neumann theorem, there is no inner product on $C^0_{\mathbb{C}}[0,1]$ which induces the supremum norm. Therefore, $(C^0_{\mathbb{C}}[0,1], \|\cdot\|_\infty)$ cannot be a Hilbert space.
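The numbers in this example can be checked with a short numerical sketch of ours, approximating the supremum norm by sampling [0, 1] on a dense grid (the grid size is an arbitrary choice):

```python
import numpy as np

# f(x) = x and g(x) = 1 on [0, 1]
x = np.linspace(0.0, 1.0, 10_001)
f, g = x, np.ones_like(x)

sup = lambda h: np.max(np.abs(h))  # supremum norm on the grid

lhs = sup(f + g)**2 + sup(f - g)**2   # ||f+g||^2 + ||f-g||^2
rhs = 2 * sup(f)**2 + 2 * sup(g)**2   # 2||f||^2 + 2||g||^2
assert np.isclose(lhs, 5.0) and np.isclose(rhs, 4.0)
```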
Proof. We already know that H∗ := L(H, C) is a Banach space. The norm on H∗ is just the usual operator norm
\[ \|f\|_{\mathcal{H}^*} := \sup_{\varphi\in\mathcal{H}\setminus\{0\}} \frac{|f(\varphi)|}{\|\varphi\|_{\mathcal{H}}}, \]
where, admittedly somewhat perversely, we have reversed our previous notation for the dual elements. Since the modulus is induced by the standard inner product on C, i.e. $|z| = \sqrt{\overline{z}z}$, it satisfies the parallelogram identity. Hence, a direct computation (in which several steps are justified by the fact that the quantities involved are non-negative) shows that ‖·‖_{H∗} satisfies the parallelogram identity as well. Hence, by the Jordan-von Neumann theorem, the inner product on H∗ defined by the polarisation identity induces ‖·‖_{H∗}. Hence, H∗ is a Hilbert space.
Proof. Let ⟨·|·⟩ be an inner product on V. Fix φ ∈ V and let $\lim_{n\to\infty}\psi_n = \psi$. Then, by the Cauchy-Schwarz inequality,
\[ |\langle\varphi|\psi_n\rangle - \langle\varphi|\psi\rangle| = |\langle\varphi|\psi_n - \psi\rangle| \le \|\varphi\|\,\|\psi_n - \psi\| \xrightarrow{n\to\infty} 0, \]
so that $\lim_{n\to\infty}\langle\varphi|\psi_n\rangle = \langle\varphi|\psi\rangle$.
3.2 Hamel versus Schauder8
Choosing a basis on a vector space is normally regarded as mathematically inelegant. The
reason for this is that most statements about vector spaces are much clearer and, we main-
tain, aesthetically pleasing when expressed without making reference to a basis. However,
in addition to the fact that some statements are more easily and usefully written in terms
of a basis, bases provide a convenient way to specify the elements of a vector space in terms
of components. The notion of basis for a vector space that you most probably met in your
linear algebra course is more properly known as a Hamel basis.
(ii) the set B is a generating (or spanning) set for V . That is, for any element v ∈ V ,
there exist a finite subset {e1 , . . . , en } ⊆ B and λ1 , . . . , λn ∈ C such that
\[ v = \sum_{i=1}^{n} \lambda_i e_i. \]
Recalling that, for any subset U ⊆ V, span U denotes the set of all finite linear combinations of elements of U with complex coefficients, we can restate this condition simply as V = span B.
Given a basis B, one can show that for each v ∈ V the λ1 , . . . , λn appearing in (ii)
above are uniquely determined. They are called the components of v with respect to B.
One can also show that if a vector space admits a finite Hamel basis B, then any other
basis of V is also finite and, in fact, of the same cardinality as B.
Definition. If a vector space V admits a finite Hamel basis, then it is said to be finite-
dimensional and its dimension is dim V := |B|. Otherwise, it is said to be infinite-
dimensional and we write dim V = ∞.
For a proof of (a slightly more general version of) the theorem that every vector space admits a Hamel basis, we refer the interested reader to Dr Schuller's Lectures on the Geometric Anatomy of Theoretical Physics.
Note that the proof that every vector space admits a Hamel basis relies on the axiom
of choice and, hence, it is non-constructive. By a corollary to Baire's category theorem, a Hamel basis on a Banach space is either finite or uncountably infinite.

8 Not a boxing competition.

Thus, while every
Banach space admits a Hamel basis, such bases on infinite-dimensional Banach spaces are
difficult to construct explicitly and, hence, not terribly useful to express vectors in terms
of components and perform computations. Thankfully, we can use the extra structure of a
Banach space to define a more useful type of basis.
Definition. Let (W, k · k) be a Banach space. A Schauder basis of W is a sequence {en }n∈N
in W such that, for any f ∈ W , there exists a unique sequence {λn }n∈N in C such that
\[ f = \lim_{n\to\infty}\sum_{i=0}^{n}\lambda_i e_i =: \sum_{i=0}^{\infty}\lambda_i e_i. \]
• Since Schauder bases require a notion of convergence, they can only be defined on
a vector space equipped with a (compatible) topological structure, of which Banach
spaces are a special case.
• Since the convergence of a series may depend on the order of its terms, Schauder bases
must be considered as ordered bases. Hence, two Schauder bases that merely differ
in the ordering of their elements are different bases, and permuting the elements of a
Schauder basis doesn’t necessarily yield another Schauder basis.
• The uniqueness requirement in the definition immediately implies that the zero vector
cannot be an element of a Schauder basis.
• Schauder bases satisfy a stronger linear independence property than Hamel bases,
namely
\[ \sum_{i=0}^{\infty}\lambda_i e_i = 0 \;\Rightarrow\; \forall\, i\in\mathbb{N}: \lambda_i = 0. \]
• At the same time, they satisfy a weaker spanning condition. Rather than the linear
span of the basis being equal to W , we only have that it is dense in W . Equivalently,
$W = \overline{\operatorname{span}\{e_n \mid n\in\mathbb{N}\}}$, where the overline denotes topological closure.
Definition. A Schauder basis {en }n∈N of (W, k · k) is said to be normalised if
∀ n ∈ N : ken k = 1.
Proposition 3.10. An infinite-dimensional Hilbert space is separable if, and only if, it
admits an orthonormal Schauder basis. That is, a Schauder basis {en }n∈N such that
\[ \forall\, i,j\in\mathbb{N}: \langle e_i|e_j\rangle = \delta_{ij} := \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j. \end{cases} \]
Whether this holds for Banach spaces or not was a famous open problem in
functional analysis, problem 153 from the Scottish book. It was solved in 1972,
more than three decades after it was first posed, when Swedish mathematician
Enflo constructed an infinite-dimensional separable Banach space which lacks
a Schauder basis. That same year, he was awarded a live goose9 for his effort.
Remark 3.11 . The Kronecker symbol δij appearing above does not represent the compo-
nents of the identity map on H. Instead, δij are the components of the sesqui-linear form
h·|·i, which is a map H × H → C, unlike idH which is a map H → H. If not immediately
understood, this remark may be safely ignored.
Remark 3.12 . In finite dimensions, since every vector space admits (by definition) a finite
Hamel basis, every inner product space admits an orthonormal basis by the Gram-Schmidt
orthonormalisation process.
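As an illustrative sketch (ours, not from the lectures), the Gram-Schmidt process of the remark can be written in a few lines; the function name and the use of NumPy are our choices, and the convention is conjugate-linearity in the first slot, as in these notes.

```python
import numpy as np

def gram_schmidt(vectors, inner=np.vdot):
    """Orthonormalise a linearly independent list of vectors.

    `np.vdot` conjugates its first argument, matching the physicist convention.
    """
    basis = []
    for v in vectors:
        w = v - sum(inner(e, v) * e for e in basis)   # subtract projections onto earlier e's
        basis.append(w / np.sqrt(inner(w, w).real))   # normalise
    return basis

rng = np.random.default_rng(1)
vs = [rng.normal(size=3) + 1j * rng.normal(size=3) for _ in range(3)]
es = gram_schmidt(vs)
for i, ei in enumerate(es):
    for j, ej in enumerate(es):
        assert np.isclose(np.vdot(ei, ej), 1.0 if i == j else 0.0)
```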
From now on, we will only consider orthonormal Schauder bases, sometimes also called
Hilbert bases, and just call them bases.
Lemma 3.13. Let H be a Hilbert space with basis {en }n∈N . The unique sequence in the
expansion of ψ ∈ H in terms of this basis is {hen |ψi}n∈N .
9 https://en.wikipedia.org/wiki/Per_Enflo#Basis_problem_of_Banach
Proof. By using the continuity of the inner product, we have
\begin{align*}
\langle e_i|\psi\rangle &= \Big\langle e_i\,\Big|\,\sum_{j=0}^{\infty}\lambda_j e_j\Big\rangle \\
&= \Big\langle e_i\,\Big|\,\lim_{n\to\infty}\sum_{j=0}^{n}\lambda_j e_j\Big\rangle \\
&= \lim_{n\to\infty}\Big\langle e_i\,\Big|\,\sum_{j=0}^{n}\lambda_j e_j\Big\rangle \\
&= \lim_{n\to\infty}\sum_{j=0}^{n}\lambda_j\langle e_i|e_j\rangle \\
&= \sum_{j=0}^{\infty}\lambda_j\,\delta_{ij} \\
&= \lambda_i,
\end{align*}
as claimed.
While we have already used the term orthonormal, let us note that this means both
orthogonal and normalised. Two vectors ϕ, ψ ∈ H are said to be orthogonal if hϕ|ψi = 0,
and a subset of H is called orthogonal if its elements are pairwise orthogonal.
Lemma 3.14 (Pythagoras’ theorem). Let H be a Hilbert space and let {ψ0 , . . . , ψn } ⊂ H
be a finite orthogonal set. Then
\[ \Big\|\sum_{i=0}^{n}\psi_i\Big\|^2 = \sum_{i=0}^{n}\|\psi_i\|^2. \]
Proof. Since the set is orthogonal, ⟨ψ_i|ψ_j⟩ = 0 whenever i ≠ j, and hence
\[ \Big\|\sum_{i=0}^{n}\psi_i\Big\|^2 := \Big\langle\sum_{i=0}^{n}\psi_i\,\Big|\,\sum_{j=0}^{n}\psi_j\Big\rangle = \sum_{i=0}^{n}\langle\psi_i|\psi_i\rangle + \sum_{i\neq j}\langle\psi_i|\psi_j\rangle =: \sum_{i=0}^{n}\|\psi_i\|^2. \]
If a certain structure-preserving map A → B is invertible and its inverse is also structure-preserving, then both these maps are generically called isomorphisms and A and B are
said to be isomorphic instances of that structure. Isomorphic instances of a structure are
essentially the same instance of that structure, just dressed up in different ways. Typically,
there are infinitely many concrete instances of any given structure. The highest form of
understanding of a structure that we can hope to achieve is that of a classification of its
instances up to isomorphism. That is, we would like to know how many different, non-
isomorphic instances of a given structure there are.
In linear algebra, the structure of interest is that of vector space over some field F. The
structure-preserving maps are just the linear maps and the isomorphisms are the linear
bijections (whose inverses are automatically linear). Finite-dimensional vector spaces over
F are completely classified by their dimension, i.e. there is essentially only one vector space
over F for each n ∈ N, and Fn is everyone’s favourite. Assuming the axiom of choice,
infinite-dimensional vector spaces over F are classified in the same way, namely, there is,
up to linear isomorphism, only one vector space over F for each infinite cardinal.
Of course, one could do better and also classify the base fields themselves. The classi-
fication of finite fields (i.e. fields with a finite number of elements) was achieved in 1893 by
Moore, who proved that the order (i.e. cardinality) of a finite field is necessarily a power of
some prime number, and there is only one finite field of each order, up to the appropriate
notion of isomorphism. The classification of infinite fields remains an open problem.
A classification with far-reaching implications in physics is that of finite-dimensional,
semi-simple, complex Lie algebras, which is discussed in some detail in Dr Schuller’s Lectures
on the Geometric Anatomy of Theoretical Physics.
The structure-preserving maps between Hilbert spaces are those that preserve both the
vector space structure and the inner product. The Hilbert space isomorphisms are called
unitary maps.
Definition. Let H and G be Hilbert spaces. A bounded bijection U ∈ L(H, G) is called a
unitary map (or unitary operator ) if
\[ \forall\,\psi,\varphi\in\mathcal{H}: \langle U\psi|U\varphi\rangle_{\mathcal{G}} = \langle\psi|\varphi\rangle_{\mathcal{H}}. \]
If there exists a unitary map H → G, then H and G are said to be unitarily equivalent and we write $\mathcal{H} \cong_{\mathrm{Hil}} \mathcal{G}$.
There are a number of equivalent definitions of unitary maps (we will later see one
involving adjoints) and, in fact, our definition is fairly redundant.
Proposition 3.16. Let U : H → G be a surjective map which preserves the inner product.
Then, U is a unitary map.
Proof. (i) First, let us check that U is linear. Let ψ, φ ∈ H and z ∈ C. Then
\begin{align*}
\|U(z\psi+\varphi) - zU\psi - U\varphi\|_{\mathcal{G}}^2
&= \langle U(z\psi+\varphi) - zU\psi - U\varphi \mid U(z\psi+\varphi) - zU\psi - U\varphi\rangle_{\mathcal{G}} \\
&= \langle U(z\psi+\varphi)|U(z\psi+\varphi)\rangle_{\mathcal{G}} + |z|^2\langle U\psi|U\psi\rangle_{\mathcal{G}} + \langle U\varphi|U\varphi\rangle_{\mathcal{G}} \\
&\quad - z\langle U(z\psi+\varphi)|U\psi\rangle_{\mathcal{G}} - \langle U(z\psi+\varphi)|U\varphi\rangle_{\mathcal{G}} + \overline{z}\langle U\psi|U\varphi\rangle_{\mathcal{G}} \\
&\quad - \overline{z}\langle U\psi|U(z\psi+\varphi)\rangle_{\mathcal{G}} - \langle U\varphi|U(z\psi+\varphi)\rangle_{\mathcal{G}} + z\langle U\varphi|U\psi\rangle_{\mathcal{G}} \\
&= \langle z\psi+\varphi|z\psi+\varphi\rangle_{\mathcal{H}} + |z|^2\langle\psi|\psi\rangle_{\mathcal{H}} + \langle\varphi|\varphi\rangle_{\mathcal{H}} \\
&\quad - z\langle z\psi+\varphi|\psi\rangle_{\mathcal{H}} - \langle z\psi+\varphi|\varphi\rangle_{\mathcal{H}} + \overline{z}\langle\psi|\varphi\rangle_{\mathcal{H}} \\
&\quad - \overline{z}\langle\psi|z\psi+\varphi\rangle_{\mathcal{H}} - \langle\varphi|z\psi+\varphi\rangle_{\mathcal{H}} + z\langle\varphi|\psi\rangle_{\mathcal{H}} \\
&= 2|z|^2\langle\psi|\psi\rangle_{\mathcal{H}} + \overline{z}\langle\psi|\varphi\rangle_{\mathcal{H}} + z\langle\varphi|\psi\rangle_{\mathcal{H}} + 2\langle\varphi|\varphi\rangle_{\mathcal{H}} \\
&\quad - |z|^2\langle\psi|\psi\rangle_{\mathcal{H}} - z\langle\varphi|\psi\rangle_{\mathcal{H}} - \overline{z}\langle\psi|\varphi\rangle_{\mathcal{H}} - \langle\varphi|\varphi\rangle_{\mathcal{H}} + \overline{z}\langle\psi|\varphi\rangle_{\mathcal{H}} \\
&\quad - |z|^2\langle\psi|\psi\rangle_{\mathcal{H}} - \overline{z}\langle\psi|\varphi\rangle_{\mathcal{H}} - z\langle\varphi|\psi\rangle_{\mathcal{H}} - \langle\varphi|\varphi\rangle_{\mathcal{H}} + z\langle\varphi|\psi\rangle_{\mathcal{H}} \\
&= 0.
\end{align*}
Hence, by positive-definiteness of the norm,
\[ U(z\psi+\varphi) = zU\psi + U\varphi. \]
(ii) Moreover, U is bounded: since U preserves the inner product, ‖Uψ‖_G = ‖ψ‖_H for all ψ ∈ H, and hence we have
\[ \sup_{\psi\in\mathcal{H}\setminus\{0\}}\frac{\|U\psi\|_{\mathcal{G}}}{\|\psi\|_{\mathcal{H}}} = 1 < \infty. \]
(iii) Finally, recall that a linear map is injective if, and only if, its kernel is trivial. Suppose that ψ ∈ ker U. Then, ‖ψ‖_H = ‖Uψ‖_G = 0 and hence ψ = 0, so U is injective. Thus, U is a unitary map.

Note that, in particular,
\[ \forall\,\psi\in\mathcal{H}: \|U\psi\|_{\mathcal{G}} = \|\psi\|_{\mathcal{H}}. \]
Linear isometries are, of course, the structure-preserving maps between normed spaces. We have shown that every unitary map is an isometry and has unit operator norm, hence the name unitary operator.
An important example is the space of square-summable sequences
\[ \ell^2(\mathbb{N}) := \Big\{ a\colon \mathbb{N}\to\mathbb{C} \;\Big|\; \sum_{i=0}^{\infty}|a_i|^2 < \infty \Big\}. \]
We define addition and scalar multiplication of sequences termwise, that is, for all n ∈ N and all complex numbers z ∈ C,
\[ (a+b)_n := a_n + b_n, \qquad (z\cdot a)_n := z\,a_n. \]
These are, of course, just the discrete analogues of pointwise addition and scalar multipli-
cation of maps. The triangle inequality and homogeneity of the modulus, together with the
vector space structure of C, imply that (`2 (N), +, ·) is a complex vector space.
The standard inner product on ℓ²(N) is
\[ \langle a|b\rangle_{\ell^2} := \sum_{i=0}^{\infty}\overline{a_i}\,b_i, \]
with respect to which ℓ²(N) is complete. Hence, (ℓ²(N), +, ·, ⟨·|·⟩_{ℓ²}) is a Hilbert space.
Consider the sequence of sequences {en }n∈N where
e0 = (1, 0, 0, 0, . . .)
e1 = (0, 1, 0, 0, . . .)
e2 = (0, 0, 1, 0, . . .)
..
.
Every a ∈ ℓ²(N) can be expanded as $a = \sum_{i=0}^{\infty}\lambda_i e_i$, where λ_i = ⟨e_i|a⟩_{ℓ²} = a_i. The sequences e_n are clearly square-summable and, in fact, they are orthonormal with respect to ⟨·|·⟩_{ℓ²}:
\[ \langle e_n|e_m\rangle_{\ell^2} := \sum_{i=0}^{\infty}\overline{(e_n)_i}\,(e_m)_i = \sum_{i=0}^{\infty}\delta_{ni}\delta_{mi} = \delta_{nm}. \]
Hence, the sequence {en }n∈N is an orthonormal Schauder basis of `2 (N), which is
therefore an infinite-dimensional separable Hilbert space.
In fact, ℓ²(N) is essentially the only example: every infinite-dimensional separable Hilbert space is unitarily equivalent to ℓ²(N).

Proof. Let H be a separable Hilbert space with basis {e_n}_{n∈N}. Consider the map
\begin{align*}
U\colon \mathcal{H} &\to \ell^2(\mathbb{N}) \\
\psi &\mapsto \{\langle e_n|\psi\rangle_{\mathcal{H}}\}_{n\in\mathbb{N}}.
\end{align*}
Note that, for any ψ ∈ H, the sequence {⟨e_n|ψ⟩_H}_{n∈N} is indeed square-summable since we have
\[ \sum_{i=0}^{\infty}|\langle e_i|\psi\rangle_{\mathcal{H}}|^2 = \|\psi\|_{\mathcal{H}}^2 < \infty. \]
By our previous proposition, in order to show that U is a unitary map, it suffices to
show that it is surjective and preserves the inner product. For surjectivity, let {an }n∈N
be a complex square-summable sequence. Then, since the series $\sum_{i=0}^{\infty}|a_i|^2$ converges, its partial sums form a Cauchy sequence. That is, for any ε > 0, there exists N ∈ N such that
\[ \forall\, n \ge m \ge N: \sum_{i=m}^{n}|a_i|^2 < \varepsilon. \]
Then, for all n, m ≥ N (without loss of generality, assume n > m), we have
\[ \Big\|\sum_{i=0}^{n}a_i e_i - \sum_{j=0}^{m}a_j e_j\Big\|_{\mathcal{H}}^2 = \Big\|\sum_{i=m+1}^{n}a_i e_i\Big\|_{\mathcal{H}}^2 = \sum_{i=m+1}^{n}|a_i|^2\,\|e_i\|_{\mathcal{H}}^2 = \sum_{i=m+1}^{n}|a_i|^2 < \varepsilon. \]
That is, $\{\sum_{i=0}^{n}a_i e_i\}_{n\in\mathbb{N}}$ is a Cauchy sequence in H. Hence, by completeness, there exists ψ ∈ H such that
\[ \psi = \sum_{i=0}^{\infty}a_i e_i \]
and we have Uψ = {a_n}_{n∈N}, so U is surjective. Moreover, we have
\begin{align*}
\langle\psi|\varphi\rangle_{\mathcal{H}} &= \Big\langle\sum_{i=0}^{\infty}\langle e_i|\psi\rangle_{\mathcal{H}}\,e_i\,\Big|\,\sum_{j=0}^{\infty}\langle e_j|\varphi\rangle_{\mathcal{H}}\,e_j\Big\rangle_{\mathcal{H}} \\
&= \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\overline{\langle e_i|\psi\rangle_{\mathcal{H}}}\,\langle e_j|\varphi\rangle_{\mathcal{H}}\,\langle e_i|e_j\rangle_{\mathcal{H}} \\
&= \sum_{i=0}^{\infty}\sum_{j=0}^{\infty}\overline{\langle e_i|\psi\rangle_{\mathcal{H}}}\,\langle e_j|\varphi\rangle_{\mathcal{H}}\,\delta_{ij} \\
&= \sum_{i=0}^{\infty}\overline{\langle e_i|\psi\rangle_{\mathcal{H}}}\,\langle e_i|\varphi\rangle_{\mathcal{H}} \\
&=: \big\langle\{\langle e_n|\psi\rangle_{\mathcal{H}}\}_{n\in\mathbb{N}}\,\big|\,\{\langle e_n|\varphi\rangle_{\mathcal{H}}\}_{n\in\mathbb{N}}\big\rangle_{\ell^2} \\
&=: \langle U\psi|U\varphi\rangle_{\ell^2}.
\end{align*}
Hence, U also preserves the inner product and is therefore a unitary map.
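As a finite-dimensional illustration of this proof (our own sketch, with C⁴ standing in for H and an orthonormal basis obtained via a QR decomposition), one can check numerically that the coefficient map preserves inner products:

```python
import numpy as np

rng = np.random.default_rng(2)

# An orthonormal basis of C^4: the columns of the unitary factor of a QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
basis = [Q[:, i] for i in range(4)]

def U(psi):
    """Coefficient map psi -> {<e_n|psi>}_n with respect to the chosen basis."""
    return np.array([np.vdot(e, psi) for e in basis])

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

# U preserves the inner product: <U psi | U phi>_{l^2} = <psi | phi>
assert np.isclose(np.vdot(U(psi), U(phi)), np.vdot(psi, phi))
```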
4 Projectors, bras and kets
4.1 Projectors
Projectors play a key role in quantum theory, as you can see from Axioms 3 and 5.
Definition. Let H be a separable Hilbert space. Fix a unit vector e ∈ H (that is, ‖e‖ = 1) and let ψ ∈ H. The projection of ψ along e is
\[ \psi_{\parallel} := \langle e|\psi\rangle e, \]
while the orthogonal complement of ψ (with respect to e) is
\[ \psi_{\perp} := \psi - \psi_{\parallel}. \]
We can extend these definitions to a countable orthonormal subset {ei }i∈N ⊂ H, i.e. a
subset of H whose elements are pairwise orthogonal and have unit norm. Note that {ei }i∈N
need not be a basis of H.
Proposition 4.1. Let ψ ∈ H and let {ei }i∈N ⊂ H be an orthonormal subset. Then
(a) we can write ψ = ψ∥ + ψ⊥, where
\[ \psi_{\parallel} := \sum_{i=0}^{\infty}\langle e_i|\psi\rangle e_i, \qquad \psi_{\perp} := \psi - \psi_{\parallel}, \]
and we have
\[ \forall\, i\in\mathbb{N}: \langle\psi_{\perp}|e_i\rangle = 0. \]
(b) From part (a), we have
\[ \langle\psi_{\perp}|\psi_{\parallel}\rangle = \Big\langle\psi_{\perp}\,\Big|\,\sum_{i=0}^{n}\langle e_i|\psi\rangle e_i\Big\rangle = \sum_{i=0}^{n}\langle e_i|\psi\rangle\langle\psi_{\perp}|e_i\rangle = 0. \]
Hence, by Pythagoras' theorem,
\[ \|\psi\|^2 = \|\psi_{\parallel} + \psi_{\perp}\|^2 = \|\psi_{\parallel}\|^2 + \|\psi_{\perp}\|^2. \]
(c) Let γ ∈ span{e_i | 0 ≤ i ≤ n}. Then $\gamma = \sum_{i=0}^{n}\gamma_i e_i$ for some γ_0, …, γ_n ∈ C and, by a computation analogous to the above, one finds that ‖ψ − γ‖ ≥ ‖ψ − ψ∥‖, i.e. ψ∥ is the best approximation of ψ within this span.
To extend this to a countably infinite orthonormal set {e_i}_{i∈N}, note that by part (b) and Bessel's inequality, we have
\[ \Big\|\sum_{i=0}^{n}\langle e_i|\psi\rangle e_i\Big\|^2 = \sum_{i=0}^{n}|\langle e_i|\psi\rangle|^2 \le \|\psi\|^2. \]
Since |⟨e_i|ψ⟩|² ≥ 0, the sequence of partial sums $\{\sum_{i=0}^{n}|\langle e_i|\psi\rangle|^2\}_{n\in\mathbb{N}}$ is monotonically increasing and bounded from above by ‖ψ‖². Hence, it converges, and this implies that
\[ \psi_{\parallel} := \sum_{i=0}^{\infty}\langle e_i|\psi\rangle e_i \]
exists as an element of H. The extension to the countably infinite case then follows by continuity of the inner product.
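A small numerical sketch (ours, not from the lectures) of the decomposition ψ = ψ∥ + ψ⊥ for an orthonormal subset that is not a basis, verifying orthogonality, Pythagoras' theorem and Bessel's inequality:

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 5

# Orthonormal subset {e_0, e_1} of C^5 (not a basis), from a QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
es = [Q[:, 0], Q[:, 1]]

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi_par = sum(np.vdot(e, psi) * e for e in es)   # psi_parallel = sum <e_i|psi> e_i
psi_perp = psi - psi_par                          # psi_perp = psi - psi_parallel

# psi_perp is orthogonal to every e_i
for e in es:
    assert np.isclose(np.vdot(psi_perp, e), 0.0)

norm2 = lambda v: np.vdot(v, v).real
# Pythagoras: ||psi||^2 = ||psi_par||^2 + ||psi_perp||^2, and Bessel: ||psi_par||^2 <= ||psi||^2
assert np.isclose(norm2(psi), norm2(psi_par) + norm2(psi_perp))
assert norm2(psi_par) <= norm2(psi) + 1e-12
```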
Definition. Let H be a normed space. A subset M ⊆ H is said to be open if
\[ \forall\,\psi\in\mathcal{M}: \exists\, r>0: \forall\,\varphi\in\mathcal{H}: \|\psi-\varphi\| < r \Rightarrow \varphi\in\mathcal{M}. \]
Equivalently, defining the open ball B_r(ψ) := {φ ∈ H | ‖ψ − φ‖ < r}, the subset M is open if
\[ \forall\,\psi\in\mathcal{M}: \exists\, r>0: B_r(\psi)\subseteq\mathcal{M}. \]
A subset is said to be closed if its complement H \ M is open.
Proof. Let {ψ_n}_{n∈N} be a Cauchy sequence in the closed subset M. Then, {ψ_n}_{n∈N} is also a Cauchy sequence in H, and hence it converges to some ψ ∈ H since H is complete. We want to show that, in fact, ψ ∈ M. Suppose, for the sake of contradiction, that ψ ∉ M, i.e. ψ ∈ H \ M. Since M is closed, H \ M is open. Hence, there exists r > 0 such that
\[ \forall\,\varphi\in\mathcal{H}: \|\varphi-\psi\| < r \Rightarrow \varphi\in\mathcal{H}\setminus\mathcal{M}. \]
However, since ψ is the limit of {ψ_n}_{n∈N}, there exists N ∈ N such that
\[ \forall\, n\ge N: \|\psi_n - \psi\| < r, \]
i.e. ψ_n ∈ H \ M for all n ≥ N, contradicting the fact that {ψ_n}_{n∈N} is a sequence in M. Hence, ψ ∈ M.
Corollary 4.3. A closed linear subspace M of a Hilbert space H is a sub-Hilbert space with
the inner product on H. Moreover, if H is separable, then so is M.
Knowing that a linear subspace of a Hilbert space is, in fact, a sub-Hilbert space can
be very useful. For instance, we know that there exists an orthonormal basis for the linear
subspace. Note that the converse to the corollary does not hold: a sub-Hilbert space need
not be a closed linear subspace.
Definition. Let M ⊆ H be a subset of a Hilbert space H. The orthogonal complement of M in H is
\[ \mathcal{M}^{\perp} := \{\psi\in\mathcal{H} \mid \forall\,\varphi\in\mathcal{M}: \langle\varphi|\psi\rangle = 0\}. \]
Proposition 4.4. For any subset M ⊆ H, the orthogonal complement M⊥ is a closed linear subspace of H.

Proof. Let ψ₁, ψ₂ ∈ M⊥ and z ∈ C. Then, for all φ ∈ M,
\[ \langle\varphi|z\psi_1+\psi_2\rangle = z\langle\varphi|\psi_1\rangle + \langle\varphi|\psi_2\rangle = 0, \]
so zψ₁ + ψ₂ ∈ M⊥ and M⊥ is a linear subspace of H. For closedness, note that
\[ \mathcal{M}^{\perp} = \bigcap_{\varphi\in\mathcal{M}}\operatorname{preim}_{f_\varphi}(\{0\}), \]
where, for each φ ∈ M,
\begin{align*}
f_\varphi\colon \mathcal{H} &\to \mathbb{C} \\
\psi &\mapsto \langle\varphi|\psi\rangle.
\end{align*}
Since the inner product is continuous (in each slot), the maps fϕ are continuous. Hence, the
pre-images of closed sets are closed. As the singleton {0} is closed in the standard topology
on C, the sets preimfϕ ({0}) are closed for all ϕ ∈ M. Thus, M⊥ is closed since arbitrary
intersections of closed sets are closed.
Remark 4.5 . Note that "closed" is not the negation of "open", and the above argument does not show that M⊥ is open: the singleton {0} is not an open subset of C and, even if the preimages involved were open, only finite intersections of open sets are guaranteed to be open. The fact that arbitrary intersections of closed sets are closed thus plays an important role.
If M is a closed linear subspace of a Hilbert space H, then
\[ \mathcal{H} = \mathcal{M}\oplus\mathcal{M}^{\perp} := \{\psi + \varphi \mid \psi\in\mathcal{M},\ \varphi\in\mathcal{M}^{\perp}\}, \]
i.e. every element of H can be decomposed uniquely as the sum of an element of M and an element of M⊥.
Proposition 4.6. For any closed linear subspace X ⊆ H, it is true that X⊥⊥ = X.
Proof. Let x ∈ X. Then, for all y ∈ X⊥, we have ⟨x|y⟩ = 0 and so x ∈ X⊥⊥. This gives X ⊆ X⊥⊥.
Now consider z ∈ X⊥⊥. As X is closed, from the above note we know it can be decomposed as z = x + y for x ∈ X and y ∈ X⊥. Since z ∈ X⊥⊥ and y ∈ X⊥, we then have
\[ 0 = \langle y|z\rangle = \langle y|x+y\rangle = \langle y|x\rangle + \langle y|y\rangle = \|y\|^2 \implies y = 0, \]
where the last step comes from the definiteness of ‖·‖. So we have z = x ∈ X, and hence X⊥⊥ ⊆ X.
Proposition 4.7. For any linear subspace M ⊆ H, it is true that $\mathcal{M}^{\perp\perp} = \overline{\mathcal{M}}$, where the latter is the topological closure of the set.
Proof. We start with two observations:

(i) M ⊆ M⊥⊥, which was shown at the start of the last proof (as no use was made there of the fact that X was closed);

(ii) if M₁ ⊆ M₂, then M₂⊥ ⊆ M₁⊥, which can be shown easily.

First, let us show that $\mathcal{M}^{\perp\perp}\subseteq\overline{\mathcal{M}}$. Clearly $\mathcal{M}\subseteq\overline{\mathcal{M}}$ (with equality if, and only if, M is closed), and so from (ii) we have $\overline{\mathcal{M}}^{\perp}\subseteq\mathcal{M}^{\perp}$, which in turn gives $\mathcal{M}^{\perp\perp}\subseteq\overline{\mathcal{M}}^{\perp\perp}$. But $\overline{\mathcal{M}}$ is closed, and so from the previous proposition we have $\mathcal{M}^{\perp\perp}\subseteq\overline{\mathcal{M}}^{\perp\perp} = \overline{\mathcal{M}}$.

Now we need to show the reverse inclusion. From Proposition 4.4 we know that M⊥⊥ = (M⊥)⊥ is closed. Then (i) instantly tells us that M ⊆ M⊥⊥ and hence, taking closures, $\overline{\mathcal{M}}\subseteq\mathcal{M}^{\perp\perp}$.
Definition. Let M be a closed linear subspace of a separable Hilbert space H and fix some orthonormal basis {e_i}_{i∈I} of M. The orthogonal projector to M is the map
\begin{align*}
P_{\mathcal{M}}\colon \mathcal{H} &\to \mathcal{M} \\
\psi &\mapsto \psi_{\parallel}.
\end{align*}
(iii) $P_{\mathcal{M}^{\perp}}\psi = \psi_{\perp}$
Proof. Let {ei }i∈I and {ei }i∈J be bases of M and M⊥ respectively, where I, J are disjoint
and either finite or countably infinite, such that {ei }i∈I∪J is a basis of H (Note that we
should think of I ∪ J as having a definite ordering).
(ii) Let ψ, φ ∈ H. Then
\begin{align*}
\langle P_{\mathcal{M}}\psi|\varphi\rangle &:= \Big\langle\sum_{i\in I}\langle e_i|\psi\rangle e_i\,\Big|\,\varphi\Big\rangle \\
&= \sum_{i\in I}\overline{\langle e_i|\psi\rangle}\,\langle e_i|\varphi\rangle \\
&= \sum_{i\in I}\langle e_i|\varphi\rangle\,\langle\psi|e_i\rangle \\
&= \Big\langle\psi\,\Big|\,\sum_{i\in I}\langle e_i|\varphi\rangle e_i\Big\rangle \\
&=: \langle\psi|P_{\mathcal{M}}\varphi\rangle.
\end{align*}
Hence,
\[ P_{\mathcal{M}^{\perp}}\psi = \psi - P_{\mathcal{M}}\psi = \psi - \psi_{\parallel} =: \psi_{\perp}. \]
Theorem 4.10 (Riesz representation). Every f ∈ H∗ is of the form fϕ for a unique ϕ ∈ H.
Proof. First, suppose that f = 0, i.e. f is the zero functional on H. Then, clearly, f = f0
with 0 ∈ H. Hence, suppose that f ≠ 0. Since ker f := preim_f({0}) is a closed linear subspace, we can write
\[ \mathcal{H} = \ker f \oplus (\ker f)^{\perp}. \]
As f ≠ 0, there exists some ψ ∈ H such that ψ ∉ ker f. Hence, ker f ≠ H, and thus (ker f)⊥ ≠ {0}. Let ξ ∈ (ker f)⊥ \ {0} and assume, w.l.o.g., that ‖ξ‖ = 1. Define
\[ \varphi := \overline{f(\xi)}\,\xi \in (\ker f)^{\perp}. \]
Note that
\[ f\big(f(\xi)\psi - f(\psi)\xi\big) = f(\xi)f(\psi) - f(\psi)f(\xi) = 0, \]
that is, f(ξ)ψ − f(ψ)ξ ∈ ker f. Since ξ ∈ (ker f)⊥, we have
\[ 0 = \langle\xi|f(\xi)\psi - f(\psi)\xi\rangle = f(\xi)\langle\xi|\psi\rangle - f(\psi)\|\xi\|^2 = f_\varphi(\psi) - f(\psi), \]
and hence f_φ(ψ) = f(ψ) for all ψ ∈ H, i.e. f = f_φ. For uniqueness, suppose that
f = f_{φ₁} = f_{φ₂}. Then, for all ψ ∈ H, we have ⟨φ₁ − φ₂|ψ⟩ = 0 and, choosing ψ = φ₁ − φ₂, we obtain ‖φ₁ − φ₂‖² = 0, i.e. φ₁ = φ₂.
It follows that the map
\begin{align*}
R\colon \mathcal{H} &\to \mathcal{H}^* \\
\varphi &\mapsto f_\varphi \equiv \langle\varphi|\,\cdot\,\rangle
\end{align*}
is a bijection (in fact, conjugate-linear with our convention for the inner product), and hence H and H∗ can be identified with one another. This led Dirac to suggest the following notation for the elements of the dual space:
\[ f_\varphi \equiv \langle\varphi|. \]
Correspondingly, he wrote |ψ⟩ for the element ψ ∈ H. Since ⟨·|·⟩ is "a bracket", the first half ⟨·| is called a bra, while the second half |·⟩ is called a ket (nobody knows where the missing c is). With this notation, we have
\[ \langle\varphi|\,\big(|\psi\rangle\big) = \langle\varphi|\psi\rangle. \]
The notation makes evident the fact that, for any φ, ψ ∈ H, we can always consider the inner product ⟨φ|ψ⟩ ∈ C as the result of applying f_φ ∈ H∗ to ψ ∈ H.
The advantage of this notation is that some formulæ become more intuitive and hence are more easily memorised. For a concrete example, consider
\[ \psi = \sum_{i=0}^{\infty}\langle e_i|\psi\rangle e_i, \]
which in bra-ket notation reads
\[ |\psi\rangle = \sum_{i=0}^{\infty}\langle e_i|\psi\rangle\,|e_i\rangle. \]
By allowing the scalar multiplication of kets also from the right, defined to yield the same result as that on the left, we have
\[ |\psi\rangle = \sum_{i=0}^{\infty}|e_i\rangle\langle e_i|\psi\rangle, \]
and "quite obviously" this is
\[ |\psi\rangle = \Big(\sum_{i=0}^{\infty}|e_i\rangle\langle e_i|\Big)\,|\psi\rangle, \]
where by "quite obviously", we mean that there is a suppressed tensor product (see section 8 of the Lectures on the Geometric Anatomy of Theoretical Physics for more details on tensors):
\[ |\psi\rangle = \Big(\sum_{i=0}^{\infty}|e_i\rangle\otimes\langle e_i|\Big)\,|\psi\rangle. \]
One can then define
\[ \sum_{i=0}^{\infty}|e_i\rangle\langle e_i| := \mathrm{id}_{\mathcal{H}} \]
and hence interpret the expansion of |ψ⟩ in terms of the basis as the "insertion" of an identity:
\[ |\psi\rangle = \mathrm{id}_{\mathcal{H}}|\psi\rangle = \sum_{i=0}^{\infty}\big(|e_i\rangle\langle e_i|\big)|\psi\rangle = \sum_{i=0}^{\infty}\langle e_i|\psi\rangle|e_i\rangle. \]
But the original expression was already clear in the first place, without the need to add
hidden tensor products and extra rules. Of course, part of the appeal of this notation is
that one can intuitively think of something like |ei ihei | as a map H → H, by imagining
that the bra on the right acts on a ket in H, thereby producing a complex number which
becomes the coefficient of the remaining ket
\[ \big(|e_i\rangle\langle e_i|\big)\,|\psi\rangle = |e_i\rangle\langle e_i|\psi\rangle = \langle e_i|\psi\rangle\,|e_i\rangle. \]
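The "resolution of the identity" reading of Σ|e_i⟩⟨e_i| can be made concrete in finite dimensions, where the suppressed tensor product is just the matrix outer product. A sketch of ours (the basis is an arbitrary unitary's columns):

```python
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
es = [Q[:, i] for i in range(3)]  # orthonormal basis of C^3

# sum_i |e_i><e_i| as a matrix: the outer product e_i (e_i)^dagger
P = sum(np.outer(e, e.conj()) for e in es)
assert np.allclose(P, np.eye(3))  # resolution of the identity

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
# (|e_0><e_0|)|psi> equals <e_0|psi> |e_0>
assert np.allclose(np.outer(es[0], es[0].conj()) @ psi, np.vdot(es[0], psi) * es[0])
```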
The major drawback of this notation, and the reason why we will not adopt it, is that in many places (for instance, when dealing with self-adjoint operators, or Hermitian operators) it is consistent only if certain conditions are satisfied. While these conditions will indeed be satisfied most of the time, it becomes extremely confusing to formulate conditions on our objects using a notation that only makes sense once the objects already satisfy those conditions.
Of course, as this notation is heavily used in physics and related applied sciences, it is
necessary to be able to recognise it and become fluent in it. But note that it does not make
things clearer. If anything, it makes things more complicated.
5 Measure theory
This and the next section will be a short recap of basic notions from measure theory and
Lebesgue integration. These are inescapable subjects if one wants to understand quantum
mechanics since
(ii) the most commonly emerging separable Hilbert space in quantum mechanics is the
space L2 (Rd ), whose definition needs the notion of Lebesgue integral.
Definition. Let M be a non-empty set. A collection Σ ⊆ P(M) of subsets of M is called a σ-algebra on M if

(i) M ∈ Σ

(ii) if A ∈ Σ, then M \ A ∈ Σ

(iii) for any sequence {A_n}_{n∈N} in Σ we have $\bigcup_{n=0}^{\infty}A_n\in\Sigma$.
Remark 5.1 . If we relax the third condition so that it applies only to finite (rather than
countable) unions, we obtain the notion of an algebra, often called an algebra of sets in
order to distinguish it from the notion of algebra as a vector space equipped with a bilinear
product, with which it has nothing to do.
Remark 5.2 . Note that by condition (ii) and De Morgan’s laws, condition (iii) can be equiv-
alently stated in terms of intersections rather than unions. Recall that De Morgan’s laws
“interchange” unions with intersections and vice-versa under the complement operation.
That is, if M is a set and {Ai }i∈I is a collection of sets, then
\[ M\setminus\bigcup_{i\in I}A_i = \bigcap_{i\in I}(M\setminus A_i), \qquad M\setminus\bigcap_{i\in I}A_i = \bigcup_{i\in I}(M\setminus A_i). \]
A σ-algebra is closed under countably infinite unions (by definition) but also under
countably infinite intersections and finite unions and intersections.
Proposition 5.3. Let M be a set and let Σ be a σ-algebra on M . Let {An }n∈N be a
sequence in Σ. Then, for all k ∈ N, we have
(i) $\bigcup_{n=0}^{k}A_n\in\Sigma$

(ii) $\bigcap_{n=0}^{\infty}A_n\in\Sigma$ and $\bigcap_{n=0}^{k}A_n\in\Sigma$.
Proof. (i) Define B_n := A_n for 0 ≤ n ≤ k and B_n := ∅ for n > k. Then, {B_n}_{n∈N} is a sequence in Σ, so $\bigcup_{n=0}^{\infty}B_n\in\Sigma$. Hence, we have
\[ \bigcup_{n=0}^{\infty}B_n = \Big(\bigcup_{n=0}^{k}B_n\Big)\cup\Big(\bigcup_{n=k+1}^{\infty}B_n\Big) = \bigcup_{n=0}^{k}A_n \]
and thus $\bigcup_{n=0}^{k}A_n\in\Sigma$.
(ii) As {A_n}_{n∈N} is a sequence in Σ, so is {M \ A_n}_{n∈N}, and hence $\bigcup_{n=0}^{\infty}(M\setminus A_n)\in\Sigma$. Thus, we also have
\[ M\setminus\bigcup_{n=0}^{\infty}(M\setminus A_n) = \bigcap_{n=0}^{\infty}A_n \in\Sigma, \]
and the finite case follows by combining this with part (i).
Our goal is to assign volumes (i.e. measures) to subsets of a given set. Of course, we
would also like this assignment to satisfy some sensible conditions. However, it turns out
that one cannot sensibly assign volumes to any arbitrary collection of subsets of a given
set10 . It is necessary that the collection of subsets be a σ-algebra. In addition, just like
in topology openness and closeness are not properties of subsets but properties of subsets
with respect to a choice of topology, so does measurability of subsets only make sense with
respect to a choice of σ-algebra. In particular, a given subset could be measurable with
respect to some σ-algebra and not measurable with respect to some other σ-algebra.
Definition. A measurable space is a pair (M, Σ), where M is a set and Σ is a σ-algebra on M. The elements of Σ are called the measurable subsets of M.

Example 5.4 . The pair (M, P(M)) is a measurable space for any set M. Of course, just like the discrete topology is not a very useful topology, the power set P(M) is not a very useful σ-algebra on M, unless M is countable.
Definition. The extended real line is R := R ∪ {−∞, +∞}, where the symbols −∞ and
+∞ (the latter often denoted simply by ∞) satisfy
∀ r ∈ R : −∞ ≤ r ≤ ∞
with strict inequalities if r ∈ R. The symbols ±∞ satisfy the following arithmetic rules
(i) ∀ r ∈ R : ±∞ + r = r ± ∞ = ±∞
(iv) 0 · (±∞) = (±∞) · 0 = 0.
Definition. Let (M, Σ) be a measurable space. A measure on (M, Σ) is a function
\[ \mu\colon \Sigma\to[0,\infty] \]
such that

(i) μ(∅) = 0

(ii) for any pairwise disjoint sequence {A_n}_{n∈N} in Σ,
\[ \mu\Big(\bigcup_{n=0}^{\infty}A_n\Big) = \sum_{n=0}^{\infty}\mu(A_n). \]

A sequence {A_n}_{n∈N} that satisfies the condition that A_i ∩ A_j = ∅ for all i ≠ j is called a pairwise disjoint sequence, and property (ii) is known as countable additivity (or σ-additivity).
Remark 5.5 . Both sides of the equation in part (ii) of the definition of measure might take the value ∞. There are two possible reasons why $\sum_{n=0}^{\infty}\mu(A_n)$ might be infinite. It could be that μ(A_n) = ∞ for some n ∈ N or, alternatively, it could be that μ(A_n) < ∞ for all n ∈ N but the sequence of partial sums $\{\sum_{i=0}^{n}\mu(A_i)\}_{n\in\mathbb{N}}$, which is an increasing sequence, is unbounded and hence diverges to ∞.
Definition. A measure space is a triple (M, Σ, µ) where (M, Σ) is a measurable space and
µ : Σ → [0, ∞] is a measure on M .
Example 5.6 . Define μ : P(N) → [0, ∞] by μ(A) := |A| if A is finite and μ(A) := ∞ otherwise. Then, μ(∅) = 0 and μ is countably additive, and thus the triple (N, P(N), μ) is a measure space. The measure μ on (N, P(N)) is called the counting measure and it is the usual measure on countable measurable spaces.
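A minimal sketch of the counting measure restricted to finite subsets (the case μ(A) = ∞ for infinite A is omitted, since Python's `len` cannot represent it):

```python
# Counting measure on finite subsets of N: mu(A) = |A|
def mu(A):
    return len(A)

# mu(empty set) = 0, and additivity on pairwise disjoint sets
assert mu(set()) == 0
assert mu({0, 1} | {2, 3}) == mu({0, 1}) + mu({2, 3})
# monotonicity: A subset of B implies mu(A) <= mu(B)
assert mu({0, 1}) <= mu({0, 1, 2})
```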
Proposition 5.7. Let (M, Σ, μ) be a measure space and let A, B ∈ Σ. Then

(i) μ is finitely additive: if A_0, …, A_k ∈ Σ are pairwise disjoint, then $\mu\big(\bigcup_{n=0}^{k}A_n\big) = \sum_{n=0}^{k}\mu(A_n)$;

(ii) μ is monotone: if A ⊆ B, then μ(A) ≤ μ(B);

(iii) if A ⊆ B and μ(A) < ∞, then μ(B \ A) = μ(B) − μ(A).
Proof. (i) Let A_n = ∅ for all n > k. Then, {A_n}_{n∈N} is a pairwise disjoint sequence in Σ and hence, we have
\[ \mu\Big(\bigcup_{n=0}^{k}A_n\Big) = \mu\Big(\bigcup_{n=0}^{\infty}A_n\Big) = \sum_{n=0}^{\infty}\mu(A_n) = \sum_{n=0}^{k}\mu(A_n). \]
Note, however, that this only makes sense if µ(A) < ∞, for otherwise we must also
have µ(B) = ∞ by part (ii), and then µ(B) − µ(A) would not be defined.
Proposition 5.8. Let (M, Σ, μ) be a measure space and let {A_n}_{n∈N} be a sequence in Σ.

(i) If {A_n}_{n∈N} is increasing, i.e. A_n ⊆ A_{n+1} for all n ∈ N, then
\[ \mu\Big(\bigcup_{n=0}^{\infty}A_n\Big) = \lim_{n\to\infty}\mu(A_n). \]

(ii) If μ(A_0) < ∞ and {A_n}_{n∈N} is decreasing, i.e. A_{n+1} ⊆ A_n for all n ∈ N, then
\[ \mu\Big(\bigcap_{n=0}^{\infty}A_n\Big) = \lim_{n\to\infty}\mu(A_n). \]

We say that μ is (i) continuous from below and (ii) continuous from above.
Proof. (i) Define B_0 := A_0 and B_n := A_n \ A_{n−1} for n ≥ 1. Then, {B_n}_{n∈N} is a pairwise disjoint sequence in Σ such that
\[ \bigcup_{i=0}^{n}B_i = A_n \qquad\text{and}\qquad \bigcup_{n=0}^{\infty}A_n = \bigcup_{n=0}^{\infty}B_n. \]
Hence, we have
\[ \mu\Big(\bigcup_{n=0}^{\infty}A_n\Big) = \mu\Big(\bigcup_{n=0}^{\infty}B_n\Big) = \sum_{n=0}^{\infty}\mu(B_n) = \lim_{n\to\infty}\sum_{i=0}^{n}\mu(B_i) = \lim_{n\to\infty}\mu\Big(\bigcup_{i=0}^{n}B_i\Big) = \lim_{n\to\infty}\mu(A_n). \]
(ii) Define B_n := A_0 \ A_n. Then, B_n ⊆ B_{n+1} for all n ∈ N and thus, by part (i), we have
\[ \mu\Big(\bigcup_{n=0}^{\infty}B_n\Big) = \lim_{n\to\infty}\mu(B_n) = \lim_{n\to\infty}\big(\mu(A_0)-\mu(A_n)\big) = \mu(A_0) - \lim_{n\to\infty}\mu(A_n). \]
On the other hand, $\bigcup_{n=0}^{\infty}B_n = A_0\setminus\bigcap_{n=0}^{\infty}A_n$, so that
\[ \mu\Big(\bigcup_{n=0}^{\infty}B_n\Big) = \mu(A_0) - \mu\Big(\bigcap_{n=0}^{\infty}A_n\Big). \]
Since μ(A_0) < ∞, comparing the two expressions yields
\[ \mu\Big(\bigcap_{n=0}^{\infty}A_n\Big) = \lim_{n\to\infty}\mu(A_n). \]
Remark 5.9 . Note that the result in the second part of this proposition need not be true if μ(A_0) = ∞. For example, consider (N, P(N), μ), where μ is the counting measure on (N, P(N)). If A_n = {n, n+1, n+2, …}, then {A_n}_{n∈N} is a decreasing sequence. Since μ(A_n) = ∞ for all n ∈ N, we have $\lim_{n\to\infty}\mu(A_n) = \infty$. On the other hand, $\bigcap_{n=0}^{\infty}A_n = \emptyset$ and thus $\mu\big(\bigcap_{n=0}^{\infty}A_n\big) = 0$.
Proposition 5.10. Let (M, Σ, µ) be a measure space. Then, µ is countably sub-additive.
That is, for any sequence {An }n∈N in Σ, we have
\[ \mu\Big(\bigcup_{n=0}^{\infty}A_n\Big) \le \sum_{n=0}^{\infty}\mu(A_n). \]
Proof. (a) First, we show that μ(A ∪ B) ≤ μ(A) + μ(B) for any A, B ∈ Σ. Note that, for any pair of sets A and B, the sets A \ B, B \ A and A ∩ B are pairwise disjoint and their union is A ∪ B. Hence, by finite additivity,
\[ \mu(A\cup B) = \mu(A\setminus B) + \mu(B\setminus A) + \mu(A\cap B) \le \mu(A) + \mu(B), \]
where the inequality follows since μ(A \ B) + μ(A ∩ B) = μ(A), μ(B \ A) + μ(A ∩ B) = μ(B) and μ(A ∩ B) ≥ 0.

[Figure: Venn diagram of two overlapping sets A and B.]
(b) We now extend this to finite unions by induction. Let {A_n}_{n∈N} be a sequence in Σ and suppose that
\[ \mu\Big(\bigcup_{i=0}^{n}A_i\Big) \le \sum_{i=0}^{n}\mu(A_i) \]
for some n ∈ N. Then, by part (a), we have
\begin{align*}
\mu\Big(\bigcup_{i=0}^{n+1}A_i\Big) &= \mu\Big(A_{n+1}\cup\bigcup_{i=0}^{n}A_i\Big) \\
&\le \mu(A_{n+1}) + \mu\Big(\bigcup_{i=0}^{n}A_i\Big) \\
&\le \mu(A_{n+1}) + \sum_{i=0}^{n}\mu(A_i) \\
&= \sum_{i=0}^{n+1}\mu(A_i).
\end{align*}
Hence, by induction on n with base case n = 1, and noting that the case n = 0 is trivial (it reduces to μ(A_0) = μ(A_0)), we have
\[ \forall\, n\in\mathbb{N}: \mu\Big(\bigcup_{i=0}^{n}A_i\Big) \le \sum_{i=0}^{n}\mu(A_i). \]
(c) Let {A_n}_{n∈N} be a sequence in Σ. Define $B_n := \bigcup_{i=0}^{n}A_i$. Then, {B_n}_{n∈N} is an increasing sequence in Σ with $\bigcup_{n=0}^{\infty}B_n = \bigcup_{n=0}^{\infty}A_n$ and hence, by continuity from below and part (b),
\[ \mu\Big(\bigcup_{n=0}^{\infty}A_n\Big) = \lim_{n\to\infty}\mu(B_n) \le \lim_{n\to\infty}\sum_{i=0}^{n}\mu(A_i) = \sum_{n=0}^{\infty}\mu(A_n). \]
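Finite sub-additivity is easy to illustrate on a finite measurable space with the counting measure (the particular sets below are randomly chosen for this sketch of ours):

```python
import random

random.seed(0)
universe = range(20)
# three random subsets of a finite set, with the counting measure mu = len
sets = [set(random.sample(universe, k=7)) for _ in range(3)]

union = set().union(*sets)
# sub-additivity: mu(union of A_n) <= sum of mu(A_n)
assert len(union) <= sum(len(A) for A in sets)
```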
Definition. Let (M, Σ, μ) be a measure space. The measure μ is said to be σ-finite if there exists a sequence {A_n}_{n∈N} in Σ such that $\bigcup_{n=0}^{\infty}A_n = M$ and
\[ \forall\, n\in\mathbb{N}: \mu(A_n) < \infty. \]
Example 5.11 . The counting measure on (N, P(N)) is σ-finite. To see this, define A_n := {n}. Then, clearly $\bigcup_{n=0}^{\infty}A_n = \mathbb{N}$ and μ(A_n) = |{n}| = 1 < ∞ for all n ∈ N.
S
(ii) Let A ∈ Σ. Then, A ∈ Σi for all i ∈ I and, since each Σi is a σ-algebra, we also have
M \ A ∈ Σi for all i ∈ I. Hence, M \ A ∈ Σ.
(iii) Let {An }n∈N be a sequence in Σ. Then, {An }n∈N is a sequence in each Σi . Thus,
∞
[
∀i ∈ I : An ∈ Σi .
n=0
S∞
Hence, we also have n=0 An ∈ Σ.
Definition. Let M be a set and let E ⊆ P(M) be a collection of subsets of M. The σ-algebra generated by E, denoted σ(E), is the smallest σ-algebra on M containing all the sets in E. That is,
\[ A\in\sigma(\mathcal{E}) \;:\Leftrightarrow\; \text{for all }\sigma\text{-algebras }\Sigma\text{ on }M: \mathcal{E}\subseteq\Sigma \Rightarrow A\in\Sigma \]
or, by letting {Σ_i | i ∈ I} be the collection of all σ-algebras on M such that E ⊆ Σ_i,
\[ \sigma(\mathcal{E}) := \bigcap_{i\in I}\Sigma_i. \]
The set E is called a generating set for σ(E). Observe that the second characterisation
makes it manifest that σ(E) is indeed a σ-algebra on M by the previous proposition.
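On a finite set M, σ(E) can actually be computed by brute-force closure under complements and pairwise unions (countable unions reduce to finite ones here). This is our own illustrative sketch, not a construction from the lectures:

```python
from itertools import combinations

def generated_sigma_algebra(M, E):
    """Closure of E (a list of subsets of M) under complements and unions.

    On a finite set M this yields sigma(E), since countable unions are finite unions.
    """
    sigma = {frozenset(M), frozenset()} | {frozenset(A) for A in E}
    while True:
        new = {frozenset(M - A) for A in sigma}          # complements
        new |= {A | B for A, B in combinations(sigma, 2)}  # pairwise unions
        if new <= sigma:
            return sigma
        sigma |= new

M = frozenset({1, 2, 3, 4})
sigma = generated_sigma_algebra(M, [{1}, {1, 2}])
assert frozenset({2}) in sigma      # obtained via complements and unions
assert frozenset({3, 4}) in sigma   # complement of {1, 2}
assert len(sigma) == 8              # atoms {1}, {2}, {3,4} give 2^3 sets
```

The atoms of the resulting σ-algebra are {1}, {2} and {3, 4}, so σ(E) has 2³ = 8 elements, illustrating that the generated σ-algebra is typically much larger than the generating set.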
Theorem 5.13. Let (M, Σ) be a measurable space. Then, Σ = σ(E) for some E ⊆ P(M ).
This generating construction immediately allows us to link the notions of topology and
σ-algebra on a set M via the following definition.
Definition. Let (M, O) be a topological space. The Borel σ-algebra on (M, O) is σ(O).
Recall that a topology on M is a collection O ⊆ P(M ) of subsets of M which contains
∅ and M and is closed under finite intersections and arbitrary (even uncountable) unions.
The elements of the topology are called open sets. Of course, while there are many choices of σ-algebra on M, if we already have a topology O on M, then the associated Borel σ-algebra is a very convenient choice of σ-algebra since, as we will soon see, it induces a measurable
structure which is “compatible” with the already given topological structure.
This is, in fact, the usual philosophy in mathematics: we always let the stronger struc-
tures induce the weaker ones, unless otherwise specified. For instance, once we have chosen
an inner product on a space, we take the norm to be the induced norm, which induces a
metric, which in turn induces a topology on that space, from which we now know how to
obtain a canonical σ-algebra.
We remark that, while the Borel σ-algebra on a topological space is generated by the
open sets, in general, it contains much more than just the open sets.
Example 5.14 . Recall that the standard topology on R, denoted OR , is defined by
A ∈ OR ⇔ ∀ a ∈ A : ∃ ε > 0 : ∀ r ∈ R : |r − a| < ε ⇒ r ∈ A.
In fact, the elements of OR are at most countable unions of open intervals in R. Consider
now the Borel σ-algebra on (R, O_R). Let a < b. Then, for any n ≥ 1, the interval (a − 1/n, b) is open. Hence, {(a − 1/n, b)}_{n≥1} is a sequence in σ(O_R). Since σ-algebras are closed under countable intersections, we have
⋂_{n=1}^∞ (a − 1/n, b) = [a, b) ∈ σ(O_R).
Hence, σ(OR ) contains, in addition to all open intervals, also all half-open intervals. It is not
difficult to show that it contains all closed intervals as well. In particular, since singletons
are closed, σ(O_R) also contains all countable subsets of R. In fact, it is non-trivial¹¹ to produce a subset of R which is not contained in σ(O_R).
¹¹ https://en.wikipedia.org/wiki/Borel_set#Non-Borel_sets
5.3 Lebesgue measure on Rd
Definition. Let (M, Σ, µ) be a measure space. If A ∈ Σ is such that µ(A) = 0, then A is
called a null set or a set of measure zero.
The following definition is not needed for the construction of the Lebesgue measure.
However, since it is closely connected with that of null set and will be used a lot in the
future, we chose to present it here.
Definition. Let (M, Σ, µ) be a measure space and let P be some property or statement. We say that P holds almost everywhere on M if there exists a null set Z ∈ Σ such that P holds for all m ∈ M \ Z.
Example 5.15 . Let (M, Σ, µ) be a measure space and let f, g : M → N be maps. We say
that f and g are almost everywhere equal, and we write f =a.e. g, if there exists a null set
Z ∈ Σ such that
∀ m ∈ M \ Z : f (m) = g(m).
The case f = g corresponds to Z = ∅.
Definition. A measure space (M, Σ, µ) is said to be complete if every subset of every null set is measurable, i.e.
∀ A ∈ Σ : ∀ B ∈ P(A) : µ(A) = 0 ⇒ B ∈ Σ.
Note that since for any A, B ∈ Σ, B ⊆ A implies µ(B) ≤ µ(A), it follows that every
subset of a null set, if measurable, must also be a null set.
Definition. Let (M, Σ, µ) be a measure space such that (M, +, ·) is a vector space. The measure µ is said to be translation-invariant if, for all m ∈ M and A ∈ Σ, we have A + m ∈ Σ and
µ(A + m) = µ(A),
where A + m := {a + m | a ∈ A}.
Theorem 5.16. Let ORd be the standard topology on Rd . There exists a unique complete,
translation-invariant measure
λd : σ(ORd ) → [0, ∞]
such that for all ai , bi ∈ R with 1 ≤ i ≤ d and ai < bi , we have
λ^d([a_1, b_1) × ··· × [a_d, b_d)) = ∏_{i=1}^d (b_i − a_i).
The superscript d in λd may be suppressed if there is no risk of confusion. Note that
the Lebesgue measure on R, R2 and R3 coincides with the standard notions of length, area
and volume, with the further insight that these are only defined for the elements of the
respective Borel σ-algebras.
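The defining property above is easy to illustrate. The following sketch (ours, not from the lectures; the name `box_measure` is an assumption) computes λ^d of a half-open box as the product of its side lengths.

```python
# Lebesgue measure of a box [a1,b1) x ... x [ad,bd): product of side lengths.
from functools import reduce

def box_measure(intervals):
    """lambda^d of a box given as a list of pairs (a_i, b_i) with a_i < b_i."""
    return reduce(lambda acc, ab: acc * (ab[1] - ab[0]), intervals, 1.0)

# In d = 2 this is the usual area, in d = 3 the usual volume.
assert box_measure([(0.0, 2.0), (1.0, 4.0)]) == 6.0              # area 2 * 3
assert box_measure([(0.0, 1.0), (0.0, 1.0), (0.0, 5.0)]) == 5.0  # volume
```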
Example. The Lebesgue measure λ on R is σ-finite. To see this, let {a_n}_{n∈N} be an enumeration of Z, that is, let {a_n}_{n∈N} be the sequence (0, 1, −1, 2, −2, 3, −3, . . .). Clearly, we have ⋃_{n=0}^∞ [a_n, a_n + 1) = R. Since, for all n ∈ N, [a_n, a_n + 1) ∈ σ(O_R) and λ([a_n, a_n + 1)) = 1 < ∞, the measure λ is σ-finite.
Definition. Let (M, Σ_M) and (N, Σ_N) be measurable spaces. A map f : M → N is said to be measurable if
∀ A ∈ Σ_N : preim_f(A) ∈ Σ_M.
Note that this is exactly the definition of continuous map between topological spaces, with "continuous" replaced by "measurable" and topologies replaced by σ-algebras.
Corollary 5.19. Let (M, OM ) and (N, ON ) be topological spaces. Any continuous map
M → N is measurable with respect to the Borel σ-algebras on M and N .
Corollary 5.20. Any monotonic map R → R is measurable with respect to the Borel σ-algebra on (R, O_R).
Proposition 5.21. Let (M, Σ_M), (N, Σ_N) and (P, Σ_P) be measurable spaces and let f : M → N and g : N → P be measurable. Then, g ◦ f : M → P is measurable.
Proof. Let A ∈ Σ_P. Then, preim_{g◦f}(A) = preim_f(preim_g(A)) ∈ Σ_M since preim_g(A) ∈ Σ_N. Hence, g ◦ f is measurable.
Proposition 5.22. Let (M, ΣM ) and (N, ΣN ) be measurable spaces and let {fn }n∈N be a
sequence of measurable maps from M to N whose pointwise limit is f . Then, f is measur-
able.
Proposition 5.23. Let (M, Σ_M, µ) be a measure space, let (N, Σ_N) be a measurable space and let f : M → N be a measurable map. Then, the map
f_∗µ : Σ_N → [0, ∞]
         A ↦ µ(preim_f(A))
is a measure on (N, Σ_N), called the pushforward of µ along f.
That f_∗µ is a measure follows easily from the fact that µ is a measure and basic properties of pre-images of maps, namely that pre-images preserve unions and disjointness.
6 Integration of measurable functions
We will now focus on measurable functions M → R and define their integral on a subset
of M with respect to some measure on M , which is called the Lebesgue integral. Note
that, even if M ⊆ Rd , the Lebesgue integral of a function need not be with respect to the
Lebesgue measure.
The key application of this material is the definition of the Banach spaces of (classes
of) Lebesgue integrable functions Lp . The case p = 2 is especially important since L2 is, in
fact, a Hilbert space. It appears a lot in quantum mechanics where it is loosely referred to
as the space of square-integrable functions.
Definition. Let M be a set and let A ∈ P(M). The characteristic function of A is the map χ_A : M → R defined by χ_A(m) := 1 if m ∈ A and χ_A(m) := 0 if m ∉ A.
[Figure: the graph of χ_(1,2] : R → R, which takes the value 1 on (1, 2] and 0 elsewhere.]
Proposition 6.2. Let M be a set and let A, B ∈ P(M). Then
(i) χ_∅ = 0
(ii) χ_{A∪B} = χ_A + χ_B − χ_{A∩B}
(iii) χ_{A∩B} = χ_A χ_B
(iv) χ_{M\A} + χ_A = 1
where the addition and multiplication are pointwise and the 0 and 1 in parts (i) and (iv)
are the constant functions M → R mapping every m ∈ M to 0 ∈ R and 1 ∈ R, respectively.
Definition. Let M be a set. A function s : M → R is simple if s(M ) = {r1 , . . . , rn } for
some n ∈ N.
Equivalently, s : M → R is simple if there exist r1 , . . . , rn ∈ R and A1 , . . . , An ∈ P(M ),
for some n ∈ N, such that
s = ∑_{i=1}^n r_i χ_{A_i}.
So s is simple if it is a linear combination of characteristic functions.
Example 6.3. Consider the simple function s : R → R given by s := χ_{[1,3]} + 2χ_{[2,5]}.
[Figure: the graph of s, which takes the value 1 on [1, 2), 3 on [2, 3], 2 on (3, 5] and 0 elsewhere.]
The sets [1, 3] and [2, 5] overlap, so such an expression is not unique. A simple function s is said to be in its standard form if
s = ∑_{i=1}^n r_i χ_{A_i}
where the r_i are pairwise distinct and A_i ∩ A_j = ∅ whenever i ≠ j.
Any simple function can be written in standard form. It is clear that if s is in its standard form, then A_i = preim_s({r_i}).
Proposition 6.4. Let (M, Σ) be a measurable space and let A, A1 , . . . , An ∈ P(M ). Then
(i) χA is measurable if, and only if, A ∈ Σ
(ii) if s = ∑_{i=1}^n r_i χ_{A_i} is a simple function in its standard form, then s is measurable if, and only if, A_i ∈ Σ for all 1 ≤ i ≤ n.
Proof. (i) For any subset S of the target, preim_{χ_A}(S) is one of ∅, A, M \ A, M, depending on which of the values 0 and 1 lie in S: for instance,
preim_{χ_A}(α) = A if 1 ∈ α and 0 ∉ α,
preim_{χ_A}(β) = ∅ if 0 ∉ β and 1 ∉ β,
preim_{χ_A}(γ) = M if 0 ∈ γ and 1 ∈ γ.
Hence, all pre-images lie in Σ if, and only if, A ∈ Σ.
(ii) First define χ̃_{A_i} := r_i χ_{A_i}, which satisfies
χ̃_{A_i}(m) = r_i if m ∈ A_i, and χ̃_{A_i}(m) = 0 if m ∉ A_i.
As s is in its standard form, we know that A_i ∩ A_j = ∅ for all i ≠ j. Combining these two facts, and defining α_i = [r_i, ∞) and γ_i = [0, r_i], the result follows from part (i).
Definition. Let (M, Σ, µ) be a measure space and let s = ∑_{i=1}^n r_i χ_{A_i} be a non-negative, measurable, simple function M → R in its standard form. The integral of s over M with respect to µ is
∫_M s dµ := ∑_{i=1}^n r_i µ(A_i).
Note that the non-negativity condition is essential: since µ takes values in [0, ∞], we could have µ(A_i) = ∞ for more than one A_i, and if the corresponding coefficients r_i had opposite signs, we would be led to ∫_M s dµ = ∞ − ∞, which is not defined.
Example 6.5 . Consider the measure space (N, P(N), µ), where µ is the counting measure,
let f : N → R be non-negative and suppose that there exists N ∈ N such that f (n) = 0 for
all n > N . Then, we can write f as
f = ∑_{n=0}^N f(n) χ_{{n}}.
Hence, the "integral" of f over N with respect to the counting measure is just the sum: ∫_N f dµ = ∑_{n=0}^N f(n) µ({n}) = ∑_{n=0}^N f(n).
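The reduction of the integral of a simple function to the finite sum ∑ r_i µ(A_i) can be sketched as follows (our own illustration; `integrate_simple` is an assumed name).

```python
# Integral of s = sum_i r_i chi_{A_i}: sum of r_i * mu(A_i).
def integrate_simple(coeffs_and_sets, mu):
    """coeffs_and_sets is a list of pairs (r_i, A_i)."""
    return sum(r * mu(A) for r, A in coeffs_and_sets)

mu = lambda A: len(A)                    # counting measure on N
f = lambda n: n ** 2                     # f vanishes beyond N = 4
N = 4
simple_form = [(f(n), {n}) for n in range(N + 1)]
# With mu({n}) = 1, the integral is just the plain sum of values.
assert integrate_simple(simple_form, mu) == sum(f(n) for n in range(N + 1))
```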
The need for simple functions to be in their standard form, which was introduced to
avoid any potential ambiguity in the definition of their integral, can be relaxed using the
following lemma.
Lemma 6.6. Let (M, Σ, µ) be a measure space. Let s and t be non-negative, measurable,
simple functions M → R and let c ∈ [0, ∞). Then
∫_M (cs + t) dµ = c ∫_M s dµ + ∫_M t dµ.
Proposition 6.7. Let (M, Σ, µ) be a measure space and let s = ∑_{i=1}^n r_i χ_{A_i} be a non-negative, measurable, simple function M → R, not necessarily in its standard form. Then
∫_M s dµ = ∑_{i=1}^n r_i µ(A_i).
Corollary 6.8. Let (M, Σ, µ) be a measure space. Let s and t be non-negative, measurable,
simple functions M → R such that s ≤ t (that is, s(m) ≤ t(m) for all m ∈ M ). Then
∫_M s dµ ≤ ∫_M t dµ.
Lemma 6.9. Let (M, Σ, µ) be a measure space and let s = ∑_{i=1}^n r_i χ_{A_i} be a non-negative, measurable, simple function M → R. Define the map
ν_s : Σ → [0, ∞]
        A ↦ ∫_M s χ_A dµ,
where sχA is the pointwise product of s and χA . Then, νs is a measure on (M, Σ).
6.3 Integration of non-negative measurable functions
As we are interested in measurable functions M → R̄, where R̄ := R ∪ {−∞, +∞} denotes the extended real line, we need to define a σ-algebra on R̄. We cannot use the Borel σ-algebra since we haven't even defined a topology on R̄. In fact, we can easily get a σ-algebra on R̄ as follows:
Σ_{R̄} := {A ∈ P(R̄) | A \ {−∞, +∞} ∈ σ(O_R)}.
In other words, we can simply ignore the infinities in a subset of R̄ and consider it to be measurable if A \ {−∞, +∞} is in the Borel σ-algebra of R. We will always consider R̄ to be equipped with this σ-algebra.
Lemma 6.11. Let (M, Σ) be a measurable space and let f, g : M → R̄ be measurable. Then, the following functions are measurable.
(i) f + g
(ii) |f| and f²
(iii) fg
(iv) max(f, g) and min(f, g)
Notation. Given a measure space (M, Σ, µ) and a non-negative, measurable f : M → R̄, we also write
∫_M f(x) µ(dx) := ∫_M f dµ,
where x is a dummy variable and could be replaced by any other symbol. The reason why this is a convenient notation is that, while some functions have standard symbols but cannot be easily represented by an algebraic expression (e.g. characteristic functions), others are easily expressed in terms of an algebraic formula but do not have a standard name. For instance, it is much easier to just write
∫_R x² µ(dx)
than having to first denote the function R → R, x ↦ x² by a generic f or, say, the more specific sq_R, and then write
∫_R sq_R dµ.
In computer programming, this is akin to defining anonymous functions.
Definition. Let (M, Σ, µ) be a measure space and let f : M → R̄ be a non-negative, measurable function. The integral of f over M with respect to µ is
∫_M f dµ := sup { ∫_M s dµ | s : M → R non-negative, measurable, simple with s ≤ f }.
For any A ∈ Σ (that is, any measurable subset of M), we define
∫_A f dµ := ∫_M f χ_A dµ.
Proposition. Let f, g : M → R̄ be non-negative and measurable. (i) If f ≤ g, then ∫_M f dµ ≤ ∫_M g dµ.
Proof. (i) Denote by S_f and S_g the sets of non-negative, measurable, simple functions that are less than or equal to f and g, respectively. As f ≤ g, we have S_f ⊆ S_g and hence
∫_M f dµ := sup_{s∈S_f} ∫_M s dµ ≤ sup_{s∈S_g} ∫_M s dµ =: ∫_M g dµ.
Proposition 6.14 (Markov inequality). Let (M, Σ, µ) be a measure space and let f : M → R
be a non-negative, measurable function. For any z ∈ [0, ∞], we have
∫_M f dµ ≥ z µ(preim_f([z, ∞])).
The following is the pivotal theorem of Lebesgue integration.
Theorem 6.15 (Monotone convergence theorem). Let (M, Σ, µ) be a measure space and let {f_n}_{n∈N} be a sequence of non-negative, measurable functions M → R̄ such that f_{n+1} ≥ f_n for all n ∈ N. If there exists a function f : M → R̄ such that lim_{n→∞} f_n = f pointwise, then f is measurable and
∫_M f dµ = lim_{n→∞} ∫_M f_n dµ.
Remark 6.16. Observe that this result is in stark contrast with what one may be used to from Riemann integration, where pointwise convergence of a sequence of integrable functions {f_n}_{n∈N} is not a sufficient condition for the integral of the limit f to be equal to the limit of the integrals of the f_n or, in fact, even for f to be integrable. For these, we need stronger conditions on the sequence {f_n}_{n∈N}, such as uniform convergence.
Example 6.17 . Consider the measure space (N, P(N), µ), where µ is the counting measure,
and let f : N → R be non-negative. Note that the choice of σ-algebra P(N) on N makes
every function on N (to any measurable space) measurable. Define, for every n ∈ N,
s_n := ∑_{i=0}^n f(i) χ_{{i}}.
Then, {s_n}_{n∈N} is an increasing sequence of non-negative, measurable, simple functions whose pointwise limit is f. Hence, by the monotone convergence theorem,
∫_N f dµ = lim_{n→∞} ∫_N s_n dµ = lim_{n→∞} ∑_{i=0}^n f(i) = ∑_{i=0}^∞ f(i).
If you ever wondered why series seem to share so many properties with integrals, the reason
is that series are just integrals with respect to a discrete measure.
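The monotone convergence mechanism behind this example can be sketched numerically (our illustration, not from the lectures): the integrals of the truncations s_n increase to the series sum.

```python
# Partial sums s_n increase pointwise to f; their integrals with respect to
# the counting measure are the partial sums of the series, converging to 2.
import math

f = lambda n: 1.0 / 2 ** n                # non-negative f on N
integrals = []
for n in range(30):
    # integral of s_n = sum_{i <= n} f(i)
    integrals.append(sum(f(i) for i in range(n + 1)))

# The integrals form an increasing sequence converging to sum_n f(n) = 2.
assert all(a <= b for a, b in zip(integrals, integrals[1:]))
assert math.isclose(integrals[-1], 2.0, rel_tol=1e-6)
```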
The monotone convergence theorem can be used to extend some of the properties of integrals of non-negative, measurable, simple functions to non-negative, measurable functions which are not necessarily simple.
Lemma 6.18. Let (M, Σ, µ) be a measure space, let f, g : M → R be non-negative, mea-
surable functions and let c ∈ [0, ∞). Then
(i) ∫_M (cf + g) dµ = c ∫_M f dµ + ∫_M g dµ
(ii) the map ν_f : Σ → [0, ∞] defined by ν_f(A) := ∫_A f dµ is a measure on (M, Σ)
(iii) for any A ∈ Σ, we have ∫_M f dµ = ∫_A f dµ + ∫_{M\A} f dµ.
Proof. (i) Let {sn }n∈N and {tn }n∈N be increasing sequences of non-negative, measurable,
simple functions whose pointwise limits are f and g, respectively. Then, it is easy to
see that {csn + tn }n∈N is an increasing sequence of non-negative, measurable, simple
functions whose pointwise limit is cf + g. Hence, by Lemma 6.6 and the monotone convergence theorem,
∫_M (cf + g) dµ = lim_{n→∞} ∫_M (cs_n + t_n) dµ
                = lim_{n→∞} ( c ∫_M s_n dµ + ∫_M t_n dµ )
                = c lim_{n→∞} ∫_M s_n dµ + lim_{n→∞} ∫_M t_n dµ
                = c ∫_M f dµ + ∫_M g dµ.
(ii) To check that ν_f is a measure on (M, Σ), first note that we have
ν_f(∅) = ∫_∅ f dµ := ∫_M f χ_∅ dµ = 0.
Now, let {A_n}_{n∈N} be a sequence of pairwise disjoint sets in Σ and define
f_n := f χ_{(⋃_{i=0}^n A_i)}.
Since, for all n ∈ N, we have ⋃_{i=0}^n A_i ⊆ ⋃_{i=0}^{n+1} A_i and f is non-negative, {f_n}_{n∈N} is an increasing sequence of non-negative, measurable functions whose pointwise limit is f χ_{(⋃_{i=0}^∞ A_i)}. Hence, by recalling Proposition 6.2 and the monotone convergence theorem, we have
ν_f(⋃_{i=0}^∞ A_i) := ∫_{⋃_{i=0}^∞ A_i} f dµ
                    = ∫_M f χ_{(⋃_{i=0}^∞ A_i)} dµ
                    = lim_{n→∞} ∫_M f χ_{(⋃_{i=0}^n A_i)} dµ
                    = lim_{n→∞} ∫_M ∑_{i=0}^n f χ_{A_i} dµ
                    = lim_{n→∞} ∑_{i=0}^n ∫_M f χ_{A_i} dµ
                    = ∑_{i=0}^∞ ν_f(A_i).
(iii) Note that A ∩ (M \ A) = ∅. Hence, by using the fact that νf from part (ii) is a
measure on (M, Σ), we have
∫_M f dµ = ν_f(M) = ν_f(A) + ν_f(M \ A) = ∫_A f dµ + ∫_{M\A} f dµ.
Part (i) of the previous lemma and the monotone convergence theorem also imply that,
for any sequence {fn }n∈N of non-negative, measurable functions, we have
∫_M ∑_{n=0}^∞ f_n dµ = ∑_{n=0}^∞ ∫_M f_n dµ.
Again, note that this result does not hold for the Riemann integral unless stronger conditions are placed on the sequence {f_n}_{n∈N}.
Finally, we have a simple but crucial result for Lebesgue integration.
Theorem 6.19. Let (M, Σ, µ) be a measure space and let f : M → R̄ be a non-negative, measurable function. Then
∫_M f dµ = 0 ⇔ f =_{a.e.} 0.
Proof. (⇒) Suppose that ∫_M f dµ = 0. For each n ∈ N, define A_n := {m ∈ M | f(m) > 1/(n+1)} and
s_n := (1/(n+1)) χ_{A_n}.
Then, each s_n is a non-negative, measurable, simple function with s_n ≤ f, so that 0 ≤ ∫_M s_n dµ ≤ ∫_M f dµ = 0
and thus, ∫_M s_n dµ = 0 for all n ∈ N. Since by definition
∫_M s_n dµ = (1/(n+1)) µ(A_n),
we must also have µ(An ) = 0 for all n ∈ N. Let A := {m ∈ M | f (m) 6= 0}. Then, as
f is non-negative, we have
A = ⋃_{n=0}^∞ A_n = ⋃_{n=0}^∞ {m ∈ M | f(m) > 1/(n+1)}
and hence, by σ-subadditivity, µ(A) ≤ ∑_{n=0}^∞ µ(A_n) = 0. That is, f =_{a.e.} 0.
(⇐) Suppose that f =_{a.e.} 0. Let S be the set of non-negative, measurable, simple functions s such that s ≤ f. As f =_{a.e.} 0, we have s =_{a.e.} 0 for all s ∈ S. Thus, if
s = ∑_{i=1}^n r_i χ_{A_i}
is in standard form, then for each i either r_i = 0 or µ(A_i) = 0, so that ∫_M s dµ = ∑_{i=1}^n r_i µ(A_i) = 0. Therefore, we have
∫_M f dµ := sup_{s∈S} ∫_M s dµ = 0.
This means that, for the purposes of Lebesgue integration, null sets can be neglected
as they do not change the value of an integral. The following are some examples of this.
Lemma. Let (M, Σ, µ) be a measure space and let f, g : M → R̄ be non-negative, measurable functions with f =_{a.e.} g. Then
∫_M f dµ = ∫_M g dµ.
Proof. As f =_{a.e.} g, we have f − g =_{a.e.} 0 and thus, by the previous theorem,
0 = ∫_M (f − g) dµ = ∫_M f dµ − ∫_M g dµ.
Example 6.21 . Consider (R, σ(OR ), λ) and let f : R → R be the Dirichlet function
f(r) := 1 if r ∈ Q, and f(r) := 0 if r ∈ R \ Q.
The Dirichlet function is the usual example of a function which is not Riemann integrable
(on any real interval). We will now show that we can easily assign a numerical value to
its integral on any measurable subset of R. First, note that a set A ∈ σ(OR ) is null with
respect to the Lebesgue measure if, and only if,
∀ ε > 0 : ∃ {I_n}_{n∈N} : A ⊆ ⋃_{n=0}^∞ I_n and ∑_{n=0}^∞ λ(I_n) < ε
where {In }n∈N is a sequence of real intervals. From this, it immediately follows that any
countable subset of R has zero Lebesgue measure. Thus, λ(Q) = 0 and hence, f =a.e. 0.
Therefore, by the previous lemmas, we have
∫_A f dλ = 0
for any A ∈ σ(O_R).
Definition. Let (M, Σ, µ) be a measure space and let f : M → R. The function f is said
to be (Lebesgue) integrable if it is measurable and
∫_M |f| dµ < ∞.
We denote the set of all integrable functions M → R̄ by ℒ¹_R(M, Σ, µ), or simply ℒ¹(M) if there is no risk of confusion.
For any f : M → R, we define f + := max(f, 0) and f − := max(−f, 0), which are
measurable whenever f is measurable by part (iv) of Lemma 6.11.
[Figure: the graphs of f, f⁺ and f⁻.]
Definition. Let (M, Σ, µ) be a measure space and let f : M → R be integrable. Then, the
(Lebesgue) integral of f over M with respect to µ is
∫_M f dµ := ∫_M f⁺ dµ − ∫_M f⁻ dµ.
It should be clear that the role of the integrability condition ∫_M |f| dµ < ∞ is to prevent the undefined expression ∞ − ∞ from appearing: since f⁺ ≤ |f| and f⁻ ≤ |f|, both integrals on the right-hand side are finite.
This extends to complex-valued functions. A function f : M → C is said to be integrable if Re(f) and Im(f) are measurable and
∫_M |f| dµ < ∞,
where |f| denotes the complex modulus, i.e. |f|² = Re(f)² + Im(f)². We denote the set of all integrable complex functions by ℒ¹_C(M, Σ, µ), or simply ℒ¹(M) if there is no risk of confusion.
The following lemma gives the properties expected of sums and scalar multiples of integrals. Note, however, that before we show that, say, the integral of a sum is the sum of the integrals, it is necessary to first show that the sum of two functions in ℒ¹(M) is again in ℒ¹(M).
Lemma 6.22. Let (M, Σ, µ) be a measure space, let f, g ∈ ℒ¹(M) and let c ∈ R. Then
(i) |f| ∈ ℒ¹(M) and |∫_M f dµ| ≤ ∫_M |f| dµ
(ii) cf ∈ ℒ¹(M) and ∫_M cf dµ = c ∫_M f dµ
(iii) f + g ∈ ℒ¹(M) and ∫_M (f + g) dµ = ∫_M f dµ + ∫_M g dµ
(iv) ℒ¹(M) is a vector space.
M M M
Z Z Z
f dµ = f + dµ − −
f dµ
M ZM Z M
+ −
≤ f dµ + f dµ
ZM Z M
= +
f dµ + f − dµ
ZM M
+ −
= (f + f ) dµ
ZM
= |f | dµ.
M
(ii) We have
∫_M |cf| dµ = ∫_M |c||f| dµ = |c| ∫_M |f| dµ < ∞,
so cf ∈ ℒ¹(M). Suppose first that c ≥ 0. Then, (cf)⁺ = cf⁺ and (cf)⁻ = cf⁻, so that
∫_M cf dµ = ∫_M (cf)⁺ dµ − ∫_M (cf)⁻ dµ
          = c ∫_M f⁺ dµ − c ∫_M f⁻ dµ
          = c ∫_M f dµ.
Now suppose c = −1. Then, (−f)⁺ = f⁻ and (−f)⁻ = f⁺. Thus
∫_M (−f) dµ = ∫_M (−f)⁺ dµ − ∫_M (−f)⁻ dµ
            = ∫_M f⁻ dµ − ∫_M f⁺ dµ
            = −( ∫_M f⁺ dµ − ∫_M f⁻ dµ )
            = − ∫_M f dµ.
The case c < 0 follows by writing c = (−1)(−c) and applying the above results.
(iii) Since |f + g| ≤ |f| + |g|, we have ∫_M |f + g| dµ < ∞ and hence f + g ∈ ℒ¹(M). Moreover,
f + g = f⁺ − f⁻ + g⁺ − g⁻ = (f⁺ + g⁺) − (f⁻ + g⁻),
so that (f + g)⁺ + f⁻ + g⁻ = (f + g)⁻ + f⁺ + g⁺. By additivity of the integral for non-negative functions and rearranging, we obtain
∫_M (f + g) dµ = ∫_M f dµ + ∫_M g dµ.
(iv) The set of all functions from M to R̄ is a vector space. By parts (ii) and (iii), ℒ¹(M) is a vector subspace of this vector space and hence, a vector space in its own right.
Some properties of the integrals of non-negative, measurable functions easily carry over
to general integrable functions.
Just as the monotone convergence theorem was very important for integrals of non-negative, measurable functions, there is a similar theorem that is important for integrals of functions in ℒ¹(M).
Theorem 6.24 (Dominated convergence theorem). Let (M, Σ, µ) be a measure space and let {f_n}_{n∈N} be a sequence of measurable functions which converges almost everywhere to a measurable function f. If there exists g ∈ ℒ¹(M) such that |f_n| ≤_{a.e.} g for all n ∈ N, then
(i) f ∈ ℒ¹(M) and f_n ∈ ℒ¹(M) for all n ∈ N
(ii) lim_{n→∞} ∫_M |f_n − f| dµ = 0
(iii) lim_{n→∞} ∫_M f_n dµ = ∫_M f dµ.
Remark 6.25 . By “{fn }n∈N converges almost everywhere to f ” we mean, of course, that
there exists a null set A ∈ Σ such that
∀ m ∈ M \ A : lim fn (m) = f (m).
n→∞
More generally, for p ∈ [1, ∞) one defines
ℒ^p_R(M, Σ, µ) := {f : M → R̄ | f is measurable and ∫_M |f|^p dµ < ∞},
while, with ess sup |f| := inf{c ≥ 0 | |f| ≤_{a.e.} c},
ℒ^∞_R(M, Σ, µ) := {f : M → R̄ | f is measurable and ess sup |f| < ∞}
and, similarly,
ℒ^∞_C(M, Σ, µ) := {f : M → C | Re(f) and Im(f) are measurable and ess sup |f| < ∞},
with ℒ^p_C defined analogously using the complex modulus. Whenever there is no risk of confusion, we lighten the notation to just ℒ^p, for p ∈ [1, ∞].
All the ℒ^p spaces become vector spaces once equipped with pointwise addition and scalar multiplication. Let us show this in detail for ℒ²_C.
Proposition 6.26. Let (M, Σ, µ) be a measure space. Then, ℒ²_C is a complex vector space.
Proof. The set of all functions M → C, often denoted C^M, is a vector space under pointwise addition and scalar multiplication. Hence, it suffices to show that ℒ²_C is a subspace of C^M. Let f, g ∈ ℒ²_C and note that
|f + g|² = (f + g)(f̄ + ḡ) = f f̄ + f ḡ + g f̄ + g ḡ = |f|² + f ḡ + g f̄ + |g|².
Moreover, as
0 ≤ |f − g|² = |f|² − f ḡ − g f̄ + |g|²,
we have f ḡ + g f̄ ≤ |f|² + |g|², and thus |f + g|² ≤ 2|f|² + 2|g|². Therefore
∫_M |f + g|² dµ ≤ 2 ∫_M |f|² dµ + 2 ∫_M |g|² dµ < ∞,
so f + g ∈ ℒ²_C. Closure under scalar multiplication is immediate since |cf|² = |c|²|f|².
Ideally, we would like to turn all these ℒ^p spaces into Banach spaces. Let us begin by equipping them with a weaker piece of extra structure.
Proposition 6.27. Let (M, Σ, µ) be a measure space and let p ∈ [1, ∞]. Then, the maps ‖·‖_p : ℒ^p → R defined by
‖f‖_p := (∫_M |f|^p dµ)^{1/p} for 1 ≤ p < ∞,   ‖f‖_∞ := ess sup |f|
are semi-norms, i.e. for all f, g ∈ ℒ^p and all scalars c,
(i) ‖f‖_p ≥ 0
(ii) ‖cf‖_p = |c| ‖f‖_p
(iii) ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p.
In other words, the notion of semi-norm is a generalisation of that of norm obtained by
relaxing the definiteness condition. If the measure space (M, Σ, µ) is such that the empty
set is the only null set, then k · kp is automatically definite and hence, a norm.
Example 6.28 . Consider (N, P(N), µ), where µ is the counting measure. Then, as µ(A) is
the cardinality of A, the only null set is the empty set. Thus, recalling that functions on N
are just sequences, the maps
‖{a_n}_{n∈N}‖_p = (∑_{n=0}^∞ |a_n|^p)^{1/p} for 1 ≤ p < ∞,   ‖{a_n}_{n∈N}‖_∞ = sup{|a_n| | n ∈ N}
are genuine norms here.
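These sequence-space norms are easy to play with numerically. The following sketch (ours; `lp_norm` is an assumed name) computes ℓ^p norms of finitely supported sequences and spot-checks the triangle inequality (iii).

```python
# l^p norms of finitely supported sequences, plus a triangle-inequality check.
def lp_norm(a, p):
    """l^p norm of a finite list a; pass p = 'inf' for the sup norm."""
    if p == 'inf':
        return max(abs(x) for x in a)
    return sum(abs(x) ** p for x in a) ** (1.0 / p)

a = [3.0, -4.0, 0.0]
b = [1.0, 2.0, -2.0]
assert lp_norm(a, 2) == 5.0                      # sqrt(9 + 16)
assert lp_norm(a, 'inf') == 4.0
for p in [1, 2, 3, 'inf']:
    s = [x + y for x, y in zip(a, b)]
    assert lp_norm(s, p) <= lp_norm(a, p) + lp_norm(b, p) + 1e-12
```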
In general, however, ‖·‖_p fails to be definite: we only have
‖f‖_p = 0 ⇔ f =_{a.e.} 0,
as we have shown in Theorem 6.19 for ℒ¹, and it is often very easy to produce an f ≠ 0 such that ‖f‖_p = 0. The solution to this problem is to construct new spaces from the ℒ^p in
which functions that are almost everywhere equal are, in fact, the same function. In other
words, we need to consider the quotient space of ℒ^p by the equivalence relation "being
almost everywhere equal”.
Definition. Let M be a set. An equivalence relation on M is a set ∼ ⊆ M × M such that,
writing a ∼ b for (a, b) ∈ ∼, we have
(i) a ∼ a (reflexivity)
(ii) a ∼ b ⇔ b ∼ a (symmetry)
(iii) a ∼ b and b ∼ c ⇒ a ∼ c (transitivity).
Given m ∈ M, the equivalence class of m is [m] := {a ∈ M | a ∼ m}, and the quotient set of M by ∼ is
M/∼ := {[m] | m ∈ M}.
The equivalence classes form a partition of M. In fact, the notions of equivalence relation on M and partition of M are one and the same.
Lemma 6.29. Let (M, Σ, µ) be a measure space and let ∼ be defined on ℒ^p by
f ∼ g :⇔ f =_{a.e.} g.
Then, ∼ is an equivalence relation on ℒ^p.
Proof. Let f, g, h ∈ ℒ^p. Clearly, f ∼ f and f ∼ g ⇔ g ∼ f. Now suppose that f ∼ g and g ∼ h. Then, there exist null sets A, B ∈ Σ such that f = g on M \ A and g = h on M \ B. Recall that σ-algebras are closed under unions and hence, A ∪ B ∈ Σ. Obviously, we have f = h on M \ (A ∪ B) and, since µ(A ∪ B) ≤ µ(A) + µ(B) = 0, the set A ∪ B is null. Hence, f ∼ h, so ∼ is an equivalence relation on ℒ^p.
Definition. Let (M, Σ, µ) be a measure space and let p ∈ [1, ∞]. We define the quotient space
L^p := ℒ^p/∼ = {[f] | f ∈ ℒ^p}.
Lemma 6.31 (Hölder's inequality). Let (M, Σ, µ) be a measure space and let p, q ∈ [1, ∞] be such that 1/p + 1/q = 1 (where 1/∞ := 0). Then, for all measurable functions f, g : M → C, we have
∫_M |fg| dµ ≤ (∫_M |f|^p dµ)^{1/p} (∫_M |g|^q dµ)^{1/q}.
Equality holds if, and only if, |f|^p and |g|^q are linearly dependent in L¹, that is, there exist non-negative real numbers α, β, not both zero, such that α|f|^p =_{a.e.} β|g|^q.
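On the counting-measure space (N, P(N), µ), where integrals are sums, the inequality can be spot-checked numerically; the sketch below is our own illustration.

```python
# Hoelder on finitely supported sequences:
# sum|fg| <= (sum|f|^p)^(1/p) * (sum|g|^q)^(1/q) with 1/p + 1/q = 1.
f = [1.0, -2.0, 3.0, 0.5]
g = [0.3, 1.5, -1.0, 2.0]
for p in [1.5, 2.0, 3.0]:
    q = p / (p - 1.0)                      # conjugate exponent
    lhs = sum(abs(x * y) for x, y in zip(f, g))
    rhs = (sum(abs(x) ** p for x in f) ** (1 / p)
           * sum(abs(y) ** q for y in g) ** (1 / q))
    assert lhs <= rhs + 1e-12
# The case p = q = 2 is the Cauchy-Schwarz inequality.
```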
Theorem 6.32. The spaces L^p are Banach spaces for all p ∈ [1, ∞].
We have already remarked that the case p = 2 is special in that L2 is the only Lp space
which can be made into a Hilbert space.
⟨·|·⟩_{L²} : L² × L² → C
             ([f], [g]) ↦ ∫_M f̄ g dµ.
Proposition 6.33. The map ⟨·|·⟩_{L²} is a well-defined inner product on L².
Proof. First note that if [f] ∈ L², then [f̄] ∈ L² and hence, by Hölder's inequality, [f̄g] ∈ L¹. This ensures that ⟨[f]|[g]⟩_{L²} ∈ C for all [f], [g] ∈ L².
To show well-definedness, let f′ =_{a.e.} f and g′ =_{a.e.} g. Then, f̄′g′ =_{a.e.} f̄g and thus
⟨[f′]|[g′]⟩_{L²} := ∫_M f̄′g′ dµ = ∫_M f̄g dµ =: ⟨[f]|[g]⟩_{L²}.
(i) Conjugate symmetry holds since the complex conjugate of ∫_M ḡf dµ is ∫_M f̄g dµ, i.e. ⟨[f]|[g]⟩_{L²} is the complex conjugate of ⟨[g]|[f]⟩_{L²}.
(ii) We have
⟨[f]|z[g] + [h]⟩_{L²} = ∫_M f̄(zg + h) dµ = z ∫_M f̄g dµ + ∫_M f̄h dµ = z⟨[f]|[g]⟩_{L²} + ⟨[f]|[h]⟩_{L²}.
(iii) We have ⟨[f]|[f]⟩_{L²} = ∫_M f̄f dµ = ∫_M |f|² dµ ≥ 0 and
⟨[f]|[f]⟩_{L²} = 0 ⇔ ∫_M |f|² dµ = 0 ⇔ ‖[f]‖₂ = 0 ⇔ f =_{a.e.} 0.
Thus, [f] = 0 := [0].
The last part of the proof also shows that ⟨·|·⟩_{L²} induces the norm ‖·‖₂, with respect to which L² is a Banach space. Hence, (L², ⟨·|·⟩_{L²}) is a Hilbert space.
Remark 6.34. The inner product ⟨·|·⟩_{L²} on L²_C(N, P(N), µ) coincides with the inner product ⟨·|·⟩_{ℓ²} on ℓ²(N) defined in the section on separable Hilbert spaces.
7 Self-adjoint and essentially self-adjoint operators
While we have already given some of the following definitions in the introductory section
on the axioms of quantum mechanics, we reproduce them here for completeness.
Definition. Let A : D_A → H be a densely defined operator. The adjoint A* : D_{A*} → H of A is defined by
(i) D_{A*} := {ψ ∈ H | ∃ η ∈ H : ∀ α ∈ D_A : ⟨ψ|Aα⟩ = ⟨η|α⟩}
(ii) A*ψ := η.
For A*ψ := η to be well-defined, we need η to be unique. Suppose that both η and η̃ satisfy the defining condition. Then, for all α ∈ D_A,
⟨η − η̃|α⟩ = ⟨η|α⟩ − ⟨η̃|α⟩ = ⟨ψ|Aα⟩ − ⟨ψ|Aα⟩ = 0,
and since D_A is dense in H, it follows that η = η̃.
If A and B are densely defined and DA = DB , then the pointwise sum A + B is clearly
densely defined and hence, (A + B)∗ exists. However, we do not have (A + B)∗ = A∗ + B ∗
in general, unless one of A and B is bounded, but we do have the following result.
(A + z id_{D_A})* = A* + z̄ id_{D_{A*}}
for any z ∈ C.
The identity operator is usually suppressed in the notation, so that the above equation reads (A + z)* = A* + z̄.
Definition. The range of an operator A : D_A → H is ran(A) := {Aα | α ∈ D_A}. The range is also called the image, and im(A) and A(D_A) are alternative notations.
Proposition 7.3. An operator A : DA → H is
Proof. We have
Definition. Let A and B be operators on H. We say that B is an extension of A, and write A ⊆ B, if
(i) D_A ⊆ D_B
(ii) ∀ α ∈ D_A : Aα = Bα.
Proposition 7.6. If A and B are densely defined and A ⊆ B, then B* ⊆ A*.
Proof. Let ψ ∈ D_{B*} and set η := B*ψ, so that
∀ β ∈ D_B : ⟨ψ|Bβ⟩ = ⟨η|β⟩.
Since D_A ⊆ D_B and A = B on D_A, we also have ⟨ψ|Aα⟩ = ⟨η|α⟩ for all α ∈ D_A, so ψ ∈ D_{A*} and A*ψ = η = B*ψ.
Definition. A densely defined operator A : D_A → H is called symmetric if
∀ α, β ∈ D_A : ⟨α|Aβ⟩ = ⟨Aα|β⟩.
Remark 7.7. In the physics literature, symmetric operators are usually referred to as Hermitian operators. However, this notion is then confused with that of self-adjointness
when physicists say that observables in quantum mechanics correspond to Hermitian op-
erators, which is not the case. On the other hand, if one decides to use Hermitian as a
synonym of self-adjoint, it is then not true that all symmetric operators are Hermitian. In
order to prevent confusion, we will avoid the term Hermitian altogether.
Proposition 7.8. If A is symmetric, then A ⊆ A*.
Proof. Let ψ ∈ D_A and set η := Aψ. Then, by symmetry,
∀ α ∈ D_A : ⟨ψ|Aα⟩ = ⟨Aψ|α⟩ = ⟨η|α⟩,
so ψ ∈ D_{A*} and A*ψ = Aψ. Hence, D_A ⊆ D_{A*} and A* agrees with A on D_A.
Definition. A densely defined operator A is called self-adjoint if A = A*, i.e.
(i) D_A = D_{A*}
(ii) ∀ α ∈ D_A : Aα = A*α.
Remark 7.9 . Observe that any self-adjoint operator is also symmetric, but a symmetric
operator need not be self-adjoint.
Proposition 7.10. A self-adjoint operator A is maximal with respect to self-adjoint extension: if B is a self-adjoint extension of A, then B = A.
Proof. Since A ⊆ B, we have B* ⊆ A* by Proposition 7.6, and thus
A ⊆ B = B* ⊆ A* = A
and hence, B = A.
In fact, self-adjoint operators are maximal even with respect to symmetric extension,
for we would have B ⊆ B ∗ instead of B = B ∗ in the above equation.
Remark 7.11 . Note that we have used the overline notation in several contexts with different
meanings. When applied to complex numbers, it denotes complex conjugation. When
applied to subsets of a topological space, it denotes their topological closure. Finally, when
applied to (closable) operators, it denotes their closure as defined above.
Remark. If A is symmetric, then A* is densely defined: since A ⊆ A*, we have D_A ⊆ D_{A*} and hence
H = D_A‾ ⊆ D_{A*}‾ ⊆ H.
Hence, D_{A*}‾ = H, so that A** exists.
Note carefully that the adjoint of a symmetric operator need not be symmetric. In
particular, we cannot conclude that A∗ ⊆ A∗∗ . In fact, the reversed inclusion holds.
Proposition 7.13. If A is symmetric, then A∗∗ ⊆ A∗ .
Proof. Since A is symmetric, we have A ⊆ A∗ . Hence, A∗∗ ⊆ A∗ by Proposition 7.6.
Theorem. Let A be a densely defined operator. Then, A* is closed.¹²
Proof. Let H and K be two Hilbert spaces and let A : H → K be densely defined. We
define the graph of A as
Γ(A) := {(h, Ah) | h ∈ D_A}.
In this context, we say A is closed if and only if Γ(A) is closed w.r.t. the product topology
on H ⊕ K. Next define the operator
J : H⊕K →H⊕K
h ⊕ k 7→ (−k) ⊕ h.
We now wish to show that Γ(A*) = [J(Γ(A))]⊥; since orthogonal complements are always closed, this gives us that A* is closed.
¹² Many thanks to Alfredo Sepulveda-Ximenez for providing a brilliant answer to this on Quora.
(⊆) Let x ∈ D_A and y ∈ D_{A*}. We have y ⊕ A*(y) ∈ Γ(A*), and
⟨y ⊕ A*(y), J(x ⊕ A(x))⟩ = ⟨y ⊕ A*(y), −A(x) ⊕ x⟩ = −⟨y, A(x)⟩ + ⟨A*(y), x⟩ = 0
by the definition of the adjoint, so Γ(A*) ⊆ [J(Γ(A))]⊥.
(⊇) Let y ⊕ z ∈ [J(Γ(A))]⊥. Then, for all x ∈ D_A,
⟨y ⊕ z, −A(x) ⊕ x⟩ = 0, ∴ ⟨y, A(x)⟩ = ⟨z, x⟩,
which, from the definition of the adjoint, tells us that y ∈ D_{A*} and z = A*(y), and so y ⊕ z ∈ Γ(A*) and [J(Γ(A))]⊥ ⊆ Γ(A*).
Theorem 7.19. If A is essentially self-adjoint, then there exists a unique self-adjoint extension of A, namely its closure A‾ = A**.
Remark 7.20. One may get the feeling at this point that checking for essential self-adjointness of an operator A, i.e. checking that A** = A***, is hardly easier than checking whether A is self-adjoint, that is, whether A = A*. However, this is not so. While we will show below that there is a sufficient criterion for self-adjointness which does not require calculating the adjoint, we will see that there is, in fact, a necessary and sufficient criterion to check for essential self-adjointness of an operator without calculating a single adjoint.
Remark 7.21 . If a symmetric operator A fails to even be essentially self-adjoint, then there
is either no self-adjoint extension of A or there are several.
Definition. Let A be a densely defined operator. The defect indices of A are
d₊ := dim ker(A* − i),  d₋ := dim ker(A* + i).
Theorem. A symmetric operator A is self-adjoint if ran(A + z) = H = ran(A + z̄) for some z ∈ C.
Proof (sketch). Let ψ ∈ D_{A*}; it suffices to show that ψ ∈ D_A. Certainly,
A*ψ + zψ ∈ H.
Now suppose that z satisfies the hypothesis of the theorem. Then, as ran(A + z) = H,
∃ α ∈ D_A : A*ψ + zψ = (A + z)α.
Using ran(A + z̄) = H and the symmetry of A, one then shows that
∀ ϕ ∈ H : ⟨ψ|ϕ⟩ = ⟨α|ϕ⟩,
so that ψ = α ∈ D_A. Hence, A is self-adjoint.
Theorem 7.25. A symmetric operator A is essentially self-adjoint if, and only if,
ran(A + z)‾ = H = ran(A + z̄)‾
for some z ∈ C \ R.
The following criterion for essential self-adjointness, which does require the calculation of A*, is equivalent to the previous result and, in some situations, can be easier to check.
Theorem 7.26. A symmetric operator A is essentially self-adjoint if, and only if,
ker(A* + z) = {0} = ker(A* + z̄)
for some z ∈ C \ R.
Proof. We show that this is equivalent to the previous condition. Recall that if M is a linear subspace of H, then M⊥ is closed and hence, M⊥⊥ = M‾ (Proposition 4.7). Thus, by Proposition 7.5, we have
ran(A + z)‾ = ker(A* + z̄)⊥
and, similarly,
ran(A + z̄)‾ = ker(A* + z)⊥.
Since, for a subspace N of H, we have N⊥ = H if, and only if, N = {0}, the above condition is equivalent to that of Theorem 7.25.
8 Spectra and perturbation theory
We will now focus on the spectra of operators and on the decomposition of the spectra of
self-adjoint operators. The significance of spectra is that the axioms of quantum mechanics
prescribe that the possible measurement values of an observable (which is, in particular, a
self-adjoint operator) are those in the so-called spectrum of the operator.
A common task in almost any quantum mechanical problem that you might wish to
solve is to determine the spectrum of some observable. This is usually the Hamiltonian,
or energy operator, since the time evolution of a quantum system is governed by the expo-
nential of the Hamiltonian, which is more practically determined by first determining its
spectrum.
More often than not, it is not possible to determine the spectrum of an operator exactly
(i.e. analytically). One then resorts to perturbation theory which consists in expressing the
operator whose spectrum we want to determine as the sum of an operator whose spectrum
can be determined analytically and another whose contribution is “small” in some sense to
be made precise.
Remark 8.3. If H is finite-dimensional, then the converse of the above corollary holds and hence, the spectrum coincides with the set of eigenvalues. However, in infinite-dimensional
spaces, the spectrum of an operator contains more than just the eigenvalues of the operator.
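In finite dimensions one can see this coincidence concretely: A − λ·id fails to be invertible exactly when det(A − λ·id) = 0. The sketch below (ours, not from the lectures) checks this for a 2×2 real symmetric (hence self-adjoint) matrix.

```python
# Spectrum = eigenvalues in finite dimensions, via the characteristic polynomial.
import math

def eigenvalues_2x2_symmetric(a, b, c):
    """Eigenvalues of [[a, b], [b, c]], always real for symmetric matrices."""
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

lam1, lam2 = eigenvalues_2x2_symmetric(2.0, 1.0, 2.0)
assert (lam1, lam2) == (1.0, 3.0)
# det(A - lam*id) vanishes precisely on the spectrum:
for lam in (lam1, lam2):
    assert abs((2.0 - lam) * (2.0 - lam) - 1.0) < 1e-12
```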
These form a partition of σ(A), i.e. they are pairwise disjoint and their union is σ(A).
Clearly, σ_p(A) ∪ σ_c(A) = σ(A) but, since the intersection σ_p(A) ∩ σ_c(A) is not necessarily empty, the point and continuous spectra do not form a partition of the spectrum in general.
Thus, we have
(λ − λ̄)⟨ψ|ψ⟩ = 0
and since ψ ≠ 0, it follows that λ = λ̄. That is, λ ∈ R.
Theorem 8.5. If A is a self-adjoint operator, then the elements of σp (A) are precisely the
eigenvalues of A.
and hence
ran(A − λ)‾ = ran(A − λ)⊥⊥ = {0}⊥ = H.
Therefore, λ ∉ σ_p(A).
Definition. Let A be an operator and let λ be an eigenvalue of A.
(i) The eigenspace of A associated to λ is Eig_A(λ) := {ψ ∈ D_A | Aψ = λψ}.
(ii) The eigenvalue λ is said to be non-degenerate if dim Eig_A(λ) = 1, and degenerate if dim Eig_A(λ) > 1.
Remark 8.7 . Of course, it is possible that dim EigA (λ) = ∞ in general. However, in this
section, we will only consider operators whose eigenspaces are finite-dimensional.
Lemma 8.8. Eigenvectors associated to distinct eigenvalues of a self-adjoint operator are
orthogonal.
A. Unperturbed spectrum
Let H0 be a self-adjoint operator whose eigenvalues and eigenvectors are known and satisfy
H0 enδ = hn enδ ,
where
• the index δ varies over the range 1, 2, . . . , d(n), with d(n) := dim EigH0 (hn )
Note that, since we are assuming that all eigenspaces of H0 are finite-dimensional, EigH0 (hn )
is a sub-Hilbert space of H and hence, for each fixed n, we can choose the e_{nδ} so that
⟨e_{nδ}|e_{nγ}⟩ = δ_{δγ}.
In fact, thanks to our previous lemma, we can choose the eigenvectors of H0 so that
⟨e_{nδ}|e_{mγ}⟩ = δ_{nm} δ_{δγ}.
B. Perturbation
Let W be a further operator on H and, for λ ∈ (−ε, ε) with some ε > 0, define the perturbed operator
H_λ := H0 + λW.
Further assume that Hλ is self-adjoint for all λ ∈ (−ε, ε). Recall, however, that this
assumption does not force W to be self-adjoint.
We seek to understand the eigenvalue equation for H_λ,
H_λ e_{nδ}(λ) = h_{nδ}(λ) e_{nδ}(λ),
by exploiting the fact that it coincides with the eigenvalue equation for H0 when λ = 0.
In particular, we will be interested in the lifting of the degeneracy of hn (for some fixed
n) once the perturbation W is “switched on”, i.e. when λ 6= 0. Indeed, it is possible, for
instance, that while the two eigenvectors en1 and en2 are associated to the same (degenerate)
eigenvalue hn of H0 , the “perturbed” eigenvectors en1 (λ) and en2 (λ) may be associated to
different eigenvalues of H_λ. This is why we added a δ-index to the eigenvalue in the above equation. Of course, when λ = 0, we have h_{nδ}(λ) = h_n for all δ.
Remark 8.9 . Recall that the Big O notation is defined as follows. If f and g are functions
I ⊆ R → R and a ∈ I, then we write
f (x) = O(g(x)) as x → a
to mean
∃ k, M > 0 : ∀ x ∈ I : 0 < |x − a| < k ⇒ |f (x)| < M |g(x)|.
The qualifier “as x → a” can be omitted when the value of a is clear from the context. In
our expressions above, we obviously have “as λ → 0”.
Inserting the formal power series ansatz¹³ into these conditions yields
(i) Im⟨e_{nδ}|n_δ^{(k)}⟩ = 0 for k = 1, 2, . . .
(ii) 0 = 2λ Re⟨e_{nδ}|n_δ^{(1)}⟩ + λ²(2 Re⟨e_{nδ}|n_δ^{(2)}⟩ + ‖n_δ^{(1)}‖²) + O(λ³).
13
German for “educated guess”.
Since (ii) holds for all λ ∈ (−ε, ε), we must have

Re⟨enδ|η⁽¹⁾nδ⟩ = 0,   2 Re⟨enδ|η⁽²⁾nδ⟩ + ‖η⁽¹⁾nδ‖² = 0.

Since we know from (i) that Im⟨enδ|η⁽¹⁾nδ⟩ = 0 and Im⟨enδ|η⁽²⁾nδ⟩ = 0, we can conclude

⟨enδ|η⁽¹⁾nδ⟩ = 0,   ⟨enδ|η⁽²⁾nδ⟩ = −½ ‖η⁽¹⁾nδ‖².

That is, η⁽¹⁾nδ is orthogonal to enδ, and

η⁽²⁾nδ = −½ ‖η⁽¹⁾nδ‖² enδ + η̃⁽²⁾nδ

for some η̃⁽²⁾nδ orthogonal to enδ.
Inserting the ansatz into the eigenvalue equation for Hλ and comparing terms order by order in λ, we obtain

(H0 − hn) enδ = 0,
(H0 − hn) η⁽¹⁾nδ = −(W − θ⁽¹⁾nδ) enδ,
(H0 − hn) η⁽²⁾nδ = −(W − θ⁽¹⁾nδ) η⁽¹⁾nδ + θ⁽²⁾nδ enδ.

Of course, one may continue this expansion up to the desired order. Note that the zeroth order equation is just our unperturbed eigenvalue equation.
E. First-order correction
To extract information from the first-order equation, let us project both sides onto the
unperturbed eigenvectors enα (i.e. apply henα | · i to both sides). This yields
⟨enα|(H0 − hn) η⁽¹⁾nδ⟩ = −⟨enα|(W − θ⁽¹⁾nδ) enδ⟩.
By self-adjointness of H0 , we have
⟨enα|(H0 − hn) η⁽¹⁾nδ⟩ = ⟨(H0 − hn)* enα|η⁽¹⁾nδ⟩ = ⟨(H0 − hn) enα|η⁽¹⁾nδ⟩ = 0.
Therefore,
0 = −⟨enα|W enδ⟩ + ⟨enα|θ⁽¹⁾nδ enδ⟩ = −⟨enα|W enδ⟩ + θ⁽¹⁾nδ δαδ
and thus, the first-order eigenvalue correction is
θ⁽¹⁾nδ = ⟨enδ|W enδ⟩.
Note that the right-hand side of the first-order equation is now completely known and
hence, if H0 − hn were invertible, we could determine η⁽¹⁾nδ immediately. However, this is only
possible if the unperturbed eigenvalue hn is non-degenerate. More generally, we proceed as
follows. Let E := EigH0 (hn ). Then, we can rewrite the right-hand side of the first-order
equation as
−(W − θ⁽¹⁾nδ) enδ = −idH (W − θ⁽¹⁾nδ) enδ
 = −(PE + PE⊥)(W − θ⁽¹⁾nδ) enδ
 = −Σ_{β=1}^{d(n)} ⟨enβ|(W − θ⁽¹⁾nδ) enδ⟩ enβ − PE⊥ W enδ + θ⁽¹⁾nδ PE⊥ enδ
 = −PE⊥ W enδ,

since ⟨enβ|W enδ⟩ = θ⁽¹⁾nδ δβδ and PE⊥ enδ = 0,
so that we have (H0 − hn) η⁽¹⁾nδ ∈ E⊥. Note that the operator
PE⊥ ∘ (H0 − hn) : E⊥ → E⊥

is invertible (since hn is not an eigenvalue of H0 restricted to E⊥), so that the E⊥-part of the first-order equation is solved by

PE⊥ η⁽¹⁾nδ = −PE⊥ (H0 − hn)⁻¹ PE⊥ W enδ.
The “full” eigenvector correction η⁽¹⁾nδ is then given by

η⁽¹⁾nδ = (PE + PE⊥) η⁽¹⁾nδ = Σ_{β=1}^{d(n)} cδβ enβ − PE⊥ (H0 − hn)⁻¹ PE⊥ W enδ,
where the coefficients cδβ cannot be fully determined at this order in the perturbation.
What we do know is that our previous fixing of the phase and normalisation of the perturbed
eigenvectors implies that η⁽¹⁾nδ is orthogonal to enδ, and hence we must have cδδ = 0.
F. Second-order correction

Projecting both sides of the second-order equation onto enδ, the left-hand side vanishes by the self-adjointness of H0, and recalling that ⟨enδ|η⁽¹⁾nδ⟩ = 0 and ⟨enδ|enδ⟩ = 1, we have

θ⁽²⁾nδ = ⟨enδ|W η⁽¹⁾nδ⟩.
Plugging in our previous expression for η⁽¹⁾nδ yields

θ⁽²⁾nδ = ⟨enδ|W(Σ_{β=1}^{d(n)} cδβ enβ − PE⊥ (H0 − hn)⁻¹ PE⊥ W enδ)⟩
 = Σ_{β=1}^{d(n)} cδβ ⟨enδ|W enβ⟩ − ⟨enδ|W PE⊥ (H0 − hn)⁻¹ PE⊥ W enδ⟩
 = Σ_{β=1}^{d(n)} cδβ θ⁽¹⁾nβ δδβ − ⟨enδ|W PE⊥ (H0 − hn)⁻¹ PE⊥ W enδ⟩
 = −⟨enδ|W PE⊥ (H0 − hn)⁻¹ PE⊥ W enδ⟩
since cδδ = 0. One can show that the eigenvectors of H0 (or any other self-adjoint operator)
form an orthonormal basis of H. In particular, this implies that we can decompose the
identity operator on H as
idH = Σ_{n=1}^{∞} Σ_{β=1}^{d(n)} ⟨enβ| · ⟩ enβ.
By inserting this appropriately into our previous expression for θ⁽²⁾nδ, we obtain

θ⁽²⁾nδ = − Σ_{m=1, m≠n}^{∞} Σ_{β=1}^{d(m)} |⟨emβ|W enδ⟩|² / (hm − hn).
Putting everything together, we have the following second-order expansion of the perturbed eigenvalues:

hnδ(λ) = hn + λ θ⁽¹⁾nδ + λ² θ⁽²⁾nδ + O(λ³)
 = hn + λ ⟨enδ|W enδ⟩ − λ² Σ_{m=1, m≠n}^{∞} Σ_{β=1}^{d(m)} |⟨emβ|W enδ⟩|² / (hm − hn) + O(λ³).
Remark 8.10 . Note that, while the first-order correction to the perturbed nδ eigenvalue
only depends on the unperturbed nδ eigenvalue and eigenvector, the second-order correction
draws information from all the unperturbed eigenvalues and eigenvectors. Hence, if we try
to approximate a relativistic system as a perturbation of a non-relativistic system, then the
second-order corrections may be unreliable.
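The expansion above can be checked numerically in finite dimensions. Below is a small NumPy sketch of my own (it assumes a non-degenerate H0, so d(n) = 1 and the coefficients cδβ play no role), comparing the second-order formula against exact diagonalisation:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.0, 1.0, 2.5, 4.0])        # non-degenerate unperturbed eigenvalues
H0 = np.diag(h)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
W = (M + M.conj().T) / 2                  # a self-adjoint perturbation
lam = 1e-3
n = 1                                     # track the eigenvalue h[1]

# First- and second-order corrections from the formulas above
theta1 = W[n, n].real                     # <e_n | W e_n>
theta2 = -sum(abs(W[m, n]) ** 2 / (h[m] - h[n]) for m in range(4) if m != n)
approx = h[n] + lam * theta1 + lam**2 * theta2

exact = np.linalg.eigvalsh(H0 + lam * W)[n]   # eigvalsh sorts ascending
error = abs(exact - approx)
```

Since λ is small compared with the spectral gaps, the error should be of order λ³.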
9 Case study: momentum operator
We will now put the machinery developed so far to work by considering the so-called
momentum operator for two cases: a compact interval [a, b] ⊂ R, and a circle. As the
name suggests, this operator is meant to be the QM observable whose eigenvalues are the
momenta of the system. It is clear, therefore, that we require (recall H = L2 (Rd ) up to
unitary equivalence)
P : DP → L2 (Rd )
to be self adjoint.
We will specialise to the case of d = 1 in order to simplify things, while also demon-
strating the main ideas. The concepts can be extended to higher values of d. We will also
set ℏ = 1 throughout this section.
We thus consider the operator

P : DP → L²(R)
ψ ↦ (−i)ψ′,

where ψ′ denotes the derivative of ψ and DP is a domain yet to be chosen.
The first obvious question is: why does this operator deserve its name? That is, how is it related to what we know classically as momentum? The answer, unfortunately, cannot yet be provided in full detail, as it requires the spectral theorem and the Stone–von Neumann theorem; the details will become clear once we discuss those results. For now we must just take it on faith.¹⁴
There is yet another important question we must ask: how do we choose DP ? The
immediate response might be ‘such that the derivative is square integrable.’ However, this
is not good enough. We also require that P be self adjoint and, as we have seen previously,
the concept of self adjointness depends heavily on the domains considered.
Luckily, not all hope is lost. The method will be as follows: guess a reasonable DP and
then search for a self adjoint extension, should one exist. Before doing so, though, we will
first introduce some new definitions that will prove invaluable.
(i) The space of once-continuously differentiable functions over some interval I, denoted C¹(I),
As we shall see, these spaces are related by a chain of inclusions, and so they will provide a convenient way to compare the domains DP, DP*, etc., to test for self adjointness.
Corollary 9.1. Given an absolutely continuous function ψ (with integrable density ρ, i.e. ψ(x) = ψ(a) + ∫_a^x ρ(y) dy), it is clear that ρ =a.e. ψ′, where the almost-everywhere condition comes from the fact that a Lebesgue integral does not distinguish two functions that differ on a set of measure zero.
Corollary 9.4. Note that for any weakly differentiable function the integration by parts
result,

∫_Ω ψ(x) ϕ′(x) dx = − ∫_Ω ψ′(x) ϕ(x) dx,
holds for all ϕ ∈ Cc∞ (Ω).
¹⁵The subscript indicates that ϕ vanishes at the limits of integration.
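The defining integration-by-parts identity can be probed numerically. A small sketch of my own, using the standard smooth bump function as ϕ and an arbitrary smooth ψ:

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal rule (avoids version differences around np.trapz)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# A smooth bump function compactly supported in (-1, 1)
x = np.linspace(-0.999, 0.999, 200001)
phi = np.exp(-1.0 / (1.0 - x**2))
dphi = phi * (-2.0 * x / (1.0 - x**2) ** 2)   # analytic derivative

psi, dpsi = x**2, 2.0 * x                      # test function and its derivative

lhs = trap(psi * dphi, x)                      # ∫ ψ ϕ'
rhs = -trap(dpsi * phi, x)                     # −∫ ψ' ϕ
diff = abs(lhs - rhs)
```

Because ϕ and all of its derivatives vanish at the endpoints, no boundary term appears and the two quadratures agree to numerical precision.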
Corollary 9.5. Given that ψ ∈ C k (Ω), ϕ ∈ Cc∞ (Ω), we can show by induction that
∫_Ω ψ(x) ϕ^(α)(x) = (−1)^α ∫_Ω ψ^(α)(x) ϕ(x),
Remark 9.6 . In the above Corollary we have used the fact that we are only considering
one dimensional problems here. The expression is much the same for higher dimensional
problems, however one has to take into account the different derivative directions.
Remark 9.7 . Sobolev spaces can be made into Banach spaces by equipping them with a
norm and H k (Ω) can be made into a Hilbert space.
Proof. See Theorem 7.13 in ‘A First Course in Sobolev Spaces’ by Giovanni Leoni.
Following the strategy above, on the interval I := [0, 2π] we first guess the dense domain DP := {ψ ∈ C¹(I) | ψ(0) = 0 = ψ(2π)} and define

P : DP → L²(I)
ψ ↦ (−i)ψ′.
The question still remains, though, as to whether P is self adjoint. Recalling the results
of Lecture 7, it is first instructive to see if P is symmetric.
A. Symmetric?
Let ψ, ϕ ∈ DP; then

⟨ψ, Pϕ⟩ = ∫₀^{2π} dx ψ(x)* (−i)ϕ′(x)
 = ∫₀^{2π} dx ((−i)ψ′(x))* ϕ(x) − i [ψ(x)* ϕ(x)]₀^{2π}
 = ∫₀^{2π} dx ((−i)ψ′(x))* ϕ(x)
 = ⟨Pψ, ϕ⟩,

where the boundary term vanishes because ψ and ϕ vanish at 0 and 2π; hence P is symmetric.
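This symmetry can be verified with a quick quadrature check (my addition; ψ and ϕ are arbitrary C¹ functions chosen to vanish at 0 and 2π):

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 200001)

def trap(y):
    """Trapezoidal rule on the fixed grid x."""
    return complex(np.sum((y[1:] + y[:-1]) * (x[1] - x[0])) / 2)

# Two functions in the guessed domain D_P: C^1 with value 0 at both endpoints
psi = np.sin(x) * np.exp(1j * x)
dpsi = (np.cos(x) + 1j * np.sin(x)) * np.exp(1j * x)   # analytic derivative
phi = (1 + 0.5j) * np.sin(2 * x)
dphi = (1 + 0.5j) * 2 * np.cos(2 * x)

lhs = trap(np.conj(psi) * (-1j) * dphi)     # <psi, P phi>
rhs = trap(np.conj(-1j * dpsi) * phi)       # <P psi, phi>
```

The two inner products agree because the boundary term in the integration by parts vanishes.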
B. Self Adjoint?
From above we know that P ⊆ P ∗ , and so DP ⊆ DP ∗ , so we need to ask the question of how
P ∗ behaves outside the domain DP . The obvious answer is to just extend the definition to
be
P ∗ : DP ∗ → L2 (I)
ψ 7→ (−i)ψ 0 .
Note that the ψ here is not necessarily the same as the ψ above; the same symbol is used, and the context tells us where it lives.
All that is left to check is the domain DP*. From the definition of the adjoint we have, for all ϕ ∈ DP,

⟨ψ, Pϕ⟩ = ⟨η, ϕ⟩

with η := P*ψ. Before proceeding further with the calculation, first introduce a function
N : I → C such that η =a.e. N 0 . Note N is Lebesgue integrable and that the almost
everywhere condition is sufficient as η appears in a Lebesgue integral. Therefore we have,
∫₀^{2π} dx ψ(x)* (−i)ϕ′(x) = ∫₀^{2π} dx N′(x)* ϕ(x)

∫₀^{2π} dx (ψ(x)* (−i)ϕ′(x) + N(x)* ϕ′(x)) = [N(x)* ϕ(x)]₀^{2π} = 0

−i ∫₀^{2π} dx (ψ(x) − iN(x))* ϕ′(x) = 0

⟨ψ − iN, ϕ′⟩ = 0,
Lemma 9.9. We have {ϕ′ | ϕ ∈ DP} = {ξ ∈ C⁰(I) | ∫₀^{2π} ξ(x) dx = 0}.

Proof. Let A := {ϕ′ | ϕ ∈ DP} and B := {ξ ∈ C⁰(I) | ∫₀^{2π} ξ(x) dx = 0}.
Now consider ϕ′ ∈ A for some ϕ ∈ DP. Then

∫₀^{2π} ϕ′(x) dx = [ϕ(x)]₀^{2π} = 0,
so clearly ϕ′ ∈ B, and hence A ⊆ B.
Now consider a ξ ∈ B and define
Z x
ϕ(x) := ξ(y)dy.
0
Then, since ξ ∈ C 0 (I) it follows that ϕ ∈ C 1 (I). It also follows that ϕ(0) = 0 = ϕ(2π) and
so ϕ0 ∈ A and B ⊆ A.
Lemma 9.10. Let {1} denote the set consisting of the element 1 ∈ L²(I) with 1(x) = 1 for all x ∈ I. Then, with A := {ϕ′ | ϕ ∈ DP} as above and the closure taken in L²(I),

Ā = {1}⊥.
Proof. From the previous Lemma we have

Ā = B̄ = {ξ ∈ C⁰(I) | ⟨1, ξ⟩ = 0}‾ = {ξ ∈ L²(I) | ⟨1, ξ⟩ = 0} = {1}⊥,

where the fact that C⁰(I) is dense in L²(I) is used to go from the second to the third expression.
ψ − iN ∈ A⊥ = (A⊥)⊥⊥
 = (A⊥⊥)⊥
 = Ā⊥
 = ({1}⊥)⊥
 = {1}⊥⊥
 = span{1}
 = {C : I → C | x ↦ c for some c ∈ C},
and so ψ =a.e. iN + c for some constant c. Since N is absolutely continuous, so is (a representative of) ψ, and hence

DP* ⊆ AC(I).
Now recalling

P* : DP* → L²(I)
ψ ↦ (−i)ψ′,
and using Proposition 9.8, we have
DP* ⊆ H¹(I).
Finally, we see that all of the integration by parts results above were of the form

∫_I ψ(x) ϕ′(x) dx = − ∫_I ψ′(x) ϕ(x) dx

for arbitrary ϕ ∈ Cc¹(I). Now, since Cc∞(I) ⊂ Cc¹(I), the integrals also hold for any ϕ ∈ Cc∞(I), but this is just the condition for the weak derivative, and so we see that

DP* = H¹(I),
and
DP ⊊ DP*,
so P is not self adjoint.
C. Essentially Self Adjoint?

To see whether P is at least essentially self adjoint, we check whether P* is symmetric, i.e. whether for all ψ, ϕ ∈ DP*

⟨ψ, P*ϕ⟩ = ⟨P*ψ, ϕ⟩.

Writing this as integrals we have
∫₀^{2π} dx ψ(x)* (−i)ϕ′(x) = ∫₀^{2π} dx ((−i)ψ′(x))* ϕ(x)

−i ∫₀^{2π} dx (ψ(x)* ϕ′(x) + ψ′(x)* ϕ(x)) = 0

−i [ψ(x)* ϕ(x)]₀^{2π} = 0

0 = ψ(2π)* ϕ(2π) − ψ(0)* ϕ(0),
where again integration by parts has been used. We need to be careful in what conclusions
we draw from this final statement, though. ϕ ∈ DP ∗ = H 1 (I), which places no restrictions
on the values of ϕ on the boundary, nor does it make any conditions between the two values
ϕ(0) and ϕ(2π) — they are independently arbitrary. We must, therefore, conclude that the boundary term does not vanish for all ψ, ϕ ∈ DP*, so P* is not symmetric and hence P is not essentially self adjoint.
D. Defect Indices
We only have one tool left: check the defect indices to see whether a self adjoint extension of P even exists. Recall that the defect indices are

d± := dim ker(P* ∓ i),

and that a symmetric operator has a (not necessarily unique) self adjoint extension if d+ = d−.
We therefore need to determine how many ψ ∈ DP ∗ lie in ker(P ∗ ∓ i):
(P* ∓ i)ψ = 0
−iψ′ ∓ iψ = 0
ψ′ = ∓ψ
ψ(x) = a± e^{∓x}
for a+, a− ∈ C. There is only one solution for each, and so d+ = 1 = d−. We therefore know that there exists at least one self adjoint extension of P,¹⁶ however we don't know the form of any of them.¹⁷
Remark 9.11. If instead of a compact interval we take a half line I = [a, ∞), then d+ ≠ d−, and so there is no self adjoint extension of P, meaning there is no notion of a QM momentum in this case. Note, however, that people often talk about free particles along an infinite line in QM, but they always require the wave function ψ to vanish at ±∞. This is effectively the same as taking a large, yet finite, compact interval I = [a, b].
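A quick numeric sanity check of the kernel computation (my addition; the derivatives are supplied analytically):

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 2001)

# ker(P* - i): -i psi' - i psi = 0  =>  psi' = -psi  =>  psi = e^{-x}
psi_p, dpsi_p = np.exp(-x), -np.exp(-x)
res_p = float(np.max(np.abs(-1j * dpsi_p - 1j * psi_p)))

# ker(P* + i): -i psi' + i psi = 0  =>  psi' = +psi  =>  psi = e^{+x}
psi_m, dpsi_m = np.exp(x), np.exp(x)
res_m = float(np.max(np.abs(-1j * dpsi_m + 1j * psi_m)))

# On the compact interval both candidates are square integrable, so d+ = d- = 1.
norm_p = float(np.sqrt(np.sum((psi_p[1:]**2 + psi_p[:-1]**2) / 2 * np.diff(x))))
norm_m = float(np.sqrt(np.sum((psi_m[1:]**2 + psi_m[:-1]**2) / 2 * np.diff(x))))
```

On the half line [a, ∞) only e^{−x} would remain square integrable, which is exactly why d+ ≠ d− there.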
We now turn to the circle. Here we guess the domain

DPc := {ψ ∈ C¹(I) | ψ(0) = ψ(2π)},

which is exactly the same as before apart from the fact that we do not require ψ to vanish at the boundary, only to take the same value at both ends. We still have

Pc : DPc → L²(I)
ψ ↦ (−i)ψ′,

and it follows that PI ⊊ Pc, where the subscripts I and c denote interval and circle respectively. In other words, Pc is an extension of PI.
A. Symmetric?
Repeating the steps from above, it is clear that Pc is still symmetric. Note, however, that it is symmetric for a different reason: before, the boundary term [ψ(x)* ϕ(x)]₀^{2π} vanished because both ψ and ϕ vanished at the limits, whereas now it vanishes simply because ψ(2π) = ψ(0) and likewise for ϕ.
¹⁶Whew!
¹⁷Not whew!
B. Self Adjoint?
As before we have Pc ⊆ Pc*. Moreover, recalling that taking the adjoint flips inclusions, PI ⊆ Pc gives Pc* ⊆ PI*, and therefore DPc* ⊆ H¹(I). We can, therefore, replace the unknown Pc* with the known PI* in the final part of the above, i.e.

⟨Pc*ψ, ϕ⟩ = ⟨PI*ψ, ϕ⟩.
Then, following exactly the same steps as before, we arrive at

0 = i [ψ(x)* ϕ(x)]₀^{2π} = i (ψ(2π) − ψ(0))* ϕ(0)
⟹ ψ(2π) = ψ(0),

giving us¹⁸

DPc* = {ψ ∈ H¹(I) | ψ(2π) = ψ(0)} =: H¹cyc(I),
so DPc ⊊ DPc*, and therefore Pc is not self adjoint.
C. Essentially Self Adjoint?

Repeating the symmetry test for Pc* (now with ψ, ϕ ∈ DPc*) results in

0 = i [ψ(x)* ϕ(x)]₀^{2π} = i (ψ(2π) − ψ(0))* ϕ(0)
⟹ ψ(2π) = ψ(0),

which holds automatically on DPc*, and so

DP̄c = H¹cyc(I) = DPc*,

so we conclude that Pc is essentially self adjoint and that P̄c is the unique self adjoint extension.
To summarise, we have found the momentum operator on the circle:

PS¹ : H¹cyc(I) → L²(I)
ψ ↦ (−i)ψ′.
¹⁸Note that it is fine to extend the domain to all of H¹(I) provided we impose the boundary condition above.
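As a sanity check, one can discretise Pc with periodic boundary conditions and confirm that the result is Hermitian with (approximately) integer eigenvalues. A NumPy sketch of my own, not from the lectures:

```python
import numpy as np

N = 64
h = 2 * np.pi / N

# Central-difference discretisation of P = -i d/dx with periodic boundary
# conditions psi(0) = psi(2*pi) -- a crude finite stand-in for H^1_cyc.
P = np.zeros((N, N), dtype=complex)
for k in range(N):
    P[k, (k + 1) % N] = -1j / (2 * h)
    P[k, (k - 1) % N] = +1j / (2 * h)

hermitian = bool(np.allclose(P, P.conj().T))
eigs = np.linalg.eigvals(P)
max_imag = float(np.max(np.abs(eigs.imag)))     # real spectrum, as expected

# e^{i n x} is an (approximate) eigenvector with eigenvalue close to n
x = h * np.arange(N)
n = 3
v = np.exp(1j * n * x)
approx_eig = (P @ v / v)[0]                      # equals sin(n h)/h ~ n
```

The discrete eigenvalue sin(nh)/h converges to the exact integer eigenvalue n as the grid is refined.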
10 Inverse spectral theorem
This section is devoted to the development of all the notions and results necessary to
understand and prove the spectral theorem, stated below.
While useful in theory, existence results are often of limited use in practice since they
usually only tell us that something exists, and not how to construct it. However, we should
note here that the proof of the spectral theorem is, in fact, constructive in nature. Hence,
given any self-adjoint operator A, we will be able to explicitly determine its associated
projection-valued measure PA along the following steps.
(i) For each ψ ∈ H, construct the real-valued Borel measure µAψ : σ(OR) → R given by

µAψ((−∞, λ]) := lim_{δ→0⁺} lim_{ε→0⁺} (1/π) ∫_{−∞}^{λ+δ} dt Im⟨ψ|RA(t + iε)ψ⟩,
where RA : ρ(A) → L(H) is the resolvent map of A. This is known as the Stieltjes inversion formula. Note that while not every element of σ(OR) is of the form (−∞, λ], such Borel sets do generate the entire σ(OR) via unions, intersections and set differences. Hence, the value of µAψ(Ω) for general Ω ∈ σ(OR) can be determined by applying the corresponding formulae for measures, namely σ-additivity, continuity from above and the measure of set differences.
(ii) For all ψ, ϕ ∈ H, define the complex-valued Borel measure µAψ,ϕ : σ(OR) → C by

µAψ,ϕ(Ω) := ¼(µAψ+ϕ(Ω) − µAψ−ϕ(Ω) + iµAψ−iϕ(Ω) − iµAψ+iϕ(Ω)).
(iii) Define the projection-valued measure PA : σ(OR ) → L(H) by requiring PA (Ω), for
each Ω ∈ σ(OR ), to be the unique map in L(H) satisfying
∀ ψ, ϕ ∈ H : ⟨ψ|PA(Ω)ϕ⟩ = ∫_R χΩ dµAψ,ϕ.
We will now make all the notions and constructions used herein precise. In fact, we
will present the relevant definitions and results by taking the inverse route, starting with
projection-valued measures and arriving at their associated self-adjoint operators, obtaining
(and proving) what we will call the inverse spectral theorem.
10.1 Projection-valued measures
Projection-valued measures are, unsurprisingly, objects sharing characteristics of both measures and projection operators.
(iv) For any pairwise disjoint sequence {Ωn }n∈N in σ(OR ) and any ψ ∈ H,
Σ_{n=0}^{∞} P(Ωn)ψ = P(⋃_{n=0}^{∞} Ωn)ψ.
Remark 10.2. Note that in the final condition we included the vector ψ ∈ H because, for a countably infinite family, we need to check convergence of the sum, which involves a norm. Without the ψ we would need to use the norm on L(H), which may prove difficult; by including the ψ we can work with the norm on H itself.
Lemma 10.3. Let P : σ(OR ) → L(H) be a projection-valued measure. Then, for any
Ω, Ω1 , Ω2 ∈ σ(OR ),
(i)
P(∅)ψ = P(∅ ∪ ∅)ψ = (P(∅) + P(∅))ψ = 2P(∅)ψ
∴ P(∅)ψ = 0H ⟹ P(∅) = 0.

(ii)
idH = P(R) = P((R \ Ω) ∪ Ω) = P(R \ Ω) + P(Ω)
∴ P(R \ Ω) = idH − P(Ω).

(iii)
P(Ω1) = P((Ω1 ∩ Ω2) ∪ (Ω1 \ Ω2)) = P(Ω1 ∩ Ω2) + P(Ω1 \ Ω2),
and similarly for P(Ω2). Also
P(Ω1 \ Ω2) + P(Ω2 \ Ω1) + P(Ω1 ∩ Ω2) = P((Ω1 \ Ω2) ∪ (Ω2 \ Ω1) ∪ (Ω1 ∩ Ω2))
 = P(Ω1 ∪ Ω2).

(iv) First consider the case Ω1 ∩ Ω2 = ∅; then, using (ii) from the definition, P(Ω1) ∘ P(Ω2) = 0. In the general case we then have
P(Ω1) ∘ P(Ω2) = [P(Ω1 \ Ω2) + P(Ω1 ∩ Ω2)] ∘ [P(Ω2 \ Ω1) + P(Ω1 ∩ Ω2)]
 = P(Ω1 ∩ Ω2) ∘ P(Ω1 ∩ Ω2)
 = P(Ω1 ∩ Ω2),
where we have made use of the fact that (Ω1 \ Ω2) ∩ (Ω2 \ Ω1) = ∅, etc.

(v) If Ω1 ⊂ Ω2, we have P(Ω2) = P(Ω1) + P(Ω2 \ Ω1), which along with the fact that P(Ω2 \ Ω1) ≥ 0 gives the result.
Note that most of these properties make sense simply by thinking of P(Ω) as the area of the set Ω ∈ σ(OR). For example, P(Ω1) = P(Ω1 \ Ω2) + P(Ω1 ∩ Ω2) corresponds to splitting the region Ω1 into the part outside Ω2 and the overlap with Ω2. (The original notes illustrate this with a Venn diagram of Ω1 and Ω2.)
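In finite dimensions, a PVM is just the family of spectral projectors of a Hermitian matrix, and the properties above can be checked directly (a NumPy sketch, my addition):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                      # real symmetric, i.e. self-adjoint
eigvals, V = np.linalg.eigh(A)

def P(omega):
    """P(Omega): sum of rank-one projectors for eigenvalues lying in Omega."""
    idx = [i for i, lam in enumerate(eigvals) if omega(lam)]
    return sum(np.outer(V[:, i], V[:, i]) for i in idx) if idx else np.zeros((5, 5))

omega1 = lambda lam: lam < 0.5
omega2 = lambda lam: lam > -0.5
both = lambda lam: omega1(lam) and omega2(lam)

prod_ok = bool(np.allclose(P(omega1) @ P(omega2), P(both)))   # property (iv)
unit_ok = bool(np.allclose(P(lambda lam: True), np.eye(5)))   # P(R) = id
idem_ok = bool(np.allclose(P(omega1) @ P(omega1), P(omega1))) # projector
```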
Remark 10.4. As noted before, it suffices to know P((−∞, λ]) for all λ ∈ R.
Remark 10.5. As we shall see, if f is bounded (in the sense that there exists an a ∈ R such that |f(x)| < a for all x ∈ R), then the operator ∫_R f dP will have nice properties.
For example it will be linear in f , i.e.
∫_R (αf + g) dP = α ∫_R f dP + ∫_R g dP,
for α ∈ C. However, if f is unbounded, domain issues will destroy the equality above. It is
important to note, though, that it is exactly the latter case that we need, since

f = idR : R ↪ C
x ↦ x

in the spectral theorem, which is clearly unbounded.
¹⁹Recall footnote 5 from lecture 2: to us the term ‘unbounded’ means definitely not bounded.
A. Simple f
Recall that a simple function is a measurable function taking only finitely many values in the target, i.e. if f : R → C is simple then

f = Σ_{n=1}^{N} fn χΩn

for some fn ∈ C and pairwise disjoint Ωn ∈ σ(OR).
Proof. Let S(R, C) denote the set of all simple functions f : R → C. We can make this set into a C-vector space by inheriting the addition and s-multiplication from C, namely by defining

(f + g)(x) := f(x) + g(x),   (z · f)(x) := z · f(x).
Remark 10.7 . Observe that χΩ for any Ω ∈ σ(OR ) is simple (it only takes the values 0 or
1), and hence

∫_R χΩ dP = 1 · P(Ω) + 0 · P(R \ Ω) = P(Ω).
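For a finite-dimensional PVM this reproduces the usual functional calculus: applying a simple f to a symmetric matrix A via Σ fn P(Ωn) agrees with applying f to the eigenvalues directly (a NumPy sketch, my addition):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2
eigvals, V = np.linalg.eigh(A)

def proj(mask):
    """Spectral projector onto the eigenvectors selected by the boolean mask."""
    cols = V[:, mask]
    return cols @ cols.T

# Simple function f = 2*chi_{lambda<0} + 5*chi_{lambda>=0}
neg, pos = eigvals < 0, eigvals >= 0
int_f_dP = 2 * proj(neg) + 5 * proj(pos)          # sum_n f_n P(Omega_n)

# Compare with applying f directly to the eigenvalues
f_of_A = V @ np.diag(np.where(eigvals < 0, 2.0, 5.0)) @ V.T
match = bool(np.allclose(int_f_dP, f_of_A))
```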
Remark 10.8 . Observe also that for any ψ, ϕ ∈ H,
⟨ψ, (∫_R f dP)ϕ⟩ = ⟨ψ, Σ_{n=1}^{N} fn P(Ωn)ϕ⟩
 = Σ_{n=1}^{N} fn ⟨ψ, P(Ωn)ϕ⟩
 = Σ_{n=1}^{N} fn µψ,ϕ(Ωn)
 =: ∫_R f dµψ,ϕ,
Proposition. If we equip S(R, C) with the supremum norm and L(H) with its operator norm, then the map ∫_R dP : S(R, C) → L(H) has operator norm ‖∫_R dP‖ = 1.
Proof. We have already shown that ∫_R f dP ∈ L(H) (i.e. it is linear and bounded), so we just need to compute the norm. For any ψ ∈ H,

‖(∫_R f dP)ψ‖² = ⟨Σ_n fn P(Ωn)ψ, Σ_m fm P(Ωm)ψ⟩ = Σ_n |fn|² ‖P(Ωn)ψ‖² ≤ ‖f‖∞² ‖ψ‖²,
where we have used the definition of the norm in terms of the inner product, the fact that P(Ω) is self adjoint, the fact that P(Ωn) ∘ P(Ωm) = δnm P(Ωn) for pairwise disjoint Ωn, Ωm, and Proposition 6.27 along with the fact that ‖f‖∞ := sup_{x∈R} |f(x)|. Equality in the last step can be achieved for suitably chosen f and ψ.
Thus we have

‖∫_R dP‖_{L(H)} := sup_{f∈S(R,C)} ‖∫_R f dP‖_{L(H)} / ‖f‖∞
 := sup_{f∈S(R,C)} sup_{ψ∈H} ‖(∫_R f dP)ψ‖_H / (‖f‖∞ ‖ψ‖_H)
 = 1.
Proposition 10.9. The set B(R, C) of bounded measurable functions f : R → C can be made into a Banach space by defining the norm

‖f‖_B := sup_{x∈R} |f(x)|.
Proof. We turn the set into a linear vector space in the usual manner; we inherit the addition
and s-multiplication from C.
We now prove that ‖f‖_B is a norm. Comparing with the definition given at the bottom of page 9, for f, g ∈ B(R, C) and z ∈ C we have:
(i) Clearly ‖f‖_B ≥ 0.

(ii)
‖f‖_B = 0 ⇔ sup_{x∈R} |f(x)| = 0 ⇔ f(x) = 0 for all x ∈ R,
so f = 0.
(iii)
‖z · f‖_B := sup_{x∈R} |z · f(x)| = |z| sup_{x∈R} |f(x)| =: |z| · ‖f‖_B.
(iv)
‖f + g‖_B = sup_{x∈R} |f(x) + g(x)| ≤ sup_{x∈R} |f(x)| + sup_{x∈R} |g(x)| = ‖f‖_B + ‖g‖_B.
Now let {fn}n∈N be a Cauchy sequence in B(R, C), that is: ∀ε > 0, ∃N ∈ N : ∀m, n ≥ N we have

d(fn, fm) := ‖fn − fm‖_B := sup_{x∈R} |fn(x) − fm(x)| < ε.
Now, from the definition of the supremum we have, for each fixed x ∈ R,

|fn(x) − fm(x)| ≤ ‖fn − fm‖_B < ε,

so it follows that the sequence {fn(x)}n∈N is a Cauchy sequence in C. But C is a complete
metric space so we know that this Cauchy sequence converges in C, i.e.
lim_{n→∞} fn(x) = zx ∈ C.
We can thus define a pointwise limit f of the sequence {fn}n∈N ⊂ B(R, C) as

f(x) := lim_{n→∞} fn(x) = zx

for all x ∈ R. Then, by equipping C and R with their respective Borel σ-algebras, Proposition 5.22 tells us that f is measurable.
Finally, from the fact that B(R, C) ⊂ L(R, C), Theorem 2.8 tells us that f is bounded, and so B(R, C) is a Banach space.
Corollary 10.10. Observe that S(R, C) is, in fact, a dense linear subspace of B(R, C). Thus, the BLT theorem tells us that we have a unique extension of the operator

∫_R dP : S(R, C) → L(H)

to the domain B(R, C) with equal operator norm. That is, we have an operator (denoted by the same symbol)

∫_R dP : B(R, C) → L(H)

with ‖∫_R dP‖ = 1.
Corollary 10.11. By suitable definitions we can turn the space B(R, C) into a C*-algebra, and our operator then has the following properties:

(i) ∫_R 1 dP = idH

(ii) ∫_R (f · g) dP = (∫_R f dP) ∘ (∫_R g dP)

(iii) ∫_R f̄ dP = (∫_R f dP)*
For a general measurable function f : R → C, the operator ∫_R f dP is defined on

D_{∫_R f dP} := {ψ ∈ H | ∫_R |f|² dµψ < ∞} ⊆ H,

which is a dense linear subspace. The linear map itself is defined via

(∫_R f dP)ψ := lim_{n→∞} (∫_R fn dP)ψ,

where

fn := χ_{{x∈R | |f(x)|<n}} · f.
Remark 10.12. Note that the map above includes the f; it is not just the integral ∫_R dP defined in the previous section. Note also that for f ∈ B(R, C) we just recover the case above, and we have D_{∫_R f dP} = H (i.e. we get an element of L(H)); otherwise it is a proper subset.
Remark 10.13. One might abbreviate the domain as Df := D_{∫_R f dP}; however, this could lead one to think of the domain of f itself, which here is R. We will avoid this notation.
Remark 10.14 . The sequence {fn }n∈N can be thought of as ‘chopping’ f into bounded parts.
(The original notes include here a sketch of an unbounded f together with its truncations f1 and f2.)
Lemma 10.15. For ψ ∈ D_{∫_R f dP}, the sequence {fn}n∈N is Cauchy in L²(R, µψ). This in turn implies that the sequence {(∫_R fn dP)ψ}n∈N is Cauchy in H, which is required for the limit in the definition to make sense, i.e. for the result to lie in H.
(i) As before,

(∫_R f dP)* = ∫_R f̄ dP.

(ii)

α ∫_R f dP + ∫_R g dP ⊆ ∫_R (αf + g) dP,

where the equality holds only for bounded f and g. As explained earlier, the inequality arises due to domain issues. We can now see this more explicitly from the definition of D_{∫_R f dP}: just because f and g are both measurable, it does not mean that their respective domains will coincide. However, the domain for the LHS is D_{∫_R (|αf|+|g|) dP}.

(iii)

(∫_R f dP) ∘ (∫_R g dP) ⊆ ∫_R (f · g) dP,

again where the equality holds only when f and g are bounded.
Proof.
(AP)* = (∫_R idR dP)* = ∫_R īdR dP = ∫_R idR dP = AP,

where īdR = idR since idR is real-valued; so AP is self adjoint.
11 Spectral theorem
The inverse spectral theorem tells us how to construct a self adjoint operator AP given a
projection valued measure P . The aim of this lecture is to do the opposite; given a self
adjoint operator A we want to find a PVM PA . We want the two methods to be in unison
— that is we want APA = A and PAP = P . We shall start by assuming that A can be
written in integral form, and then shall remove this restriction.
Definition. Let the self adjoint operator A be spectrally decomposable; i.e. there exists a
PVM P such that

A = ∫_R idR dP,
then for any measurable function f : R → C, we define the operator
f(A) : D_{∫_R f dP} → H

given by

f(A) := ∫_R (f ∘ idR) dP ≡ ∫_R f(λ) P(dλ).
Remark 11.1 . The spectral theorem will show that every self adjoint operator A is spectrally
decomposable by virtue of a uniquely determined P.
Proof.
[f(A)]* := (∫_R (f ∘ idR) dP)*
 = ∫_R (f̄ ∘ idR) dP
 = ∫_R f̄ dP
 =: f̄(A).
For example, if A is self adjoint then

exp(A) := ∫_R e^λ P(dλ)

is again self adjoint (e^λ is real-valued), whereas

exp(iA) := ∫_R e^{iλ} P(dλ)

is not self adjoint. This latter case is of high importance in QM, as can be seen by revisiting
Axiom 4 at the start.
Note also that

P(Ω) = ∫_R χΩ dP

implies

sq(P(Ω)) := ∫_R (sq ∘ χΩ) dP ≡ ∫_R [χΩ(λ)]² P(dλ) = ∫_R χΩ(λ) P(dλ) = P(Ω),

since χΩ only takes the values 0 and 1; that is, P(Ω) is idempotent, as befits a projector.
Definition. Given a spectrally decomposable operator A and z in the resolvent set ρ(A), we define

rz(A) = RA(z) := (A − z idH)⁻¹,

where

rz : R → C
λ ↦ 1/(λ − z),

and, due to the fact that A is spectrally decomposable, this satisfies

rz(A) = ∫_R (rz ∘ idR) dP ≡ ∫_R 1/(λ − z) P(dλ).
Note, using the results in the previous lecture, we have that for any ψ ∈ H,
⟨ψ, RA(z)ψ⟩ = ⟨ψ, (∫_R (rz ∘ idR) dP)ψ⟩ = ∫_R 1/(λ − z) µψ(dλ).
Definition. A Herglotz function is an analytic complex function that maps the upper
half plane into itself, but need not be surjective or injective. They are also known as
Nevanlinna/Pick/R functions.
Theorem 11.5. The function
⟨ψ, RA(·)ψ⟩ : C \ R → C
z ↦ ∫_R 1/(λ − z) µψ(dλ)
is Herglotz.
Proof. Recall that

µψ : σ(OR) → R₀⁺

is a (positive) measure. Hence, for Im z > 0,

Im⟨ψ, RA(z)ψ⟩ = ∫_R Im(1/(λ − z)) µψ(dλ) = Im z ∫_R 1/|λ − z|² µψ(dλ) > 0,

so the upper half plane is mapped into itself.
Recalling the start of last lecture, if we can find a way to construct µψ from our A then
we can use that to reconstruct P . The result of this is the previously mentioned Stieltjes
Inversion Formula, and it is obtained as follows.
Let t ∈ R and ε > 0. Then, since A is self adjoint and so its spectrum is purely real, t + iε ∈ ρ(A), so that we may act with RA there. Thus, consider
lim_{ε→0⁺} (1/π) ∫_{t1}^{t2} dt Im⟨ψ, RA(t + iε)ψ⟩ = lim_{ε→0⁺} (1/π) ∫_{t1}^{t2} dt ∫_R (ε/|λ − t − iε|²) µψ(dλ)
 = lim_{ε→0⁺} ∫_R ((1/π) ∫_{t1}^{t2} (ε/((λ − t)² + ε²)) dt) µψ(dλ),
where Fubini’s Theorem20 has been used. The inner integral is a standard integral, with
result

(1/π) ∫_{t1}^{t2} (ε/((λ − t)² + ε²)) dt = (1/π) [arctan((t − λ)/ε)]_{t1}^{t2}.
Now strictly, at this stage, we cannot simply pull the ε limit into this expression; we would
need to check that the above result is bounded first and then, by dominated convergence,
²⁰See any standard reference on measure theory for Fubini's Theorem.
we can pull it in. This will turn out to be true, and so in order to simplify the following we
consider ε to be small here.
In order to work out the above expression, we can plot both arctan terms (including the overall minus sign that comes with the t1 term) as functions of λ on the same graph:
(The original notes plot, as functions of λ, the two terms −(1/π) arctan((t1 − λ)/ε) and (1/π) arctan((t2 − λ)/ε), together with their sum, which as ε → 0⁺ tends to 1 for λ ∈ (t1, t2), to ½ at λ = t1 and λ = t2, and to 0 outside [t1, t2].)
So we have

lim_{ε→0⁺} (1/π) ∫_{t1}^{t2} (ε/((λ − t)² + ε²)) dt = ½(χ(t1,t2) + χ[t1,t2]),
and

lim_{ε→0⁺} (1/π) ∫_{t1}^{t2} dt Im⟨ψ, RA(t + iε)ψ⟩ = ∫_R ½(χ(t1,t2) + χ[t1,t2]) µψ(dλ).
Theorem 11.6 (Stieltjes Inversion Formula). Given a spectrally decomposable, self adjoint
operator A and its associated resolvent map RA, we can construct a real-valued measure µAψ via

µAψ((−∞, λ]) = lim_{δ→0⁺} lim_{ε→0⁺} (1/π) ∫_{−∞}^{λ+δ} dt Im⟨ψ, RA(t + iε)ψ⟩.
Proof.
lim_{δ→0⁺} lim_{ε→0⁺} (1/π) ∫_{−∞}^{λ+δ} dt Im⟨ψ, RA(t + iε)ψ⟩ = lim_{δ→0⁺} ∫_R ½(χ(−∞,λ+δ) + χ(−∞,λ+δ]) µψ(dλ)
 = ∫_R χ(−∞,λ] µψ(dλ)
 = µψ((−∞, λ]),
where we used the fact that χΩ is bounded to move the limit inside the integral, along
with the fact that

lim_{δ→0⁺} (−∞, λ + δ) = (−∞, λ].
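The Stieltjes inversion formula can be tested numerically for a small matrix: integrate Im⟨ψ, RA(t+iε)ψ⟩ with a small but finite ε and compare with the exact spectral weights. A NumPy sketch of my own (the matrix, vector, ε and integration grid are all ad hoc choices):

```python
import numpy as np

# A self-adjoint 3x3 toy operator with known spectrum {-1, 1, 2}
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])
eigvals, V = np.linalg.eigh(A)
psi = np.array([1.0, 2.0, 3.0])

eps = 1e-2          # finite stand-in for the eps -> 0+ limit
lam0 = 1.5          # evaluate mu_psi((-inf, 1.5])
dt = 1e-3
ts = np.arange(-11.0, lam0 + dt / 2, dt)

# Im<psi, R_A(t + i*eps) psi>, computed by solving (A - z) x = psi
vals = np.array([np.imag(psi @ np.linalg.solve(A - (t + 1j * eps) * np.eye(3), psi))
                 for t in ts])
measure = float(np.sum((vals[1:] + vals[:-1]) / 2) * dt / np.pi)

# Exact spectral measure: sum of |<v_i, psi>|^2 over eigenvalues <= lam0
exact = float(sum(np.dot(V[:, i], psi) ** 2 for i in range(3) if eigvals[i] <= lam0))
```

The finite ε and truncated integration range introduce errors of order ε divided by the distance to the nearest eigenvalue, so agreement here is only to a couple of decimal places.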
Remark 11.7. Note that the fact that ⟨ψ, RA(·)ψ⟩ is Herglotz, together with the fact that ε > 0, gives us µAψ ≥ 0, which is required for it to be a (positive) real-valued measure.
Remark 11.8 . If we already know that A is spectrally decomposable w.r.t. some PVM P ,
then we can recover P from A by virtue of: for any Ω ∈ σ(OR ) and for all ψ, ϕ ∈ H
⟨ψ, P(Ω)ϕ⟩ = ∫_R χΩ dµψ,ϕ,
where µψ,ϕ is obtained from µψ using the method given at the start of the previous lecture.
= (A − b)−1
= RA (b),
The proof of (i) and (ii) above was on a problem sheet. Return and do this later.
In conclusion, the Spectral Theorem, Theorem 10.1, holds, together with the recipe above for the construction of the PVM P from a self adjoint operator A.
Definition. Let B1 , B2 ∈ L(H), i.e. they are bounded linear operators from H to H. Then
one may define
[B1 , B2 ] := B1 ◦ B2 − B2 ◦ B1 ,
noting that [B1, B2] ∈ L(H).
where we used the associativity of the composition of maps. Then, as ψ was arbitrary, we
have our result.
Corollary 11.12. Let A and B be two operators. If one of them is unbounded, then the domain D[A,B] may be trivial, i.e. D[A,B] = {0H}.
²¹See Dr. Schuller's Lectures on the Geometric Anatomy of Theoretical Physics.
Proof. Let A : DA → H be unbounded with DA ⊂ H, and define the bounded operator
Bϕ : H → H
α ↦ ⟨ϕ, α⟩ψ =: ℓϕ(α)ψ.
[A, B] = 0.
where we have used the fact that B is linear. Then, from the fact that [A, B] = 0, it follows that Bψ is also an eigenvector of A with eigenvalue λ. Finally, from the fact that A is non-degenerate, it follows that Bψ = µψ must hold for some µ ∈ C.
However, as highlighted at the start of this section, we also want to look at situations
when one of the operators may not be bounded. In other words, we want to know how to extend the idea of commuting to unbounded self adjoint operators.
As is often the case in maths/physics problems, the strategy is to reduce the problem
to the known case. We then have three possible bounded, linear operators constructed from
A:
(i) From the Spectral Theorem we know that if A is self adjoint then there exists a unique
PVM P such that A is spectrally decomposable. Recall that, from Remark 10.7,
PA (Ω) ∈ L(H) for any Ω ∈ σ(OR ).
(ii) From the definition of the resolvent set, we have RA(z) ∈ L(H) for any z ∈ ρ(A).

(iii) Again from the Spectral Theorem, we can construct the bounded (indeed unitary) operator exp(itA) := ∫_R e^{itλ} PA(dλ) ∈ L(H) for any t ∈ R.
Definition. Let A be self adjoint and B be bounded. A and B are said to commute if
either of the following holds
Definition. Let A and B be self adjoint. They are said to commute if one of the following holds:

(i) [RA(zA), RB(zB)] = 0 for some zA ∈ ρ(A) and zB ∈ ρ(B). This is known as the Resolvent way.

(ii) [exp(itA), exp(isB)] = 0 for some t, s ∈ R \ {0}. This is known as the Weyl way.
(iii) [PA (Ω), PB (Ω)] = 0 for all Ω ∈ σ(OR ). This is known as the Projector way.
Remark 11.14 . The literature normally uses a practical, yet misleading, notation at this
point. For any of the above we simply write [A, B] = 0 for commuting A and B. However
this commutator is not the same as the one defined at the start — i.e. it does not correspond
to A ◦ B − B ◦ A. Really we should write it slightly differently to highlight this, i.e. make
it red; [A, B].
[A, B] = 0 ⇔ [A, B] = 0, where the first commutator is the genuine A ∘ B − B ∘ A (for bounded A and B) and the second is in the sense just defined.
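For bounded self adjoint operators the naive commutator and the Projector way indeed agree, which we can illustrate in finite dimensions (a NumPy sketch, my addition; the matrices are constructed to commute by diagonalising them in the same basis):

```python
import numpy as np

rng = np.random.default_rng(5)
Q = np.linalg.qr(rng.standard_normal((4, 4)))[0]    # random orthogonal basis
A = Q @ np.diag([0.0, 1.0, 1.0, 3.0]) @ Q.T          # commuting self-adjoint
B = Q @ np.diag([2.0, 2.0, 5.0, 7.0]) @ Q.T          # matrices

commute_naive = bool(np.allclose(A @ B, B @ A))      # A∘B - B∘A = 0

def spectral_projector(M, omega):
    """Projector onto eigenvectors of M whose eigenvalue satisfies omega."""
    vals, vecs = np.linalg.eigh(M)
    cols = vecs[:, [omega(v) for v in vals]]
    return cols @ cols.T

PA = spectral_projector(A, lambda lam: lam < 2.0)
PB = spectral_projector(B, lambda lam: lam < 4.0)
projector_way = bool(np.allclose(PA @ PB, PB @ PA))  # the Projector way
```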
12 Stone’s Theorem and Construction of Observables
In this lecture we aim to answer two questions by deriving and using Stone’s Theorem.
They are
(i) How arbitrary is the stipulation of Axiom 4; that the dynamics in the absence of a
measurement be controlled by
U (t) := exp(itH)
(ii) How does one practically construct observables, including the question of how to find
the correct domain such that the operator is at least essentially self adjoint?
Remark 12.1 . Clearly for (i) we want U (t) ◦ U (s) = U (t + s) and U (0) = idH .
Definition. A group is a set G equipped with a map ♦ : G × G → G such that, for all g, h, k ∈ G,

(g♦h)♦k = g♦(h♦k),

there exists an identity element e ∈ G with

g♦e = e♦g = g,

and every g ∈ G has an inverse g⁻¹ ∈ G with

g♦g⁻¹ = g⁻¹♦g = e.
Definition. An Abelian group is a group whose group operation is symmetric, that is, for all g, h ∈ G,

g♦h = h♦g.
Remark 12.2. Abelian groups are also known as commutative groups, and the condition is referred to as the commutativity of the elements with respect to the group operation.
Example 12.3 . The real numbers equipped with addition form an Abelian group, with e =
0 ∈ R and g −1 = −g.
Note it is important that we consider all of R, and not just the positive numbers, as in
the latter case the inverse would not lie in the group.
Example 12.4 . The set R \ {0} form an Abelian group with respect to multiplication, with
e = 1 and g −1 = 1/g.
Note here we have to exclude 0 as 1/0 is not an element of R.
Example 12.5 . The set R \ {0} is not a group with respect to division as it fails to satisfy
associativity.
Definition. A one-parameter group is a group of the form

G = {U(t) | t ∈ R},

whose group operation takes the form

U(t)♦U(s) = U(δ(t, s))

for some δ : R × R → R.
Remark 12.6. Unless the group is Abelian, in general δ(s, t) ≠ δ(t, s).
We will only deal with Abelian one-parameter groups, in which case one can always
reparameterise so that
U (t)♦U (s) = U (t + s)
where the commutativity with respect to ♦ is inherited from the commutativity with respect
to +. We also choose the parameterisation such that U (0) = e. In particular, we will look
at unitary one-parameter groups, i.e. those with

U(t)* ∘ U(t) = idH = U(t) ∘ U(t)*,

where the generator A of such a group is defined on the domain

D^Stone_A := {ψ ∈ H | lim_{ε→0} (i/ε)(U(ε)ψ − ψ) exists}.
Note that

lim_{ε→0} (i/ε)(U(ε)ψ − ψ) = i lim_{ε→0} (1/ε)(U(0 + ε)ψ − U(0)ψ) =: i(U(·)ψ)′(0),

and so we can rewrite

D^S_A ≡ D^Stone_A = {ψ ∈ H | i(U(·)ψ)′(0) exists}.

Note also that U(·)ψ : R → H.
12.2 Stone’s Theorem
Theorem 12.8 (Stone’s Theorem). Let U (·) be a UAOPG that is strongly continuous and
whose group operation is the composition of maps, i.e.
U(t) ∘ U(s) = U(t + s)
U(0) = idH.

Then there exists a self adjoint operator A, the generator, such that

U(t) = exp(−itA).
Corollary 12.9. Given U(t) = exp(−itA) for some self adjoint A, the Spectral Theorem tells us that U(·) is a UAOPG.

Proof. First,

U(t) ∘ U(s) = ∫_R e^{−itλ} P(dλ) ∘ ∫_R e^{−isλ} P(dλ) = ∫_R e^{−i(t+s)λ} P(dλ) =: U(t + s).

Next,

U(t)* = (∫_R e^{−itλ} P(dλ))* = ∫_R e^{+itλ} P(dλ) =: U(−t).

Then from the above we have U*(t)U(t) = U(−t + t) = U(0) = idH. Then, noticing that ‖U(t)‖ = ‖U*(t)‖, it follows that

‖U(t)‖ = √‖U*(t)U(t)‖ = √‖idH‖ = √1 = 1,

where we have used the fact that the norm is strictly positive to remove the negative root, and so U(t) is unitary.

Finally, we must show that it is a group. This is easily done: we have e = idH = U(0) and [U(t)]⁻¹ = U(−t).
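Corollary 12.9 is easy to verify for matrices, where the spectral integral is a finite sum (a NumPy sketch, my addition):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (M + M.conj().T) / 2                  # self-adjoint generator
eigvals, V = np.linalg.eigh(A)

def U(t):
    """U(t) = exp(-itA), built from the spectral decomposition of A."""
    return V @ np.diag(np.exp(-1j * t * eigvals)) @ V.conj().T

t, s = 0.7, -1.3
group_law = bool(np.allclose(U(t) @ U(s), U(t + s)))
identity = bool(np.allclose(U(0.0), np.eye(3)))
unitary = bool(np.allclose(U(t).conj().T @ U(t), np.eye(3)))
adjoint_flip = bool(np.allclose(U(t).conj().T, U(-t)))
```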
Corollary 12.10. Let ψ ∈ D^S_A for some generator A and t ∈ R. Then

(U(·)ψ)′(t) = −iA U(t)ψ,

and

U(t)D^S_A = D^S_A.
(ii) A is symmetric on D^S_A, and it is essentially self adjoint.

(iii) U(t) = exp(−itA**), from which it follows that the generator is Ā = A**, and so it is self adjoint (as A** is).
Now, using the fact that ‖U(τ)‖ = ‖idH‖ = 1, so that the operators involved are bounded, we can take the limit through the operators. Thus we have

i(U(·)ψτ)′(0) ≡ lim_{ε→0} (i/ε)(U(ε)ψτ − ψτ)
 = i(U(τ) − idH) lim_{ε→0} (1/ε) ψε
 = i(U(τ) − idH)ψ,

so that ψτ ∈ N ∩ D^S_A.
(ii) Let ϕ, ψ ∈ D^S_A. Then

⟨ϕ, Aψ⟩ := ⟨ϕ, lim_{ε→0} (i/ε)(U(ε)ψ − ψ)⟩
 = lim_{ε→0} ⟨(−i/ε)(U(ε)* − idH)ϕ, ψ⟩
 = lim_{ε→0} ⟨(i/(−ε))(U(−ε) − idH)ϕ, ψ⟩
 = lim_{ε→0} ⟨(i/ε)(U(ε) − idH)ϕ, ψ⟩
 =: ⟨Aϕ, ψ⟩,
where we have used the continuity of the inner product to move the limit in and out, the result U*(t) = U(−t), the fact that the identity is self adjoint, and the fact that taking ε → 0 from both sides allows us to relabel −ε as ε on the second-to-last line.
We now want to show that A is essentially self adjoint. Recalling Theorem 7.26, we need to check that, for z ∈ C \ R,

ker(A* − z) = {0H} = ker(A* − z̄).

Let ϕ ∈ ker(A* − z) ⊆ DA*. Then, for all ψ ∈ D^S_A,
0
hϕ, U (·)ψi (t) = hϕ, [U (·)ψ]0 (t)i
where we have used Corollary 12.10 and the fact that U (0) = idH . But, since z
is purely imaginary, the exponential is unbounded and so the RHS is unbounded.
However, the LHS is bounded (as U (·) is bounded) and so the only way the equality
holds is if hϕ, ψi = 0. Finally since we took all ψ ∈ H it follows that ϕ = {0H } and
so ker(A∗ − z) = {0H }. The proof for ker(A∗ − z) follows trivially from this result —
i.e. the RHS just becomes unbounded in the opposite direction.
– 125 –
(iii) We know that A is essentially self adjoint, which means that A** is self adjoint. Now construct

Ũ(t) := exp(−itA**) = ∫_R e^{−itλ} P_{A**}(dλ).

Now let ψ ∈ D_A^S ⊆ D_{A**} and consider the family

ψ(t) := (Ũ(t) − U(t))ψ.

Then

ψ′(t) = (−iA**Ũ(t) + iAU(t))ψ = −iA**ψ(t),

where we have used that A and A** agree on D_A^S, and so

(d/dt)‖ψ(t)‖² = 2 Re⟨ψ(t), ψ′(t)⟩ = 2 Re(−i⟨ψ(t), A**ψ(t)⟩) = 0,

where we have used the fact that ⟨ψ(t), A**ψ(t)⟩ ∈ R as A** is self adjoint. So we have that ‖ψ(t)‖ is a constant w.r.t. t. From the definition, we have ψ(0) = 0 and so ‖ψ(t)‖ = ‖ψ(0)‖ = 0, which from the definition of the norm tells us ψ(t) = 0 for all t. Finally it follows that

U(t) = Ũ(t) = exp(−itA**).
compromise in choosing the domain is in order, as we shall see in two sections’ time.
Corollary 12.11. Inspection of part (ii) of the proof shows that if one considers A : D → H for some dense D ⊆ D_A^S that also satisfies U(t)D = D, then A is essentially self adjoint on D.
Definition. The position operators, denoted Q_j for j = 1, 2, 3, are defined as the generators of

U^j(·) : L² → L²,

with

(U^j(t)ψ)(x) := ψ(x)e^{−itx^j},

– 126 –

for x := (x¹, x², x³). That is, they are the self adjoint operators

Q_j : D_{Q_j}^S → L²

with

(Q_jψ)(x) = x^jψ(x).
Remark 12.12 . Note clearly U (t) ◦ U (s) = U (t + s), U (0) = idL2 and kU (t)ψk = kψk which
tells us kU (t)k = 1, all of which are required for U (t) to be a UAOPG.
Definition. The momentum operators, denoted Pj for j = 1, 2, 3, are the generators of
Uj (·) : L2 → L2 ,
with
(U_j(a)ψ)(x) := ψ(..., x^j − a, ...),
i.e. they shift the j th slot to the right by a. That is they are the self adjoint operators on
their Stone domain that satisfy
Pj ψ = −i∂j ψ.
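The Stone-derivative definition of the generator can be checked numerically for the translation group: for a smooth rapidly decaying function, i/ε (U(ε)ψ − ψ) should approach −iψ′ as ε → 0. A sketch, assuming numpy is available; the grid and step size are arbitrary choices:

```python
import numpy as np

# The generator of translations (U_a psi)(x) = psi(x - a) should satisfy
# i/eps * (U(eps)psi - psi) -> -i psi'  as eps -> 0, i.e. P psi = -i d/dx psi.
x = np.linspace(-8, 8, 4001)
psi = np.exp(-x**2)                      # a Schwartz function
psi_prime = -2 * x * np.exp(-x**2)       # its exact derivative

eps = 1e-4
U_eps_psi = np.exp(-(x - eps)**2)        # (U(eps)psi)(x) = psi(x - eps)
stone_limit = 1j / eps * (U_eps_psi - psi)

# compare with (P psi)(x) = -i psi'(x)
assert np.max(np.abs(stone_limit - (-1j) * psi_prime)) < 1e-3
```

The residual error is of order ε, consistent with the first-order Taylor remainder of ψ(x − ε).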
Remark 12.13 . Note this is exactly the definition we used for the action of the operator in
Lecture 9.
Definition. The orbital angular momentum operators, denoted Lj for j = 1, 2, 3, are the
generators of
Uj (·) : L2 → L2 ,
with
(U_j(α)ψ)(x) := ψ(D_j(α)x),
where Dj (α) : R3 → R3 is the operator that describes the rotation about the j th axis by
angle α. They satisfy
(L1 ψ)(x) = −i(x2 ∂3 ψ − x3 ∂2 ψ)
(L2 ψ)(x) = −i(x3 ∂1 ψ − x1 ∂3 ψ)
(L3 ψ)(x) = −i(x1 ∂2 ψ − x2 ∂1 ψ)
Corollary 12.14. The spectrum for the orbital angular momentum is contained within the
integers; σ(Lj ) ⊆ Z for j = 1, 2, 3.
Proof. From Stone’s theorem we have
Uj (α) = exp(−iαLj ),
which together with Dj (α + 2π) = Dj (α) gives
exp(−i2πLj ) = idH .
Then, from the fact that Lj is self adjoint, we can use the Spectral theorem to decompose
both sides:

∫_R e^{−i2πλ} P_{L_j}(dλ) = ∫_R P_{L_j}(dλ),
and so λ ∈ Z.
– 127 –
12.5 Schwartz Space S(Rd )
As we have just seen, Stone’s theorem gives us a nice way to define the position, momen-
tum and orbital angular momentum operators. However there are two problems with the
definitions we have, both of which relate to their Stone domains. They are
(i) D_Q^S ≠ D_P^S ≠ D_L^S ≠ D_Q^S,

(ii) the operators need not map their Stone domains into themselves.

This, at first, might not seem like such a big deal, but on a second look we see that it wreaks havoc when it comes to trying to define the QM version of kinetic energy as (P ◦ P)/2m. The problem is especially bad when it comes to considering commutators, as highlighted before.
We get around this problem using the compromise given in Corollary 12.11.
Definition. The Schwartz Space on R^d, denoted S(R^d), is the vector space with set

S(R^d) := { ψ ∈ C^∞(R^d) | sup_{x∈R^d} |x^α (∂_β ψ)(x)| < ∞ for all α, β ∈ N_0^{×d} },

where

N_0^{×d} := N_0 × ... × N_0 (d-fold),

and

x^α := (x¹)^{α_1} ··· (x^d)^{α_d}, ∂_β := ∂_1^{β_1} ··· ∂_d^{β_d}.
Remark 12.15 . The Schwartz Space is also known as the space of rapidly decaying test
functions.
Remark 12.16 . Clearly the space Cc∞ (Rd ), as defined in footnote 15 in Lecture 9, is contained within S(Rd ).
Lemma 12.17. The Schwartz space is closed under pointwise multiplication; if ψ, ϕ ∈
S(Rd ) then ψ • ϕ ∈ S(Rd ). In fact we have the Schwartz algebra (S(Rd ), +, ·, •).
Proof. This result follows simply from the so called Leibniz Rule, which is an extension of
the product rule.22
Then, using the fact that the pointwise multiplication of two smooth functions is smooth,
we have ψ • ϕ ∈ S(Rd ). Finally using the linearity of everything involved we get the
algebra.
22
See Dr. Schuller’s Lectures on the Geometric Anatomy of Theoretical Physics for a definition in context.
– 128 –
Lemma 12.18. For 1 ≤ p ≤ ∞ we have S(Rd ) ⊂ Lp (Rd ).
Proof. Let ψ ∈ S(R^d). Since ψ decays faster than any inverse polynomial, (1 + |x|)^{d+1}|ψ(x)| is bounded, and so ∫|ψ| < ∞. Corollary 5.19 tells us that ψ is measurable with respect to the Borel σ-algebras, so ψ ∈ L¹(R^d). Finally, ψ is bounded, so |ψ|^p ≤ ‖ψ‖_∞^{p−1}|ψ| is integrable for 1 < p < ∞, while the case p = ∞ is immediate from boundedness; the result follows.
Lemma 12.19. One can show that the Fourier Transform is a linear isomorphism from
S(Rd ) onto itself.23
Proposition 12.20. The Schwartz space satisfies:

(i) S(R³) is dense in L²(R³),

(ii) S(R³) ⊆ D_Q^S, D_P^S, D_L^S,

(iii) Q_j S(R³) ⊆ S(R³), and similarly for P_j and L_j.

Remark 12.21 . From the last condition we see that we can repeatedly apply the operators, in any order, to a system. This fixes our problem above.
23
See lecture 18.
– 129 –
13 Spin
In the previous lecture we defined orbital angular momentum. The emphasis on ‘orbital’ was
not a mistake; this lecture aims to discuss what is referred to as general angular momentum
(or just angular momentum) in QM. This latter case gets its name from the fact that any
concrete set of three operators, {J1 , J2 , J3 } say, obey analogous commutation relations to
{L1 , L2 , L3 }.
Recall, for ψ ∈ S(R3 ) we have
Li : S(R3 ) → S(R3 ),
Lemma 13.1. The orbital angular momentum operators obey the following commutation
relations:
[L1 , L2 ] = iL3 ,
[L2 , L3 ] = iL1 ,
[L3 , L1 ] = iL2 .
Proof. Consider

[L1, L2]ψ = (L1 ◦ L2 − L2 ◦ L1)ψ
= (−i)²[(x²∂3 − x³∂2)(x³∂1ψ − x¹∂3ψ) − (x³∂1 − x¹∂3)(x²∂3ψ − x³∂2ψ)]
= −x²∂1ψ − x²x³∂3∂1ψ + x²x¹∂3²ψ + (x³)²∂2∂1ψ − x³x¹∂2∂3ψ
  + x³x²∂1∂3ψ − (x³)²∂1∂2ψ − x¹x²∂3²ψ + x¹∂2ψ + x¹x³∂3∂2ψ
= x¹∂2ψ − x²∂1ψ
= iL3ψ,
where we have used the fact that S(R3 ) ⊂ C ∞ (R3 ) ⊂ C 2 (R3 ), and so we can swap derivative
order, i.e.
∂1 ∂2 ψ = ∂2 ∂1 ψ.
The same method is used for the other two commutation relations.
Remark 13.2 . The vector space with set V := spanC {L1 , L2 , L3 } can be defined, and we
have that iLj ∈ V , and so the above tells us that (V, +, ·, [·, ·]) is the orbital angular
momentum Lie algebra.
– 130 –
Remark 13.3 . We can re-write the commutation relations in the compact and convenient form

[L_i, L_j] = iε_ijk L_k,

where ε_ijk is the totally antisymmetric Levi-Civita symbol, defined as

ε_ijk = +1 if (i, j, k) is an even permutation of (1, 2, 3),
ε_ijk = −1 if (i, j, k) is an odd permutation of (1, 2, 3),
ε_ijk = 0 otherwise.
Proposition 13.4. It is not possible to have a common eigenvector between the operators (L1, L2, L3).

Proof. Suppose, for contradiction, that ψ ≠ 0_H is a common eigenvector, say L1ψ = λψ and L2ψ = µψ. Then

iL3ψ = [L1, L2]ψ = L1(L2ψ) − L2(L1ψ) = µL1ψ − λL2ψ = (µλ − λµ)ψ = 0,

where we have used the linearity of the operators. It follows from the commutation relations that L3ψ = 0. However, from the other commutation relations we then have

[L2, L3]ψ = iL1ψ
L2(L3ψ) − L3(L2ψ) = iλψ
L2(0) − µL3ψ = iλψ
0 − µ · 0 = iλψ
=⇒ λ = 0,

and similarly one can show µ = 0. It follows then that for any D ∈ V we have Dψ = 0, which can only be true if ψ = 0_H. But this contradicts the opening assumption, and so it cannot be true.
Remark 13.5 . People often say that "two non-commuting operators have no common eigen-
vectors", however this statement is not strictly true. What is meant is "two operators, whose
commutator does not contain the zero vector in its range, do not have common eigenvec-
tors." This is subtly different, however the distinction is important. For example, in the
previous proposition if we instead had [L1 , L2 ] = iL3 and [L2 , L3 ] = 0 = [L3 , L1 ], we would
not need to require λ = 0 = µ, and so, unless further constraints were placed on the system
of operators, it is possible that ψ is a common eigenvector to L1 and L2 .
– 131 –
13.1 General Spin
At this point we might ask why we are bothering to work out these commutation relations?
After all they appear to be of no use when it comes to calculating things such as the spectra
of the operators (as is evident by Corollary 12.14). The answer to this is that we want to see
what information we can obtain about the system (specifically its spectrum) using solely the
commutation relations, as then any other set of observables that shares these commutation
relations immediately obey the same results.
To emphasise: any three operators S1, S2, S3 in a Lie algebra, with

S_j : D → D

for some D ⊆ H, that obey commutation relations analogous to those of Lemma 13.1, will instantly satisfy any results we derive, using only the commutation relations, for the orbital angular momentum.
Example 13.6 . An example of such a set of operators is the so-called Pauli spin algebra, which has

S_i := σ_i / 2,

with D = H = C², where

σ1 = ( 0 1 )    σ2 = ( 0 −i )    σ3 = ( 1  0 )
     ( 1 0 ),        ( i  0 ),        ( 0 −1 ),

known as the Pauli spin matrices. It is easily checked, through the rules of matrix multiplication, that this algebra obeys the correct commutation relations. This is an example of a so-called spin-1/2 system.
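The "easily checked" claim can be verified by machine. A sketch assuming numpy is available; the eps helper is a hypothetical one-line Levi-Civita formula valid for indices 0, 1, 2:

```python
import numpy as np

# Check that S_i := sigma_i / 2 obey [S_i, S_j] = i eps_ijk S_k,
# the spin commutation relations of Lemma 13.1.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
S = [s1 / 2, s2 / 2, s3 / 2]

def comm(X, Y):
    return X @ Y - Y @ X

def eps(i, j, k):
    """Levi-Civita symbol on indices 0, 1, 2 (hypothetical closed form)."""
    return int((i - j) * (j - k) * (k - i) / 2)

for i in range(3):
    for j in range(3):
        rhs = sum(1j * eps(i, j, k) * S[k] for k in range(3))
        assert np.allclose(comm(S[i], S[j]), rhs)
```

All nine commutators are checked, including the vanishing diagonal ones [S_i, S_i] = 0.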
Remark 13.7 . We can not expect the commutation relations to necessarily tell us everything
about the spectrum (or any other quantity we try to calculate) as they can be derived
by several different operator sets with potentially differing spectra. But, as said above,
whatever we can infer from the commutation relations alone must hold for all the operator
sets.
Definition. A Casimir operator for the algebra (V, +, ·, [·, ·]) is a symmetric operator

Ω : D → D

that commutes with every element of the algebra, i.e. [Ω, v] = 0 for all v ∈ V.

– 132 –

Remark 13.8 . Note, due to the bilinearity of the commutator, we only need to check that Ω commutes with the basis elements of V.
Proposition 13.9. The operator

Ω := J1 ◦ J1 + J2 ◦ J2 + J3 ◦ J3 : D → D

is a Casimir operator for the angular momentum algebra.
Proof. The symmetric part follows trivially from the fact that Corollary 12.11 tells us
J1 , J2 , J3 are all symmetric. Next let ψ ∈ D and consider
[Ω, J1 ]ψ := [J1 ◦ J1 + J2 ◦ J2 + J3 ◦ J3 , J1 ]ψ
= [J1 ◦ J1 , J1 ]ψ + [J2 ◦ J2 , J1 ]ψ + [J3 ◦ J3 , J1 ]ψ
= (J1 ◦ J1 ◦ J1 )ψ − (J1 ◦ J1 ◦ J1 )ψ
+(J2 ◦ J2 ◦ J1 )ψ − (J1 ◦ J2 ◦ J2 )ψ
+(J3 ◦ J3 ◦ J1 )ψ − (J1 ◦ J3 ◦ J3 )ψ
= (J2 ◦ J1 ◦ J2 )ψ + (J2 ◦ [J2 , J1 ])ψ − (J1 ◦ J2 ◦ J2 )ψ
+(J3 ◦ J1 ◦ J3 )ψ + (J3 ◦ [J3 , J1 ])ψ − (J1 ◦ J3 ◦ J3 )ψ
= (J1 ◦ J2 ◦ J2 )ψ + ([J2 , J1 ] ◦ J2 )ψ + (J2 ◦ [J2 , J1 ])ψ − (J1 ◦ J2 ◦ J2 )ψ
+(J1 ◦ J3 ◦ J3 )ψ + ([J3 , J1 ] ◦ J3 )ψ + (J3 ◦ [J3 , J1 ])ψ − (J1 ◦ J3 ◦ J3 )ψ
= −i(J3 ◦ J2 )ψ − i(J2 ◦ J3 )ψ + i(J2 ◦ J3 )ψ + i(J3 ◦ J2 )ψ
= 0,
which because ψ ∈ D was arbitrary tells us [Ω, J1 ] = 0. The same method gives [Ω, J2 ] =
0 = [Ω, J3 ].
Definition. Let J1 , J2 , J3 be three operators that satisfy our conditions. Then define
J+ := J1 + iJ2 ,
J− := J1 − iJ2 ,
known as the ladder operators, for a reason that will soon become clear.
Remark 13.10 . We can choose to consider the set {J+ , J− , J3 } in place of the set {J1 , J2 , J3 }
while still keeping all the information — as we can simply reconstruct J1 and J2 from J+
and J− . Note, however, in doing this we have broken the symmetry of the algebra (in the
sense that none of the Jj s are special, they all obey the same commutation relations) by
singling out J3 , while taking linear combinations of J1 and J2 . Indeed we did not need to
make this choice of symmetry breaking, but in fact we could have chosen to keep J1 while
defining J+ and J− as linear combinations of J2 and J3 . Importantly, the results that follow
will hold equally for whichever J we choose, and so in order to stick with convention we pick
J3 . Note also that we no longer have a set of observables as (J+ )∗ = J− and (J− )∗ = J+ .
– 133 –
Lemma 13.11. J+ and J− satisfy the following commutation relations
[J+ , J− ] = 2J3 ,
[J3 , J± ] = ±J± ,
[Ω, J± ] = 0.
Lemma 13.12. The Casimir operator can be rewritten as

Ω = J+ ◦ J− + J3 ◦ (J3 − id_D),

or equally as

Ω = J− ◦ J+ + J3 ◦ (J3 + id_D).
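Both rewritings, together with the commutators of Lemma 13.11, can be verified concretely in the spin-1/2 representation. A sketch assuming numpy is available:

```python
import numpy as np

# Spin-1/2 operators J_i = sigma_i / 2; verify [J+, J-] = 2 J3,
# [J3, J±] = ±J±, and both rewritings of the Casimir:
# Omega = J+J- + J3(J3 - 1) = J-J+ + J3(J3 + 1).
J1 = np.array([[0, 1], [1, 0]], dtype=complex) / 2
J2 = np.array([[0, -1j], [1j, 0]]) / 2
J3 = np.array([[1, 0], [0, -1]], dtype=complex) / 2
Jp, Jm = J1 + 1j * J2, J1 - 1j * J2
I = np.eye(2)

Omega = J1 @ J1 + J2 @ J2 + J3 @ J3

assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * J3)
assert np.allclose(J3 @ Jp - Jp @ J3, +Jp)
assert np.allclose(J3 @ Jm - Jm @ J3, -Jm)
assert np.allclose(Omega, Jp @ Jm + J3 @ (J3 - I))
assert np.allclose(Omega, Jm @ Jp + J3 @ (J3 + I))
# Omega acts as j(j+1) id with j = 1/2
assert np.allclose(Omega, 0.5 * 1.5 * I)
```

Note that both Casimir rewritings hold simultaneously, as stressed in Remark 13.13 below, and that Ω is the constant j(j + 1) on the whole representation space.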
Remark 13.13 . Both of the expressions in the above definition are always true. It is not
that one is true under certain circumstances and then the other is true. This is an important
observation that we shall use.
At this point we might wonder why we are going through so much effort introducing the
Casimir, when what we’re looking for is the spectra of the operators J1 , J2 , J3 . The answer
is to make the problem seemingly more complicated by now considering only eigenvectors
that are common to both J3 and Ω. Note it is necessary that they commute if they are to
have common eigenvectors — as is easily verified from the definition of the commutator.
That is we want to find a ψλ,µ ∈ D \ {0}24 such that
J3 ψλ,µ = µψλ,µ
Ωψλ,µ = λψλ,µ ,
where the subscript is included in order to label the eigenvector by its eigenvalues.
Again this appears to be a more complicated problem: we now not only need to check that our ψ is an eigenvector of J3, but we also need to check that it is an eigenvector of Ω. However, we can show that every eigenvector of J3 (and equally of J1 and J2) is also an eigenvector of Ω. For a proof of this see Peter Ferguson’s well detailed answer on Quora.
24
Recall that an eigenvector cannot be the zero-vector by definition. We shall use this later.
– 134 –
We might also ask whether this will give us the spectrum of J3 anyway, as J3 is only essentially self adjoint on our Schwartz domain D. We are fine, however, as (J3)** is self adjoint, and so we could just use it instead, where its action is defined to be the same as that of J3, just as we did for the momentum operator in Lecture 9.
Lemma 13.14. The eigenvalues for these common eigenvectors satisfy
λ ≥ |µ|(|µ| + 1).
Proof. We shall consider both rewritings of Ω; as explained above, this does not mean that one is true under certain conditions and the other otherwise, they are both always true. We shall also drop the ◦ symbols to lighten notation. Thus we have

λ‖ψ_{λ,µ}‖² = ⟨ψ_{λ,µ}, Ωψ_{λ,µ}⟩ = ⟨ψ_{λ,µ}, (J+J− + J3(J3 − id_D))ψ_{λ,µ}⟩ = ‖J−ψ_{λ,µ}‖² + µ(µ − 1)‖ψ_{λ,µ}‖²,

and equally

λ‖ψ_{λ,µ}‖² = ⟨ψ_{λ,µ}, (J−J+ + J3(J3 + id_D))ψ_{λ,µ}⟩ = ‖J+ψ_{λ,µ}‖² + µ(µ + 1)‖ψ_{λ,µ}‖²,

where we have used the fact that (J−)* = J+ and vice versa.25

Now recalling that ψ_{λ,µ} is an eigenvector, and so, by definition, not the zero vector, we know that ‖ψ_{λ,µ}‖² is positive, and thus we can divide by it, giving

λ = ‖J−ψ_{λ,µ}‖²/‖ψ_{λ,µ}‖² + µ(µ − 1) = ‖J+ψ_{λ,µ}‖²/‖ψ_{λ,µ}‖² + µ(µ + 1).

Finally, from the fact that the norm is non-negative (i.e. the first term in each case is either positive or vanishes) we have

λ ≥ µ(µ − 1) = (−µ)((−µ) + 1)  and  λ ≥ µ(µ + 1),

and whichever of ±µ equals |µ| gives the stronger bound, namely λ ≥ |µ|(|µ| + 1).
Lemma 13.15. The elements J± ψλ,µ are common ‘eigenvectors’ of Ω and J3 with eigen-
values λ and (µ ± 1), respectively.
Proof. First consider Ω. We have

Ω(J±ψ_{λ,µ}) = J±(Ωψ_{λ,µ}) = λ(J±ψ_{λ,µ}),

– 135 –

where we used the fact that [Ω, J±] = 0, and the linearity of J±.
Now consider J3:

J3(J±ψ_{λ,µ}) = (J±J3 + [J3, J±])ψ_{λ,µ} = (µ ± 1)(J±ψ_{λ,µ}).
Remark 13.17 . This is why J± are known as ladder operators, with J+ known as the raising operator and J− the lowering operator. As we see, these names derive from the J3-eigenvalues of the vectors they produce.
If the J3-eigenvalue of ψλ,µ corresponds to the µ-th rung of a ladder then the eigenvalue
of J+ ψλµ corresponds to the (µ + 1)-th rung, and J− ψλ,µ the (µ − 1)-th. Note each one of
the rungs is separated by exactly the same distance, and that we get the (µ + n)-th rung
from (J+ )n ψλ,µ and similarly for the (µ − n)-th rung.
Eigenvectors    J3-eigenvalues
J+ψ_{λ,µ}       (µ + 1)
ψ_{λ,µ}         µ
J−ψ_{λ,µ}       (µ − 1)
The next question is whether this is a ‘proper’ ladder; that is, does it have a top and a bottom rung, or does it continue forever? The answer comes in the form of the next lemma.
Lemma 13.18. There exists a ψλ,µ such that J+ ψλ,µ = 0. Equally there exists a ψλ,µ such
that J− ψλ,µ = 0.
Proof. We know from Lemma 13.14 that |µ|(|µ| + 1) ≤ λ holds for any common eigenvector
of Ω and J3 . We see from Lemma 13.15 that (J± )n ψλ,µ is such a common eigenvector, and
so must obey |µ ± n|(|µ ± n| + 1) ≤ λ. However, λ is unchanged by this repeated application
of the ladder operators, and so, unless remedied, this inequality will eventually be broken
– that is we need to somehow cap the available n values.
Consider first the raising operator. In this case µ + n gets bigger and bigger, and so we need to cap n from above. In other words, we require there to be an m ∈ N such that for all n > m, (J+)^n ψ_{λ,µ} = 0. This fixes our problem, as this corresponds to the zero vector

– 136 –

and so, by definition, it cannot be an eigenvector, and the λ inequality need no longer hold. We define the top-rung vector

ψ_{λ,µ_max} := (J+)^m ψ_{λ,µ}.
The idea is exactly the same for the lowering operator, however now µ − n is getting smaller and smaller, and so its modulus (once n > µ is reached) gets bigger and bigger. So again we need to cap n from above: we require there to be an ℓ ∈ N such that for all n > ℓ, (J−)^n ψ_{λ,µ} = 0. We define

ψ_{λ,µ_min} := (J−)^ℓ ψ_{λ,µ}.

Note we do not have any a priori relation between the values of m and ℓ. To use the ladder analogy, m is the number of rungs above the µ-th rung and ℓ is the number of rungs below the µ-th rung. For a given λ, the highest value of µ is denoted µ_max(λ) and the lowest value µ_min(λ).
Remark 13.19 . Note the above tells us that J±ψ are not strictly eigenvectors, as they could be the zero vector. This is why we wrote ‘eigenvectors’ in inverted commas in Lemma 13.15.
Lemma 13.20. The extremal eigenvalues satisfy:

(i) λ = µ_max(λ)(µ_max(λ) + 1),

(ii) µ_min(λ) = −µ_max(λ),

(iii) 2µ_max(λ) ∈ N0.

Proof. (i) From the proof of Lemma 13.14, and the fact that J+ψ_{λ,µ_max(λ)} = 0, and so ‖J+ψ_{λ,µ_max(λ)}‖ = 0, we have

λ = µ_max(λ)(µ_max(λ) + 1).

(ii) Repeating the above argument but with the fact that ‖J−ψ_{λ,µ_min(λ)}‖ = 0, we have

λ = µ_min(λ)(µ_min(λ) − 1) = (−µ_min(λ))((−µ_min(λ)) + 1),

which, compared with (i), gives µ_min(λ) = −µ_max(λ).

(iii) From the previous parts, along with the fact that repeated application of J− takes us from µ_max(λ) to µ_min(λ) in integer steps, µ_max(λ) − µ_min(λ) = 2µ_max(λ) ∈ N0, and the result follows trivially.

Remark 13.21 . In order to be consistent with the literature we shall introduce the following relabelling:

j := µ_max(λ), m := µ.

Note we have j ∈ N0/2.
– 137 –
Theorem 13.22. The common eigenvectors of Ω and J3 come as families ψj(j+1),m , where
m = −j, −j + 1, ..., j − 1, j. The eigenvalue j(j + 1) is associated to Ω and m is associated
to J3 .
Then, from Lemma 8.8 and the fact that the eigenvectors have distinct eigenvalues, we have that they are mutually orthogonal.
Proof. We shall show this for J+ , the method of J− follows analogously. Recall from
Lemma 13.12 that
J− J+ = Ω − J3 (J3 + idD ).
Now consider
Corollary 13.26. For a pure spin-j system the spectrum of the operators is

σ(J_i) = {−j, −j + 1, ..., j − 1, j}

for i = 1, 2, 3.
Example 13.27 .

– 138 –

j     σ(J_i), i = 1, 2, 3
0     {0}
1/2   {−1/2, 1/2}
1     {−1, 0, 1}
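The whole chain of results can be checked by building the spin-j matrices from the ladder matrix elements J+ψ_{j,m} = √(j(j+1) − m(m+1)) ψ_{j,m+1} (the normalisation read off from the proof of Lemma 13.14) and diagonalising. A sketch assuming numpy is available:

```python
import numpy as np

# Build spin-j matrices in the eigenbasis psi_{j,m}, m = j, ..., -j, and
# check sigma(J_i) = {-j, ..., j} and Omega = j(j+1) id, as in the table.
def spin_matrices(j):
    d = int(round(2 * j)) + 1
    m = j - np.arange(d)                       # basis ordered m = j, ..., -j
    J3 = np.diag(m).astype(complex)
    Jp = np.zeros((d, d), dtype=complex)
    for a in range(1, d):                      # J+ raises m[a] to m[a-1]
        Jp[a - 1, a] = np.sqrt(j * (j + 1) - m[a] * (m[a] + 1))
    Jm = Jp.conj().T
    J1 = (Jp + Jm) / 2
    J2 = (Jp - Jm) / (2 * 1j)
    return J1, J2, J3

for j in [0.5, 1.0, 1.5]:
    J1, J2, J3 = spin_matrices(j)
    Omega = J1 @ J1 + J2 @ J2 + J3 @ J3
    expected = np.arange(-j, j + 1)            # {-j, -j+1, ..., j}
    for Ji in (J1, J2, J3):
        assert np.allclose(np.sort(np.linalg.eigvalsh(Ji)), expected)
    assert np.allclose(Omega, j * (j + 1) * np.eye(J3.shape[0]))
```

For j = 1/2 this reproduces the Pauli algebra of Example 13.6; for j = 1 it reproduces the {−1, 0, 1} row of the table.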
Remark 13.28 . When you introduce spin to a particle, its Hilbert space becomes a product
space. For example for an electron (spin-1/2) in R3 its Hilbert space is
He = L2 (R3 ) ⊗ C2 .
Remark 13.29 . You can also have non-pure spin systems. For the orbital angular momen-
tum, you take a direct sum of the Hilbert spaces. That is if Hj is the Hilbert space associated
to the pure spin-j system then the composite system’s Hilbert space is
H_comp = ⊕_j H_j.
– 139 –
14 Composite Systems
Recall Axiom 1, which says that to every quantum system there is an underlying Hilbert
space. The question we now want to ask is: Let H1 be the Hilbert space associated to one
system and H2 be the Hilbert space associated to another. What is the underlying Hilbert
space associated to the composite system?
To clarify what we mean, imagine having a proton and an electron. We first look at the
proton by itself and call this system one. We then look at the electron separately and call
that system two. We now want to look at both of them together, but we wish to use the
fact that we have already studied them separately to simplify the problem. It may seem
‘natural’ to model the composite H as the so called direct sum, which as a set is26
H1 ⊕ H2 := {(ψ, ϕ) | ψ ∈ H1 , ϕ ∈ H2 },
and where the linearity is inherited from H1 and H2 , namely
(aψ1 + ψ2 , bϕ1 + ϕ2 ) = ab(ψ1 , ϕ1 ) + a(ψ1 , ϕ2 ) + b(ψ2 , ϕ1 ) + (ψ2 , ϕ2 ).
This is what we do in classical systems and it tells us that if we know everything about the
states27 of our two systems, then we also know everything about the states of the composite
system.
However, as with all things quantum, things are more complicated, and the above is
not the case. The main problem comes from the fact that not all linear combinations of
elements of the form (ψ, ϕ) can also be written in that form.
Example 14.1 . Let ψ1 , ψ2 ∈ H1 , ϕ1 , ϕ2 ∈ H2 and a, b ∈ C. Then, assuming the linearity as
above, we have
a(ψ1 , ϕ1 ) + b(ψ2 , ϕ2 ) = (aψ1 , ϕ1 ) + (ψ2 , bϕ2 )
= (aψ1 + ψ2 , ϕ1 + bϕ2 )
= a(ψ1 , ϕ1 ) + b(ψ2 , ϕ2 ) + ab(ψ1 , ϕ2 ) + (ψ2 , ϕ1 ),
a clear problem.
Note this example actually tells us that H1 ⊕ H2 is not closed under this linearity, and so would not be a vector space. We could just restrict ourselves to elements that do obey these rules; however, as we shall see when considering entanglement, we require elements of this form in our underlying Hilbert space.
This calls for a slight refinement of axiom one. We add the addendum28 :
If a quantum system is composed of two (and hence, by induction, any finite number)
of ‘sub’systems, then its underlying Hilbert space is the tensor product space H1 ⊗H2 ,
equipped with an inner product.
26
This definition holds as we are only taking the direct product of two spaces, and so the index set is
finite. See wiki for details on this.
27
Recall that the elements of the Hilbert space are not the states, but are associated to them. We shall
return to this at the end of the lecture.
28
We shall define what these new terms are in the next section.
– 140 –
Example 14.2 . Remark 13.28 is an example of such a composite system.
Lemma 14.3. Every vector space is a free vector space with B being a Hamel basis.
Remark 14.4 . Note it need not be true that F (B) = V , as it might be the case that the
same element in V is reached via two different linear combinations of elements of B. In
fact if F (B) = V , then B is just a Hamel basis.
The free vector space construction might seem almost redundant for vector spaces, given that every vector space has a basis. However, if your vector space is infinite dimensional then such a basis might be incredibly difficult to construct. Instead you can simply take the entire set V for B and construct the free vector space F(V), which will be a huge set, mind. Note, then, that any linear combination of elements of this set is automatically still in the set, and so it is indeed a vector space.
Definition. Let V and W be two F-vector spaces, and let A ⊆ V and B ⊆ W be generating subsets. Then we define their tensor product as the vector space with set

V ⊗ W := F(A × B)/∼,

where ∼ is the equivalence relation generated by

(i) f · (a1, b1) ∼ (f · a1, b1) ∼ (a1, f · b1) for all f ∈ F,

(ii) (a1, b1) + (a1, b2) ∼ (a1, b1 + b2) and (a1, b1) + (a2, b1) ∼ (a1 + a2, b1), and continued by induction.
Remark 14.5 . Note the equivalence relation looks a lot like a linearity condition on V ⊗ W; however, on closer inspection it is not quite. The linearity condition that makes V ⊗ W into a vector space is simply

f · (a1, b1) + (a2, b2) ∈ F(A × B).

– 141 –

This, in itself, does not need to satisfy the equivalence relation. However, if we did not include it we could end up with a huge redundancy in elements, as a repercussion of Remark 14.4. The equivalence relation makes the corresponding set of equivalence classes a vector space in the way we normally think of them (there are no repeated elements).
This is exactly the type of structure we need to overcome the problem highlighted
before (that not all linear combinations can be expressed as a single term), as now we only
require that linear combinations of linear combinations are linear combinations, which they
obviously are.
Proposition 14.6. Let H1 and H2 be our two vector spaces. We can define the map

[(ψ, ϕ1)] +₁₂ [(ψ, ϕ2)] := [(ψ, ϕ1) + (ψ, ϕ2)].

Proof. We need to show this is well defined, i.e. independent of the chosen representatives (ψ̃, ϕ̃1) ∼ (ψ, ϕ1) and (ψ̃, ϕ̃2) ∼ (ψ, ϕ2). We shall write +₁₂ now in order to lighten notation. Consider it case by case.

(i) Assume (ψ̃, ϕ̃1) = (ψ, ϕ1).

(a) If (ψ̃, ϕ̃2) = (ψ, ϕ2), then it follows trivially that

[(ψ̃, ϕ̃1) + (ψ̃, ϕ̃2)] = [(ψ, ϕ1) + (ψ, ϕ2)],

and so

[(ψ̃, ϕ̃1)] +₁₂ [(ψ̃, ϕ̃2)] = [(ψ, ϕ1)] +₁₂ [(ψ, ϕ2)].

(b) If (ψ̃, ϕ̃2) = (ψ, ϕ2¹) + (ψ, ϕ2²), where ϕ2 = ϕ2¹ + ϕ2², we have

[(ψ̃, ϕ̃1) + (ψ̃, ϕ̃2)] = [(ψ, ϕ1) + (ψ, ϕ2¹) + (ψ, ϕ2²)]
= [(ψ, ϕ1 + ϕ2¹ + ϕ2²)]
= [(ψ, ϕ1 + ϕ2)]
= [(ψ, ϕ1) + (ψ, ϕ2)],

and so

[(ψ̃, ϕ̃1)] +₁₂ [(ψ̃, ϕ̃2)] = [(ψ, ϕ1)] +₁₂ [(ψ, ϕ2)].

(c) If (ψ̃, ϕ̃2) = f(ψ, ϕ2³), where ϕ2 = fϕ2³, then

[(ψ̃, ϕ̃1) + (ψ̃, ϕ̃2)] = [(ψ, ϕ1) + f(ψ, ϕ2³)]
= [(ψ, ϕ1) + (ψ, fϕ2³)]
= [(ψ, ϕ1) + (ψ, ϕ2)],

and so

[(ψ̃, ϕ̃1)] +₁₂ [(ψ̃, ϕ̃2)] = [(ψ, ϕ1)] +₁₂ [(ψ, ϕ2)].

– 142 –

(ii) Assume (ψ̃, ϕ̃1) = (ψ, ϕ1¹) + (ψ, ϕ1²), where ϕ1 = ϕ1¹ + ϕ1².

(a) If (ψ̃, ϕ̃2) = (ψ, ϕ2), then we have essentially the same as (i)(b), so we won’t re-write it here.

(b) If (ψ̃, ϕ̃2) = (ψ, ϕ2¹) + (ψ, ϕ2²), where ϕ2 = ϕ2¹ + ϕ2², we have

[(ψ̃, ϕ̃1) + (ψ̃, ϕ̃2)] = [(ψ, ϕ1¹) + (ψ, ϕ1²) + (ψ, ϕ2¹) + (ψ, ϕ2²)]
= [(ψ, ϕ1¹ + ϕ1² + ϕ2¹ + ϕ2²)]
= [(ψ, ϕ1 + ϕ2)]
= [(ψ, ϕ1) + (ψ, ϕ2)],

and so

[(ψ̃, ϕ̃1)] +₁₂ [(ψ̃, ϕ̃2)] = [(ψ, ϕ1)] +₁₂ [(ψ, ϕ2)].

(c) If (ψ̃, ϕ̃2) = f(ψ, ϕ2³), where ϕ2 = fϕ2³, then we have

[(ψ̃, ϕ̃1) + (ψ̃, ϕ̃2)] = [(ψ, ϕ1¹) + (ψ, ϕ1²) + f(ψ, ϕ2³)]
= [(ψ, ϕ1¹ + ϕ1² + fϕ2³)]
= [(ψ, ϕ1 + ϕ2)]
= [(ψ, ϕ1) + (ψ, ϕ2)],

and so

[(ψ̃, ϕ̃1)] +₁₂ [(ψ̃, ϕ̃2)] = [(ψ, ϕ1)] +₁₂ [(ψ, ϕ2)].

(iii) Assume (ψ̃, ϕ̃1) = g(ψ, ϕ1³), where ϕ1 = gϕ1³.

(a) If (ψ̃, ϕ̃2) = (ψ, ϕ2), then we have basically the same as (i)(c) and so we won’t write it again.

(b) If (ψ̃, ϕ̃2) = (ψ, ϕ2¹) + (ψ, ϕ2²), where ϕ2 = ϕ2¹ + ϕ2², then it is basically the same as (ii)(c) and so we won’t write it again.

(c) If (ψ̃, ϕ̃2) = f(ψ, ϕ2³), where ϕ2 = fϕ2³, then

[(ψ̃, ϕ̃1) + (ψ̃, ϕ̃2)] = [g(ψ, ϕ1³) + f(ψ, ϕ2³)]
= [(ψ, gϕ1³ + fϕ2³)]
= [(ψ, ϕ1 + ϕ2)]
= [(ψ, ϕ1) + (ψ, ϕ2)],

and so

[(ψ̃, ϕ̃1)] +₁₂ [(ψ̃, ϕ̃2)] = [(ψ, ϕ1)] +₁₂ [(ψ, ϕ2)].
Remark 14.7 . We can do exactly the same thing but for a map that has the first element
different and the second element the same.
– 143 –
Definition. Let H1 and H2 be complex Hilbert spaces with sesqui-linear inner products
h·, ·iH1 and h·, ·iH2 , respectively. Then the composite Hilbert space is the Hilbert space with
set
H1 ⊗ H2 := F (H1 × H2 )/∼ ,
where the overline indicates the topological closure, and with sesqui-linear inner product:
for ψ1 , ψ2 ∈ H1 and ϕ1 , ϕ2 ∈ H2 ,
[(ψ1 , ϕ1 )], [(ψ2 , ϕ2 )] H1 ⊗H2 := hψ1 , ψ2 iH1 · hϕ1 , ϕ2 iH2 ,
extended by linearity, with respect to which the closure is taken (i.e. the topology is derived
from here).
Remark 14.8 . Note, we need to take the topological closure as the free vector space only
considers finite linear combinations, but our Hilbert spaces could be infinite dimensional.
for z_i ∈ C.

(iii) Positive-definiteness. As ⟨·, ·⟩_{H1}, ⟨·, ·⟩_{H2} ≥ 0, it follows that²⁹ ⟨−, −⟩_{H1⊗H2} ≥ 0. Then from

(0_{H1}, ϕ) = (0 · ψ, ϕ) ∼ 0 · (ψ, ϕ) ∼ (ψ, 0 · ϕ) = (ψ, 0_{H2}),

we have

[(0_{H1}, ϕ)] = [(ψ, 0_{H2})] =: 0_{H1⊗H2}.

Finally, from

0 = ⟨[(ψ, ϕ)], [(ψ, ϕ)]⟩_{H1⊗H2} := ⟨ψ, ψ⟩_{H1} · ⟨ϕ, ϕ⟩_{H2},

it follows that ψ = 0_{H1} and/or ϕ = 0_{H2}, and so [(ψ, ϕ)] = 0_{H1⊗H2}.
29
We shall use ‘−’ for empty slots on the composite space.
– 144 –
We also need to check that the sesqui-linear inner product is well defined.
Proof. The proof follows a similar method to the proof of Proposition 14.6. We shall just
show the first two results here in order to save space.
(i) Assume (ψ̃1, ϕ̃1) = (ψ1, ϕ1).
Definition. For ψ ∈ H1 and ϕ ∈ H2 we define

ψ ⊙ ϕ := [(ψ, ϕ)].
Here we have used ⊙ for the tensor product of two vectors. We have done this in order to highlight the fact that it is not the same thing as ⊗, which is the tensor product between vector spaces. We will, however, end up using ⊗ for all tensor products later, as this is the common notation. It is important to remember that they are distinctly different objects, and, if in doubt, we should go back to the definitions to clarify the circumstance.

In this new notation we can rewrite the definition of the sesqui-linear inner product simply as

⟨ψ ⊙ ϕ, ψ̃ ⊙ ϕ̃⟩_{H1⊗H2} := ⟨ψ, ψ̃⟩_{H1} ⟨ϕ, ϕ̃⟩_{H2},

extended by linearity.
Example 14.9 . This example acts as a further warning that it is important that we consider the space F(H1 × H2) and not just H1 × H2.

Let H1 = H2 = C². Then we can express the elements as 2×1 matrices, in which case we can take ⊙ to be the outer product. Note then that

( 1 )   ( 0 )   ( 0 )   ( 1 )   ( 0 1 )   ( 0 0 )   ( 0  1 )
( 0 ) ⊙ ( 1 ) − ( 1 ) ⊙ ( 0 ) = ( 0 0 ) − ( 1 0 ) = ( −1 0 ),

which, having rank 2, cannot itself be written as a single outer product ψ ⊙ ϕ.
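The computation above can be reproduced with numpy, where the matrix rank certifies the claim: a simple tensor ψ ⊙ ϕ corresponds to a rank-1 matrix, while the antisymmetric combination has rank 2. A sketch assuming numpy is available:

```python
import numpy as np

# With H1 = H2 = C^2 and the vector tensor product realised as the outer
# product, a simple tensor is a rank-1 matrix; the antisymmetric combination
# below has rank 2, so it is NOT a simple tensor.
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

antisym = np.outer(e0, e1) - np.outer(e1, e0)   # [[0, 1], [-1, 0]]
assert np.array_equal(antisym, np.array([[0.0, 1.0], [-1.0, 0.0]]))
assert np.linalg.matrix_rank(antisym) == 2      # rank 2: not an outer product
assert np.linalg.matrix_rank(np.outer(e0, e1)) == 1
```

In general, the rank of the coefficient matrix (the Schmidt rank) counts the minimal number of simple tensors needed, which is why linear combinations genuinely enlarge the space.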
– 145 –
Theorem 14.10. Let {e_i}_{i=1,...,dim(H1)} and {f_j}_{j=1,...,dim(H2)} be Schauder (ON-)bases for H1 and H2 respectively. Then we can construct a Schauder (ON-)basis for H1 ⊗ H2 as

{e_i ⊙ f_j}, i = 1, ..., dim(H1), j = 1, ..., dim(H2).
Remark 14.12 . Note, obviously, that the order matters when taking a tensor product. In other words, in general

ψ ⊙ ϕ ≠ ϕ ⊙ ψ.

Note, it’s not even a case of ‘choosing the right ψ and ϕ’, as the LHS is an element of H1 ⊗ H2 whereas the RHS is an element of H2 ⊗ H1. So, unless the two spaces are the same, they could never be equal.
Definition. Given operators A : H1 → H1 and B : H2 → H2, we define their tensor product

A ⊗̂ B : H1 ⊗ H2 → H1 ⊗ H2
ψ ⊙ ϕ ↦ (A ⊗̂ B)(ψ ⊙ ϕ) := (Aψ) ⊙ (Bϕ),

extended by linearity.
– 146 –
Proof. We have A = A* and B = B*, i.e. their domains coincide and Aψ = A*ψ for all ψ ∈ D_A, and similarly for B and B*. Then we have

A ⊗̂ B : D_A ⊗ D_B → H1 ⊗ H2
ψ ⊙ ϕ ↦ (Aψ) ⊙ (Bϕ),

and

A* ⊗̂ B* : D_A ⊗ D_B → H1 ⊗ H2
ψ ⊙ ϕ ↦ (A*ψ) ⊙ (B*ϕ) = (Aψ) ⊙ (Bϕ),

and so the domains coincide and the two maps agree on all ψ ⊙ ϕ ∈ D_A ⊗ D_B. So it is self adjoint.
– 147 –
Definition. Let ψ, ϕ ∈ H, then we define their symmetric tensor product as

ψ ∨ ϕ := (1/2)(ψ ⊙ ϕ + ϕ ⊙ ψ),

which is an element of the symmetric composite Hilbert space, defined as

H ∨ H := { Σ_{i,j=1}^{dim(H)} a_ij e_i ∨ e_j | a_ij ∈ C, Σ_{i,j} |a_ij|² < ∞ }.

Remark 14.15 . Note it follows from the definition that a_ij = a_ji for the symmetric composite Hilbert space.

Similarly, the antisymmetric tensor product ψ ∧ ϕ := (1/2)(ψ ⊙ ϕ − ϕ ⊙ ψ) is an element of the antisymmetric composite Hilbert space H ∧ H, defined analogously.

Remark 14.16 . Note it follows from the definition that a_ij = −a_ji for the antisymmetric composite Hilbert space.

Note that

ψ ∧ ϕ = (1/2)(ψ ⊙ ϕ − ϕ ⊙ ψ)

in general cannot be written as ψ̃ ⊙ ϕ̃ for any ψ̃, ϕ̃ ∈ H, which again emphasises that it is important we consider the space of all linear combinations.
A ∨̂ B : H ∨ H → H ∨ H
ψ ∨ ϕ ↦ (A ∨̂ B)(ψ ∨ ϕ) := (Aψ) ∨ (Bϕ).

A ∧̂ B : H ∧ H → H ∧ H
ψ ∧ ϕ ↦ (A ∧̂ B)(ψ ∧ ϕ) := (Aψ) ∧ (Bϕ).
– 148 –
14.5 Collapse of Notation
As mentioned before, we shall now change our notation to that of the standard literature.
That is,

⊙, ⊗̂ → ⊗,
∨, ∨̂ → ∨,
∧, ∧̂ → ∧.
14.6 Entanglement
As has been stressed many times, recall
{ψ ⊗ ϕ | ψ ∈ H1 , ϕ ∈ H2 } ⊊ H1 ⊗ H2 .
ρΨ = ρψ ⊗ ρϕ .
– 149 –
where in the last two lines the ⊗ is the tensor product between linear maps.
The reverse part of the proof (starting from ρΨ non-entangled) follows from working
backwards through the above.
Proof. This proof is trivial given the previous one, as if Ψ is non-simple then ρΨ cannot be
non-entangled, and vice versa.
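The entanglement criterion can be probed numerically for two qubits: the reduced state of a simple tensor is pure, while the Bell state's reduced state is maximally mixed, so its ρΨ cannot factorise as ρψ ⊗ ρϕ. A sketch assuming numpy is available; the partial-trace helper is written for the C² ⊗ C² case only:

```python
import numpy as np

# For a simple tensor the reduced state is pure (purity 1); for the Bell
# state it is maximally mixed (purity 1/2), so the Bell state is entangled.
def reduced_first(Psi):
    """Partial trace over the second C^2 factor of Psi in C^2 (x) C^2."""
    C = Psi.reshape(2, 2)        # coefficient matrix: Psi = sum c_ij e_i (x) e_j
    return C @ C.conj().T

simple = np.kron([1.0, 0.0], [0.0, 1.0])             # psi (x) phi
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (e0(x)e0 + e1(x)e1)/sqrt(2)

rho_simple = reduced_first(simple)
rho_bell = reduced_first(bell)

# purity tr(rho^2): 1 for a pure reduced state, 1/2 for the Bell state
assert np.isclose(np.trace(rho_simple @ rho_simple).real, 1.0)
assert np.isclose(np.trace(rho_bell @ rho_bell).real, 0.5)
```

The purity tr(ρ²) is a convenient scalar witness here: ρΨ factorises exactly when the reduced state of a unit vector Ψ is a rank-1 projector.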
– 150 –
15 Total Spin of Composite System
This lecture aims to answer the following question: "What is the total angular momentum
(or spin) of a bi-partite system if we know the spin of each constituent system?"
More precisely, in the context of quantum mechanics, consider a spin-jA system with
Hilbert space HA and angular momentum operators A1 , A2 , A3 and a spin-jB system with
Hilbert space HB and angular momentum operators B1 , B2 , B3 . Then what is the spin of
the composite system with Hilbert space HA ⊗ HB and how do we construct the angular
momentum operators for this composite system?
Proposition 15.1. The operators Ai ⊗ idHB for i = 1, 2, 3, satisfy the spin commutation
relations. Similarly for idHA ⊗Bi .
Proof. We shall use the general expression involving the Levi-Civita symbol. Consider the
action on a general element α ⊗ β ∈ HA ⊗ HB ,
[Ai ⊗ idHB , Aj ⊗ idHB ](α ⊗ β) := (Ai ⊗ idHB ) (Aj α) ⊗ β − (Aj ⊗ idHB ) (Ai α) ⊗ β
= Ai (Aj α) ⊗ β − Aj (Ai α) ⊗ β
= (Ai Aj − Aj Ai )α ⊗ β
= [Ai , Aj ]α ⊗ β
= iijk (Ak α) ⊗ β
= iijk (Ak ⊗ idHB )(α ⊗ β),
which, because α ⊗ β was arbitrary (or equivalently by the linearity of the operators), holds for any element of HA ⊗ HB.
The method is identical for the idHA ⊗Bi case.
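A quick numeric confirmation for two spin-1/2 factors, assuming numpy is available; the eps helper is a hypothetical convenience formula for the Levi-Civita symbol on indices 0, 1, 2:

```python
import numpy as np

# Check that A_i (x) id obey the same commutation relations as the A_i alone,
# and that operators acting on different factors commute.
s = [np.array([[0, 1], [1, 0]], dtype=complex) / 2,
     np.array([[0, -1j], [1j, 0]]) / 2,
     np.array([[1, 0], [0, -1]], dtype=complex) / 2]
I = np.eye(2)
eps = lambda i, j, k: int((i - j) * (j - k) * (k - i) / 2)

big = [np.kron(si, I) for si in s]             # A_i (x) id_{H_B}
for i in range(3):
    for j in range(3):
        lhs = big[i] @ big[j] - big[j] @ big[i]
        rhs = sum(1j * eps(i, j, k) * big[k] for k in range(3))
        assert np.allclose(lhs, rhs)

other = [np.kron(I, si) for si in s]           # id_{H_A} (x) B_i
assert np.allclose(big[0] @ other[1], other[1] @ big[0])
```

The last assertion is the finite-dimensional version of the fact used below: mixed commutators like [A_i ⊗ 1, 1 ⊗ B_j] vanish.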
Now before moving on recall (page 137) that we have an ON-eigenbasis31 for each
constituent system. That is if A2 is the Casimir operator for the spin-jA system then we
have the ON-eigenbasis
{α_{jA}^{mA}}_{mA = −jA, ..., jA}

with

A² α_{jA}^{mA} = jA(jA + 1) α_{jA}^{mA}.

Similarly we have B² and {β_{jB}^{mB}}, mB = −jB, ..., jB.
In everything that follows it is important to note that jA and jB are fixed. This
condition shall come in use later.
31
ON here stands for orthonormal.
– 151 –
Definition. We define the self adjoint angular momentum operators J1 , J2 , J3 on the com-
posite space HA ⊗ HB as
Ji := Ai ⊗ 1 + 1 ⊗ Bi ,
and we call them the total spin operators.
Proof. (that they obey the spin commutation relations)
Consider

[J_i, J_j] = [A_i ⊗ 1, A_j ⊗ 1] + [A_i ⊗ 1, 1 ⊗ B_j] + [1 ⊗ B_i, A_j ⊗ 1] + [1 ⊗ B_i, 1 ⊗ B_j]
= [A_i ⊗ 1, A_j ⊗ 1] + [1 ⊗ B_i, 1 ⊗ B_j]
= i ε_{ijk} (A_k ⊗ 1 + 1 ⊗ B_k)
= i ε_{ijk} J_k,

where the mixed brackets vanish since A_i ⊗ 1 and 1 ⊗ B_j act on different tensor factors;
this, together with the antisymmetry of the commutator bracket in its entries and
Proposition 15.1, gives the result.
Definition. We define the Casimir operator for the composite system as always,

J² := Σ_{i=1}^{3} J_i ∘ J_i,

and likewise the ladder operators

J_± := A_± ⊗ 1 + 1 ⊗ B_±.
We now want to find the common eigenvalues of J² and one of the total spin operators,
J3 say. We will show the following results:

(1) the J²-eigenvalues are of the form j(j + 1), where j runs from |jA − jB| to jA + jB
in integer steps;
(2) the J3-eigenvalues are m = −(jA + jB), ..., jA + jB.

Remark 15.3. Note that from Theorem 14.14 we can already obtain the second of these two
results. That is,
σ(J3 ) := σ(A3 ⊗ 1 + 1 ⊗ B3 )
= σ(A3 ) + σ(B3 )
= {−jA , ..., jA } + {−jB , ..., jB }
= {−(jA + jB ), ..., jA + jB }.
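This spectrum addition can be illustrated with explicit matrices; a toy check (not from the lecture) for jA = ½ and jB = 1, with A3 and B3 written in their own eigenbases:

```python
import numpy as np

# A_3 for spin-1/2 and B_3 for spin-1, written in their eigenbases (hbar = 1)
A3 = np.diag([0.5, -0.5])
B3 = np.diag([1.0, 0.0, -1.0])

# J_3 = A_3 ⊗ 1 + 1 ⊗ B_3 on the 6-dimensional tensor product
J3 = np.kron(A3, np.eye(3)) + np.kron(np.eye(2), B3)

# the set of eigenvalues is exactly {-(j_A + j_B), ..., j_A + j_B}
spectrum = sorted({float(v) for v in np.diag(J3)})
print(spectrum)  # [-1.5, -0.5, 0.5, 1.5]
```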
– 152 –
It also follows from the definition of the composite inner product that it is an ON-eigenbasis.
That is,

⟨α^{mA}_{jA} ⊗ β^{mB}_{jB}, α^{m'A}_{j'A} ⊗ β^{m'B}_{j'B}⟩₁₂ := ⟨α^{mA}_{jA}, α^{m'A}_{j'A}⟩₁ ⟨β^{mB}_{jB}, β^{m'B}_{j'B}⟩₂
= δ_{jA j'A} δ_{mA m'A} δ_{jB j'B} δ_{mB m'B}.
[A2 ⊗ 1, Ji ] := [A2 ⊗ 1, Ai ⊗ 1 + 1 ⊗ Bi ]
= [A2 ⊗ 1, Ai ⊗ 1] + [A2 ⊗ 1, 1 ⊗ Bi ]
= 0,
as each bracket vanishes. Similarly for 1 ⊗ B 2 . We also have, using Corollary 11.11 that
[J 2 , A2 ⊗ 1] = 0 = [J 2 , 1 ⊗ B 2 ].
We can therefore consider eigenvectors of J3 that are not only common to J² but also
to A² ⊗ 1 and 1 ⊗ B², and so we have a simultaneous eigenbasis {ξ^{m}_{j,jA,jB}} which satisfies

J² ξ^{m}_{j,jA,jB} = j(j + 1) ξ^{m}_{j,jA,jB}
J3 ξ^{m}_{j,jA,jB} = m ξ^{m}_{j,jA,jB}
(A² ⊗ 1) ξ^{m}_{j,jA,jB} = jA(jA + 1) ξ^{m}_{j,jA,jB}
(1 ⊗ B²) ξ^{m}_{j,jA,jB} = jB(jB + 1) ξ^{m}_{j,jA,jB}.
Now since we already have the ON-eigenbasis {α^{mA}_{jA} ⊗ β^{mB}_{jB}} for A² ⊗ 1 and 1 ⊗ B², it
follows (by the definition of a basis) that this new basis can be expanded as32

ξ^{m}_{j,jA,jB} = Σ_{mA=−jA}^{jA} Σ_{mB=−jB}^{jB} ⟨α^{mA}_{jA} ⊗ β^{mB}_{jB}, ξ^{m}_{j,jA,jB}⟩ α^{mA}_{jA} ⊗ β^{mB}_{jB}.

The expansion coefficients

C^{m,mA,mB}_{j,jA,jB} := ⟨α^{mA}_{jA} ⊗ β^{mB}_{jB}, ξ^{m}_{j,jA,jB}⟩

are the Clebsch-Gordan coefficients.
– 153 –
Remark 15.4. The Clebsch-Gordan coefficients are just complex numbers. Although they
might be rather difficult to calculate in practice, the method should now be clear: all we
need to do is calculate the CGcs, and then we instantly have our new eigenbasis, and so we
get the spectra of J² and J3.

As just noted, they are pretty hard to calculate directly, stemming from the fact that
ξ^{m}_{j,jA,jB} appears both on the LHS and within the inner product; however, we can do it
indirectly. This forms the remainder of this lecture.
The strategy is as follows: start from some convenient eigenvector ξ^{j}_{j,jA,jB} and its
associated CGcs, then use the ladder operators to obtain the eigenvector ξ^{j±1}_{j,jA,jB} and the
resulting CGcs. We will then change the value of j itself and repeat the process. In this
manner we will build up a table of CGcs.

[Diagram: a cycle: apply the ladder operators, read off the Clebsch-Gordan coefficients,
change the j value, repeat.]
15.4 Value of m
Consider the action of J3 on both bases,

J3 ξ^{m}_{j,jA,jB} = m ξ^{m}_{j,jA,jB}
J3 (α^{mA}_{jA} ⊗ β^{mB}_{jB}) = (mA + mB)(α^{mA}_{jA} ⊗ β^{mB}_{jB}).

Then, from the fact that the CGcs are simply complex numbers and the fact that J3 is
linear, it follows from the expansion equation that we require

m = mA + mB,

where we have left the ranges of mA/mB out, but they are of course just −jA, ..., jA and
−jB, ..., jB.
– 154 –
15.5 Clebsch-Gordan Coefficients for Maximal j
We are now in a position to choose our convenient initial eigenvector. It follows from the
ranges of mA and mB, along with the conditions m = mA + mB and m = −j, ..., j, that the
maximum value j can take is jA + jB. It follows then that

ξ^{jA+jB}_{jA+jB, jA, jB} = C^{jA+jB, jA, jB}_{jA+jB, jA, jB} (α^{jA}_{jA} ⊗ β^{jB}_{jB}),

and so the two eigenvectors vary only by a complex phase. However, seeing as we are only
interested in eigenvalues here, and an overall phase has no effect on the eigenvalue, we
are free to set this phase however we like. We choose it such that

C^{jA+jB, jA, jB}_{jA+jB, jA, jB} = 1.
We can now start applying the ladder operators to lower the value of m = jA + jB.
Using Proposition 13.24 we have

J₋ ξ^{jA+jB}_{jA+jB, jA, jB} = √((jA+jB)(jA+jB+1) − (jA+jB)(jA+jB−1)) ξ^{jA+jB−1}_{jA+jB, jA, jB}
= √(2(jA+jB)) ξ^{jA+jB−1}_{jA+jB, jA, jB},

while on the other basis,

J₋ (α^{jA}_{jA} ⊗ β^{jB}_{jB}) = (A₋ ⊗ 1 + 1 ⊗ B₋)(α^{jA}_{jA} ⊗ β^{jB}_{jB})
= √(2jA) (α^{jA−1}_{jA} ⊗ β^{jB}_{jB}) + √(2jB) (α^{jA}_{jA} ⊗ β^{jB−1}_{jB}).
– 155 –
would not be able to get m = −j, ..., j. That is, the next CGcs are of level C^{jA+jB−1, −, −}_{jA+jB−1, jA, jB}.
Then using the fact that there are only two ways to obtain this (either mA → mA − 1 or
mB → mB − 1) we have

ξ^{jA+jB−1}_{jA+jB−1, jA, jB} = C^{jA+jB−1, jA−1, jB}_{jA+jB−1, jA, jB} (α^{jA−1}_{jA} ⊗ β^{jB}_{jB}) + C^{jA+jB−1, jA, jB−1}_{jA+jB−1, jA, jB} (α^{jA}_{jA} ⊗ β^{jB−1}_{jB}).
We then use the fact that the eigenvectors in this equation are all orthonormal to obtain

1 = |C^{jA+jB−1, jA−1, jB}_{jA+jB−1, jA, jB}|² + |C^{jA+jB−1, jA, jB−1}_{jA+jB−1, jA, jB}|²,
and we also use the fact that the RHS eigenvectors here are the same as in the J₋ case
above, while the LHS eigenvectors are necessarily orthogonal, to give

C^{jA+jB−1, jA−1, jB}_{jA+jB, jA, jB} · C^{jA+jB−1, jA−1, jB}_{jA+jB−1, jA, jB} = −C^{jA+jB−1, jA, jB−1}_{jA+jB, jA, jB} · C^{jA+jB−1, jA, jB−1}_{jA+jB−1, jA, jB}
√(jA/(jA+jB)) · C^{jA+jB−1, jA−1, jB}_{jA+jB−1, jA, jB} = −√(jB/(jA+jB)) · C^{jA+jB−1, jA, jB−1}_{jA+jB−1, jA, jB}
C^{jA+jB−1, jA−1, jB}_{jA+jB−1, jA, jB} = −√(jB/jA) · C^{jA+jB−1, jA, jB−1}_{jA+jB−1, jA, jB}.
Solving simultaneously,

(jB/jA + 1) |C^{jA+jB−1, jA, jB−1}_{jA+jB−1, jA, jB}|² = 1
⟹ C^{jA+jB−1, jA, jB−1}_{jA+jB−1, jA, jB} = √(jA/(jA+jB))
⟹ C^{jA+jB−1, jA−1, jB}_{jA+jB−1, jA, jB} = −√(jB/(jA+jB)).
We can then apply the J₋ operator as before to move down this column. We can
repeat this process of lowering j again to obtain the third column, and iterate until we
reach j = |jA − jB|, where the process must terminate; that this is the termination point
follows quickly from m = −j, ..., j along with m = mA + mB. On the next page I have
included a table (from David J. Griffiths' QM book) of some calculated CGcs. As we can
see... they're not pretty things.
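For two spin-½ constituents everything above can be verified with 4×4 matrices. The following sketch is an illustration (not from the lecture): with jA = jB = ½, so j ∈ {0, 1}, it checks that the J²-eigenvalues are j(j + 1) and that the level-(jA + jB − 1) coefficients just derived, ±√(1/2), produce the familiar singlet state.

```python
import numpy as np

# spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

# total spin J_i = A_i ⊗ 1 + 1 ⊗ B_i and the Casimir J^2
J = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
J2 = sum(Ji @ Ji for Ji in J)

# eigenvalues j(j+1): one j = 0 (singlet) and three j = 1 (triplet) states
print(np.allclose(sorted(np.linalg.eigvalsh(J2)), [0, 2, 2, 2]))  # True

# derived CGcs for j = j_A + j_B - 1 = 0: sqrt(1/2) on |up,down> and
# -sqrt(1/2) on |down,up>, i.e. the singlet (basis order: uu, ud, du, dd)
xi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
print(np.allclose(J2 @ xi, np.zeros(4)))  # True: J^2-eigenvalue 0
```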
– 156 –
– 157 –
16 Quantum Harmonic Oscillator
As has been remarked, the world and everything in it are quantum by nature. There is
no ‘classical’ ball which we make quantum, there is a quantum ball that we approximate
classically. Equally there isn’t a ‘classical’ harmonic oscillator which we use to construct
the quantum one. We will thus not entertain any type of so called ‘quantisation’ idea —
that of starting from the classical system and somehow transforming it into the quantum
counterpart. We shall demonstrate explicitly in the first section why this is not a good
idea, but a quick argument explains it.
Imagine you have some general theory. Of course you can obtain any special theory
related to it by taking approximations/constraints; however, you have no real hope of doing
the opposite: you should not expect to be able to obtain the general theory by 'unapproxi-
mating' the special one. Quantum mechanics is the general theory, with classical mechanics
the special one. It is therefore a ridiculous idea to try to obtain quantum theory this
way.
h(p, q) = (1/2m) p² + (mω²/2) q²   ⇝   H := h(P, Q) = (1/2m) P ∘ P + (mω²/2) Q ∘ Q
It follows from
P, Q : S(R) → S(R),
with
(P ψ)(x) := −i~ψ 0 (x), (Qψ)(x) := xψ(x),
that
H : S(R) → S(R),
with
(Hψ)(x) := −(ℏ²/2m) ψ''(x) + (mω²/2) x² ψ(x).
– 158 –
This often appears as

H := −(ℏ²/2m) d²/dx² + (mω²/2) x²,

or for a more general case (i.e. not a harmonic oscillator) as

H := −(ℏ²/2m) d²/dx² + V(x),
where V (x) is the potential associated to the system.
This all looks very nice, and indeed it is correct, however there is a serious problem
here. Classically we could add
(pq − qp)g(p, q),
for some other observable of the system g(p, q) without changing anything, as the bracket
vanishes. That is
f (p, q) = f (p, q) + (pq − qp)g(p, q).
However if we then applied the '⇝' approach to this we would get

f(P, Q) ⇝ f(P, Q) + (P ∘ Q − Q ∘ P) g(P, Q) = f(P, Q) − iℏ g(P, Q),

which is obviously not true for general g(P, Q). So it appears that even in these simple cases
where ‘there is no danger’, there is a serious theoretical problem. For this reason we shall
just not do this at all, and instead simply define what we mean by the energy observable
(or Hamiltonian) of our system and proceed from there.
– 159 –
16.3 The Energy Spectrum
Recall that the spectrum of a self adjoint operator decomposes into its point spectrum
σp(H) and its continuous spectrum σc(H). The aim of this lecture is to calculate σp(H)
and show that σc(H) = ∅.
Definition. Consider the operators Q, P : S(R) → S(R). Then define a± : S(R) → S(R)
via

a± := √(mω/2ℏ) Q ∓ (i/√(2ℏmω)) P.
Corollary 16.1. We can re-express the Hamiltonian as

H = ℏω (a₊ a₋ + ½ id_{S(R)}).

Proof. Writing α := √(mω/2ℏ) and β := 1/√(2ℏmω), we have

ℏω (a₊ a₋ + ½ id_{S(R)}) = ℏω ((αQ − iβP)(αQ + iβP) + ½ id_{S(R)})
= ℏω (α² Q ∘ Q + β² P ∘ P + iαβ(QP − PQ) + ½ id_{S(R)})
= ℏω (α² Q ∘ Q + β² P ∘ P + iαβ[Q, P] + ½ id_{S(R)})
= ℏω ((mω/2ℏ) Q ∘ Q + (1/2ℏmω) P ∘ P + i(1/2ℏ)(iℏ) id_{S(R)} + ½ id_{S(R)})
= (1/2m) P ∘ P + (mω²/2) Q ∘ Q
= H,

where we have used [Q, P] = iℏ id_{S(R)}.
Proof. They all follow from direct substitution, using H as written in the previous Corollary.
– 160 –
(i) [a₋, a₊] = id_{S(R)}, which follows by direct computation from [Q, P] = iℏ id_{S(R)}.

(ii)

[H, a₊] = [ℏω(a₊ a₋ + ½ id_{S(R)}), a₊]
= ℏω ([a₊ a₋, a₊] + ½ [id_{S(R)}, a₊])
= ℏω (a₊ [a₋, a₊] + [a₊, a₊] a₋)
= ℏω a₊ ∘ id_{S(R)}
= ℏω a₊.
Remark 16.3 . Strictly speaking in the previous proof we should have considered the action
of the commutator on an element of S(R) and showed that the expressions hold for an
arbitrary element. Doing it this way will return the same results, however this will not
always be true, and so care must be taken in future.
There are four more basic facts that allow us to obtain the spectrum in its entirety.
We claim that, for an H-eigenvector ψ with eigenvalue E, the following hold:

(i) a±ψ satisfies the eigenvalue equation H(a±ψ) = (E ± ℏω)(a±ψ);

(ii) a₊ψ is never the zero vector;

(iii) a₋ψ is either the zero vector or again an H-eigenvector, with eigenvalue E − ℏω;

(iv) E ≥ ½ℏω.
– 161 –
(ii) Given that (a₊)* = a₋ and vice versa,33

‖a₊ψ‖² = ⟨a₊ψ|a₊ψ⟩ = ⟨ψ|a₋a₊ψ⟩ = ⟨ψ|(a₊a₋ + id_{S(R)})ψ⟩ = ‖a₋ψ‖² + ‖ψ‖² > 0,

where we used the fact that the inner product is non-negative definite in the second-to-last
step (i.e. the first term is non-negative). The result follows from taking the square
root and imposing the condition that the norm is non-negative.
(iv) Consider

E⟨ψ|ψ⟩ = ⟨ψ|Eψ⟩
= ⟨ψ|Hψ⟩
= ℏω (⟨ψ|a₊a₋ψ⟩ + ½⟨ψ|ψ⟩)
= ℏω (⟨a₋ψ|a₋ψ⟩ + ½⟨ψ|ψ⟩)
≥ (ℏω/2) ⟨ψ|ψ⟩.
Then from the fact that ψ is an eigenvector (and so cannot be the zero vector), the
inner product is non-vanishing and we can divide through by it, giving the result.
We can, thus, draw some conclusions. For any H-eigenvector, ψ, with eigenvalue E we
have:
1. From (i) and (ii) it follows that a₊ψ is an eigenvector, as (i) tells us it obeys the
eigenvalue equation and (ii) tells us it is not the zero vector. Thus we know that the
sequence

{(a₊)ⁿψ}_{n∈N₀},

where the power indicates n-th order composition of operators, is a sequence of eigen-
vectors with corresponding eigenvalues

{E + nℏω}_{n∈N₀}.
33
To show this you need to consider the definition of the adjoint and work from there, as you don’t know
that it will distribute across the addition in the definitions.
– 162 –
2. (iii) and (iv) tell us that the sequence of eigenvectors

{(a₋)ⁿψ}_{n∈N₀}

must terminate for some n = N ∈ N. That is, we cannot keep lowering the
eigenvalue E forever, as (iv) says it is bounded from below. Note this tells us that
a₋ψ is not strictly an H-eigenvector (just as J± weren't for Ω and J3).

In other words there is a non-vanishing ψ₀ ∈ S(R) defined as

ψ₀ := (a₋)^N ψ

such that a₋ψ₀ = 0_{S(R)}. It follows, then, from the definition of the Hamiltonian that

Hψ₀ = ℏω a₊a₋ψ₀ + (ℏω/2) ψ₀
= (ℏω/2) ψ₀,

and so it has the lowest possible eigenvalue, by (iv).
– 163 –
Corollary 16.4. From 4. we note that (up to the usual ambiguity of a complex multi-
ple) there is only one eigenvector to each eigenvalue. That is we have the 1-dimensional
eigenspace
EigH (En ) = spanC (ψn ),
which tells us not only that ψ0 exists in the first place, but that it is unique.
Remark 16.5. At the end of the last corollary we said that we confirmed the existence of
ψ₀ in the first place. This might seem like a strange comment given the whole calculation;
however, it is actually rather important. To illustrate why, Dr. Schuller mentions a doctoral
proposal he once saw in which the student had derived some truly impressive formulae,
only for someone to point out that towards the start of his calculation one of his objects
was in fact 0, and so everything that followed could have just been a repercussion of that
(i.e. 0 · n = 0 for any n in your space). It is therefore important to check that the things
you are using actually exist; in this case ψ₀ doesn't vanish and so is an eigenvector.
{ψn | n ∈ N0 }
is an ON-eigenbasis for L2 (R), which leads us to the theorem promised at the start of the
lecture.
Theorem 16.6. If a symmetric operator has as its eigenvectors an ON-basis, the operator
is guaranteed to be essentially self adjoint.
This theorem tells us that H is essentially self adjoint, and the fact that we have an
ON-eigenbasis for L2 (R) tells us that the continuous spectrum is empty.
– 164 –
17 Measurements of Observables
So far we have discussed the spectrum of an observable, which tells us all the possible
measurement outcomes, but tells us nothing about the actual act of taking a measurement
itself. This comes through axioms 3 and 5. In order to illustrate these two axioms we will
repeatedly use the quantum harmonic oscillator as an example, but it is important to note
the methods are not specific to this case. Any restrictions required for the methods to hold
will be clearly stated.
This lecture can be read in two ways. One could read sections 4 and 5 first (on how you
prepare a given state) and then return to read sections 1-3 (on how you take measurements
of this state); or one could simply read it as presented (i.e. 1-5). Both reading orders
have their advantages, but we present it here in the order taught by Dr. Schuller. In
correspondence with the lecture given, we shall also translate some of the notation into
the commonly used bra-ket notation (see lecture 4), even though we do not use it in this
course. All these expressions shall appear in blue.
Hψn = En ψn,

with

En = ℏω (n + ½).

More precisely we derived

ψ₀(x) = √(mω/2ℏ) exp(−(mω/2ℏ) x²),

and

ψn(x) := An (a₊)ⁿ ψ₀(x) ∝ Hn(√(mω/ℏ) x) exp(−(mω/2ℏ) x²).

The only thing we will actually use in this lecture is the fact that {ψn} is an
ON-basis,

⟨ψn|ψm⟩ = δnm,

and the fact that the spectrum is given by

σ(H) = {ℏω (n + ½) | n ∈ N₀}.
The key to understanding measurement theory in quantum mechanics is knowing
the spectral decomposition of the observable(s) you want to measure. In order to obtain
the spectral decomposition of H we consider the projectors

Pn := ⟨ψn| · ⟩ ψn = |ψn⟩⟨ψn|.
– 165 –
Note that this operator is bounded, as ‖Pn ψ‖ = |⟨ψn|ψ⟩| ≤ ‖ψ‖ by the Cauchy-Schwarz
inequality. We can, therefore, employ the operator norm on L(H) to decide convergence
of the following sum, with the result

Σ_{n=0}^{∞} Pn = id_H.

For a Borel set Ω ⊆ R we then define

PH(Ω) := Σ_{En ∈ Ω} Pn,

i.e. the sum over those projectors Pn for which the energy eigenvalue corresponding to the
state34 ρψn is within your Borel set.
Remark 17.1 . From now on we shall drop the En ∈ Ω on the sum, to lighten the notation,
but it is important to remember that it belongs there whenever we use PH . It will prove
highly instrumental to the results that follow.
Example 17.2 . Let Ω = {Em }, i.e. just the set containing the single eigenvalue Em . Clearly
then
PH (Ω) = Pm .
Example 17.3. Similarly, if Ω = {Em, Ek} with m ≠ k, then

PH(Ω) = Pm + Pk.
Proposition 17.4. The map PH : σ(R) → L(H) is a projection valued measure, and in
fact corresponds to the projection valued measure that appears in the spectral theorem for
H. That is

H = ∫_R λ PH(dλ)
= Σ_{n=0}^{∞} En · Pn
= Σ_{n=0}^{∞} ℏω (n + ½) |ψn⟩⟨ψn|.
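In a finite truncation the proposition can be checked directly. A minimal sketch, assuming a 5-level toy model whose eigenbasis is the standard basis of C⁵ (which is what {ψn} looks like in its own coordinates):

```python
import numpy as np

hbar = omega = 1.0
N = 5
E = hbar * omega * (np.arange(N) + 0.5)  # the eigenvalues E_n

# ON-eigenbasis as the columns of the identity (toy model in eigen-coordinates)
psi = np.eye(N)
P = [np.outer(psi[:, n], psi[:, n]) for n in range(N)]  # P_n = |psi_n><psi_n|

print(np.allclose(sum(P), np.eye(N)))  # True: sum_n P_n = id
H = sum(En * Pn for En, Pn in zip(E, P))
print(np.allclose(H, np.diag(E)))      # True: H = sum_n E_n P_n
```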
Remark 17.5 . The above proposition makes sense. The Hamiltonian (the energy operator)
is given by the energy eigenvalues multiplied by a projector that projects the state into
a state whose energy eigenvalue was the prefactor. This is clearly just the eigenvector
equation.
34
Again recall ψn are not the states themselves, ρψn are
– 166 –
Remark 17.6. Note there was nothing special about the fact that we were considering
the Hamiltonian above. Indeed the same method holds for any observable you
wish to measure. First find an ON-basis of eigenvectors for your operator, A say, and then
define the PVM associated to the observable
PA : σ(R) → L(H)
in the same way and then plug it into the spectral theorem.
[Diagram: a measurement device labelled H, with a pointer over scale markings; input
state ρb, output state ρa.]

The H tells us that it is the device associated to the observable H, the scale markings
tell us the spectrum35, the arrow tells us the actual measurement made, ρb is the state
before the measurement and ρa is the state after the measurement.
Remark 17.7. Note that the pointer here will not move continuously between the notches;
it jumps between the values. In other words, it cannot point in between two notches, as
such a value would not be part of the spectrum.
– 167 –
(i) Trace-class: Tr(ρ) := Σ_n ⟨en|ρ(en)⟩ < ∞, where {en} is any ON-basis.

[Diagram: the measurement device labelled H again, with input state ρb and output
state ρa.]
Remark 17.8. We wish to emphasise this point again here. The spectrum of an observable
only tells you the possible measurements, and the results of last lecture give you information
on the probability of each possible measurement. This is where the probabilistic nature
enters into quantum mechanics. When a measurement is made, the result is concrete: you
get exactly that result. This in turn affects the state of your system, giving a (potentially)
new state. This is where the indeterminate nature of quantum mechanics enters.
That is, prior to the measurement you can only say with what probability you get
one of the possible final states, but once the measurement is made, it is exactly that one,
and which final state you get depends on which measurement result you get. This is the
quantum behaviour of the system.
As we shall see in section 5 there is another form of probability concerned with quantum
mechanics, but this probability does not stem from the quantum nature of the system itself.
It stems from the ‘ignorance’ of the experimenter/the equipment in order to be able to
distinguish which measurement was made. This results in what are known as mixed states.
– 168 –
for some ψ ∈ H.
Remark 17.9. Again we emphasise that people often refer to ψ as being the pure state itself.
This might still seem forgivable, but as mentioned previously this is in fact uncountably
infinitely incorrect, as we have a complex scaling ambiguity: for any λ ∈ C \ {0},

ρλψ = ρψ.

One might then say 'OK, just take the normalised ψ elements', but again this is still incorrect,
as multiplying by e^{iα} for α ∈ R would still give the same result. One could say, then, 'a
state of the system is given by an element of the Hilbert space, up to arbitrary rescaling.'
energy of the harmonic oscillator for the state ρϕ . As we are dealing with the harmonic
oscillator, which has a purely discrete spectrum, we can simply make our Borel sets such
that they contain only one measurement (one notch on the scale). We have then, for some
ON-basis {en},

Tr(PH({Ek}) ∘ ρϕ) = Σ_n ⟨en|(PH({Ek}) ∘ ρϕ)(en)⟩.

Now seeing as {en} need only be some ON-basis, we are free to use our ON-eigenbasis {ψn},
giving37

Tr(PH({Ek}) ∘ ρϕ) = Σ_n ⟨ψn|(PH({Ek}) ∘ ρϕ)(ψn)⟩
= Σ_n ⟨ψn| ⟨ψk|ρϕ(ψn)⟩ ψk⟩
= Σ_n ⟨ψk|ρϕ(ψn)⟩ ⟨ψn|ψk⟩
= ⟨ψk|ρϕ(ψk)⟩
= ⟨ψk| (⟨ϕ|ψk⟩/⟨ϕ|ϕ⟩) ϕ⟩
= (⟨ϕ|ψk⟩ ⟨ψk|ϕ⟩)/⟨ϕ|ϕ⟩
= |⟨ϕ|ψk⟩|² / ‖ϕ‖²
= ‖Pk ϕ‖² / ‖ϕ‖²,

where we have used PH({Ek}) = Pk = ⟨ψk| · ⟩ψk, the definition ρϕ(ψ) = (⟨ϕ|ψ⟩/⟨ϕ|ϕ⟩)ϕ,
and the fact that ‖ψk‖ = 1 to get to the last line.
We now note that although we do not require ϕ to be an eigenvector of H, we can
always express it as a linear combination of the ON-eigenbasis {ψn}. The following two
37
Recall that in the definition of PH the sum is taken such that the energy eigenvalue with that index is
within the Borel set.
– 169 –
examples shall highlight this point and demonstrate how one could almost instantly deter-
mine the probabilities of obtaining a given energy measurement given the expression for ϕ
corresponding to a pure state.
Example 17.10. First imagine that ϕ is an eigenvector of H; then we clearly have

ϕ = Aψℓ

for A ∈ C and some fixed ℓ. Plugging this into the formula we obtain

Tr(PH({Ek}) ∘ ρϕ) = ‖Pk(Aψℓ)‖² / ‖Aψℓ‖²
= ‖⟨ψk|Aψℓ⟩ ψk‖² / (|A|² ‖ψℓ‖²)
= |A|² |⟨ψk|ψℓ⟩|² ‖ψk‖² / |A|²
= δkℓ,
ϕ = Aψp + Bψq ,
– 170 –
for ci ∈ C, then the probability to measure energy Ek is

Tr(PH({Ek}) ∘ ρϕ) = |Σ_i ci δki|² / Σ_i |ci|² = |ck|² / Σ_i |ci|².
For (i) we can prepare a pure state ρψk, where ψk is an eigenvector of H, by the following
device:

[Diagram: a general pure state ρϕ enters the measurement device H; the measurement
output passes into a device labelled 'Filter k', whose output is ρψk.]
We feed in a general pure state of our system into the H device, which measures the
energy of that pure state. It then sends this measurement output into the filter device.
The state post measurement is then fed into the filter device, which is designed to only let
something pass through it if the measurement was Ek . In this way the only pure state that
can leave is ρψk .
Note, although the final output is guaranteed to be ρψk , that does not mean that the
output from the H device is always ρψk — it could be any of the possible output states. It
38
This is known as ‘wave-function collapse’ in the literature. Dr. Schuller does not like using the wave
analogy and so, if anything, he called this ‘collapse of the state’.
– 171 –
is also important that H is non-degenerate, otherwise the filter would let multiple different
states through, all of which gave Ek as their measurement output.
For (ii) we allow for degeneracy of H. We overcome this by considering a maximal set
of mutually commuting observables, {A1, ..., Af}, for which there are common eigenvectors
ψ_{a1,...,af} with

Ai ψ_{a1,...,af} = ai ψ_{a1,...,af}.

The maximality of the set means that these states are uniquely determined using these
operators; that is, we have a subset of eigenvectors which we differentiate using this maximal
set of commuting operators. The device looks like:

[Diagram: two copies of the H-device-plus-filter setup, one with 'Filter k' and one with
'Filter ℓ', both fed the state ρϕ; their outputs enter a 'probability generator', whose output
is the mixed state pρψk + (1 − p)ρψℓ.]
The ‘probability’ generator here is some method of choosing which input (left) to output
(right), where there is a probability p to use the top input (ψk ). For example it could be a
– 172 –
person rolling a die who says "If I roll a ‘1’ then I shall use the top input, otherwise I’ll
use the bottom one," in which case p = 1/6. Normalisation is taken care of by requiring
the other possible outcome to have probability (1 − p).
Remark 17.13. It is very important to realise that the output for a mixed state is the sum
of two states; it is not the state made from the sum of two eigenvectors, as was the case
with Example 17.12. That is,

p ρψk + (1 − p) ρψℓ ≠ ρ_{aψk + bψℓ}.

We highlight this point here as it demonstrates one of the misleading aspects of using bra-
ket notation. People often talk about a pure state as one that can be written as a linear
superposition of the eigenstates (as with Example 17.12), writing

|Ψ⟩ = a|ψk⟩ + b|ψℓ⟩

for a, b ∈ C and k ≠ ℓ, where the normalisation condition requires |a|² + |b|² = 1. But
if one were to think of |Ψ⟩ as the state then this would look like a mixed state: it is the
superposition of two pure states.
In order to differentiate a pure state from a mixed state they introduce density
matrices, which are the ρs we've been using, and say that the density matrix of a mixed
state is of the form

ρmixed = Σ_i pi |ψi⟩⟨ψi|,

where pi is the probability of being in the corresponding state. But going back to the
start of section 17.3, we see this is just the same as what we wrote for a mixed state, without
any of the potential confusion.
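The distinction can be made quantitative through the purity Tr(ρ²), which equals 1 for pure states and is strictly smaller for genuine mixtures. A small sketch (a toy 2-level truncation with p = ½, not from the lecture):

```python
import numpy as np

psi_k = np.array([1.0, 0.0])
psi_l = np.array([0.0, 1.0])
p = 0.5

# mixed state: a convex combination of two pure density matrices
rho_mixed = p * np.outer(psi_k, psi_k) + (1 - p) * np.outer(psi_l, psi_l)

# pure state built from the superposition (psi_k + psi_l)/sqrt(2)
Psi = (psi_k + psi_l) / np.sqrt(2)
rho_pure = np.outer(Psi, Psi)

print(np.isclose(np.trace(rho_pure @ rho_pure), 1.0))    # True: pure
print(np.isclose(np.trace(rho_mixed @ rho_mixed), 0.5))  # True: mixed
print(np.allclose(rho_mixed, rho_pure))                  # False: different states
```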
– 173 –
18 The Fourier Operator

This lecture will begin our systematic approach to the study of the so-called Schrödinger
operator:

H : DH → L²(Rd),

with

(Hψ)(x) := −(ℏ²/2m)(Δψ)(x) + V(x)ψ(x),

where Δ := ∂₁² + ... + ∂_d² is the Laplacian operator and V(x) is the potential. The Fourier
operator is an indispensable tool in conducting this study.

We will start by expanding on Lemma 12.19. We will then use the fact that the Schwartz
space is dense in L²(Rd), along with the BLT theorem, to provide a prescription
for taking the Fourier transform on L²(Rd).
(i) Qk f ∈ S(Rd ) for all k = 1, ..., d, where Qk is the k-th position operator.
(ii) Pk f ∈ S(Rd ) for all k = 1, ..., d, where Pk is the k-th momentum operator.
Definition. The Fourier operator on Schwartz space is the linear map F : S(Rd) → S(Rd)
with

(Ff)(x) := (1/(2π)^{d/2}) ∫_{Rd} d^d y e^{−ixy} f(y),

where xy := x₁y₁ + ... + x_d y_d.
Remark 18.1. We are using the 1/(2π)^{d/2} convention in the definition above. As we shall
see, all that is required is a 'total of' 1/(2π) (per dimension) split between the Fourier
operator and its inverse. This convention is often used as it makes comparing to the inverse
easier. Other conventions (for example having 1/(2π) appear in the inverse and a unit
coefficient in the above) find use in certain cases (for example if you were only concerned
with taking F).

Remark 18.2. We shall also call the action of the Fourier operator on a function f ∈ S(Rd)
the Fourier transform of the function.
– 174 –
While this does have its advantages at times (the famous example being the motivation
behind the derivation of Heisenberg’s uncertainty relation, a truly vital relation in
quantum mechanics), it can also lead to misconceptions. For example, when thought
of this way, one may think that you can not take the double Fourier transform F(Ff ),
as the first one gives a momentum space, which the second does not ‘act on’. However
from the definition given, this is clearly nonsense — of course you can take it twice
as F : S(Rd ) → S(Rd ).
We shall, however, stick to this notation, but we should not be fooled by what we can
take the Fourier transform of because of it.
(ii) Recall that (Qk f)(x) = xk f(x), which is just a number. It is therefore totally
meaningless to write something of the form

F(xk f(x)).

However, having to define the operators each time and then taking the Fourier trans-
form of their action on a function could end up quite lengthy, so instead we introduce
the notations

\widehat{xk f(x)} := F(Qk f),

or equivalently

F(x ↦ xk f(x)),

and, more generally,

ĝ := Fg,

where g is the result of the action of the operator on f (so here g := Qk f).
– 175 –
where we have used the fact that the elements of the Schwartz space are rapidly
decaying to remove the boundary term.
Now assume it is true for |γ| = n. Then, if γ' is the next multi-index (with |γ'| = n + 1),
from the fact that ∂k f ∈ S(Rd) we have

F((−i)^{n+1} ∂_{γ1}∂_{γ2}...∂_{γ_{n+1}} f)(p) = F((−i)ⁿ ∂_{γ1}∂_{γ2}...∂_{γn}(−i ∂_{γ_{n+1}} f))(p)
= p_{γ1} p_{γ2} ... p_{γn} · F(−i ∂_{γ_{n+1}} f)(p)
= p_{γ1} ... p_{γn} p_{γ_{n+1}} · (Ff)(p)
=: p_{γ'} · (Ff)(p).
=: e−ipa f (x)(p),
– 176 –
where we relabelled y → x again.
Lemma 18.6. Let x ∈ Rd and z ∈ C with Re(z) > 0. Then the following is true:

\widehat{exp(−(z/2) x²)}(p) = (1/z^{d/2}) exp(−p²/(2z)).
Proof. We shall prove this for the case d = 1. Let

G_z(x) := exp(−(z/2) x²).

Then we have

∂G_z(x) = −z x G_z(x)   ⟹   ip · (FG_z)(p) = −iz ∂(FG_z)(p),

which is an ODE for FG_z. Solving by separation (as done when considering the quantum
harmonic oscillator) we arrive at

(FG_z)(p) = A exp(−p²/(2z)).

Plugging in p = 0 and the definitions for the LHS gives

(1/√(2π)) ∫_R dx e^{−(z/2)x²} = A.
Then, employing the fact that the integral above is holomorphic39 in z, we extend the
standard integral result

∫_R dx e^{−σx²} = √(π/σ),

39
See ‘Fourier Series, Fourier Transform and Their Application to Mathematical Physics’ by V. Serov,
Chapter 16.
– 177 –
valid for real σ > 0, to the case we are considering, giving

A = 1/√z.
Theorem 18.7. The Fourier operator F : S(Rd) → S(Rd) is invertible, with inverse

(F⁻¹g)(x) = (1/(2π)^{d/2}) ∫_{Rd} d^d p e^{+ipx} g(p).
Proof. We need to show that (F⁻¹(Ff))(x) = f(x). In order to do so, we shall
introduce a regulator

lim_{ε→0} e^{−(ε/2)p²} = 1

into the integral. We shall then use the fact that the integrand is dominated by |Ff(p)|,
together with the fact that we are using Lebesgue integrals, to pull the limit out of the
integral. We shall also use Fubini's theorem to change the order of the integrals.
(F⁻¹(Ff))(x) := (1/(2π)^{d/2}) ∫_{Rd} d^d p e^{ipx} (Ff)(p)
= (1/(2π)^{d/2}) ∫_{Rd} d^d p lim_{ε→0} e^{−(ε/2)p²} e^{ipx} (Ff)(p)
= lim_{ε→0} (1/(2π)^{d/2}) ∫_{Rd} d^d p e^{−(ε/2)p²} e^{ipx} (Ff)(p)
:= lim_{ε→0} (1/(2π)^{d/2}) ∫_{Rd} d^d p e^{−(ε/2)p²} e^{ipx} (1/(2π)^{d/2}) ∫_{Rd} d^d y e^{−ipy} f(y)
= lim_{ε→0} (1/(2π)^{d/2}) ∫_{Rd} d^d y (1/(2π)^{d/2}) ∫_{Rd} d^d p e^{−(ε/2)p²} e^{−ip(y−x)} f(y)
= lim_{ε→0} (1/(2π)^{d/2}) ∫_{Rd} d^d z (1/(2π)^{d/2}) ∫_{Rd} d^d p e^{−(ε/2)p²} e^{−ipz} f(z + x)
= lim_{ε→0} (1/(2π)^{d/2}) ∫_{Rd} d^d z \widehat{exp(−(ε/2)p²)}(z) f(z + x)
= lim_{ε→0} (1/(2π)^{d/2}) ∫_{Rd} d^d z (1/ε^{d/2}) exp(−(1/2ε) z²) f(z + x)
= lim_{ε→0} (1/(2π)^{d/2}) ∫_{Rd} d^d z' ε^{d/2} (1/ε^{d/2}) exp(−(1/2) z'²) f(√ε z' + x)
= (1/(2π)^{d/2}) ∫_{Rd} d^d z' exp(−(1/2) z'²) lim_{ε→0} f(√ε z' + x)
= (1/(2π)^{d/2}) (2π)^{d/2} f(x)
= f(x),

where we have used the substitutions z = y − x and then z = √ε z', along with the standard
integral result used in the previous lemma.
– 178 –
18.3 Extension of F to L²(Rd)

We already know that S(Rd) is dense in L²(Rd), so if we can show that F is bounded, the
BLT theorem will tell us there is a unique, bounded extension of F to L²(Rd).
where we used the fact that the integral is over a real domain.
‖F‖ := sup_{f∈S(Rd)} ‖Ff‖_{L²(Rd)} / ‖f‖_{L²(Rd)}
= sup_{f∈S(Rd)} √(∫_{Rd} d^d p |Ff(p)|²) / √(∫_{Rd} d^d x |f(x)|²)
= 1,

and so by the BLT theorem F extends uniquely to a bounded operator

F : L²(Rd) → L²(Rd),
where we have used the fact that F is bounded to remove the limit.
– 179 –
18.4 Convolutions
Definition. The convolution of two functions f, g ∈ L¹(Rd), written f ∗ g, is the L¹(Rd)
function defined pointwise by

(f ∗ g)(x) := ∫_{Rd} d^d y f(x − y) g(y).
Lemma 18.9. The convolution of two functions is symmetric, i.e.

f ∗ g = g ∗ f.

Proof. The result comes from a simple change of variables along with the commutativity of
complex multiplication:

(f ∗ g)(x) := ∫_{Rd} d^d y f(x − y) g(y)
= (−1)^{2d} ∫_{Rd} d^d z f(z) g(x − z)
= ∫_{Rd} d^d z g(x − z) f(z)
= ∫_{Rd} d^d y g(x − y) f(y)
=: (g ∗ f)(x),

where the (−1)^{2d} = 1 term comes from the fact that dy = −dz in each of the d variables,
along with the fact that the integral limits swap. Since x ∈ Rd was arbitrary, we have the
result.
– 180 –
which holds for all x ∈ Rd , giving the result.
f ∗ (g + h) = f ∗ g + f ∗ h,
Theorem 18.12. The Fourier transform of the convolution of two functions is proportional
to the product of their Fourier transforms; explicitly,

F(f ∗ g) = (2π)^{d/2} (Ff) · (Fg).

Proof. For any p ∈ Rd,

F(f ∗ g)(p) := (1/(2π)^{d/2}) ∫_{Rd} d^d x e^{−ipx} ∫_{Rd} d^d y f(x − y) g(y)
= (1/(2π)^{d/2}) ∫_{Rd} d^d y e^{−ipy} g(y) ∫_{Rd} d^d z e^{−ipz} f(z)
= (2π)^{d/2} (Ff)(p) · (Fg)(p),

where we have used the fact that we can consider the convolution integration variable (the
y) as a constant when relabelling the Fourier transform variable (z := x − y), and used the
fact that the Fourier transform of a function is finite to make the integral of an integral
into a product of integrals. Finally, since this is true for all p ∈ Rd, the result follows.
– 181 –
19 The Schrödinger Operator

Hfree := −(ℏ²/2m) Δ

and it corresponds to the energy observable for a free particle40 of mass m.
This lecture aims to derive the spectrum of Hfree and use it to study the time evolution
of pure states. We shall consider d = 3 throughout this lecture, and shall make heavy use
of the results of last lecture. We shall also use units such that m = ½ℏ² in order to lighten
notation; i.e. Hfree = −Δ.
F(−Δψ) =: P² ψ̂.

Remark 19.1. The physicist says "in momentum space the Laplacian acts simply by mul-
tiplication by the squared norm of the momentum, |p|²".

We can now rewrite the above by inserting id_{L²(R³)} = F⁻¹F to give

Hfree = F⁻¹ P² F.
Theorem 19.2. A maximally defined real multiplication operator is self adjoint on its
maximal domain.

Proof. Let

A : DA → H
ψ ↦ aψ,

40
Free particle here means what we think of classically as a free particle; it experiences no potential.
– 182 –
where a is a real-valued function and DA is the maximal domain of A (i.e. there are no
elements outside this domain such that aψ ∈ H). This operator is clearly symmetric as,
for ψ, ϕ ∈ DA,

⟨ψ|Aϕ⟩ = ⟨ψ|aϕ⟩
= ⟨āψ|ϕ⟩
= ⟨aψ|ϕ⟩
= ⟨Aψ|ϕ⟩,

since ā = a.
From this theorem, then, we have that FHfreeF⁻¹ is self adjoint on the domain D_{P²}.
Its resolvent set is

ρ(P²) = {z ∈ C | z ≠ |p|² for any p}.

Then using the fact that |p|² ∈ R with |p|² > 041, we have

ρ(P²) = C \ R⁺₀.

Then finally, using the definition of the spectrum as the complement of the resolvent set,
we get

σ(Hfree) = σ(P²) = R⁺₀.
Remark 19.3. This is not the method used in the lectures, which I⁴² find more confusing. As I don't fully understand the latter parts of the proof provided (I think the main idea is to introduce a form of the spectral theorem in which the integral is performed over the spectrum of the operator, and then to use the characteristic function Dr. Schuller introduced), I shall not type it up here, to avoid potential confusion for the readers. If you do follow the complete method, please feel free to contact me and I can add it and give you credit.
Proposition 19.4. For every self-adjoint operator there is always some transformation which transforms the operator into a mere multiplication operator.

Remark 19.5. Once you know this transformation for the self-adjoint operator of interest, the spectrum always follows by the same method.
⁴¹ We use a non-strict inequality since |p|² = 0 is attained at p = 0, so the whole closed half-line belongs to the spectrum.
⁴² "I" being Richie.
19.3 Time Evolution
Recall that Axiom 4 tells us the evolution of a state is given by⁴³

    ψ_{t₂} = e^{−i(t₂−t₁)H} ψ_{t₁}.
Remark 19.6. We are assuming here that H is time-independent. If it were not, one would simply use an integral in the exponential.
Remark 19.7 . We should note that in the above time is viewed as a parameter, not a
coordinate — i.e. this is not a spacetime picture. The elements ψt1 and ψt2 are simply
elements of the Hilbert space, each of which is associated to a different time. This can be
compared to saying classically that the position of a particle is an element of R3 , and at a
later time its position is still an element of R3 , although potentially a different one.
We now apply this to H_free: using

    F⁻¹ P² F = H_free

together with the convolution theorem, we obtain⁴⁴

    (e^{−itH_free} ψ)(x) = 1/(2π)^{3/2} · (F⁻¹(p ↦ e^{−it|p|²}) ∗ ψ)(x),
⁴³ We're using units such that ℏ = 1.
⁴⁴ Remember d = 3 here.
and then use Lemma 18.6 to give

    F⁻¹(p ↦ e^{−it|p|²})(x) = 1/(2it)^{3/2} · e^{−|x|²/(4it)},
however there is a problem with both of these steps.
Firstly, for us to use the convolution theorem we require both functions to be in L¹(R³). For ψ we can simply restrict to the intersection ψ ∈ L²(R³) ∩ L¹(R³); however, the exponential term is unavoidably not in L¹(R³) (its absolute value is 1, and integrating that over all of R³ gives a divergent result). On top of that, in order to use Lemma 18.6 we require the real part of the coefficient to be strictly positive (in order to avoid the branch cut), but Re(it) = 0.
Luckily, we can fix both of these problems with the same step: regularisation. We regularise both by introducing a positive, real factor into the exponential and then taking the limit,

    e^{−it|p|²} = lim_{ε→0⁺} e^{−(it+ε)|p|²}.

The addition of ε stops the integral diverging (because of the minus sign), and we also have Re(it + ε) = ε > 0, so we can use Lemma 18.6.
So, using the continuity of the product and of the inverse Fourier transform, we have

    (e^{−itH_free} ψ)(x) = lim_{ε→0⁺} 1/(2π)^{3/2} · ((x ↦ 1/(2(it+ε))^{3/2} · exp(−|x|²/(4(it+ε)))) ∗ ψ)(x)
                         = lim_{ε→0⁺} 1/(4π(it+ε))^{3/2} ∫_{R³} d³y exp(−|x−y|²/(4(it+ε))) ψ(y).
Finally, using dominated convergence to take the limit inside the integral, we have⁴⁵

    ψ_{t₂}(x) = 1/(4πit)^{3/2} ∫_{R³} d³y exp(−|x−y|²/(4it)) ψ_{t₁}(y).
We can expand the square in the exponent, |x−y|² = |x|² − 2x·y + |y|², to give

    ψ_{t₂}(x) = exp(−|x|²/(4it))/(4πit)^{3/2} ∫_{R³} d³y exp(−i (x/(2t))·y) exp(−|y|²/(4it)) ψ_{t₁}(y),

which is a Fourier transform, with result

    ψ_{t₂}(x) = exp(−|x|²/(4it))/(2it)^{3/2} · F(y ↦ exp(−|y|²/(4it)) ψ_{t₁}(y))(x/(2t)).
⁴⁵ Note t = t₂ − t₁ here.
We now use the fact that we have a patient observer (i.e. one who watches for a long time) to take the asymptotic behaviour⁴⁶, giving

    ψ_{t₂}(x) ∼ exp(−|x|²/(4it))/(2it)^{3/2} · ψ̂_{t₁}(x/(2t)),

which, if the ψs were viewed as 'waves' (i.e. plots on a graph), would indicate that the 'wave spreads out over time'. In other words, simplifying to R instead of R³, we would have something along the lines of the following diagram.
(Figure: ψ_{t₁} drawn as a narrow peak, and ψ_{t₂} as a broader, flatter curve over R.)
So the function appears to spread out (keeping the area under it constant) over time.
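Numerically (a sketch of my own, not from the lectures) the spreading is easy to see in d = 1: evolving with ψ_t = F⁻¹(e^{−itk²}(Fψ)) conserves the L² norm while the packet's width grows.

```python
import numpy as np

# Sketch (illustration only): free evolution in d = 1 via
# psi_t = F^{-1}( e^{-i t k^2} (F psi) ), i.e. the unitary e^{-i t Hfree}
# in our units Hfree = -Δ.  The norm is conserved while the packet spreads.
n, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

psi0 = (2 / np.pi) ** 0.25 * np.exp(-x ** 2)    # normalised Gaussian

def evolve(psi, t):
    return np.fft.ifft(np.exp(-1j * t * k ** 2) * np.fft.fft(psi))

def norm(psi):
    return np.sum(np.abs(psi) ** 2) * dx

def width(psi):
    p = np.abs(psi) ** 2 * dx                   # approximate probability weights
    m = np.sum(x * p)
    return np.sqrt(np.sum((x - m) ** 2 * p))

psi_t = evolve(psi0, 5.0)
assert abs(norm(psi_t) - norm(psi0)) < 1e-10    # 'area' under |psi|^2 constant
assert width(psi_t) > 2 * width(psi0)           # the wave has spread out
```

The grid size, box length, and evolution time are arbitrary; the two assertions are exactly the statement in the text: the area under |ψ|² is constant while the profile flattens out.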
⁴⁶ That is, take the limit t → ∞ at places where it won't cause problems.
20 Periodic Potentials
This lecture aims to look at periodic potentials and to find the most general information we can about their energy spectrum. In order to do this we will use so-called rigged Hilbert spaces.⁴⁷
Definition. The Hamiltonian for a periodic potential is of the form

    H = −(ℏ²/2m) Δ + V(x),

with

(i) periodicity of V(x), i.e. V(x + a) = V(x) for all x ∈ R³, where a is the periodicity of the system.

(Figure: a periodic potential V(x) of period a.)
As we shall see, by making no assumptions apart from the above, we will be able to extract a remarkably generic conclusion about the energy spectrum of a particle moving in a generic periodic potential. This is a truly amazing result, as the potential can even be discontinuous at countably infinitely many points! A huge application of this formalism is in the study of solid state physics, where the periodic potential is the one generated by a regular lattice of so-called lattice constant a.

(Figure: an electron e⁻ moving in the potential of a regular lattice of ions with spacing a.)
As we shall see, the general result is that the energy spectrum comes in continuous, open intervals in R, known as bands.

(Figure: the allowed energy bands marked as intervals on an E-axis.)
⁴⁷ I am currently reading up on these, and will add an additional section to the end of these notes once I have a better idea of them.
20.1 Basics of Rigged Hilbert Space

As mentioned at the start, we wish to make use of rigged Hilbert spaces⁴⁸ in order to find the spectrum of the energy observable. The basic reason behind this is that rigged Hilbert spaces essentially extend what we usually think of as the eigenvalue equation

    Hψ = Eψ,

where E is a discrete value in R, to the case where E can be continuous. This is known as the generalised eigenvalue equation.
We do this because ultimately we know that the spectrum will be continuous intervals; however, even if it were purely discrete, or a combination of both, the theory of rigged Hilbert spaces would still account for this.
The basic idea behind rigged Hilbert spaces is to consider elements Ψ that satisfy the generalised eigenvalue equation but do not lie in L²(R^d). It turns out that they lie in the dual of a densely defined subspace, which for us is the Schwartz space. In this way we construct our so-called Gelfand triple:

    S(R^d) ⊆ L²(R^d) ⊆ S*(R^d).
The easiest way to see that we need such a construction here is that, as the Hamiltonian is constructed using derivative operators, its eigenfunctions are likely to be of the form

    Ψ(x) ∝ e^{iEx},

which is not square integrable, and hence not an element of L²(R^d).
Remark 20.1. It is important to note here that a rigged Hilbert space is not some extension of the physics or of quantum mechanics; indeed, it is the most natural mathematical structure in which to study quantum mechanics. In fact, it is the rigged Hilbert space structure which provides the full mathematical foundation for understanding Dirac's bra-ket notation, and it is what introduces the well-known Dirac delta. This gives the first insight into what a rigged Hilbert space is: it is the equipping (i.e. the 'rigging') of a Hilbert space with a theory of distributions.
Proposition 20.2. Any generalised H-eigenvector Ψ ∈ S*(R^d) \ L²(R^d) corresponds to an eigenvalue lying in the purely continuous part of the energy spectrum.
Definition. A set {ψ₁, ..., ψₙ} of solutions to a system of ODEs is called a fundamental set of solutions if

(i) The ψ₁, ..., ψₙ are linearly independent.

(ii) Any other solution to the ODEs can be expanded using the set {ψ₁, ..., ψₙ}.
⁴⁸ Again, coming soon!
In other words, they form a basis for the solution space.
Proposition 20.3. The cardinality of a fundamental set of solutions of an n-th order linear ODE is n.
Example 20.4. Let the system of ODEs just be the single equation

    ψ̈(x) + ω² ψ(x) = 0,

whose fundamental solutions may be taken to be ψ₁(x) = cos(ωx) and ψ₂(x) = sin(ωx). It is easy enough to see that these two solutions do indeed form a basis for the solution space.
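A quick numerical sanity check (my own addition, not from the lectures): both solutions satisfy the ODE, and their Wronskian is the non-zero constant ω, so they are linearly independent.

```python
import numpy as np

# Sanity check (illustration only): cos(wx) and sin(wx) both solve
# psi'' + w^2 psi = 0, and their Wronskian is the non-zero constant w,
# so they are linearly independent and hence a fundamental set.
w = 3.0
x = np.linspace(0, 2 * np.pi, 2001)
h = x[1] - x[0]

for psi in (np.cos(w * x), np.sin(w * x)):
    # central second difference approximates psi''
    psi_dd = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h ** 2
    assert np.allclose(psi_dd + w ** 2 * psi[1:-1], 0.0, atol=1e-3)

# W(x) = psi1(x) psi2'(x) - psi1'(x) psi2(x) = w, independently of x
W = np.cos(w * x) * (w * np.cos(w * x)) - (-w * np.sin(w * x)) * np.sin(w * x)
assert np.allclose(W, w)
```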
Lemma 20.5. Let {ψ₁, ..., ψₙ} be a set of fundamental solutions for some system of ODEs. Then the set {c₁ψ₁, ..., cₙψₙ}, for non-zero cᵢ ∈ F (the underlying field), is also a set of fundamental solutions.
As our generalised eigenvalue equation is a second-order ODE, there are 2 fundamental solutions. These fundamental solutions depend on the value of E, and so we label them as {ψ₁^E, ψ₂^E}. We remove the ambiguity in the coefficients by requiring

    ψ₁^E(0) = 1,  (ψ₁^E)′(0) = 0,
    ψ₂^E(0) = 0,  (ψ₂^E)′(0) = 1.
Theorem 20.6. The fundamental solutions ψ₁^E and ψ₂^E are entire functions of E.

Remark 20.7. Note that, in order to make the above theorem true, we allow our eigenvalues to be complex, E ∈ C. This is clearly unphysical; however, we do this here in order to exploit the strong results of complex analysis, and we shall restrict ourselves to E ∈ R at the end.
Definition. The fundamental matrix of a system of linear, homogeneous ODEs, with fundamental set of solutions {ψ₁, ..., ψₙ}, is the matrix

    M(x) = ( ψ₁(x)          ...  ψₙ(x)
             ψ₁′(x)         ...  ψₙ′(x)
             ⋮                    ⋮
             ψ₁^{(n−1)}(x)  ...  ψₙ^{(n−1)}(x) ).
Lemma 20.8. The determinant of the fundamental matrix for our system is constant⁴⁹; that is,

    (det M^E)′(x) = 0.
⁴⁹ Note we also label M with a superscript E.
Proof. By direct calculation, and using the generalised eigenvalue equation,

    (det M^E)′(x) = (ψ₁^E (ψ₂^E)′ − (ψ₁^E)′ ψ₂^E)′(x)
                  = ((ψ₁^E)′(ψ₂^E)′ + ψ₁^E (ψ₂^E)″ − (ψ₂^E)′(ψ₁^E)′ − (ψ₁^E)″ ψ₂^E)(x)
                  = (ψ₁^E (−(2m/ℏ²)(E − V) ψ₂^E) − (−(2m/ℏ²)(E − V) ψ₁^E) ψ₂^E)(x)
                  = 0.
Corollary 20.9. It follows trivially from the conditions we placed on ψ₁^E and ψ₂^E that

    det M^E(x) = 1.
Definition. The translation operator T is defined by

    (Tψ)(x) := ψ̃(x) := ψ(x + a),

for some a ∈ R.
Proposition 20.10. Let T be the translation operator, with a the periodicity of our system. Then T commutes with the Hamiltonian,

    [T, H] = 0.

Proof. For any ψ we have

    (THψ)(x) = (Hψ)(x + a) = −(ℏ²/2m) ψ″(x + a) + V(x + a) ψ(x + a)
             = −(ℏ²/2m) (Tψ)″(x) + V(x) (Tψ)(x) = (HTψ)(x),

where we have used the periodicity of V, the fact that translation by a constant doesn't affect the result of differentiating a function, and the fact that T is linear.
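A finite-dimensional toy version (my own addition, not from the lectures) makes the commutation easy to check: on a periodic lattice whose on-site potential repeats every a sites, the shift-by-a matrix commutes with H = −Δ + V, and fails to do so for a non-periodic V.

```python
import numpy as np

# Discrete sketch (illustration only): on a periodic lattice of n sites with
# an a-periodic potential (a | n), the shift-by-a operator T commutes with
# H = -Δ + V; it does not commute if V fails to be a-periodic.
n, a = 12, 3
shift = np.roll(np.eye(n), -a, axis=0)          # (T psi)_j = psi_{j+a}

# discrete periodic Laplacian
lap = -2 * np.eye(n) + np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)

V_per = np.diag(np.tile([0.0, 1.0, 5.0], n // a))   # period a = 3
H = -lap + V_per
assert np.allclose(shift @ H - H @ shift, 0.0)      # [T, H] = 0

V_bad = np.diag(np.arange(n, dtype=float))          # not periodic
H_bad = -lap + V_bad
assert not np.allclose(shift @ H_bad - H_bad @ shift, 0.0)
```

The lattice size and the sample potential values are arbitrary; the only structural input is that the potential repeats with the same period as the shift.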
So we have that the translated solution is also a solution. We now use the fact that {ψ₁^E, ψ₂^E} is a fundamental set of solutions to expand the translated fundamental solutions,

    ψⱼ^E(x + a) = Σᵢ₌₁² aⁱⱼ ψᵢ^E(x),

for j = 1, 2.
Now consider the case x = 0: using the conditions imposed on the ψᵢ^E, we have

    ψⱼ^E(a) = a¹ⱼ,

and similarly

    (ψⱼ^E)′(a) = a²ⱼ,

from which it follows that

    aⁱⱼ = (M^E(a))ⁱⱼ.
So we have that a general solution is of the form

    ψ^E(x + a) = Σ_{ℓ=1}² Σ_{i=1}² A^ℓ (M^E(a))ⁱ_ℓ ψᵢ^E(x),

for A¹, A² ∈ C.
Remark 20.12. As we showed previously, the translation operator and the Hamiltonian commute, and so they share common eigenvectors. We shall label these eigenvectors as follows:

    Hψ^{E,λ} = Eψ^{E,λ},
    Tψ^{E,λ} = λψ^{E,λ}.

We can reformulate the second eigenvalue equation as follows: using (Tψ^{E,λ})(x) = ψ^{E,λ}(x + a) we have

    ψ^{E,λ}(x + a) = λ ψ^{E,λ}(x).
If we let λ₁ and λ₂ be the two eigenvalues of M^E(a), then there is some basis in which

    M^E(a) = ( λ₁  0
               0   λ₂ ),

so that

    det M^E(a) = λ₁ · λ₂ = 1,
    Tr M^E(a) = λ₁ + λ₂.
Theorem 20.13 (Floquet). Consider the ODE

    y″ + V(x) y = 0,

with V periodic. Then there exist solutions of the form

    y₁(x) = e^{iθx/a} p₁(x),   y₂(x) = e^{−iθx/a} p₂(x),

for θ ∈ [−π, π), and where p₁(x) and p₂(x) are periodic with the same period as V(x).
Remark 20.14. Floquet's theorem also tells us that the eigenvalues of M^E(a) are simply

    λ₁ = e^{iθ},   λ₂ = e^{−iθ}.
20.5 Energy Bands

Recall that ψ₁^E and ψ₂^E are entire functions of E, and so their sum is also entire; in particular, the discriminant

    γ_V(E) := ½ Tr M^E(a) = ½ (ψ₁^E(a) + (ψ₂^E)′(a))

is an entire function of E. From this it follows that the restriction of γ_V to the reals,

    γ_V|_R : R → C,

is at least smooth. Thus we know that
(i) If (E₀, θ₀) solves the equation γ_V(E₀) = cos θ₀, then any E in a sufficiently small neighbourhood of E₀ solves the equation γ_V(E) = cos θ for some θ.

(ii) Equivalently, if E₁ does not solve γ_V(E₁) = cos θ₁ for any θ₁ ∈ [−π, π), then no E in a sufficiently small neighbourhood of E₁ will.
From these conditions we can draw the remarkable conclusion: for any periodic, piecewise continuous and bounded potential, the energy spectrum is a countable union of open intervals, known as energy bands.
Remark 20.15. Note that the fact that we only have continuous parts to our spectrum is consistent with our rigged Hilbert space ideas; the functions ψ^{E,λ} consist of a phase factor times a periodic function, and so are not square integrable, but they are bounded and so are elements of S*(R^d) \ L²(R^d).
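A numerical illustration (entirely my own; the cosine potential, its strength, and the units 2m = ℏ = 1 are arbitrary choices): compute the discriminant γ_V(E) = ½ Tr M^E(a) on a grid of energies and check that both bands (|γ_V| ≤ 1) and gaps (|γ_V| > 1) occur.

```python
import numpy as np

# Sketch (not from the lectures): for the sample potential V(x) = 10 cos(2*pi*x)
# of period a = 1, integrate the fundamental solutions of psi'' = (V - E) psi
# over one period with psi1(0) = 1, psi1'(0) = 0 and psi2(0) = 0, psi2'(0) = 1,
# form M^E(a), and test the band condition |gamma(E)| = |Tr M^E(a)| / 2 <= 1.
a, V0 = 1.0, 10.0
V = lambda x: V0 * np.cos(2 * np.pi * x / a)

def monodromy(E, steps=1000):
    """Return M^E(a) by RK4 integration of both fundamental solutions."""
    h = a / steps
    y = np.eye(2)                      # rows: (psi1, psi2) and (psi1', psi2')
    f = lambda x, y: np.array([y[1], (V(x) - E) * y[0]])
    x = 0.0
    for _ in range(steps):             # classic fourth-order Runge-Kutta step
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

E_grid = np.linspace(-5.0, 40.0, 100)
gamma = np.array([0.5 * np.trace(monodromy(E)) for E in E_grid])

in_band = np.abs(gamma) <= 1                # energies inside a band
assert in_band.any() and (~in_band).any()   # both bands and gaps appear
# the Wronskian is constant: det M^E(a) = 1 (Corollary 20.9)
assert abs(np.linalg.det(monodromy(5.0)) - 1.0) < 1e-6
```

Plotting γ_V against E for this potential reproduces the band picture: intervals where the curve lies between −1 and 1 alternate with gap intervals where it escapes.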
(Figure: a periodic potential V(x) of period a, together with a sketch of the discriminant γ_V.)
21 Relativistic Quantum Mechanics

As we shall see, the transition to relativistic quantum mechanics is highly non-trivial, in the sense that we don't simply add a new term onto our expressions to account for the relativistic effects. This lecture is meant as a very brief overview of, and introduction to, quantum field theory, and so does not claim to be self-contained in any sense. The main idea we want to highlight is how the ideas change once we start accounting for relativistic effects, and what the repercussions of those changes are.
Recall that the Schrödinger equation is obtained from the non-relativistic energy relation E = p²/(2m) + V by the substitutions E ↦ iℏ∂_t and p_a ↦ −iℏ∂_a, giving

    iℏ∂_t ψ = −(ℏ²/2m) ∂_a∂^a ψ + Vψ,

for ψ : R³ → C.
If we want to get the probabilistic interpretation that quantum mechanics is built on, we need to introduce a non-negative object whose integral over all space is constant in time. This object is

    ρ := |ψ|².
We might ask ourselves 'How does one come up with the idea of such an object?', the answer to which comes from the following.
Firstly, it's clear that ρ ≥ 0, by definition of the inner product. We can also always arrange for the integral over all space to be unity by normalisation. Now consider the Schrödinger equation and its complex conjugate,

    iℏ∂_t ψ = −(ℏ²/2m) ∂_a∂^a ψ + Vψ,
    −iℏ∂_t ψ̄ = −(ℏ²/2m) ∂_a∂^a ψ̄ + Vψ̄.
If we multiply the former from the right by ψ̄ and the latter from the left by ψ, then subtract the two results, we arrive, after rearranging a bit, at

    (∂_t ψ)ψ̄ + ψ(∂_t ψ̄) = (iℏ/2m) (ψ̄(∂_a∂^a ψ) − (∂_a∂^a ψ̄)ψ),

i.e.

    ∂_t(ψψ̄) = (iℏ/2m) ∂_a(ψ̄(∂^a ψ) − (∂^a ψ̄)ψ).

Then, defining

    ρ := ψψ̄,   j^a := −(iℏ/2m) (ψ̄(∂^a ψ) − (∂^a ψ̄)ψ),

we arrive at the continuity equation

    ∂_t ρ + ∂_a j^a = 0.

Integrating this over all space then shows that ∫_{R³} ρ d³x is constant in time, where we have used the fact that we are integrating over all of R³ (which has no boundary/surface), together with the fact that ∂_a j^a is a purely surface term.
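The surface-term step can be spelled out via Gauss' theorem:

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^3}\rho\,\mathrm{d}^3x
  = -\int_{\mathbb{R}^3}\partial_a j^a\,\mathrm{d}^3x
  = -\lim_{R\to\infty}\oint_{S_R} j^a\, n_a\,\mathrm{d}\sigma
  = 0,
```

provided ψ falls off fast enough at infinity that the flux of j^a through the sphere S_R of radius R vanishes as R → ∞.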
To go relativistic, we instead start from the energy–momentum relation familiar from special relativity,⁵²

    E² = p² + m²,

(in units where ℏ = c = 1) which, upon making the same substitutions as before, gives

    −∂_t² φ = −∂_a∂^a φ + m² φ,

which, after rearranging, gives the so-called Klein-Gordon equation⁵³

    (□ + m²)φ = 0,   □ := ∂_t² − ∂_a∂^a.
⁵² Available via YouTube.
⁵³ It is named as such because Oskar Klein and Walter Gordon also arrived at this result after Schrödinger, and proceeded to try to interpret it as the description of relativistic electrons, which, as we shall see shortly, is not the case.
The question we now have to ask is 'can we still obtain some probability interpretation using φ?' The answer is no, as we shall now show.
In analogy with the non-relativistic case, in order to ensure the integral is constant in time, we wish to find a J^µ such that

    ∂_µ J^µ = ∂_0 J^0 + ∂_a J^a = 0.

Again, similarly to before, by considering the Klein-Gordon equation and its complex conjugate we arrive at

    J^µ := i (φ̄(∂^µ φ) − (∂^µ φ̄)φ),

and Gauss' theorem tells us that the only candidate for the probability density ρ is

    ρ := J⁰ = i (φ̄(∂_t φ) − (∂_t φ̄)φ).
This all looks fine; in fact, it looks exactly like the non-relativistic case. However, there is one subtle, yet highly important, difference. In the Schrödinger equation we had only first-order time derivatives, whereas the Klein-Gordon equation is second order in time derivatives. This means that for the latter we can prescribe as initial conditions not only φ but also its time derivative ∂_t φ at some initial time. We can thus choose these initial conditions such that ρ < 0 at some time, which violates the interpretation of ρ as a probability density. This is a problem that cannot be removed at this level, the reason for which we shall soon see.
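To make the sign problem concrete, here is a standard plane-wave example (my own addition, using the normalisation ρ = i(φ̄ ∂_tφ − (∂_tφ̄)φ) from above):

```latex
\phi(t,x) = e^{-iEt + i p_a x^a},\qquad E = \pm\sqrt{p^2 + m^2},
\qquad\Longrightarrow\qquad
\rho = i\big(\bar\phi\,\partial_t\phi - (\partial_t\bar\phi)\,\phi\big)
     = 2E\,\lvert\phi\rvert^2 ,
```

so the negative-frequency branch E = −√(p² + m²), which solves the Klein-Gordon equation just as well as the positive one, carries ρ < 0; initial data built from such solutions therefore violate the probability interpretation.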
Remark 21.1. Historically, Schrödinger actually arrived at the relativistic equation first (as he knew this was ultimately where he wanted to go); however, on running into the problem highlighted above, he decided instead to consider the non-relativistic case, and ended up with the Schrödinger equation.
Remark 21.2. Note that some people often refer to the time and spatial derivatives being on an equal footing in the Klein-Gordon equation. This is a highly misleading choice of words. They are not on an equal footing, as the temporal derivatives come with a positive sign whereas the spatial ones come with a negative sign. This is not just some little difference to be brushed over. Indeed, this minus sign stems from Maxwell's equations, and without it the Klein-Gordon equation would be physically useless; the gist being that if time and space were on a truly equal footing, then we would not be able to predict the future, which is the main driving force of physics. What people should say is that they are on a similar footing.
One might instead try to take the 'square root' of the energy relation,

    E = √(p² + m²),

and then use the substitutions as above. However, this is no better, as now the RHS is the square root of a differential operator, and so in the expansion of the square root we will end up with a theory of infinite spatial derivative order. Not good at all!
Dirac then asked the question 'What if I use a different substitution prescription such that the whole of the RHS becomes a first-order derivative?' Following this thought, after several calculations, he arrived at the so-called Dirac equation

    (iγ^µ ∂_µ − m 𝟙₄)Ψ = 0,

where the γ^µ are objects satisfying

    {γ^µ, γ^ν} := γ^µ γ^ν + γ^ν γ^µ = 2η^{µν},

known as the Dirac algebra, and where η^{µν} is the Minkowski metric, given by⁵⁴

    η^{µν} = diag(−1, 1, 1, 1).
The vector Ψ here is a 4-component object known as a spinor, where (after some work) we can think of two of the components as describing a particle and the other two the associated antiparticle.
So the Dirac equation introduces antimatter into the mix; however, it turns out that it still doesn't fix the probability problem encountered with the Klein-Gordon equation!
The way out, as we shall see, is to work not with H itself but with the Fock space

    F := C ⊕ H ⊕ (H ⊗ H) ⊕ (H ⊗ H ⊗ H) ⊕ ⋯
⁵⁴ Using the (−,+,+,+) signature.
We can then construct the inner product on F from the inner products on the tensor-product Hilbert spaces, all of which are obtained from the inner product on H. That is, if ψ, ϕ ∈ F are given by

    ψ = a₀ ⊕ a₁ψ₁ ⊕ Σᵢⱼ a₂^{ij} (ψ₂ᵢ ⊗ ψ₂ⱼ) ⊕ ⋯
    ϕ = b₀ ⊕ b₁ϕ₁ ⊕ Σᵢⱼ b₂^{ij} (ϕ₂ᵢ ⊗ ϕ₂ⱼ) ⊕ ⋯,

then their inner product is

    ⟨ψ|ϕ⟩ = ā₀b₀ + ā₁b₁ ⟨ψ₁|ϕ₁⟩_H + Σᵢⱼₖₗ ā₂^{ij} b₂^{kℓ} ⟨ψ₂ᵢ ⊗ ψ₂ⱼ|ϕ₂ₖ ⊗ ϕ₂ₗ⟩_{H⊗H} + ⋯
Remark 21.3. For the Klein-Gordon equation it is actually the symmetrised tensor products we need, giving the symmetric (bosonic) Fock space

    F_⊙ := C ⊕ H ⊕ (H ⊙ H) ⊕ (H ⊙ H ⊙ H) ⊕ ⋯,

while for the Dirac equation it is the antisymmetrised tensor products, giving the fermionic Fock space

    F_∧ := C ⊕ H ⊕ (H ∧ H) ⊕ (H ∧ H ∧ H) ⊕ ⋯
(Figure: Feynman diagrams for electron–electron scattering, drawn as a perturbative sum: a zeroth-order diagram with no vertices, a second-order diagram with two vertices, and two fourth-order diagrams with four vertices each, one of which contains a loop.)
The first diagram (which has no vertices) is the zeroth-order diagram, the second one (with two vertices) is the second-order diagram, and the last two (which both have 4 vertices) are the fourth-order diagrams. The left-most and right-most arrows (i.e. the ones that have a non-vertexed end) are known as external lines, and the other ones are known as internal lines. Particles represented by internal lines are often referred to as virtual particles.
Remark 21.4. One should be careful when it comes to drawing the arrows on the internal lines, however, as (unless the rest of the diagram indicates otherwise) we could have a particle or an antiparticle (whose arrow points the opposite way). It is for this reason that the so-called loop in the third diagram does not have arrows. On the final diagram we do draw the arrows, as conservation of electric charge forces us to ensure our virtual particles are electrons (not positrons, the anti-electrons).
Note also that on the loop's internal lines we have simply written e, and not e⁻ or e⁺ (the positron); this further indicates that we do not know which is which, only that one must be an electron and the other a positron.
Further readings
• Moretti, Spectral Theory and Quantum Mechanics: With an Introduction to the Al-
gebraic Formulation, Springer 2013
Linear Algebra
• Friedberg, Insel, Spence, Linear Algebra (4th Edition), Pearson 2002
Topology
• Adamson, A General Topology Workbook, Birkhäuser 1995
Functional analysis
• Aliprantis, Burkinshaw, Principles of Real Analysis (Third Edition), Academic Press
1998