Olaf de Vries Bachelor Thesis Clebsch Gordan Coefficients
Clebsch-Gordan Coefficients
A Quantum Mechanical and Mathematical
Perspective
by
Olaf de Vries
to obtain the degree of Bachelor of Science
at the Delft University of Technology,
2 Quantum Mechanics 4
2.1 A Brief Overview 4
2.2 Angular Momentum 7
2.3 Combining Quantum Systems 14
2.4 Clebsch-Gordan Coefficients 24
2.5 Calculating the Clebsch-Gordan Coefficients 27
3 Mathematics 32
3.1 Group Theory 32
3.2 Unitary Representations and Irreducibility 35
3.3 Spherical Harmonics 39
References 43
APPENDIX 44
1 Introduction
Every physics undergraduate’s first encounter with Clebsch-Gordan coefficients
will be in their quantum mechanics course. These numbers are usually presented
in dense tables that seem to come out of nowhere. Where these tables come from
and how the coefficients are calculated is deferred to more advanced quantum
mechanics courses. This often means that the student is left puzzled as to what these
numbers mean. As David J. Griffiths notes in his ‘Introduction to Quantum
Mechanics’ book: “If you think this is starting to sound like mystical numerol-
ogy, I don’t blame you. [. . . ] In a mathematical sense this is all applied group
theory - what we are talking about is the decomposition of the direct product
of two irreducible representations of the rotation group into a direct sum of ir-
reducible representations.”[1] Even though this explanation is correct, a physics
undergraduate will not be any wiser for it, as representation theory is graduate
mathematics. The aim of this project is to explain, in a way understandable for
an undergraduate in physics and mathematics, from both a quantum mechanical
perspective as well as a mathematical perspective what Clebsch-Gordan coeffi-
cients are, why they are useful, how they are used, and how they are calculated.
The main focus lies with the quantum mechanical perspective of these coeffi-
cients. From this perspective, we will build from first principles the necessary
knowledge to understand and calculate the Clebsch-Gordan coefficients. To do
this we will discuss the necessary postulates of quantum mechanics, which we
will need to discuss quantum angular momentum. Then combining quantum
systems is discussed, which is followed by the definition, use, and calculation of
the Clebsch-Gordan coefficients. From the mathematical perspective we will dis-
cuss the underpinnings of the coefficients, which will mostly entail a discussion
about unitary representations, direct sums and tensor products of representa-
tions, and how they decompose into irreducible representations. Special care is
given to the spherical harmonics and rotations.
2 Quantum Mechanics
2.1 A Brief Overview
In the 1930s, after more than 30 years on shaky ground, quantum
mechanics got a solid mathematical foundation. This is due to the work of,
among others, Paul Dirac, Erwin Schrödinger, Werner Heisenberg, and John
von Neumann. In this thesis we will use the Dirac-von Neumann formulation
of quantum mechanics in order to work from first principles towards the
Clebsch-Gordan coefficients and their use in quantum mechanics. Afterwards
we will take a brief look at uses of Clebsch-Gordan coefficients in mathematics.
So without further ado, let us get started by explaining the Dirac-von
Neumann formulation of quantum mechanics, as well as the notation used.
We use the bra-ket notation, introduced by Dirac. In this notation, vectors
in a Hilbert space are written as |ψ⟩, which is called a ket. An element of the
dual space is called a bra, and written as ⟨φ|. In a Hilbert space,

|ψ⟩† = ⟨ψ|   and   ⟨ψ|† = |ψ⟩,

where f∗ denotes the complex conjugate of f.
Since the states of a system can be identified with the vectors of length 1, we
get a normalization condition:

∫ |ψ|² dr = 1. (2)
where |φ_i⟩ are the eigenstates of observable A.¹ These eigenstates are not neces-
sarily orthogonal, but can be made orthogonal via the Gram-Schmidt process.
When measuring A, the state |Ψ⟩ collapses to (i.e. becomes) the state |φ_k⟩ with
probability c_k∗ c_k.² This means we have a normalization condition on the c_i:

Σ_i |c_i|² = 1. (4)

¹ …uncountable, the sum becomes an integral. This is however not of importance in this project
as all the eigenspaces will be finite or countable.
² This follows from the postulate about the wave equation, which is of no further interest
to us.
We denote the commutator of two observables A and B as [A, B], with

[A, B] = AB − BA.

When [A, B] = 0, we say that A and B commute. Physically it means that the
order of measuring A and B on a quantum state |ψ⟩ is irrelevant: the result will
be the same. We can see this because, when A and B commute, [A, B] = 0, so
AB = BA. This means that when we measure state |ψ⟩ we have

AB |ψ⟩ = BA |ψ⟩.

The left hand side measures B first and then A, while the right hand side
measures A first and then B. The following theorem shows why we can use
commuting observables to specify a basis for the Hilbert space of the system.
We prove this for the finite dimensional case. For the infinite dimensional case
we need more assumptions, such as postulate 2.3. For more information, see [2].
Theorem 2.4. Observables A and B working on a finite dimensional Hilbert space H are
compatible, meaning that A and B share an eigenbasis, if and only if A and B
commute.
Proof. “⇒” Assume {|φ_n⟩} is an eigenbasis for A and B, with eigenvalues a_i and
b_i for |φ_i⟩ respectively. Let |ψ⟩ ∈ H. Then we have

AB |ψ⟩ = AB Σ_n c_n |φ_n⟩
       = Σ_n c_n AB |φ_n⟩
       = Σ_n c_n a_n b_n |φ_n⟩
       = Σ_n c_n b_n a_n |φ_n⟩
       = Σ_n c_n B a_n |φ_n⟩
       = Σ_n c_n BA |φ_n⟩ = BA |ψ⟩.
This shows that B |α⟩ must lie in the eigenspace of a. This eigenspace is a
subspace of H, and therefore also a Hilbert space. Since B is Hermitian, it is
diagonalizable on this eigenspace, that is, there is an eigendecomposition of B on
this space. We can therefore build an eigenbasis for both A and B by splitting
the eigenspaces of degenerate eigenvalues of A into eigenstates of B.
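Theorem 2.4 can also be illustrated numerically. The sketch below (our own, not part of the thesis) builds two commuting symmetric matrices that share a hidden eigenbasis, one of them with a degenerate eigenvalue, and recovers a simultaneous eigenbasis by diagonalizing a generic linear combination:

```python
import numpy as np

def common_eigenbasis(A, B, t=0.37):
    """Return an orthogonal U whose columns are joint eigenvectors of the
    commuting symmetric matrices A and B. For generic t the eigenvalues of
    A + t*B are simple, so its eigenvectors are joint eigenvectors."""
    assert np.allclose(A @ B, B @ A), "A and B must commute"
    _, U = np.linalg.eigh(A + t * B)
    return U

# Two commuting matrices built from the same (hidden) eigenbasis Q. Note
# that A alone has a degenerate eigenvalue, so diagonalizing A alone would
# not pin down the shared basis; B splits the degeneracy.
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(4, 4)))
A = Q @ np.diag([1.0, 1.0, 2.0, 3.0]) @ Q.T
B = Q @ np.diag([5.0, 6.0, 6.0, 7.0]) @ Q.T

U = common_eigenbasis(A, B)
for M in (A, B):
    D = U.T @ M @ U
    assert np.allclose(D, np.diag(np.diag(D)), atol=1e-8)  # both diagonalized
```

The trick of diagonalizing A + tB mirrors the last step of the proof: B is used to split the degenerate eigenspaces of A.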
When two observables A and B commute, they share an eigenbasis |ψ_n⟩
that, according to postulate 2.3, spans the Hilbert space. This means that

L_{x,c} = y p_{z,c} − z p_{y,c},   L_{y,c} = z p_{x,c} − x p_{z,c},   L_{z,c} = x p_{y,c} − y p_{x,c}. (6)
When we define the quantum version of the angular momentum, we must re-
place the classical position and momentum operators with their quantum versions.
The quantum position operator is the same as the classical position operator.
The quantum momentum operator is p = −iħ∇, where ħ is the reduced
Planck constant. We therefore get
It should be intuitively clear that knowing only Lx, Ly or Lz individually
does not fully specify a quantum system. After all, when for instance Lz is
known, we know nothing about Lx or Ly, and thus we cannot describe a system
fully by only one of the three observables. Mathematically this means that the
eigenspaces corresponding to the eigenvalues of the angular momentum oper-
ators have more than one dimension. In order to describe systems concretely,
we shall search for a complete set of commuting operators (CSCO). It is natural to try
{Lx, Ly, Lz} as a CSCO. We must first check whether Lx and Ly commute.
First, we give the fundamental commutation relations for the position and mo-
mentum operators:

[k, l] = 0,   [p_k, p_l] = 0,   [k, p_l] = iħ δ_{kl},

where k, l ∈ {x, y, z} and δ_{kl} is the Kronecker delta. Now we need a short lemma
to calculate the commutator of Lx and Ly.
Lemma 2.5. For all linear operators A, B, and C that work on the same Hilbert
space,

[A, B + C] = [A, B] + [A, C].

Proof. Linear operators are distributive with respect to addition:
A(B + C) = AB + AC.
This means

[A, B + C] = A(B + C) − (B + C)A = AB + AC − BA − CA = [A, B] + [A, C].
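The lemma can be sanity-checked numerically; in the sketch below (ours), arbitrary random matrices stand in for the linear operators:

```python
import numpy as np

def comm(X, Y):
    """Commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

rng = np.random.default_rng(42)
A, B, C = (rng.normal(size=(3, 3)) for _ in range(3))

# [A, B + C] = [A, B] + [A, C]
assert np.allclose(comm(A, B + C), comm(A, B) + comm(A, C))
```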
Since all the operators used in quantum mechanics are linear, we can use
this result to calculate the commutator of Lx and Ly:
It turns out that the square of the magnitude of the angular momentum,

L² := L_x² + L_y² + L_z², (11)

does commute with Lx:
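The text treats Lx, Ly, Lz as differential operators, but both commutation facts can be checked concretely in a finite matrix representation. The matrices below are the standard l = 1 representation (an assumption of ours, not derived in the text):

```python
import numpy as np

hbar = 1.0  # units with ħ = 1
# Standard l = 1 matrices in the basis {|1 1>, |1 0>, |1 -1>}.
Lp = hbar * np.sqrt(2) * np.array([[0, 1, 0],
                                   [0, 0, 1],
                                   [0, 0, 0]], dtype=complex)  # raising operator L+
Lm = Lp.conj().T                                               # L- = (L+)†
Lx = (Lp + Lm) / 2
Ly = (Lp - Lm) / 2j
Lz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

def comm(X, Y):
    return X @ Y - Y @ X

assert np.allclose(comm(Lx, Ly), 1j * hbar * Lz)   # [Lx, Ly] = iħ Lz
assert np.allclose(comm(L2, Lx), 0)                # [L², Lx] = 0
assert np.allclose(comm(L2, Lz), 0)                # [L², Lz] = 0
assert np.allclose(L2, 2 * hbar**2 * np.eye(3))    # L² = ħ² l(l+1) I for l = 1
```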
Note that the new eigenvalue of L± f for Lz is μ ± ħ, whereas the eigenvalue
for L² stays the same. We call L+ the raising operator because it raises μ by ħ.
Likewise, we call L− the lowering operator because it lowers μ by ħ. So for a
given eigenvalue λ of L², we get a ‘ladder’ of eigenvalues for Lz. It seems like we
can do this indefinitely, but that cannot be; otherwise the angular momentum
in the z-direction, Lz, would become larger than the total angular momentum L²!
This implies that there must be a ‘top rung’ f_t such that L+ f_t is not normalizable,
which means that its norm is zero or infinite. However, since our Hilbert space
consists of square integrable functions, and L+ works on this Hilbert space, it
cannot send f_t outside the Hilbert space, so its norm cannot be infinite. This
means that for our top rung we have

L+ f_t = 0. (15)

Let ħl be the eigenvalue of Lz at this top rung f_t, and let λ be the eigenvalue
of L²:

Lz f_t = ħl f_t;   L² f_t = λ f_t. (16)

and therefore

λ = ħ² l(l + 1).
For the same reason that there is a top rung, there must also be a bottom
rung f_b such that

L− f_b = 0. (18)

Let ħl̂ be the eigenvalue of Lz at the bottom rung, with λ still the eigenvalue
for L²:

Lz f_b = ħl̂ f_b;   L² f_b = λ f_b. (19)

Using Lemma 2.7 again we get

λ = ħ² l̂(l̂ − 1),

and therefore l̂ = −l (the other root, l̂ = l + 1, is impossible, since the bottom
rung lies below the top rung).
where we used the fact that Lx and Ly are physical observables, and therefore
self-adjoint. We now see that ⟨φ | L± ψ⟩ = ⟨L∓ φ | ψ⟩, which is to say that

L±† = L∓.
and

L² = −ħ² [ (1/sin θ) ∂/∂θ (sin θ ∂/∂θ) + (1/sin² θ) ∂²/∂φ² ], (24)

where θ and φ are the polar and azimuthal angles respectively. Plugging this
into our eigenvalue equation (21) we get

−ħ² [ (1/sin θ) ∂/∂θ (sin θ ∂f_l^m/∂θ) + (1/sin² θ) ∂²f_l^m/∂φ² ] = ħ² l(l + 1) f_l^m (25)

and

(ħ/i) ∂f_l^m/∂φ = ħ m f_l^m. (26)
The solutions can be found using separation of variables; however, we will not
do that here. The solutions, as it turns out, are the spherical harmonics Y_l^m.
These functions are defined as follows:

Y_l^m(θ, φ) = ε √[ ((2l + 1)/(4π)) ((l − |m|)!/(l + |m|)!) ] e^{imφ} P_l^m(cos θ), (27)

where ε = (−1)^m for m ≥ 0 and ε = 1 for m < 0. The function P_l^m is the
associated Legendre polynomial, defined as

P_l^m(x) = (1 − x²)^{|m|/2} (d/dx)^{|m|} P_l(x),

where P_l(x) is the Legendre polynomial, defined by the Rodrigues formula:

P_l(x) = (1/(2^l l!)) (d/dx)^l (x² − 1)^l.
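The definitions above are explicit enough to be typed in directly. The Python sketch below (ours; the function names are not from the thesis) builds P_l from the Rodrigues formula, P_l^m from it, and Y_l^m as in equation (27), and then checks orthonormality on the sphere with a simple midpoint quadrature:

```python
import numpy as np
from math import factorial, pi
from numpy.polynomial import polynomial as pol

def legendre_coeffs(l):
    """Coefficients of P_l(x) = 1/(2^l l!) (d/dx)^l (x^2 - 1)^l."""
    c = pol.polypow([-1.0, 0.0, 1.0], l)   # (x^2 - 1)^l
    for _ in range(l):
        c = pol.polyder(c)
    return c / (2**l * factorial(l))

def assoc_legendre(l, m, x):
    """P_l^m(x) = (1 - x^2)^{|m|/2} (d/dx)^{|m|} P_l(x)."""
    c = legendre_coeffs(l)
    for _ in range(abs(m)):
        c = pol.polyder(c)
    return (1 - x**2) ** (abs(m) / 2) * pol.polyval(x, c)

def Ylm(l, m, theta, phi):
    """Spherical harmonic Y_l^m, literally as in equation (27)."""
    eps = (-1) ** m if m >= 0 else 1
    norm = np.sqrt((2*l + 1) / (4*pi) * factorial(l - abs(m)) / factorial(l + abs(m)))
    return eps * norm * np.exp(1j * m * phi) * assoc_legendre(l, m, np.cos(theta))

# Midpoint-rule check of orthonormality over the unit sphere.
N = 300
th = (np.arange(N) + 0.5) * pi / N
ph = (np.arange(N) + 0.5) * 2 * pi / N
TH, PH = np.meshgrid(th, ph, indexing="ij")
dA = (pi / N) * (2 * pi / N)

def overlap(la, ma, lb, mb):
    f = np.conj(Ylm(la, ma, TH, PH)) * Ylm(lb, mb, TH, PH) * np.sin(TH)
    return f.sum() * dA

assert abs(overlap(2, 1, 2, 1) - 1) < 1e-3   # normalized
assert abs(overlap(2, 1, 1, 1)) < 1e-3       # orthogonal in l
assert abs(overlap(2, 1, 2, -1)) < 1e-3      # orthogonal in m
```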
These functions become complicated very quickly, but they are explicit by na-
ture. There are more ways to define the spherical harmonics, but this definition
satisfies equations (25) and (26). It is known that the spherical harmonics span
the space of the square integrable functions on the unit sphere. Also, each combi-
nation of l and m specifies exactly one spherical harmonic function. This means
that the operators L² and Lz form a complete set of commuting operators. By
writing the unique eigenstate associated with Y_l^m as |l m⟩, we can now easily
write any quantum state as a weighted, normalized sum of the eigenstates |l m⟩:

|ψ⟩ = Σ_{l,m} c_{lm} |l m⟩. (28)
Often the eigenstates are written as |l m⟩ or |s m⟩. Usually this means that
respectively only the orbital angular momentum or the spin angular momentum
is considered. The functions are the same spherical harmonics; the only
difference is that l can only be integer and s can only be half-integer. A third
notation is |j m⟩, where j can be either integer or half-integer. From here on out
we shall only look at |l m⟩.
2.3 Combining Quantum Systems
It often happens that we have two quantum systems with their own Hilbert
spaces, and we want to combine them into one system. Consider for instance
putting two hydrogen atoms next to each other to calculate how they interact.
In order to work with composite systems, there is another postulate.
Postulate 2.9. The Hilbert space of a composite system is the tensor product
of the constituent Hilbert spaces.
A tensor product is a very general and abstract concept. In this project we
will only use it in the context of Hilbert spaces and operators. Let us define it
as such.3
Definition 2.10. Let U and V be complex vector spaces.

• The free vector space F(U × V) over U × V is the vector space that has
as basis the elements of the Cartesian product U × V.⁴

• The tensor product of U and V is

U ⊗ V = F(U × V)/∼,

where ∼ is the equivalence relation generated by the rules

(u1 + u2, v) ∼ (u1, v) + (u2, v),
(u, v1 + v2) ∼ (u, v1) + (u, v2),
λ(u, v) ∼ (λu, v) ∼ (u, λv),

for all u, u1, u2 ∈ U, all v, v1, v2 ∈ V, and all scalars λ. The equivalence class
of (u, v) is written as u ⊗ v and called a simple tensor.

⁴ …scalar times a vector in this free vector space would look like. We solve this by enforcing the
rules and axioms for a vector space on the Cartesian product U × V, and acting like it works
that way. This is very similar to the way we handle imaginary numbers. We act as if √−1
exists and find that we can work with it without too much trouble. Seeing every element
of U × V as a base vector creates a very unwieldy, massive vector space. The tensor product
‘tames’ this space and reduces it to a more easily handled vector space.
Remark. Note the following:
• In general, not every element of U ⊗ V can be written as a simple tensor.
To see this, let u1 , u2 ∈ U be linearly independent and let v1 , v2 ∈ V be
linearly independent. Then u1 ⊗ v1 + u2 ⊗ v2 cannot be simplified. Every
element of U ⊗ V can be written as a linear combination of simple tensors.
• The definition of a tensor product of vector spaces U and V is independent
of bases for U and V . From here on out we shall always specify a basis for U
and V . This lets us specify a basis for U ⊗ V . For instance, let {u1 , u2 , ...}
be a basis for U , and let {v1 , v2 , ...} be a basis for V . Then the set of
all possible combinations of simple tensors of the bases, {ui ⊗ vj |i, j ∈ N}
forms a basis for U ⊗ V .
This definition is fairly abstract, so let us look at some examples.
Example 2.11. Consider the following:

• Let V be a one dimensional vector space with basis {v}. Then V ⊗ V will
be the span of {v ⊗ v}, which is isomorphic to V. Therefore, V ⊗ V is also
one dimensional.

• Similarly, when U is one dimensional and V is two dimensional, U ⊗ V
will be two dimensional and isomorphic to V.

• Let V be two dimensional with basis {v1, v2}. Then

V ⊗ V = span{v1 ⊗ v1, v1 ⊗ v2, v2 ⊗ v1, v2 ⊗ v2}.

The element v1 ⊗ v2 + v2 ⊗ v1 is not a simple tensor. Indeed, suppose that
for some u, v ∈ V,

v1 ⊗ v2 + v2 ⊗ v1 = u ⊗ v
= (α1 v1 + α2 v2) ⊗ (β1 v1 + β2 v2)
= α1β1 (v1 ⊗ v1) + α1β2 (v1 ⊗ v2)
+ α2β1 (v2 ⊗ v1) + α2β2 (v2 ⊗ v2).

Comparing coefficients gives α1β2 = α2β1 = 1 and α1β1 = α2β2 = 0, which
is impossible.
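For finite dimensional coordinate spaces the tensor product can be modeled concretely with NumPy's Kronecker product; the rank criterion used below is the standard way to witness that the sum in the last example is not a simple tensor (this sketch is ours, not from the thesis):

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

# v1 ⊗ v2 + v2 ⊗ v1, with ⊗ realized as the Kronecker product.
w = np.kron(v1, v2) + np.kron(v2, v1)

# Reshaped to a 2×2 coefficient matrix, a vector of V ⊗ V is a simple
# tensor u ⊗ v exactly when that matrix has rank <= 1.
assert np.linalg.matrix_rank(np.kron(v1, v2).reshape(2, 2)) == 1  # simple
assert np.linalg.matrix_rank(w.reshape(2, 2)) == 2                # not simple
```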
In quantum mechanics we work with Hilbert spaces, which have an inner
product and whose topology induced by the inner product is complete.
A composite quantum system must be a Hilbert space as well. We must therefore
define an inner product on the tensor product of Hilbert spaces. We do this as
follows:

Definition 2.12. Let H1 and H2 be Hilbert spaces with respective inner prod-
ucts ⟨.|.⟩₁ and ⟨.|.⟩₂. Then for all φ1, ψ1 ∈ H1 and φ2, ψ2 ∈ H2, the inner
product on H1 ⊗ H2 is

⟨φ1 ⊗ φ2 | ψ1 ⊗ ψ2⟩ = ⟨φ1|ψ1⟩₁ ⟨φ2|ψ2⟩₂.

The reader might notice that this definition is only for the simple tensors.
We therefore extend the definition by linearity, meaning that the inner product
of a sum of simple tensors equals the sum of the inner products of the individual
simple tensors. For example:

⟨φ1 ⊗ φ2 | ψ1 ⊗ ψ2 + χ1 ⊗ χ2⟩ = ⟨φ1|ψ1⟩₁ ⟨φ2|ψ2⟩₂ + ⟨φ1|χ1⟩₁ ⟨φ2|χ2⟩₂.
Now we are still in a bit of trouble, because there is no guarantee that the
tensor product H1 ⊗ H2 of the Hilbert spaces is a Hilbert space itself. In order
to fix this, we take the completion in the topology defined by the inner product
as defined above. This ensures that the tensor product is complete and thus
a Hilbert space. We denote this completion by H1 ⊗̂ H2. Note that for simple
tensors the notation φ1 ⊗̂ φ2 is nonsensical, and even for vector spaces U and V
that are not Hilbert spaces, U ⊗̂ V makes little notational sense.
We also need to define tensor products of operators.

Definition 2.13. Let H1 and H2 be Hilbert spaces and let A1 and A2 be
operators working on H1 and H2 respectively. The operator A1 ⊗ A2 working
on a simple tensor ψ1 ⊗ ψ2 ∈ H1 ⊗ H2 is defined as

(A1 ⊗ A2)(ψ1 ⊗ ψ2) = A1 ψ1 ⊗ A2 ψ2, (30)

and we extend this definition by linearity.
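Definition 2.13 also has a concrete matrix model: with ⊗ realized as np.kron, the defining identity (30) holds for matrices and coordinate vectors (a sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(7)
A1 = rng.normal(size=(3, 3))   # operator on H1 (dimension 3)
A2 = rng.normal(size=(2, 2))   # operator on H2 (dimension 2)
psi1 = rng.normal(size=3)
psi2 = rng.normal(size=2)

# (A1 ⊗ A2)(ψ1 ⊗ ψ2) = (A1 ψ1) ⊗ (A2 ψ2)
lhs = np.kron(A1, A2) @ np.kron(psi1, psi2)
rhs = np.kron(A1 @ psi1, A2 @ psi2)
assert np.allclose(lhs, rhs)
```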
We now have the mathematical basis to look at the angular momentum of
composite systems. Say we have two quantum angular momentum systems. The
first is in the state |l1 m1⟩ and the second is in the state |l2 m2⟩. The Hilbert
space associated to each system is L²(S²), so the Hilbert space associated to
the composite system will be L²(S²) ⊗̂ L²(S²), and the state that the composite
system is in is |l1 m1⟩ ⊗ |l2 m2⟩, which we can abbreviate as |l1 m1⟩ |l2 m2⟩ or as
|l1 l2 m1 m2⟩. Now we also need operators to work with in this new quantum
system. Say that L² and Lz are the operators working on a single system. We
can then define the operators on composite systems: L² ⊗ I, I ⊗ L², Lz ⊗ I, and
I ⊗ Lz, where I is the identity operator. Let us consider an example to see how
this works explicitly.
Example 2.14.
Note that, since the identity operator does nothing to the system, and because
of how tensor products of operators compose, we can also write (Lz ⊗ I)(I ⊗ L²) as Lz ⊗ L².
We now have a perfectly fine way to describe composite systems of angular
momentum. If we have two systems |l1 m1⟩ and |l2 m2⟩, then the composite
system is |l1 m1⟩ ⊗ |l2 m2⟩. These composite states are a basis for the composite
Hilbert space L²(S²) ⊗̂ L²(S²), which implies that the set

{L² ⊗ I, I ⊗ L², Lz ⊗ I, I ⊗ Lz}

is a CSCO. However, we have not really gained any information on the com-
posite system by doing this. We often want to see the composite system as
one bigger system and, for instance, calculate or measure the total angular
momentum. In order to gain this information, we need to change the basis for the
Hilbert space in a specific way, which we shall now proceed to do.
First we define the total angular momentum operators.
Definition 2.15. Let L²(S²) ⊗̂ L²(S²) be the Hilbert space of the composite
system of two angular momentum systems. We define the total angular mo-
mentum in the z-direction, L_{z,tot}, as follows:

L_{z,tot} = Lz ⊗ I + I ⊗ Lz. (31)

The total angular momentum in the x- and y-directions, L_{x,tot} and L_{y,tot}, are
defined in the same way. Now we define the total angular momentum L²_{tot} in
the same way as equation (11):
as desired. Measuring L²_{tot} is a bit more complicated. First we look at what
(L_{z,tot})² looks like:
where l ∈ {0, 1/2, 1, 3/2, 2, ...} and m ∈ {−l, −l + 1, ..., l − 1, l}. We shall again only
look at the case where l is integer. Note also that, using this derivation, we
define L_{±,tot}:

L_{±,tot} = L_{x,tot} ± i L_{y,tot}. (36)
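These total operators can be built explicitly for two l = 1 systems using Kronecker products (the l = 1 matrices are the standard representation, an assumption on our part). The eigenvalues of L²_{tot} then already reveal the decomposition into l = 0, 1, 2 that the following pages derive:

```python
import numpy as np

hbar = 1.0
# Standard l = 1 matrices in the basis {|1 1>, |1 0>, |1 -1>}.
Lp = hbar * np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Lx = (Lp + Lp.conj().T) / 2
Ly = (Lp - Lp.conj().T) / 2j
Lz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)
I3 = np.eye(3)

# Total operators on the 9-dimensional composite space, equation (31).
Lx_t = np.kron(Lx, I3) + np.kron(I3, Lx)
Ly_t = np.kron(Ly, I3) + np.kron(I3, Ly)
Lz_t = np.kron(Lz, I3) + np.kron(I3, Lz)
L2_t = Lx_t @ Lx_t + Ly_t @ Ly_t + Lz_t @ Lz_t

assert np.allclose(L2_t @ Lz_t, Lz_t @ L2_t)   # L²_tot and Lz_tot commute

# Eigenvalues ħ² l(l+1) for l = 0, 1, 2 with multiplicities 1, 3, 5:
eig = sorted(np.round(np.linalg.eigvalsh(L2_t).real, 6).tolist())
assert eig == [0.0] + [2.0] * 3 + [6.0] * 5
```

The multiplicities 1 + 3 + 5 = 9 = (2·1 + 1)(2·1 + 1) foreshadow the counting argument of theorem 2.18.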
We also get the version of lemma 2.7 for the total angular momentum:

where we skipped a few steps that are similar to the calculation of [L_{x,tot}, L_{y,tot}]
given earlier. This means that L²_{tot} and Lz ⊗ I do not commute, and therefore we
know that not all eigenfunctions |l1 l2 m1 m2⟩ of Lz ⊗ I can be eigenfunctions of
L²_{tot}. We must therefore find a different set of eigenfunctions for this observable.
Ideally, we can find a CSCO of L²(S²) ⊗̂ L²(S²) that includes L²_{tot}.

In order to do this, we find, in ways similar to what is shown above, that
the operators I ⊗ L², L² ⊗ I, L²_{tot}, and L_{z,tot} do commute, which implies that
they share an eigenbasis. We shall write the eigenfunctions of this basis, just
as with the basis we already have, with the associated eigenvalues. We do not
know yet whether the eigenspaces of this set of commuting observables are one
dimensional. We do not even know yet if they exist! Nonetheless, let |l1 l2 l m⟩
be a state that is in the eigenspace with the given eigenvalues. We shall call
this a coupled eigenvector. A state |l1 l2 m1 m2⟩ = |l1 m1⟩ ⊗ |l2 m2⟩ is called an
uncoupled eigenvector. We will now prove that the eigenspaces are indeed one
dimensional. This proof is adapted from [3]. It is long and has many arguments,
so we shall prove it in steps. We start with a remark.
Remark. Notice the following:

• Since the states |l1 l2 m1 m2⟩ and |l1 l2 l m⟩ have the quantum numbers
l1 and l2 in common, we can leave l1 and l2 out of the notation to sim-
plify it to the states |m1⟩ |m2⟩ and |l m⟩ respectively. Note that we write
|m1⟩ |m2⟩ to keep it very clear that this concerns an uncoupled eigenvector,
whereas |l m⟩ is a coupled eigenvector.

• The states |m1⟩ |m2⟩ are simple tensors and form a basis of our Hilbert
space. We can write any state |ψ⟩ in this Hilbert space as

|ψ⟩ = Σ_{m1} Σ_{m2} C_{m1 m2} |m1⟩ |m2⟩.

• When l1 and l2 are fixed, the Hilbert space becomes finite dimensional.
The basis states are |m1⟩ |m2⟩, where −l1 ≤ m1 ≤ l1 and −l2 ≤ m2 ≤ l2.
This implies that the Hilbert space has dimension (2l1 + 1)(2l2 + 1). We
shall use this fact in our proof.
We shall now relate the quantum numbers m1 and m2 to the quantum
number m.

For every |m1⟩ |m2⟩, it holds that m1 + m2 < m = l. Therefore all C_{m1 m2} = 0.
Now for any |ψ⟩ in the eigenspace with L_tot = l and L_{z,tot} = m with −l ≤ m ≤ l,
it holds that

|ψ⟩ = a (L_{−,tot})^{l−m} |l l⟩,

where a is a constant. Because |l l⟩ = 0, it follows that |ψ⟩ = 0.
Lemma 2.17. Let l1 and l2 be fixed. The eigenspaces where |l1 − l2| ≤ l ≤ l1 + l2
and where −l ≤ m ≤ l exist; that is, they are at least one dimensional.

Proof. We shall first take a look at the state where the angular momentum is
maximally aligned in the z-direction. This is (up to a phase factor) the state
|m1⟩ |m2⟩ with m1 = l1 and m2 = l2. There is only one state in the whole
Hilbert space where this is the case. Lemma 2.16 implies that the only coupled
eigenspace that is possible is the space |l m⟩ with l = l1 + l2 and m = l1 + l2.
The problem is, we do not know if this eigenspace even exists. If a non-zero vector
satisfies the eigenfunction equations (35) for L²_{tot} and L_{z,tot}, the eigenspace is
non-trivial. Luckily, we already have one contender: |m1⟩ |m2⟩ = |l1⟩ |l2⟩. We
can plug this into the eigenvalue equations to see what we get. Equation (38)
already shows that |l1⟩ |l2⟩ satisfies the eigenvalue equation for L_{z,tot}. For L²_{tot},
we shall first calculate a useful formula. Use equation (12), then apply the rules
in definition 2.10 to simplify, in order to attain the following equality:

Now we can easily apply L²_{tot} to the uncoupled state |l1⟩ |l2⟩:

where the ladder operators vanish because both uncoupled states are at the
maximum rung, so applying the raising operator to either one will result in a
zero vector. So we see that the eigenvector |l1⟩ |l2⟩ is also an eigenvector
of the operators L²_{tot} and L_{z,tot}. Hence, we know that a state |l1+l2 l1+l2⟩ exists.
Furthermore, the eigenspace that contains |l1⟩ |l2⟩ is one dimensional, so the
eigenspace that contains |l1+l2 l1+l2⟩ cannot be larger than one dimensional,
since the states in the former eigenspace build the states in the latter eigenspace
by linear combination. We conclude that
The next step is to apply the lowering operator L_{−,tot} to this state
|l1+l2 l1+l2⟩ to get |l1+l2 l1+l2−1⟩. We can repeatedly do this 2(l1 + l2)
times to get 2(l1 + l2) + 1 linearly independent eigenstates. They exist because
theorem 2.6 holds for L_{−,tot}, L_{z,tot}, and L²_{tot}, and, by extension, formula
(22) also holds, which tells us that the normalization constant is nonzero. L_{−,tot}
does however change the state to a state from another eigenspace, and we know
eigenspaces are orthogonal. Therefore the states are linearly independent.

Next we look at the eigenspace where m is fixed to be m = l1 + l2 − 1. There
are two basis states that have this property: |l1−1⟩ |l2⟩ and |l1⟩ |l2−1⟩. This is
one more than when m = l1 + l2. This means the eigenspace is two dimensional.
We already have one coupled state in this eigenspace, |l1+l2 l1+l2−1⟩. We
can write it as a linear combination of the two basis states:

|l1+l2 l1+l2−1⟩ = a |l1−1⟩ |l2⟩ + b |l1⟩ |l2−1⟩. (41)

Next, we take the orthogonal state to this state. It will still be a linear com-
bination of the basis states, so m will still equal l1 + l2 − 1 for this state.
[Figure 1: the uncoupled basis states plotted as a grid in the (m1, m2)-plane; diagonals of constant m = m1 + m2 run from m = l1 + l2 down to m = −l1 − l2.]
through the lower left corner, the degeneracy stays the same (if the grid is a
square, which happens when l1 = l2, the degeneracy decreases immediately). At
this point we cannot repeat the argument, and we cannot find new eigenstates.
In the case of figure 2 the argument stops at m = 2. More generally, a corner is
reached when

m = l1 + l2 − 2 min{l1, l2} = |l1 − l2|,

thus the argument stops for m = |l1 − l2|.
[Figure 2: the same (m1, m2) grid with the red “hooks” drawn in; m again ranges from l1 + l2 to −l1 − l2.]
We can count the states in the figure by taking the product of the number
of rows and columns, or by counting the red “hooks” that can be seen in the
figure. Let us make this more exact. These hooks show the eigenstates obtained
by repeatedly applying L_{−,tot} to the obtained states

|l1+l2 l1+l2⟩, |l1+l2−1 l1+l2−1⟩, . . . , |l1−l2 l1−l2⟩.

This means the dots connected by a red line represent the states that have
the same l. These are not actually the states with the same l: states with the
same l are mostly linear combinations of the uncoupled |m1⟩ |m2⟩ states. There is however
the same number of states in the hooks as the 2l + 1 states with the same l, and
since we are only concerned with counting, this is enough.
First we fix l1 and l2 such that l1 ≥ l2. Next we can count the number of
unique and linearly independent eigenstates that are possible for |l m⟩. By
lemma 2.17 we know that the eigenspaces for l1 − l2 ≤ l ≤ l1 + l2 and −l ≤ m ≤ l
are at least one dimensional, meaning that for each l1 − l2 ≤ l ≤ l1 + l2 there are
at least 2l + 1 unique eigenstates. Taking the sum over l we find

Σ_{l=l1−l2}^{l1+l2} (2l + 1) = 2 Σ_{l=l1−l2}^{l1+l2} l + Σ_{l=l1−l2}^{l1+l2} 1 = (l1 + l2 + 1)² − (l1 − l2)² = (2l1 + 1)(2l2 + 1).
However, we know that the uncoupled eigenspace for fixed l1 and l2 has
(2l1 + 1)(2l2 + 1) dimensions. Therefore these eigenspaces in the coupled basis
must be one dimensional, and hence the coupled observables form a CSCO. This means
there are no other coupled eigenvectors than the ones we have found. In the
case where l1 < l2, we can swap the indices 1 and 2, and the same argument
holds.
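The counting argument boils down to the identity Σ(2l + 1) = (2l1 + 1)(2l2 + 1); a brute-force check (ours):

```python
# Verify Σ_{l=|l1-l2|}^{l1+l2} (2l+1) = (2l1+1)(2l2+1) for small integer l1, l2.
for l1 in range(7):
    for l2 in range(7):
        total = sum(2 * l + 1 for l in range(abs(l1 - l2), l1 + l2 + 1))
        assert total == (2 * l1 + 1) * (2 * l2 + 1)
```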
Remark. Lemma 2.16 and theorem 2.18 give us two constraints that hold in
general, one for m1, m2, and m and one for l1, l2, and l:

• m1 + m2 = m, (42)
• |l1 − l2| ≤ l ≤ l1 + l2. (43)
and vice versa

|l1 l2 m1 m2⟩ = Σ_{l,m} G^{lm}_{l1 l2 m1 m2} |l1 l2 l m⟩. (45)

Note that, since both sets of eigenfunctions have the eigenvalues l1 and l2
in common, we do not need to sum over l1 and l2. The coefficients C^{lm}_{l1 l2 m1 m2}
are called the Clebsch-Gordan (C-G) coefficients and the G^{lm}_{l1 l2 m1 m2} are called the
inverse Clebsch-Gordan coefficients. We will show that the C-G coefficients
equal their inverse, i.e.
⟨φ_i|ψ⟩ = Σ_j c_j ⟨φ_i|φ_j⟩
        = c_i ⟨φ_i|φ_i⟩
        = c_i.

|ψ⟩ = Σ_i ⟨φ_i|ψ⟩ |φ_i⟩
    = Σ_i |φ_i⟩ ⟨φ_i|ψ⟩
    = (Σ_i |φ_i⟩⟨φ_i|) |ψ⟩,

as desired. The last step might seem like an abuse of notation, but it is allowed.
It is intuitively clearer what we are doing when we see bras as row
vectors and kets as column vectors. We are allowed to do this because H has a
finite dimension. In this way we see that a ket multiplied with a bra is an outer
product.
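The row-vector/column-vector picture is easy to make concrete (a sketch of ours): a ket is a column, a bra is its conjugate transpose, and |φ⟩⟨φ| is an outer product; summing over any orthonormal basis yields the identity.

```python
import numpy as np

# An orthonormal basis from a QR decomposition; columns of Q are the kets.
Q, _ = np.linalg.qr(np.random.default_rng(3).normal(size=(3, 3)))

# Σ_n |φ_n><φ_n| = I : sum of outer products over an orthonormal basis.
P = sum(np.outer(q, q.conj()) for q in Q.T)
assert np.allclose(P, np.eye(3))

# <φ_i|ψ> recovers the expansion coefficients c_i of |ψ> in this basis.
psi = np.array([0.3, -1.2, 0.5])
c = Q.conj().T @ psi           # bras applied to the ket
assert np.allclose(Q @ c, psi) # |ψ> = Σ_i c_i |φ_i>
```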
Notice that the right hand side of equation (47) is written as an operator
working on a certain vector. Since this vector is arbitrary, we can also write the
relation as

Σ_n |φ_n⟩⟨φ_n| = I. (48)

We call equation (48) the completeness relation. Using this completeness relation,
we can find a different way to characterize the C-G coefficients:

|l1 l2 l m⟩ = I |l1 l2 l m⟩
            = Σ_{m1,m2} |l1 l2 m1 m2⟩ ⟨l1 l2 m1 m2 | l1 l2 l m⟩
            = Σ_{m1,m2} ⟨l1 l2 m1 m2 | l1 l2 l m⟩ |l1 l2 m1 m2⟩
Σ_{l,m} C^{lm}_{l1 l2 m1 m2} C^{lm}_{l1 l2 m1′ m2′} = Σ_{l,m} ⟨l1 l2 m1 m2 | l1 l2 l m⟩ ⟨l1 l2 l m | l1 l2 m1′ m2′⟩
= ⟨l1 l2 m1 m2 | (Σ_{l,m} |l1 l2 l m⟩⟨l1 l2 l m|) |l1 l2 m1′ m2′⟩
= ⟨l1 l2 m1 m2 | l1 l2 m1′ m2′⟩
= δ_{m1 m1′} δ_{m2 m2′},

where we used associativity and linearity of the matrix product, as well as
theorem 2.19 and the fact that the basis vectors are orthonormal. So we find

Σ_{l,m} C^{lm}_{l1 l2 m1 m2} C^{lm}_{l1 l2 m1′ m2′} = δ_{m1 m1′} δ_{m2 m2′}. (52)
and, with a similar argument, we can find

Σ_{m1,m2} C^{lm}_{l1 l2 m1 m2} C^{l′m′}_{l1 l2 m1 m2} = δ_{ll′} δ_{mm′}. (53)

Equations (52) and (53) are called the orthogonality relations of the Clebsch-
Gordan coefficients. They will come in handy later.
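The orthogonality relations say precisely that the matrix of C-G coefficients is orthogonal. The thesis restricts attention to integer l, but the most compact place to see this is the familiar j1 = j2 = 1/2 singlet/triplet table; the coefficients below are the standard textbook values (our illustration, not from the thesis):

```python
import numpy as np

s = 1 / np.sqrt(2)
# Rows: coupled states |1 1>, |1 0>, |1 -1>, |0 0>;
# columns: uncoupled states |up up>, |up down>, |down up>, |down down>.
C = np.array([[1, 0,  0, 0],
              [0, s,  s, 0],
              [0, 0,  0, 1],
              [0, s, -s, 0]])

# Relation (53): rows are orthonormal (sum over m1, m2 gives δ_ll' δ_mm').
assert np.allclose(C @ C.T, np.eye(4))
# Relation (52): columns are orthonormal (sum over l, m gives δ_m1m1' δ_m2m2').
assert np.allclose(C.T @ C, np.eye(4))
```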
m1 < (m − l1 − l2 + |l1 − l2 + m|)/2, (54)

and for

m1 > (m + l1 + l2 − |l1 − l2 − m|)/2. (55)
Proof. First, we check the lowest possible value for m1 , which is −l1 . We can
choose m1 to be −l1 only when m2 can be chosen high enough such that
m1 + m2 = m.
Since the highest possible value for m2 is l2 , we can choose m1 = −l1 when
l2 − l1 ≥ m. Then m2 will have its maximum value at m + l1 . If l2 − l1 < m,
then m2 can reach its maximum value l2 and therefore the minimum value for
m1 is m − l2 .
Conversely, the highest possible value for m1 is l1 . Since the lowest possible
value for m2 is −l2 , we can choose m1 = l1 only when l1 − l2 ≤ m. Then m2
will have minimum value m − l1 . If l1 − l2 > m, then m2 can reach its minimum
value −l2 , and therefore the maximum value for m1 is m + l2 . Figure 3 should
help the reader understand this proof.
Using this knowledge, it is now easy to verify the correctness of the formulas.
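The closed-form bounds (54) and (55) can be cross-checked against the direct description from the proof, max(−l1, m − l2) ≤ m1 ≤ min(l1, m + l2), by brute force (our check):

```python
# Equations (54)/(55): closed forms for the boundary of the admissible
# m1 range, compared against the case analysis in the proof.
for l1 in range(5):
    for l2 in range(5):
        for m in range(-(l1 + l2), l1 + l2 + 1):
            lo = (m - l1 - l2 + abs(l1 - l2 + m)) / 2
            hi = (m + l1 + l2 - abs(l1 - l2 - m)) / 2
            assert lo == max(-l1, m - l2)
            assert hi == min(l1, m + l2)
```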
[Figure 3: the (m1, m2) grid for an example, with the diagonals m = −5, m = 0, and m = 4 indicated.]
m_{1,1} = (m − l1 − l2 + |l1 − l2 + m|)/2, (56)

m_{1,n} = (m + l1 + l2 − |l1 − l2 − m|)/2. (57)

Now we define m_{1,k} := m_{1,1} + k − 1 and m_{2,k} := m − m_{1,k} for 1 ≤ k ≤ n.
We can now ease up the notation of equation (44):

|l1 l2 l m⟩ = Σ_{k=1}^{n} C_k |l1 l2 m_{1,k} m_{2,k}⟩, (58)

with

C_k = C^{lm}_{l1 l2 m_{1,k} m_{2,k}}.

In order to find the recursion relation, we shall apply L²_{tot} to both sides of
equation (58).
L²_{tot} |l1 l2 l m⟩ = L²_{tot} Σ_{k=1}^{n} C_k |l1 l2 m_{1,k} m_{2,k}⟩

ħ² l(l+1) |l1 l2 l m⟩ = Σ_{k=1}^{n} C_k L²_{tot} |l1 l2 m_{1,k} m_{2,k}⟩

ħ² l(l+1) Σ_{k=1}^{n} C_k |l1 l2 m_{1,k} m_{2,k}⟩ = Σ_{k=1}^{n} C_k L²_{tot} |l1 l2 m_{1,k} m_{2,k}⟩.

0 = Σ_{k=1}^{n} [ C_k ħ² (l1(l1+1) + l2(l2+1) + 2 m_{1,k} m_{2,k} − l(l+1)) |l1 l2 m_{1,k} m_{2,k}⟩
  + C_k (A^{m_{1,k}}_{l1})⁺ (A^{m_{2,k}}_{l2})⁻ |l1 l2 m_{1,k}+1 m_{2,k}−1⟩
  + C_k (A^{m_{1,k}}_{l1})⁻ (A^{m_{2,k}}_{l2})⁺ |l1 l2 m_{1,k}−1 m_{2,k}+1⟩ ],

where

(A^m_l)⁺ = ħ √(l(l+1) − m(m+1))

and

(A^m_l)⁻ = ħ √(l(l+1) − m(m−1)).
Notice that for k = 1, the state |l1 l2 m_{1,k}−1 m_{2,k}+1⟩ becomes zero, since
we either apply the lowering operator to state 1 where m_{1,k} = −l1, or we
apply the raising operator to state 2 where m_{2,k} = l2. Similarly, for k = n,
the state |l1 l2 m_{1,k}+1 m_{2,k}−1⟩ becomes zero, since we either apply the raising
operator to state 1 where m_{1,k} = l1, or we apply the lowering operator to
state 2 where m_{2,k} = −l2.

Because of the way we have defined m_{1,k} and m_{2,k}, it is true that

m_{1,k} + 1 = m_{1,k+1} and m_{2,k} − 1 = m_{2,k+1}.

Conversely,

m_{1,k} − 1 = m_{1,k−1} and m_{2,k} + 1 = m_{2,k−1}.
Using this, and dividing out the common factor ħ², we find

0 = Σ_{k=1}^{n} [ C_k (l1(l1+1) + l2(l2+1) + 2 m_{1,k} m_{2,k} − l(l+1)) |l1 l2 m_{1,k} m_{2,k}⟩
  + C_k √(l1(l1+1) − m_{1,k} m_{1,k+1}) √(l2(l2+1) − m_{2,k} m_{2,k+1}) |l1 l2 m_{1,k+1} m_{2,k+1}⟩
  + C_k √(l1(l1+1) − m_{1,k} m_{1,k−1}) √(l2(l2+1) − m_{2,k} m_{2,k−1}) |l1 l2 m_{1,k−1} m_{2,k−1}⟩ ].
29
Now we can gather terms, and knowing that the coefficient of every |l1 l2 m_{1,k} m_{2,k}⟩
must become zero, we find the following recursion relation:

(l1(l1+1) + l2(l2+1) + 2 m_{1,k} m_{2,k} − l(l+1)) C_k
  + √(l1(l1+1) − m_{1,k} m_{1,k−1}) √(l2(l2+1) − m_{2,k} m_{2,k−1}) C_{k−1}
  + √(l1(l1+1) − m_{1,k} m_{1,k+1}) √(l2(l2+1) − m_{2,k} m_{2,k+1}) C_{k+1} = 0. (59)

Notice that the recursion relation has only real coefficients. This
means we can choose the Clebsch-Gordan coefficients to be real. Since the inverse
C-G coefficients are the complex conjugates of the C-G coefficients, we conclude
that the inverse coefficients equal the normal C-G coefficients.
Notice that in (59), the factor next to C_{k+1} is the same as the factor next
to C_{k′−1} in the case where k′ = k + 1. This means we can write the recursion
relation as the matrix product AC = 0, where A is symmetric and tridiagonal:

( a_{11} a_{12} 0      0      ···  0          ) ( C_1     )   ( 0 )
( a_{12} a_{22} a_{23} 0      ···  0          ) ( C_2     )   ( 0 )
( 0      a_{23} a_{33} a_{34} ···  0          ) ( C_3     ) = ( 0 )   (60)
( ···                              ···        ) ( ···     )   (···)
( 0  ···  a_{n−2,n−1} a_{n−1,n−1} a_{n−1,n}   ) ( C_{n−1} )   ( 0 )
( 0  ···  0           a_{n−1,n}   a_{nn}      ) ( C_n     )   ( 0 )
Here,

( a_{11} a_{12} 0      0      ···  0          ) ( C′_1     )   ( 0 )
( a_{12} a_{22} a_{23} 0      ···  0          ) ( C′_2     )   ( 0 )
( 0      a_{23} a_{33} a_{34} ···  0          ) ( C′_3     ) = ( 0 )   (62)
( ···                              ···        ) ( ···      )   (···)
( 0  ···  a_{n−2,n−1} a_{n−1,n−1} a_{n−1,n}   ) ( C′_{n−1} )   ( 0 )
( 0  ···  0           0           1           ) ( C′_n     )   ( 1 )
Then we obtain $C$ by dividing $C'$ by its norm. The main difference between (60) and (62) is that in the latter we require $C'_n$ to equal 1 in order to initiate the recursion. This also changes the rank of the matrix to $n$, so that there is exactly one solution. Equation (62) can be solved very efficiently by computer algorithms. Appendix A contains sample code written in Julia that uses this algorithm to calculate the C-G coefficients.
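The same algorithm can also be sketched in Python; this is our own illustration (the function name `clebsch_gordan_column` is ours, and we use NumPy's dense solver instead of a dedicated tridiagonal routine), not the thesis's Julia code itself:

```python
import numpy as np

def clebsch_gordan_column(l1, l2, l, m):
    """Solve the modified recursion system (62) and normalize, giving the
    Clebsch-Gordan coefficients <l1 l2 m1 m2 | l m> for m1 = m11, ..., m1n."""
    m11 = max(-l1, m - l2)           # smallest allowed m1
    n = min(l1, m + l2) - m11 + 1    # number of allowed (m1, m2) pairs
    A = np.zeros((n, n))
    for k in range(n):
        m1k, m2k = m11 + k, m - (m11 + k)
        # diagonal entry a_kk of equation (61)
        A[k, k] = l1*(l1+1) + l2*(l2+1) + 2*m1k*m2k - l*(l+1)
        if k < n - 1:
            # symmetric off-diagonal entry a_{k,k+1} of equation (61)
            off = np.sqrt((l1*(l1+1) - m1k*(m1k+1)) * (l2*(l2+1) - m2k*(m2k-1)))
            A[k, k+1] = A[k+1, k] = off
    # replace the last row by the condition C'_n = 1, as in (62)
    A[n-1, :] = 0.0
    A[n-1, n-1] = 1.0
    b = np.zeros(n)
    b[n-1] = 1.0
    x = np.linalg.solve(A, b)
    return x / np.linalg.norm(x)    # normalize to obtain C
```

Applied to example 2.21 below ($l_1 = 2$, $l_2 = 4$, $l = 5$, $m = -3$) it returns the normalized column $(-\sqrt{7/15},\, 0,\, 2/\sqrt{10},\, \sqrt{2/15})$.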
Let us consider an example to see how the C-G coefficients are found.
Example 2.21. Consider the case where we have two particles, with l1 = 2,
and l2 = 4, and they are in a combined state with l = 5 and m = −3. Then by
equations (56) and (57) we find that m1,1 = −2 and m1,n = 1, which implies
n = 4. Now, we have m1,k = −3 + k and m2,k = −k for k = 1, ..., 4. Using
equations (62) and (61), we find the following matrix equation:
$$
\begin{pmatrix}
0 & 6\sqrt{2} & 0 & 0 \\
6\sqrt{2} & 0 & 2\sqrt{21} & 0 \\
0 & 2\sqrt{21} & -4 & 4\sqrt{3} \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} C'_1 \\ C'_2 \\ C'_3 \\ C'_4 \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.
$$
Solving this system we find
$$
(C'_1, C'_2, C'_3, C'_4) = \left(-\tfrac{1}{2}\sqrt{14},\; 0,\; \sqrt{3},\; 1\right).
$$
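Carrying out the final normalization step described above completes the example:

```latex
\|C'\| = \sqrt{\tfrac{14}{4} + 0 + 3 + 1} = \sqrt{\tfrac{15}{2}},
\qquad
C = \frac{C'}{\|C'\|}
  = \left(-\sqrt{\tfrac{7}{15}},\; 0,\; \tfrac{2}{\sqrt{10}},\; \sqrt{\tfrac{2}{15}}\right),
```

whose squared entries $\tfrac{7}{15} + \tfrac{2}{5} + \tfrac{2}{15}$ indeed sum to 1.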
3 Mathematics
So far we have seen in detail what the Clebsch-Gordan coefficients mean for
quantum mechanics, how physicists work with them, and how they are calcu-
lated. We will now use the C-G coefficients in a more mathematical setting.
This setting is firmly grounded in representation theory. We will start with
recalling the necessary group theory, then we will discuss unitary representa-
tions and how they relate to the spherical harmonics, and finally we will discuss
the tensor product of irreducible representations and how it relates to the
Clebsch-Gordan coefficients. But first, a remark:
Remark. On notation:
• $S^{n-1}$ denotes the unit sphere in $\mathbb{R}^n$, i.e. $S^{n-1} = \{x \in \mathbb{R}^n : \|x\| = 1\}$.
• $SO(n)$ denotes the set of $n$ by $n$ rotation matrices, i.e. the real matrices $R$ with $R^T R = I$ and $\det R = 1$.
• $U(H)$ denotes the set of unitary operators on a Hilbert space $H$, i.e. the surjective linear maps $T : H \to H$ with
$$\langle T h_1 | T h_2 \rangle = \langle h_1 | h_2 \rangle$$
for all $h_1, h_2 \in H$.
We state without proof that $L^2(S^{n-1})$ is a Hilbert space with inner product
$$\langle f | g \rangle = \int_{S^{n-1}} f^* g \, dx,$$
as in equation (1).
Definition 3.1. Let $G$ be a set with a binary operation $\circ$. We call $G$ a group if it satisfies the following axioms:
1. (associativity): $\forall a, b, c \in G : a \circ (b \circ c) = (a \circ b) \circ c$
2. (identity element): $\exists e \in G : \forall a \in G : a \circ e = e \circ a = a$
3. (inverse element): $\forall a \in G : \exists b \in G : a \circ b = b \circ a = e$
Example 3.2. $SO(n)$ is a group with matrix multiplication as its binary operation. For $R_1, R_2, R_3 \in SO(n)$ we have $R_1 \circ (R_2 \circ R_3) = R_1(R_2 R_3) = (R_1 R_2)R_3 = (R_1 \circ R_2) \circ R_3$. The identity element is $e = I$, the identity matrix, and the inverse of $R$ is $R^T$. Note that we do not necessarily have $R_1 \circ R_2 = R_2 \circ R_1$.
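As a quick numerical sanity check of these axioms (our own illustration, not part of the text), one can verify closure, the inverse, and non-commutativity for two concrete rotations:

```python
import numpy as np

def rot_z(t):
    """Rotation by angle t around the z-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(t):
    """Rotation by angle t around the x-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

R1, R2 = rot_z(0.7), rot_x(1.3)

# closure: a product of rotations is again a rotation (orthogonal, det = 1)
P = R1 @ R2
assert np.allclose(P.T @ P, np.eye(3)) and np.isclose(np.linalg.det(P), 1.0)

# the inverse of R is its transpose
assert np.allclose(R1 @ R1.T, np.eye(3))

# SO(3) is not abelian
assert not np.allclose(R1 @ R2, R2 @ R1)
```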
Example 3.3. Let H be a Hilbert space, then U (H) is a group under compo-
sition of maps ‘◦’. Let A, B ∈ U (H).
Proof. We must first show that A ◦ B ∈ U (H). Let h1 , h2 ∈ H, then we have
hA ◦ B(h1 )|A ◦ B(h2 )i = hA(B(h1 ))|A(B(h2 ))i
= hB(h1 )|B(h2 )i
= hh1 |h2 i ,
where we used the fact that A and B are unitary. Furthermore, we know that
composition of maps is associative. We also know that the identity map I with
I(h) = h for every h ∈ H is unitary and for every A ∈ U (H) we have
A ◦ I(h) = A(I(h)) = A(h) = I(A(h)) = I ◦ A(h).
Finally, $A$ is injective. To see this, let $h_1, h_2 \in H$, then
$$
\|h_1 - h_2\| = \sqrt{\langle h_1 - h_2 | h_1 - h_2 \rangle} = \sqrt{\langle A(h_1 - h_2) | A(h_1 - h_2) \rangle} = \|A(h_1 - h_2)\| = \|A(h_1) - A(h_2)\|.
$$
This shows A is an isometry, meaning that it is a distance preserving function.
This implies that if h1 6= h2 , then A(h1 ) 6= A(h2 ), since their distance is larger
than 0. This means that A is injective. Since A is also surjective, we can define
an inverse A−1 . We have now shown that U (H) satisfies the group axioms.
Therefore, it is a group.
We can actually calculate the inverse A−1 of A ∈ U (H) quite easily. In fact,
hh1 |h2 i = hAh1 |Ah2 i = hA† Ah1 |h2 i
implies that A† A = I, meaning that
A† = A−1 .
From here on out we will drop the ‘◦’ out of the notation as the binary
operator of a group, so a ◦ b becomes ab for a and b in a group G.
Definition 3.4. Let $G_1$ and $G_2$ be groups. A function $\varphi : G_1 \to G_2$ is called a homomorphism if
$$\varphi(gg') = \varphi(g)\varphi(g')$$
for all $g, g' \in G_1$. We call $\varphi$ a group isomorphism if it is bijective as well. We call $G_1$ and $G_2$ isomorphic if there exists an isomorphism between $G_1$ and $G_2$. This is denoted by $G_1 \equiv G_2$.
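For a concrete instance of definition 3.4 (our own illustration): the determinant is a homomorphism from the group of invertible $2 \times 2$ matrices under multiplication to the nonzero reals under multiplication, since $\det(AB) = \det(A)\det(B)$:

```python
import numpy as np

# det : GL(2, R) -> (R \ {0}, *) satisfies det(AB) = det(A) det(B),
# which is exactly the homomorphism property phi(g g') = phi(g) phi(g').
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, -1.0], [4.0, 2.0]])

assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```

It is not an isomorphism, since many different matrices share the same determinant.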
When G1 ≡ G2 , the groups G1 and G2 have essentially the same group
structure, meaning that the only difference between the groups is the notation
of the elements in both groups. The behaviour of both groups is exactly the
same. The idea of isomorphisms is not limited to groups only. In general, an iso-
morphism is a structure preserving bijection between two mathematical objects.
For instance, two vector spaces $U$ and $V$ over the same field are isomorphic if there is a linear bijective map $T : U \to V$ that preserves addition and scalar multiplication, i.e.
$$T(u + u') = T(u) + T(u') \quad\text{and}\quad T(cu) = cT(u)$$
for all $u, u' \in U$ and scalars $c$. Similarly, two Hilbert spaces $H_1$ and $H_2$ are isomorphic if there exists a linear bijective map $T : H_1 \to H_2$ that preserves the inner product, i.e.
$$\langle T h | T h' \rangle = \langle h | h' \rangle$$
for $h, h' \in H_1$. We will use the notation '$\equiv$' to indicate an isomorphism for any
type of object.
Definition 3.5. Let G be a group with identity element e and let X be a set.
The function
α:G×X →X with α(g, x) = g · x
is called a left group action if it satisfies the following axioms:
1. (identity): e · x = x for all x ∈ X
2. (compatibility): $g_1 \cdot (g_2 \cdot x) = (g_1 g_2) \cdot x$ for all $g_1, g_2 \in G$ and $x \in X$
Example 3.6. The group $SO(n)$ acts on the set $S^{n-1}$ via $R \cdot x = Rx$. We simply rotate a point on the sphere by the given rotation. This follows the left action axioms, because $I \cdot x = Ix = x$ and
$$R_1 \cdot (R_2 \cdot x) = R_1(R_2 x) = (R_1 R_2)x = (R_1 R_2) \cdot x.$$
Example 3.7. The group $SO(n)$ also acts on the set $L^2(S^{n-1})$. Define
$$R \cdot f(x) = f(R^{-1} x).$$
For the first axiom we have
I · f (x) = f (I −1 x) = f (x).
For the second axiom we must show that $R_1 \cdot (R_2 \cdot f(x)) = (R_1 R_2) \cdot f(x)$. Write $R_2 \cdot f(x)$ as $g(x)$. We then have
$$R_1 \cdot g(x) = g(R_1^{-1} x) = f(R_2^{-1} R_1^{-1} x) = f((R_1 R_2)^{-1} x) = (R_1 R_2) \cdot f(x).$$
Definition 3.8. An action of a group $G$ on a Hilbert space $H$ is called unitary if
$$\langle g \cdot h_1 | g \cdot h_2 \rangle = \langle h_1 | h_2 \rangle$$
for all $g \in G$ and $h_1, h_2 \in H$.
Definition 3.9. Let $G$ be a topological group and let $H$ be a Hilbert space. A unitary representation of $G$ on $H$ is a homomorphism
$$\pi : G \to U(H)$$
such that the map
$$g \mapsto \pi(g)h$$
is a norm continuous function for all $h \in H$.
A topological group is a topological space that is also a group, where mul-
tiplication and inversion are continuous maps. The main point of a topological
space is that there is a sense of how close two elements in the space are to each
other. For instance a metric space is a topological space, because of the distance
function. A Hilbert space is a topological space as well, since the inner product
induces a metric, which induces a topology in its turn. The distance function of
a metric gives an actual number to how close two elements are, but this is not
necessarily the case for a topological space. The details of how this works do not
matter too much to us however. By a norm continuous function we mean that a small perturbation in the input $g \in G$ will cause only a small perturbation in the output $\pi(g)h \in H$, where we use the norm induced by the inner product of the Hilbert space to define the topology. Again, the details do not matter much to us.
It might seem strange to define such a specific homomorphism in definition
3.9. An important reason we do this is because groups are often difficult to
understand. By using a representation π, we can translate each element of the
group G to an element of a group that we can represent by a matrix. We can
understand this group better by using the tools of linear algebra. Often we can
translate what we learn this way back to group G.
In our case however, we can also use irreducible representations to do the
opposite. We are interested in L2 (S 2 ), the space that our angular momentum
operators work on. We will use the group of rotations SO(3), or more accurately,
the representation of SO(3) on L2 (S 2 ) to understand this Hilbert space, its
subspaces, and tensor products of its subspaces better.
Example 3.10. From the action defined by example 3.7, we define the homomorphism
$$L : SO(n) \to U(L^2(S^{n-1})) \quad\text{with}\quad (L(R)f)(x) = f(R^{-1}x).$$
The proof that $L$ is a homomorphism is the same as the proof of the compatibility axiom in example 3.7, although the notation is a bit different. Write $(L(R_2)f)(x)$ as $g(x)$. We then have
$$(L(R_1)g)(x) = g(R_1^{-1}x) = f(R_2^{-1}R_1^{-1}x) = f((R_1R_2)^{-1}x) = (L(R_1R_2)f)(x).$$
We often say that the Hilbert space H is the unitary representation of group
G, instead of the homomorphism π. This is done when it is clear what the
homomorphism is. This abuse of notation can be very confusing for someone
new to representation theory. In example 3.10 for instance, we could have also
said that L2 (S n−1 ) is a representation of SO(n).
We will now work towards the definition of irreducible representations. For
this, we need the definition of the direct sum of vector spaces and of the direct
sum of representations.
Definition 3.11. Let $U$ and $V$ be vector spaces. Then the outer direct sum $U \oplus V$ is the vector space with underlying set $U \times V$ and operations defined by
1. $(u \oplus v) + (u' \oplus v') = (u + u') \oplus (v + v')$ for any $u, u' \in U$ and $v, v' \in V$
2. $c(u \oplus v) = (cu) \oplus (cv)$ for any $u \in U$, $v \in V$, and $c \in \mathbb{C}$.
Here we write an element of $U \times V$ as $u \oplus v$. When $U$ and $V$ are inner product spaces with inner products $\langle .|. \rangle_U$ and $\langle .|. \rangle_V$ respectively, we define the direct sum inner product on $U \oplus V$ as
$$\langle u \oplus v | u' \oplus v' \rangle = \langle u | u' \rangle_U + \langle v | v' \rangle_V.$$
Example 3.12. $\mathbb{R} \oplus \mathbb{R} = \operatorname{span}\{1 \oplus 0, 0 \oplus 1\}$. More generally, $\mathbb{R}^m \oplus \mathbb{R}^n \equiv \mathbb{R}^{m+n}$.
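Identifying $\mathbb{R}^m \oplus \mathbb{R}^n$ with $\mathbb{R}^{m+n}$ by concatenation, the direct sum inner product is just the ordinary dot product on the concatenated vectors. A small numerical illustration (our own, with hypothetical names):

```python
import numpy as np

def direct_sum_inner(u, v, up, vp):
    """Direct sum inner product <u (+) v | u' (+) v'> = <u|u'>_U + <v|v'>_V."""
    return np.dot(u, up) + np.dot(v, vp)

u, up = np.array([1.0, 2.0]), np.array([3.0, -1.0])
v, vp = np.array([0.5]), np.array([4.0])

# the same value comes from the ordinary dot product on R^{2+1}
lhs = direct_sum_inner(u, v, up, vp)
rhs = np.dot(np.concatenate([u, v]), np.concatenate([up, vp]))
assert np.isclose(lhs, rhs)
```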
Now that we know the direct sum of vector spaces, we can also define the
direct sum of representations.
Definition 3.13. Let G be a group with g ∈ G, and let H1 and H2 be Hilbert
spaces with h1 ∈ H1 and h2 ∈ H2 . Let π1 : G → U (H1 ) and π2 : G → U (H2 ) be
unitary representations of G on H1 and H2 respectively. Then the direct sum
representation
π1 ⊕ π2 : G → U (H1 ⊕ H2 )
is defined by
π1 ⊕ π2 (g)(h1 ⊕ h2 ) = π1 (g)h1 ⊕ π2 (g)h2 .
Theorem 3.14. The direct sum representation $\pi_1 \oplus \pi_2$ is a unitary representation of $G$ on $H_1 \oplus H_2$.
Proof. Let $g_1, g_2 \in G$. We must show that $\pi_1 \oplus \pi_2$ is a homomorphism and that every $\pi_1 \oplus \pi_2(g)$ is unitary. For the first,
$$\pi_1 \oplus \pi_2(g_1 g_2)(h_1 \oplus h_2) = \pi_1(g_1)\pi_1(g_2)h_1 \oplus \pi_2(g_1)\pi_2(g_2)h_2 = \pi_1 \oplus \pi_2(g_1)\big(\pi_1 \oplus \pi_2(g_2)(h_1 \oplus h_2)\big).$$
For the second,
$$\langle \pi_1 \oplus \pi_2(g)(h_1 \oplus h_2) | \pi_1 \oplus \pi_2(g)(h_1' \oplus h_2') \rangle = \langle \pi_1(g)h_1 | \pi_1(g)h_1' \rangle_{H_1} + \langle \pi_2(g)h_2 | \pi_2(g)h_2' \rangle_{H_2} = \langle h_1 \oplus h_2 | h_1' \oplus h_2' \rangle,$$
where we used the fact that $\pi_1(g)$ and $\pi_2(g)$ are unitary actions on $H_1$ and $H_2$ respectively.
Example 3.17. Let W = span{(1, 0, 0), (0, 1, 0)} be the x-y plane in R3 . This
is a subspace. Then W will be invariant of any rotation around the z-axis. This
is because any vector in W will be rotated to another vector in W .
Say we have a unitary representation π of group G on Hilbert space H =
K1 ⊕ K2 , with K1 and K2 proper subspaces of H, meaning that they are not
equal to {0} or H. Now assume K1 and K2 are invariant of π(g) for every g ∈ G.
Then the restriction of π to K1 and K2 will be a unitary representation of K1 and
K2 respectively. These two unitary representations are called subrepresentations
of H, and since H = K1 ⊕ K2 , they contain all the information contained in
the original unitary representation on H. It is usually easier to understand the
subrepresentations of the proper subspaces than the unitary representation of
the whole space. Therefore, if we want to study a unitary representation on H,
it is useful to ask whether it can be broken up into subrepresentations of proper subspaces that themselves cannot be broken up into proper subspaces, and study those representations instead. The next definition defines such representations.
Definition 3.18. A unitary representation π of group G on Hilbert space H is
irreducible if the only invariant subspaces of π(g) are {0} and H for all g ∈ G.
Example 3.19. The Hilbert space $\mathbb{R}^3$ is an irreducible representation of $SO(3)$. A proper subspace of $\mathbb{R}^3$, which is a line or a plane through the origin, might be invariant under some rotations, but not under all rotations. Therefore we cannot break $\mathbb{R}^3$ up into smaller representations of $SO(3)$.
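The claims in examples 3.17 and 3.19 are easy to check numerically (our own illustration): the x-y plane is mapped to itself by rotations about the z-axis, but not by a rotation about the x-axis:

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

w = np.array([1.0, 2.0, 0.0])   # a vector in the x-y plane W (third entry 0)

# W is invariant under any rotation about the z-axis ...
assert np.isclose((rot_z(0.9) @ w)[2], 0.0)

# ... but not under a generic rotation about the x-axis,
# so W is not invariant under all of SO(3)
assert not np.isclose((rot_x(0.9) @ w)[2], 0.0)
```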
Theorem 3.22. Hl is an irreducible unitary representation of SO(3).
Proof. We take the same homomorphism L as example 3.10. The proof is exactly
the same, except we must also show that Hl is SO(3)-invariant. To show why
this is the case, we note first that the total angular momentum operator L2 and
R ∈ SO(3) commute. This is intuitively clear, since a rotation can change the
angular momentum in the x, y, or z direction, but it will not change the total
angular momentum, which is conserved. With this knowledge, let $|\psi_l\rangle \in H_l$. Then we get
$$L^2(R \cdot |\psi_l\rangle) = R \cdot (L^2 |\psi_l\rangle) = \hbar^2 l(l+1)\, R \cdot |\psi_l\rangle,$$
so $R \cdot |\psi_l\rangle$ is again an eigenfunction of $L^2$ with eigenvalue $\hbar^2 l(l+1)$, and therefore $R \cdot |\psi_l\rangle \in H_l$. This shows that $H_l$ is $SO(3)$-invariant.
Theorem 3.23. As unitary representations of $SO(3)$,
$$L^2(S^2) \equiv \bigoplus_{l=0}^{\infty} H_l,$$
where every $f \in L^2(S^2)$ can be written uniquely as a convergent sum $f = \sum_i h_i$ with
$$\|f\|^2 = \sum_i \|h_i\|_{H_i}^2$$
for every $h_i \in H_i$. Here, $\|.\|_{H_i}$ is the norm induced by the inner product of $H_i$.
For a proof, see [5]. Theorems 3.22 and 3.23 are significant. These theorems
show us how we can break up L2 (S 2 ) into their irreducible parts. We can
study these parts individually, and through doing this, discover a lot about the
structure of L2 (S 2 ). Theorem 3.23 also proves postulate 2.3 in the case where
the Hilbert space is $L^2(S^2)$ and the observable is $L^2$. It may seem strange that a postulate can be proven in special cases, but since it cannot be proven in general, we must keep it as a postulate.
We already know the direct sum of unitary representations; now it is time to introduce the tensor product of unitary representations.
Definition 3.24. Let π1 and π2 be representations of group G on Hilbert spaces
H1 and H2 respectively. Then the inner tensor product of π1 and π2
π1 ⊗ π2 : G → U (H1 ⊗ H2 )
is defined by
π1 ⊗ π2 (g)(h1 ⊗ h2 ) = π1 (g)h1 ⊗ π2 (g)h2 .
The proof that the inner tensor product of unitary representations is again
a unitary representation is completely analogous to the proof that the direct
sum of unitary representations is a unitary representation. Note that if π1
and $\pi_2$ are irreducible representations, then $\pi_1 \oplus \pi_2$ is not irreducible. This is because $\pi_1$ and $\pi_2$ are themselves irreducible subrepresentations. Likewise, if $\pi_1$ and $\pi_2$ are irreducible representations, then, in general, $\pi_1 \otimes \pi_2$ is not irreducible. For
instance, H1 ⊗ H2 is a unitary representation of SO(3), but is not irreducible.
In fact, $H_1 \otimes H_2 \equiv H_1 \oplus H_2 \oplus H_3$. In general,
$$H_{l_1} \otimes H_{l_2} \equiv \bigoplus_{l=|l_1-l_2|}^{l_1+l_2} H_l, \tag{63}$$
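A consistency check on (63) (our own illustration): the dimensions on both sides must agree, since $\dim H_l = 2l + 1$:

```python
# dim(H_l) = 2l + 1, so decomposition (63) requires
# (2*l1 + 1) * (2*l2 + 1) == sum of (2*l + 1) for l = |l1 - l2|, ..., l1 + l2
for l1 in range(6):
    for l2 in range(6):
        lhs = (2*l1 + 1) * (2*l2 + 1)
        rhs = sum(2*l + 1 for l in range(abs(l1 - l2), l1 + l2 + 1))
        assert lhs == rhs
```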
which we have seen before in equation (45). The problem is that $|l_1\, l_2\, l\, m\rangle$ in equation (64) is still in the Hilbert space $H_{l_1} \otimes H_{l_2}$. If we change this equation
to the linear transformation
$$C : H_{l_1} \otimes H_{l_2} \to \bigoplus_{l=|l_1-l_2|}^{l_1+l_2} H_l$$
such that
$$C(Y_{l_1}^{m_1} \otimes Y_{l_2}^{m_2}) = \sum_{l=|l_1-l_2|}^{l_1+l_2} \sum_{m=-l}^{l} C_{l_1 l_2 m_1 m_2}^{l m}\, Y_l^m, \tag{65}$$
we can check whether $C$ preserves the inner product on the basis functions. The left hand side gives
$$\langle Y_{l_1}^{m_1} \otimes Y_{l_2}^{m_2} | Y_{l_1}^{m_1'} \otimes Y_{l_2}^{m_2'} \rangle = \langle Y_{l_1}^{m_1} | Y_{l_1}^{m_1'} \rangle \langle Y_{l_2}^{m_2} | Y_{l_2}^{m_2'} \rangle = \delta_{m_1 m_1'} \delta_{m_2 m_2'},$$
where the last step follows from orthonormality of the basis functions. The right hand side equates to
$$\begin{aligned}
\langle C(Y_{l_1}^{m_1} \otimes Y_{l_2}^{m_2}) \,|\, C(Y_{l_1}^{m_1'} \otimes Y_{l_2}^{m_2'}) \rangle
&= \Big\langle \sum_{l,m} C_{l_1 l_2 m_1 m_2}^{l m} Y_l^m \,\Big|\, \sum_{l',m'} C_{l_1 l_2 m_1' m_2'}^{l' m'} Y_{l'}^{m'} \Big\rangle \\
&= \sum_{l,m} \sum_{l',m'} C_{l_1 l_2 m_1 m_2}^{l m}\, C_{l_1 l_2 m_1' m_2'}^{l' m'}\, \langle Y_l^m | Y_{l'}^{m'} \rangle \\
&= \sum_{l,m} C_{l_1 l_2 m_1 m_2}^{l m}\, C_{l_1 l_2 m_1' m_2'}^{l m} = \delta_{m_1 m_1'} \delta_{m_2 m_2'}.
\end{aligned}$$
The last step comes from the orthogonality relation of equation (52). So we see
that the map C preserves the inner product for the basis functions. By linearity
of the inner product, C must also preserve the inner product for any vector in
Hl1 ⊗ Hl2 , and therefore the vector spaces are isomorphic.
The fact that equation (63) holds for the objects as unitary representations
is less clear. We must first define what we mean by isomorphic representations.
Definition 3.25. Let π1 : G → U (H1 ) and π2 : G → U (H2 ) be unitary repre-
sentations of the same group G. Then π1 and π2 are equivalent, or isomorphic
as a representation if there exists a unitary isomorphism T : H1 → H2 such
that
T π1 (g) = π2 (g)T (66)
for every g ∈ G.
Using definition 3.25, with the map $C$ of equation (65) as the unitary isomorphism $T$ in (66), one can show that the representations $\pi_{l_1} \otimes \pi_{l_2}$ and $\bigoplus_{l=|l_1-l_2|}^{l_1+l_2} \pi_l$ are equivalent.
References
[1] David J. Griffiths. Introduction to Quantum Mechanics. 2nd ed. Cambridge: Cambridge University Press, 2017. Chap. 3, 4.
[2] Paolo Glorioso. "On common eigenbases of commuting operators". Massachusetts Inst. Technol., Cambridge, MA, USA, Tech. Rep. (2013).
[3] Knowino. Angular momentum coupling — Knowino, an encyclopedia. http://knowino.org/w/index.php?title=Angular_momentum_coupling&oldid=6243. Last accessed 23 June 2021. 2011.
[4] William O. Straub. "Efficient Computation of Clebsch-Gordan Coefficients". Preprint. https://vixra.org/abs/1403.0263. 2014.
[5] S.A. Boere. Lie Groups and Spherical Harmonics. Bachelor thesis. Utrecht University. 2017.
[6] David de Laat. "Moment methods in energy minimization: New bounds for Riesz minimal energy problems". Transactions of the American Mathematical Society 373.2 (Oct. 2019), pp. 1407–1453. issn: 1088-6850. doi: 10.1090/tran/7976. url: http://dx.doi.org/10.1090/tran/7976.
Appendix A Julia Code for the Calculation of
C-G Coefficients

using LinearAlgebra

# Clebsch-Gordan coefficients <j1 j2 m1 m2 | j m> for all allowed m1,
# found by solving the modified recursion system (62) and normalizing.
function clebschgordan(j1, j2, j, m)
    m11 = max(-j1, m - j2)              # smallest allowed m1, equation (56)
    n = Int(min(j1, m + j2) - m11 + 1)  # number of allowed (m1, m2) pairs, (57)
    A = zeros(n, n)
    for k = 1:n
        m1k = m11 + k - 1
        m1kp1 = m1k + 1
        m2k = m - m1k
        m2kp1 = m2k - 1
        A[k,k] = j1*(j1+1) + j2*(j2+1) + 2m1k*m2k - j*(j+1)   # a_kk, equation (61)
        if k < n
            A[k,k+1] = A[k+1,k] = sqrt(j1*(j1+1) - m1k*m1kp1) *
                sqrt(j2*(j2+1) - m2k*m2kp1)                   # a_{k,k+1}, equation (61)
        end
    end
    A[n,n] = 1          # replace the last row by the condition C'_n = 1, as in (62)
    if n > 1
        A[n,n-1] = 0    # guard needed when n = 1
    end
    b = zeros(n)
    b[n] = 1
    x = Tridiagonal(A) \ b   # solve the tridiagonal system efficiently
    x ./ norm(x)             # normalize to obtain the coefficients C
end